I have often been asked to explain what I am seeking when using this system and how it works. What I am seeking seems pretty clear to me, but of course, as this is research, I am not anticipating knowing now what the exact nature of the outcome will be. In any case, I will try to explain a few thoughts about this system here and, as needed, will continue to explain in subsequent blog posts.
Points:
Audio/visual synthesis using Pixivisor is part of the Saundaryalahari process, firstly, through its use of non-verbal/non-textual interaction.
I am seeking a system in which I can get as close as possible to linking the auditory realms and the visual realms together in the creative process.
Qualitatively, I do not want to differentiate between theoretical and actual sound and image. In other words, I want to include distortion, phase, dissonance and accident in the process.
The fact that an audio file can be stored as an animated image file (.gif) and vice-versa, while simultaneously functioning as both formats (playable within software systems such as Pixivisor), is pertinent to this project.
It is not simply my purpose to “process” natural images (for example, by manipulating sound between two visual morphs in Pixivisor), but to seek outcomes within the geometrical and visual structural elements of live pictures/images while creating unique sound/music. There is no difference between these processes in my opinion, and they can occur simultaneously and inform each other.
Human interaction with sound and image can occur simultaneously, as above, but moving image in particular can inform aspects of performance in real time that audio, to the ear, does not. With an intuitive reciprocal system in place, creative improvisation has unique new ways to interact with itself.
Analogue, real-time processes are more important to me than algorithmic non-real-time systems, and are better at fostering creative diversity. This is subjective.
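The audio/image duality mentioned above can be sketched in code. This is only an illustration of the idea, under my own assumptions; the function names are mine, and this is not Pixivisor's actual encoding. The point is simply that one byte sequence can be read either as a frame of grayscale pixels or as a stream of 8-bit audio samples, losslessly in both directions:

```python
# A minimal sketch of the audio <-> image duality: treat each 8-bit
# audio sample as a grayscale pixel value, so the same data can be
# viewed as a frame of video or heard as a burst of sound.
# (Hypothetical illustration only, not Pixivisor's real codec.)

def audio_to_frame(samples, width):
    """Pack a flat list of 8-bit samples into rows of `width` pixels."""
    return [samples[i:i + width] for i in range(0, len(samples), width)]

def frame_to_audio(frame):
    """Flatten a frame of grayscale pixel rows back into a sample stream."""
    return [pixel for row in frame for pixel in row]

samples = [0, 64, 128, 255, 32, 96, 160, 224]   # a tiny 8-sample "sound"
frame = audio_to_frame(samples, width=4)         # ...seen as a 4x2 image
assert frame_to_audio(frame) == samples          # the round trip is lossless
```

Because nothing is discarded in either direction, any distortion or accident introduced while the data is "an image" is carried straight back into the sound, which is exactly the kind of reciprocity the points above describe.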
More next time…