Unity experiments 24.04.2020

I am spending the next few days experimenting with sending various GIF files, hopefully those output from the more successful and pertinent Pixivisor sessions, to Unity, eventually to be imported into the Teenage Engineering OP-Z for sequencing visuals. So far it has been a bit frustrating to match the various versions of Unity with the ones that work properly with the OP-Z app software; it seems it will be easier to get things compatible with the iOS version of the app than with the OS X version. I have followed all the instructions to create the so-called sprite sheets for Synthpak's GIF Looper template and even got as far as having demonstrations work properly within Unity. Unfortunately, I have not managed to get them to work properly with the motion sequencer.
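For my own notes, this is roughly what I understand a sprite sheet to be: the GIF's frames laid out side by side in a single image that Unity can slice back into an animation. Here is a minimal Pillow sketch of the conversion; the single-row layout and the file names are my assumptions, not the GIF Looper template's documented format, so the grid may need adjusting:

```python
# Minimal sketch: flatten an animated GIF into a horizontal sprite sheet
# using Pillow. The one-row grid and file names are assumptions; check the
# GIF Looper template for the layout it actually expects.
from PIL import Image, ImageSequence

def gif_to_sprite_sheet(gif_path: str, sheet_path: str) -> None:
    gif = Image.open(gif_path)
    frames = [f.convert("RGBA") for f in ImageSequence.Iterator(gif)]
    w, h = frames[0].size
    sheet = Image.new("RGBA", (w * len(frames), h))
    for i, frame in enumerate(frames):
        sheet.paste(frame, (i * w, 0))  # one frame per column, single row
    sheet.save(sheet_path)

gif_to_sprite_sheet("pixivisor_session.gif", "sprite_sheet.png")
```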

These experiments, however, get me thinking that perhaps I need to think more about what I want to achieve with visuals, especially with regard to the reactive relationship between sound and image. I am sure that I don't want to end up creating a visual system that merely responds to sound or plays "in sync", as that would be too literal a relationship between sound and image.

For me, I am more interested in finding a relationship between the reactive elements of both image and sound; something that even changes the meaning of these synaesthetic relationships as they shift, interplay and "dance around" each other, rather than searching for a literal, linear representation of both.
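To pin down for myself what "reciprocal" could mean in practice, here is a toy sketch; this is my own illustration, not the actual Saundaryalahari system. Two coupled parameters, one standing in for sound and one for image, each continually pull on the other, so neither is simply a one-way visualisation of the other:

```python
# Illustrative sketch (my idea of "reciprocal", not the actual
# Saundaryalahari system): two coupled parameters where sound modulates
# image AND image modulates sound, so neither is a one-way visualiser
# of the other.
import math

sound = 0.5   # stand-in for an audio parameter, normalised 0..1
image = 0.2   # stand-in for an image parameter, normalised 0..1

for step in range(8):
    # each parameter drifts toward a target shaped by the OTHER parameter
    sound_target = (math.sin(image * 2 * math.pi) + 1) / 2
    image_target = sound ** 2
    sound, image = (sound + 0.3 * (sound_target - sound),
                    image + 0.3 * (image_target - image))
    print(f"step {step}: sound={sound:.3f} image={image:.3f}")
```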

After the frustrating work last night struggling to get the Unity Video Pak to work correctly, I spent this afternoon (with what little time I have to compose and think at home) looking once again at the work of Alexander Zolotov aka Night Radio. He is the Russian electronic visual music artist who created Pixivisor; he has also developed the Virtual ANS synth and other visual music software I often use to demonstrate the Saundaryalahari concept and ideas of reciprocal audio-visual synthesis. Beyond the coding work in Pixilang and Unity, his emphasis seems to be on the linear representation of sound. PhonoPaper and Virtual ANS are excellent synths and work very well for visualising music and vice versa, but they almost always translate literally, scanning in a straight line from left to right.
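As I understand the ANS principle that Virtual ANS and PhonoPaper inherit, the image is treated as a spectrogram and scanned left to right: vertical position maps to frequency, brightness to amplitude. A toy numpy sketch of that literal, straight-line translation (the frequency range, column duration and test image are my own assumptions, not Zolotov's actual parameters):

```python
# Toy sketch of the ANS-style "straight line" translation: treat a
# greyscale image as a spectrogram and scan it left to right, one column
# per time slice. Row position maps to frequency, brightness to amplitude.
import numpy as np

SR = 44100           # sample rate
COL_SECONDS = 0.05   # how long each image column sounds (my assumption)

def scan_image_to_audio(img: np.ndarray) -> np.ndarray:
    rows, cols = img.shape
    freqs = np.linspace(100.0, 4000.0, rows)  # one sine partial per row
    t = np.arange(int(SR * COL_SECONDS)) / SR
    out = []
    for c in range(cols):                     # strictly left to right
        amps = img[::-1, c]                   # top of the image = high pitch
        col = (amps[:, None] * np.sin(2 * np.pi * freqs[:, None] * t)).sum(0)
        out.append(col)
    audio = np.concatenate(out)
    return audio / max(1e-9, np.abs(audio).max())  # normalise to -1..1

# e.g. a diagonal line in an 8x8 test image becomes a falling pitch sweep
audio = scan_image_to_audio(np.eye(8))
```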

I have two main issues with this approach. The first is that it is a static approach to translating between sound and image. The way we perceive time and image is not static: sometimes we pay attention to something and sometimes we lose focus or space out, and time almost never remains constant in our consciousness. If we are tripped up on a sound or an interesting image, it may be milliseconds or seconds before we realise we are hanging on to the feeling of that perception, so our stream of consciousness is uneven, like rafting down a river and getting momentarily stuck on the rocks. I think this must somehow be accepted and measured in a sound/image system, and the best way I can think of to do that is to make the system reciprocal, so that it is constantly producing movement in all directions. Even stasis is represented that way. That's interesting to me.

The other issue I have with linear representations in a sound/image translation system is that they almost always result in something that sounds similar and, even if that is sometimes an advantage, is fairly easy to recreate, because the system tracking the incoming signal responds predictably. In nature, things are rarely received the same way twice (even if science wants us to believe that, say, the elements are predictable), and every input signal is shaped by an almost infinite number of variables created by the multiplication of nature. That's not a disadvantage for a creative system; it is actually an advantage, something that can be observed as a tool for how things always FEEL different even when they are the same. That's important to me in a creative process.
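To make that second issue concrete for myself (again my own illustration, not any existing tool): a stateless mapping gives identical output for identical input every time, while a mapping with even a little internal memory never quite repeats, which is exactly that "feels different even when it is the same" quality:

```python
# Illustration of the second issue: a stateless mapping is perfectly
# repeatable, while a mapping with internal feedback never quite repeats,
# even when fed the exact same input signal twice.

def stateless_map(x: float) -> float:
    return x * 0.8 + 0.1           # same input -> always the same output

class FeedbackMap:
    def __init__(self) -> None:
        self.memory = 0.0          # trace of everything heard so far

    def __call__(self, x: float) -> float:
        # output depends on the input AND the accumulated history
        out = x * 0.8 + 0.3 * self.memory
        self.memory = 0.9 * self.memory + 0.1 * out
        return out

signal = [0.2, 0.9, 0.2, 0.9]      # the "same" gesture repeated
fb = FeedbackMap()
print([round(stateless_map(x), 3) for x in signal])  # repeats exactly
print([round(fb(x), 3) for x in signal])             # drifts each time
```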

I’ll have to have more of a think about the reciprocal system of the Saundaryalahari, though, and see what other technical limitations come up as I continue to work with these systems. I had imagined I would work with an algorithmic (i.e. Unity-based) sequencing system as well as an analogue visualisation system like Pixivisor, maybe even overlaying them together, but now that seems sort of silly. We will see what comes next…