My initial purpose was to create an environment that juxtaposed sounds that reminded me of emptiness with sounds that reminded me of saturated environments. The idea was to consider the transition between our moments of boredom and our hyperstimulating environments, and how one potentially leads to the other. However, after recording the sounds for “emptiness”, my focus shifted from a boring, deserted landscape to a richer space for meditative reflection.
The sounds were recorded with the Tascam, and the waves represent this headspace of meditative oscillations and ideas. These sounds also reminded me of natural waves, perhaps the sea or a desert of moving sands, so I wanted to create an experience where users could immerse themselves in these oscillations, both by making the audio more captivating and by creating a visual representation of the landscape.
I edited the audio both with Adobe Audition and with Reaper, adding mainly delays and reverbs. This is the final version:
Because I wanted to create a VR experience, I worked in Unity to develop a sound visualisation. I used an FFT to map the frequencies and display them as moving objects. In particular, I arranged a series of spheres around a circle, because I intended them to look like clouds and I wanted the viewer to stand in the middle. I built a scene with mountains, manipulated the camera so that it could be rotated around, and created a ‘game object’ that facilitates the movement of the viewer and the camera.
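The actual visualisation was built in Unity (where spectrum data typically comes from `AudioSource.GetSpectrumData`), but the core idea can be sketched in plain Python with NumPy: take an FFT of an audio frame, split the magnitude spectrum into one band per sphere, and use each band's energy to scale a sphere positioned on a circle around the viewer. All names and parameter values here are hypothetical, not taken from the project itself.

```python
import numpy as np

def sphere_layout(num_spheres=16, radius=5.0):
    """Evenly space sphere positions on a circle around the viewer (origin)."""
    angles = np.linspace(0, 2 * np.pi, num_spheres, endpoint=False)
    return np.stack([radius * np.cos(angles),        # x
                     np.zeros(num_spheres),          # y (all at eye height 0)
                     radius * np.sin(angles)],       # z
                    axis=1)

def band_scales(samples, num_bands=16, min_scale=0.5, max_gain=4.0):
    """Map one audio frame to a scale factor per sphere via an FFT."""
    spectrum = np.abs(np.fft.rfft(samples))          # magnitude spectrum
    bands = np.array_split(spectrum, num_bands)      # one frequency band per sphere
    energy = np.array([b.mean() for b in bands])
    energy = energy / (energy.max() + 1e-9)          # normalise to [0, 1]
    return min_scale + max_gain * energy

# Example frame: a 440 Hz tone sampled at 44.1 kHz
sr = 44100
t = np.arange(1024) / sr
frame = np.sin(2 * np.pi * 440 * t)

positions = sphere_layout()      # where each sphere sits
scales = band_scales(frame)      # how big each sphere should be this frame
```

In a game loop this would run once per frame, with the scales smoothed over time so the spheres pulse rather than flicker; the low-frequency tone above makes the first (lowest) band's sphere the largest.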
I also explored the Steam Audio asset, a plugin that can be added to give physical qualities to sound, such as making it bounce off walls or creating binaural scenes. However, because the space I was using was open and had few surfaces for the sound to bounce off, I left it out of my final version, as I could not hear any difference.
For the VR implementation I used the Tango SDK for Unity to make the camera follow the viewer’s motion, but I had a hard time deploying to my phone because the mountain ‘asset’ I used was too large, and there is a bug where the build gets stuck showing ‘5/11 clustering’ for large terrains. I tried adjusting the lights and setting other options to low quality to minimise memory usage, but I ultimately had to remove the mountain terrain altogether in order to build it. The second issue was that the motion tracker did not work on my phone, even though it worked on the Google Tango phones, so I used another script to control camera movement by swiping across the screen. I also tried building a website, but was unable to due to an Emscripten error.
This is how the scene looked in Unity before converting to an APK. The difference was that the APK did not have any mountains or other terrain; it was just an empty space with the sky, the ground, and the oscillating spheres.
I learnt a lot from this project; I explored new software and a new technique for creating media, both of which were completely new to me. With regards to the sounds, this class in general has helped me become more aware of the sounds that surround me, which is why I was able to find (and hopefully recreate) this sonic environment in a bunch of blankets. I have also noticed that I pay more attention to sound editing in video, particularly in nature documentaries, and this project has helped me explore the power of Foley sound effects combined with editing. However, I recognise that my audio is still very raw, and perhaps I should have spent more time editing and manipulating the file, both to better understand how DAWs work and to better create the effect I was looking for.