3.JPG

The Sonorous World Around Us

Actively creating and fading, the sonorous world is always evolving. Sound, as a wave phenomenon, has an inherent temporal and spatial quality that dictates its reverberant and spectral characteristics. It is also essential to consider not only the position of a sound’s origin but also that of the listener, without whom a sound is left unheard and thus leaves no impact or trace. If a tree falls in a forest with no one to hear it, it leaves no sonic memory or trace of its fall. According to François J. Bonnet, sound is only sonorous once it leaves a trace, which, like a parasite, needs a host to harbor it and continue its manifestation after the sound is gone.$^1$ “The trace is a residue, a supplement to that which has sounded, a sort of phenomenal hysteresis. A sound, in order to be audible, must leave a trace. A sound that no one hears, that no one perceives, or that no one manages to grasp, is not entirely a sound. Whereas a perceived sound, a sound that leaves a trace, is already somewhat more than a sound.” $^2$

*More than a sound.* The memories of a sound link the listener to the time and origin of its creation, leaving a sonic imprint. “The imprint is… the ‘moulding’, the testimony via contact, of a past presence. In order to exist, therefore, a sonorous imprint must have been in contact with a sound that has since vanished, and must harbour within itself the energetic characteristics of that sound.” $^3$ It is this imprint or trace, formed through perception, that allows sound to convey such powerful memories. When the listener fully focuses on this phenomenon, in combination with the temporal and spatial nature of sound, they practice what is known as deep listening, a meditation on sound. Deep listening lets the listener attend to the full scope of the sonorous world around them, from foreground sounds to background noise, allowing more sonic phenomena to leave a trace. As Pauline Oliveros puts it in *Some Sound Observations*, deep listening is adhering to the sounds of yourself and your environment.$^4$ Throughout this project I explore sound’s relationship to perception, immersion, and memory through the use of field recordings, spatialization, and sonic augmentation.

$^1$ François J. Bonnet, *The Order of Sounds: A Sonorous Archipelago* (Urbanomic Media Ltd, 2016), 7.

$^2$ François J. Bonnet, *The Order of Sounds: A Sonorous Archipelago* (Urbanomic Media Ltd, 2016), 31.

$^3$ François J. Bonnet, *The Order of Sounds: A Sonorous Archipelago* (Urbanomic Media Ltd, 2016), 34.

$^4$ Pauline Oliveros, “Some Sound Observations,” in *Audio Culture: Readings in Modern Music*, ed. Daniel Warner (Bloomsbury Academic, 2015), 102.

1: A Failed Attempt with PureData and MobMuPlat

The original idea for the experiment was to take full advantage of the vast sonic world. Any naturally occurring soundscape contains an abundance of sounds propagating through the air, each originating at a unique location. How could these countless sounds be manipulated through re-spatialization and augmentation to draw attention to the listener’s perception of sound and immerse them in their sonic environment? I wanted to make listening an eye-opening experience in an accessible format that brings attention to daily spaces and the sounds that inhabit them. While focusing on listening, the listener opens their perception to be moulded by the sonic traces and impressions of their environment. These augmented soundscapes would create an immersive experience, allowing the listener to reflect on their past, present, and future through the passing of time in sound, a characteristic of listening that I have always found overwhelming.

1.1 Sound Based on Space

To bring attention to the sonic identity of daily spaces and the spatial quality of audio, I wanted to make sure that all sounds, augmented or not, belonged to their place of origin. This faithfulness to origin is inspired by works such as *Ambient 1: Music for Airports* by Brian Eno, as well as the concepts of traditional Tuvan music.$^5$ In both examples, music is made or played in specific locations, allowing the natural soundscape and the music to join in harmony, enhancing one another and creating something more than their individual parts.

https://open.spotify.com/embed/album/063f8Ej8rLVTz9KkjQKEMa?utm_source=generator

When thinking about this idea of space for my project, I wanted to take advantage of the live natural soundscape, incorporating crafted augmentations to enhance the experience of its live, undulating sounds. The appeal of this concept was one of the main foundations of my project and led me to the idea of creating a piece of software that augments and captures the sounds around the listener in real time. For an augmented-reality sonic experience to work to its fullest potential, the playback of augmented audio would need to be as transparent as possible while still delivering good quality. I initially considered two options: open-back and bone-conduction headphones. Although the live microphone feed could come through any pair of headphones, a closed design reduces the natural spatialization of the world down to the limitations of microphone representation. After extensive research on headphone options, I felt that open-back headphones would give the best result due to their high-fidelity playback and transparency. Bone-conduction headphones seemed promising, since they leave the ear canal open and allow complete transparency, but I predicted that their playback fidelity would not be high enough. For the microphone setup, I intended to attach two omnidirectional mics to the headphones as a makeshift binaural recording rig, recreating the surroundings as accurately as possible; because binaural microphones mimic the physiology of the human ears, I expected them to produce the best result. Together, these two devices would let the listener enjoy the sonic environment around them while also hearing the live augmented sounds. Although these tools would lead to better results, they consequently affected the accessibility of the project.

1.2 Sound Walk Inspirations

Within the project, the idea of live audio manipulation while walking through specific locations was inspired by the concept of sound walks, a concept also utilized in the [Sonic Bikes project by Kaffe Mathews](<https://sonicbikes.net/sonic-bike/>).$^6$ Two sound walks that inspired my project and its fundamental concepts are [*Her Long Black Hair* by Janet Cardiff](<https://www.publicartfund.org/exhibitions/view/her-long-black-hair/>)$^7$ and [*Electrical Walks* by Christina Kubisch](<https://electricalwalks.org/>).$^8$ The inherent spatial quality of these two sound walks allows them to use their intended environments to the fullest. The narrator of *Her Long Black Hair* walks you through the vibrant soundscape of New York, using binaural recordings of spatialized prerecorded material. In contrast, *Electrical Walks* uses the environment itself as the sonic backdrop and actively augments the soundscape around you through electromagnetic transmission via custom-made magnetic headphones. This technique lets the listener hear the electromagnetic fields present within their environment, a phenomenon that is not normally audible but perhaps still affects our perception: “[s]ounds beyond the limits of the ear may be gathered by other sensory systems of the body.”$^9$

The accessible nature of Christina Kubisch’s headphones allowed the project to travel to many cities, letting people hear their environments from a new and unnatural perspective. Not only is the project easily movable and untethered to any specific location, but there is also a virtual version in which listeners can hear a combination of field recordings alongside their electromagnetic counterparts. Such accessibility was a feature I wanted to carry into my project, hoping to make a unique immersive sonic experience that can be used in any space at any time.

PureData_Progress_6.7.22.JPG

Screenshot_20220627-201147.jpg

*Screen capture of the app demo*

PureData_IterativeFileNaming_Test_6.7.22.mp4

*Short video demonstrating the file-cataloging system in PureData (no audio)*

1.3 MaxMSP, PureData, MobMuPlat

To create an accessible tool, I decided to build an application that takes live audio input, catalogs it, and sends an augmented version of the recordings back to the listener in real time. With my limited knowledge of textual coding languages (C++ and C#), visual programming languages ([MaxMSP](<https://cycling74.com/products/max>) or [PureData](<https://puredata.info/>)) provided a better foundation for the application. Between the two, PureData seemed the better option: not only is it free and open source, but it also works with [MobMuPlat](<https://danieliglesia.com/mobmuplat/>), a platform that turns PureData patches into phone applications. So although I was more familiar with MaxMSP, PureData became the language I used for the initial stages of the project.

I first started to learn the framework of PureData with help from a few great resources.$^{10-13}$ My goal was to create a staggered recording system that would automatically catalog recordings and store them on the device, tagged with GPS location information so that they could be recalled later on. Although a preliminary version of this system worked on the computer, it would not work with the MobMuPlat phone application platform. Without the prospect of a working phone application, and with the added hardware requirements (open-back headphones and binaural microphones), the project began to lose sight of its accessibility goal.
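The cataloging logic described above can be summarized as an iterative naming scheme: each new recording receives the next free index plus a timestamp and GPS tag, so files sort chronologically and can later be matched back to a location. Below is a minimal sketch of that idea in Python rather than PureData; the function name, filename layout, and coordinate precision are my own illustrative assumptions, not the actual format used in the patch.

```python
import re
from datetime import datetime, timezone

def next_filename(existing, lat, lon, now=None, prefix="rec"):
    """Build the next iterative filename, tagged with time and GPS.

    `existing` is the list of filenames already in the catalog. The new
    index is one past the highest index found, so deleting old files
    never causes a name collision.
    """
    pattern = re.compile(rf"{prefix}_(\d+)_")
    highest = 0
    for name in existing:
        match = pattern.match(name)
        if match:
            highest = max(highest, int(match.group(1)))
    now = now or datetime.now(timezone.utc)
    stamp = now.strftime("%Y%m%dT%H%M%SZ")
    # Fixed-precision, signed coordinates keep names sortable and parsable.
    return f"{prefix}_{highest + 1:04d}_{stamp}_{lat:+.5f}_{lon:+.5f}.wav"
```

For example, with an empty catalog the function yields an index of `0001`, and with `rec_0007_….wav` already present it yields `0008`, regardless of gaps in the sequence.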

1.4 Proceeding Onwards to Game Engines

Although my experiments with PureData and MobMuPlat seemed hopeful at first, once the errors began to pile up and detract from the main goal of the project, a change of direction was needed. This is where game engines came into the discussion. A game engine would deliver the experience virtually, keeping the project accessible while preserving the same values: the temporal and spatial qualities of sound as a focus for one’s perception of their sonic environment. With these ideas in mind, I decided to make a walking simulator using Unreal Engine 4 (UE4) in combination with Wwise. Although removing the live aspect of sound could potentially hurt the project, a game engine would help in terms of accessibility and augmentation possibilities. Recreating the vibrant, realistic soundscape of even a simple street is a daunting task within a game, because the source material will always be limited compared to the natural soundscapes of the world. Despite these limitations, the creative liberties available in a game engine seemed the right trade-off for the project’s priority of accessibility.