Spatial Audio Design / Project 1

21.4.2025 - 18.5.2025 / Week 1 - Week 4
Bong Sue Zhiun / 0366866
Spatial Audio Design / Bachelor of Design (Hons) in Creative Media
Project 1: Exercises



TABLE OF CONTENTS










LECTURE

Week 1 / Sound Fundamentals

Nature of Sound
  • Sound is caused by vibrations. 
  • Vibrating objects make the air around them move (sound waves).
  • Sound waves travel through air (or other mediums).
  • These waves make our eardrums vibrate.
  • The brain interprets these vibrations as sound.
  • There are three phases of sound: 

Fig.1.1 Three Phases of sound, Week 1, 21.4.2025

    • Production: The source that creates the sound
    • Propagation: How the sound travels through a medium.
    • Perception: How our ears and brain receive and understand the sound.

Human ear

Fig.1.2 Human ear, Week 1, 21.4.2025


  • Outer Ear:
    • The visible part of the ear and the ear canal.
    • Collects sound and directs it to the eardrum.
  • Middle Ear:
    • Sound waves make the eardrum vibrate.
    • Vibrations are passed to three tiny bones: malleus, incus, and stapes.
  • Inner Ear:
    • Vibrations move fluid inside the cochlea.
    • This creates electrical signals.
    • Signals go to the brain, which recognizes the sound.

Summary from the video

1. How Sound is Produced
  • All sounds are made by vibrations.
  • In humans, vibrations are produced by the vocal cords. 
2. Types of Waves:
  • Transverse Wave
    • Particles move at a right angle to the wave’s direction.
  • Longitudinal Wave
    • Particles vibrate parallel to the wave’s direction.
    • Sound is a longitudinal wave.
Extra Fact: Sound travels fastest through solids.


Properties of a sound wave
  • Wavelength: Distance between two similar points on a wave. It shows the length of the wave.
Fig.1.3 Wavelength, Week 1, 21.4.2025

  • Amplitude: The height of a wave when seen as a graph. A higher amplitude means a louder sound.
Fig.1.4 Amplitude, Week 1, 21.4.2025

  • Frequency: The number of waves that pass a point in one second. It is measured in hertz (Hz). A higher frequency means a higher pitch.
Fig.1.5 Frequency, Week 1, 21.4.2025

  • Echo: Happens when sound bounces back after hitting a surface.

Properties of Sound

1. Pitch 
  • Vibrations per second = frequency
  • Fewer vibrations = lower pitch = lower frequency, and vice versa
  • 1 Hz = 1 cycle per second
  • Range of human hearing: 20 Hz to 20 kHz
2. Loudness
3. Timbre (the quality or character of a sound)
4. Perceived duration (how long we think a sound lasts)
5. Envelope (how a sound changes over time: when it gets louder, softer, or stays steady)
6. Spatialization (where a sound seems to come from: left, right, near, or far)
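The pitch notes above connect to the wave equation from physics, v = f × λ. Below is a minimal sketch of my own (not part of the lecture notes), assuming sound travels at roughly 343 m/s in air:

```python
# A minimal sketch (my own illustration, not from the lecture) relating
# frequency and wavelength via the wave equation v = f * wavelength.
# Assumption: speed of sound in air at room temperature is about 343 m/s.

SPEED_OF_SOUND = 343.0  # metres per second, approximate

def wavelength(frequency_hz: float) -> float:
    """Return the wavelength in metres for a given frequency in hertz."""
    return SPEED_OF_SOUND / frequency_hz

# The human hearing range from the notes: 20 Hz to 20 kHz.
print(wavelength(20))       # lowest audible pitch -> about 17 m
print(wavelength(20_000))   # highest audible pitch -> about 1.7 cm
```

This also shows why low-frequency sounds bend around obstacles more easily: their wavelengths are metres long, while high frequencies are only centimetres.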

Week 2 / Basic Sound Designing Tools

Digital Audio Workstation (DAW): Sound-editing software that is useful for sound design.

Sound Design Techniques

1. Layering

Like stacking pictures, we can layer sounds to create new and interesting effects.

2. Time Stretching / Compression

This changes how long a sound plays without changing how high or low it sounds. It makes the sound faster or slower, but the pitch stays the same.

3. Pitch Shifting

This changes how high or low a sound is without changing how long it lasts.

  • Higher pitch = small or cute sounds (e.g. chipmunk)
  • Lower pitch = big or scary sounds (e.g. monster)

4. Reversing

Playing a sound backward can make it sound strange or spooky. We can mix it with the original sound to build up to the real effect.

5. Mouthing It

If we can’t find the sound we need, we can make it ourselves with our voice. Then we can edit it with tools like pitch shift, reverse, or layering.
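Two of the techniques above, layering and reversing, can be sketched with plain sample arrays. This is my own minimal NumPy illustration (not class material), using an assumed 44.1 kHz sample rate; a real project would load the sounds from WAV files instead:

```python
# A minimal NumPy sketch (my own illustration, not class material) of two
# techniques above: layering and reversing. The "sounds" are plain arrays
# of samples; a real project would load them from WAV files instead.
import numpy as np

sr = 44100  # assumed sample rate in Hz
t = np.linspace(0, 1.0, sr, endpoint=False)

tone = 0.5 * np.sin(2 * np.pi * 440 * t)                     # a 440 Hz tone
noise = 0.1 * np.random.default_rng(0).standard_normal(sr)   # soft noise bed

reversed_tone = tone[::-1]   # 4. Reversing: play the samples backwards
layered = tone + noise       # 1. Layering: mix sounds by adding samples

# Keep the mix inside [-1, 1] so it does not clip on export.
layered = np.clip(layered, -1.0, 1.0)
```

Time stretching and pitch shifting are more involved (they must change duration and pitch independently), which is why a DAW like Adobe Audition handles them for us.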


Week 3 / Sound in Space (Environment)

Diegetic Sound

  • Sound that the characters in the film can hear.
  • Examples: Dialogue, weather sounds, traffic, weapons, background music playing inside the scene (like from a radio), and some voiceovers.
  • Purpose:
    • Helps build the world the characters live in.
    • Even sounds we don’t see (like footsteps off-screen) can add depth and tension. 
    • It gives us a sense of the environment and helps with storytelling.
  • Special use: Can be adjusted so we hear exactly what a character hears.

Internal Diegetic Sound

  • Sound that comes from inside a character’s mind, like their inner thoughts or imagination.
  • The audience hears it, but other characters do not.
  • Example: A character thinking to themselves or imagining a sound.

Non-Diegetic Sound

  • Not heard by the characters.
  • Examples: Background music (film score), sound effects added for mood, and narration from outside the story.
  • Purpose:
    • Adds emotion or energy to the scene.
    • Can highlight a moment, especially in action or comedy (like a sound to finish a joke).

Trans-Diegetic Sound

  • When sound shifts between diegetic and non-diegetic.
  • Example: Music we thought was background score suddenly turns out to be playing on a character’s radio.

Creative Exceptions

  • Filmmakers sometimes bend the rules for dramatic effect.
    • In Psycho: We hear Norman’s inner voice, which is his mother’s personality taking over.
    • In La La Land: Background music in the restaurant slowly blends into the character’s imagination as internal music.

Week 4 / Soundscape 

Soundscape (A Scene Made by Sound)
A soundscape is a type of environment created through sound. When we hear certain sounds, they make us imagine a place or a moment. Everything around us makes sound, and each sound is connected to something.

What can Soundscape do?
Soundscapes can help us understand many things, such as:

  • Distance – How near or far something is.

  • Space – Whether a place feels big or small.

  • Direction – Where the sound is coming from.

  • Temperature – Whether a place feels hot or cold.

  • Weight – Whether something sounds heavy or light.

  • Time/Era – What time period or history the sound reminds us of.

  • Emotion – How the sound makes us feel.

  • Nostalgia – Sounds that remind us of the past.

How We Understand Sounds:

  1. Instinctual – Some sounds feel safe or cute (like high-pitched sounds), while others feel scary or serious (like low-pitched sounds).

  2. Learnt – We learn what certain sounds mean by experience or from others.



How to Export Audio with Adobe Audition

1. File > Export > Multitrack Mixdown > Selected Clips

Fig.1. Steps to export audio using Adobe Audition, Week 2, 29.4.2025



2. Change the sample type.
Fig.1. Steps to export audio using Adobe Audition, Week 2, 29.4.2025


3. After adjusting, click OK. (Bit depth determines the dynamic range, i.e. the range of loudness.)
Fig.1. Steps to export audio using Adobe Audition, Week 2, 29.4.2025





INSTRUCTIONS




EXERCISES

Week 1

In the first exercise, we need to adjust the equalizer settings on four similar tracks with different pitches to make each one sound like the original track. 

Here’s the link to listen to the audio I edited: 

Original track




Equalizer 1
Fig.3.1.1 eq-1, Week 1, 22.4.2025



Equalizer 2

Fig.3.1.2 eq-2, Week 1, 22.4.2025

   


Equalizer 3

Fig.3.1.3 eq-3, Week 1, 22.4.2025





Equalizer 4

Fig.3.1.4 eq-4, Week 1, 22.4.2025



Week 2

In the Week 2 exercise, we were asked to edit the audio to sound like different situations, such as a phone call, a voice in a closet, a walkie-talkie, a bathroom, an airport announcement, and a stadium announcement. We learned how to use the parametric equalizer and reverb effects to make these edits.


1. Voice of the phone call 

Fig.3.2.1 Telephone effect, Week 2, 29.4.2025




2. Voice coming from inside the closet 

Fig.3.2.2 Closet effect, Week 2, 29.4.2025





3. Voice of a walkie-talkie
Fig.3.2.3 Walkie Talkie effect, Week 2, 29.4.2025





4. Voice of the bathroom 

Fig.3.2.4 Bathroom effect, Week 2, 29.4.2025




5. Airport announcement
Fig.3.2.5 Airport announcement effect, Week 2, 29.4.2025




6. Stadium announcement

Fig.3.2.6 Stadium announcement effect, Week 2, 29.4.2025

After receiving feedback in Week 4, I made changes to the three sounds that Mr. Razif pointed out.

Voice of a walkie-talkie (less muffled)

Fig.3.2.7 Adjusted Walkie Talkie effect, Week 4, 13.5.2025





Airport announcement (fewer reflections)

Fig.3.2.8 Adjusted Airport announcement effect, Week 4, 13.5.2025





Stadium announcement (more reflections)

Fig.3.2.9 Adjusted Stadium announcement effect, Week 4, 13.5.2025




Week 3

We learned how to edit sound and place it where we want. For my immersive design specialization, I set the mix to 5.1. Animation students use stereo for their mix.

Here’s a comparison between stereo and 5.1:

  • Stereo uses 2 channels (left and right) for basic sound direction, and is commonly used in music and media.

  • 5.1 uses 6 channels (front left, front right, center, rear left, rear right, and a subwoofer) to create surround sound, providing a more immersive experience; it is typically used in movies and games.
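The left-to-right jet pan can also be described numerically. Below is a minimal stereo sketch of my own (Audition does this interactively with automation envelopes, and a 5.1 mix extends the same idea to more channels), using a constant-power pan law:

```python
# A minimal stereo sketch (my own illustration; Audition does this with
# automation envelopes) of panning a mono sound from left to right using
# a constant-power pan law, so loudness stays steady as the sound moves.
import numpy as np

sr = 44100
t = np.linspace(0, 2.0, 2 * sr, endpoint=False)
jet = 0.5 * np.sin(2 * np.pi * 220 * t)   # stand-in for the jet sound

# pan runs from 0 (hard left) to 1 (hard right) over the clip's duration,
# like the left-to-right envelope drawn on the clip in Audition.
pan = np.linspace(0.0, 1.0, jet.size)
theta = pan * np.pi / 2
left = jet * np.cos(theta)
right = jet * np.sin(theta)

stereo = np.stack([left, right], axis=1)  # shape: (samples, 2)
```

The cos/sin gains keep left² + right² equal to the original signal power at every moment, which is why this pan law sounds even instead of dipping in the middle.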

Fig.3.3.1 Setting up the multitrack session, Week 3, 6.5.2025


Once the mix was set to 5.1, Mr. Razif asked us to explore how sound can come from different directions. He also gave us two scenarios where we had to create sounds that match each situation.

Here’s the link to listen to the audio I edited:

1. Use clip automation to make the jet sound pan from left to right.

Fig.3.3.2 Clip Automation, Week 3, 6.5.2025




Use track automation to make the jet sound pan from left to right.


Fig.3.3.3 Track Automation, Week 3, 6.5.2025





2. A person walks from left to right and goes into a cave.

Fig.3.3.4 (Left) Parametric Equalizer references, (Right) Parametric Equalizer was modified using track automation, Week 3, 6.5.2025


Fig.3.3.5 (Left) Reverb effect references, (Right) Reverb was modified using track automation, Week 3, 6.5.2025


Fig.3.3.6 All adjustments made through track automation, Week 3, 6.5.2025




After completing the two exercises, Mr. Razif gave us two images of different environments. Our task was to identify the possible sound effects that would match each scene and combine the sound effects to suit the setting.


Environment 1

Fig.3.3.7 Environment 1, Week 3, 6.5.2025


I began by identifying and listing the sound effects that would likely match the image. After that, I searched for those sounds on Freesound and BBC Sound Effects.

1. Background Sounds

  • Low hum from machines or lab equipment

  • Soft whoosh from air ventilation or fans

  • Light electric buzzing for a high-tech feel

2. Tree Chamber Sounds

  • Gentle bubbling or water flowing

  • Wind blowing inside the tube

  • Occasional gas or steam hissing

  • Faint glowing or scanning sound

3. Human Sounds

  • Muffled footsteps on metal floor

  • Distant voices or conversation

4. Computer Sounds

  • Typing noises

  • Soft electronic beeps or computer hums



FINAL OUTCOME

Fig.3.3.8 Final outcome for Environment 1, Week 4, 13.5.2025





Environment 2

Fig.3.3.9 Environment 2, Week 3, 6.5.2025


Below is the list of sound effects I identified for Environment 2.

1. Background Sounds

  • Lab ambient (general background hum)

  • Low buzzing from power machines

  • Occasional loud electric zapping

  • Air ventilation system whooshing

2. Laser Machine Sounds

  • High-pitched sound when the laser charges up

  • Laser firing sound (sci-fi zap)

  • Beep or alarm after the laser is done

3. Lab Sounds

  • Beeps and clicks from floating screens

  • Whirring sounds from robot arms (servo motors)

  • Scanning or radar-like pinging sounds

4. Human Sounds

  • Footsteps on metal floors

  • Muffled background talking

5. Computer Sounds

  • Soft typing sounds

  • Button beeps from control panels

  • Robotic voice giving alerts or messages



FINAL OUTCOME

Fig.3.3.10 Final outcome for Environment 2, Week 4, 13.5.2025







FEEDBACK

Week 4

Specific Feedback

Make the walkie-talkie sound clearer and less muffled. The airport and stadium announcements sound too alike. Try reducing the reflections for the airport (make them fade more slowly and lessen the early reflections), and add more reflections to the stadium to make it feel bigger and different from the airport.




REFLECTION


In these four weeks, I learned some basic sound editing skills, like using the parametric equalizer and adding reverb. I also learned how to place sounds in different directions. It was a bit hard for me because I think my ears are not very sensitive. Sometimes, I couldn’t hear the changes after I edited the sound. Adobe Audition is a new software for me, so I spent most of my time learning how to use it. Even though it was not easy, this project gave me a good chance to train my ears and learn a new skill. I hope I can do better as I keep learning this semester.




