Horror Scape

For my Horrorscape I decided to recreate the eerie atmosphere of being in an asylum or an abandoned hospital. I wanted to create a dark atmosphere with a backstory behind it through the use of both musical and sound design elements. I took a risk in choosing music theory as the main method to achieve this effect rather than a purely sound design oriented track. The ambient guitar chords were the main tool I used to evoke an emotional response in the listener, and they are reminiscent of a lot of horror game soundtracks such as ‘The Last of Us’ or ‘Silent Hill’.

Last trimester we went to Boggo Road and recorded some foley such as rusty doors and metal clanging. I decided to use some of the more aggressive transient hits and atmospheric noises and reversed them to build tension into certain sections. I've also delayed and looped some short transient hits such as clicks and door creaks at a very fast rate, which turns the hits into a very harsh synth because of how many samples are playing per second. I've also automated filter frequency and distortion to create interesting transitions into certain sections.
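Below is a minimal Python sketch of that retriggering idea, assuming numpy and soundfile are available; the file name and retrigger rate are placeholders, not the actual Logic setup.

```python
# Sketch of the retriggered-transient idea: repeating a short click at an
# audio-rate interval turns it into a pitched, harsh buzz.
import numpy as np
import soundfile as sf  # pip install soundfile

click, sr = sf.read("door_click.wav")        # hypothetical mono foley hit
if click.ndim > 1:
    click = click.mean(axis=1)               # fold to mono

rate_hz = 220                                # retriggers per second (sets the perceived pitch)
hop = int(sr / rate_hz)                      # samples between retriggers
length = sr * 4                              # 4 seconds of output

out = np.zeros(length + len(click))
for start in range(0, length, hop):
    out[start:start + len(click)] += click   # overlap-add each retrigger

out /= np.max(np.abs(out)) + 1e-9            # normalise to avoid clipping
sf.write("harsh_synth.wav", out, sr)
```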

IMG_3846

The blog post can be found here

https://dibsaudio.wordpress.com/2015/07/16/boggo-road-goal-recording/

For the guitars I recorded an electric guitar with high gain, played slowly using a bow. As I used ‘The Last of Us’ as my main influence, I primarily used D minor diminished chords and octatonic scales to create an uneasy feeling in the track, and the recording was later pitched a whole octave down to recreate the lower chords. The signal was put into a delay unit and a heavy reverb with a large room size and a wet-only signal to recreate the feeling of being in a large expansive building.
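As a rough illustration of that chain, here is a Python sketch that drops a recording an octave and adds a simple feedback delay; the actual reverb was a plugin with a large room size and a fully wet mix, which this sketch does not attempt to reproduce, and the file names are placeholders.

```python
# Octave-down pitch shift plus a basic feedback delay on a bowed guitar take.
import numpy as np
import librosa
import soundfile as sf

guitar, sr = librosa.load("bowed_guitar.wav", sr=None, mono=True)

# Pitch the whole part down one octave (12 semitones)
low = librosa.effects.pitch_shift(guitar, sr=sr, n_steps=-12)

# Simple feedback delay: 500 ms, 40% feedback
delay = int(0.5 * sr)
fb = 0.4
out = np.copy(low)
for i in range(delay, len(out)):
    out[i] += fb * out[i - delay]

out /= np.max(np.abs(out)) + 1e-9
sf.write("bowed_guitar_processed.wav", out, sr)
```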

Some references on chords can be found here

Screen Shot 2015-10-12 at 5.07.45 pm

I've also used Massive to create a synth which was used to bolster the guitar's high end frequencies. I layered a sine wave to create a sub bass and layered white noise on top to add a very harsh, sharp texture. I've used a classic tube distortion to add clipping to the synth and introduce a lot of pressure in the lower frequencies.

Screen Shot 2015-10-12 at 4.58.48 pm.
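As a rough sketch of that layering idea (not the actual Massive patch), the following Python snippet stacks a sub-bass sine with a white-noise layer and applies tanh soft clipping as a stand-in for the tube stage; the frequencies, levels and drive are illustrative guesses.

```python
# Sub sine + white noise layer, saturated with tanh "tube-style" clipping.
import numpy as np
import soundfile as sf

sr = 44100
t = np.arange(sr * 2) / sr                      # 2 seconds

sub = 0.8 * np.sin(2 * np.pi * 55 * t)          # sub-bass sine around 55 Hz
noise = 0.2 * np.random.uniform(-1, 1, len(t))  # harsh white-noise layer

drive = 3.0
synth = np.tanh(drive * (sub + noise))          # soft clipping adds harmonics and squashes dynamics

synth /= np.max(np.abs(synth))
sf.write("sub_noise_synth.wav", synth, sr)
```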

For the laugh and the breath I primarily used Vocal Transformer to recreate the sound of a little girl laughing. I've also reversed some breath noises and pitched them down with reverb to further add atmosphere within some sections of the track. I've also included some playground foley which I recorded last year and automated the pitch to decrease as the composition goes on, creating a more sinister change from the creepy introduction.

Screen Shot 2015-10-12 at 5.48.07 pm


LO5 – Critiquing Aesthetic and Technical Processes of Productions

Nine Inch Nails – The Hand That Feeds

Common musical elements of industrial rock include heavily distorted droning riffs and a brick wall of sound from noisy guitars. The use of heavily distorted guitars, violent vocals and large, dense drum kits is very common within this genre, and they are constructed to sound brutal and harsh. The specific use of instruments such as guitars, drums and vocals means the genre is somewhat limited in terms of musical experimentation, but the extensive processing applied to these individual instruments provides the timbre and sonic character of the genre.

For example, the use of distortion and bit-crushing on the palm-muted guitars, synths and hi-hats introduces a layer of high frequency noise and degrades the signal to the point where it pierces through the mix. The added high end presence brings excitement and energy to the song as it slowly high-passes its way in. The aesthetic of harsh noise and distorted signals contrasts with mainstream music such as pop by being somewhat flawed, yet embracing those flaws draws in more alternative crowds.

The use of synthesizers is also common within this genre, with basic low frequency oscillators used to thicken the sound up or piercing saw-wave arpeggios used to bolster the high end energy from the electric guitars. Generated white noise is also used to add energy to the track and give more abrasive character to elements such as the hi-hats.

The use of very brick-walled compression settings on the percussion also introduces a certain dynamic intent, where the constant drive of the drums is similar to that of a war drum. The toms and kicks are tuned to be low and punchy, and this sort of sound complements the other heavy sounds from the guitars and the synths. The four-to-the-floor playing evokes reactions in the audience such as danger, aggression, conflict and tension, and is similar to the driving force of a marching beat.

Lyrically,”The Hand That Feeds” is a verbal attack on the United States Government, specifically the Bush Administration and its foreign policy, but can be interpreted as speaking out against abusive authority in general. Following the lyrical intent of the song, the use of harsh vocals and screaming are prominent in this genre since it reflects the anti establishment attitude of the song and also compliments the distorted guitars and large drum kit. The vocals in this song also feature delay and reverb to differentiate between the dry instrumentations in the background.

High And Dry – Radiohead

The definition of "high and dry" simply equates to being left helpless in a situation because you aren't given something you need or were promised. The sombre feeling of the song suits this message, as the song features a very minimalistic production style with clean guitars, washed-out percussion and a sombre tone in the vocals.

However, most Radiohead songs have a surface meaning and then a deeper, hidden or even almost unintentional meaning which seems to reflect the depressing and cruel state of society and the way it has moulded people. The contrast between the heavily distorted guitar and the clean acoustic guitar can be read as a reflection of this message, as the clean guitars have a very pure and honest sound while the distorted guitar suggests anger, hate or even anxiety. These distorted guitars only appear in the chorus of the track, contrasting with the more sombre tone of the verses, and the chorus acts as a release of all the tension the track has built up. These jumps in contrast calm the listener and signal that whatever conflict or event had just happened has subsided, and the reverb tail that ends the song can be read as indicating resolution and hope.

High and Dry can also be interpreted as being about how a person can be shaped by what other people want them to be. With the best intentions, you "turn into something you are not". For a while, if you're good, you can appear to be everything expected of you. This blends well with the overall aesthetic of the track, as the song utilizes a minor chord progression with a few major chords, which creates a facade that is hopeful but at the same time a haunting and sad look into social and cultural norms.

The vocals employ a very dry production technique, with minimal use of post-processing such as reverb, delay and compression. This dryness gives his voice an honest tone, as if untouched by the washed-out vocals and pitch correction found in most tracks of this genre.

Nick Tart Mixing Session

For Nick Tart's mixing session, I referenced a lot of early 90s rock in a similar vein to AC/DC or Iron Maiden. A major process used in most of my references was the heavy use of hall reverb to emulate the sound of a massive rock concert. This processing was heavily used on almost every track, from the guitars and vocals all the way to percussion hits such as snares and kicks.

Screen Shot 2015-07-23 at 1.23.12 pm

Before mixing I cut the vocals up into smaller tracks so that I could apply different reverb settings to each. For these I did not use the global reverb setting that I applied to the other tracks, but a dedicated insert instead.

Screen Shot 2015-07-23 at 12.59.26 pm

Most of the tracks were processed in a similar way and in a similar order, except for the guitars, where an extra stereo widener was used to separate them further in the mix. Another small addition was the use of a de-esser on the vocals to reduce sibilance, which was apparent in the recording due to the ribbon microphone that was used.

  • Dynamic Processing

A compressor was used to glue the mix together and was integral to evening out the dynamics so that the sound was not overly loud or soft. Sometimes more dynamic processing was placed after the reverb at the end of the chain to compress the wet signal, since I wanted the room sound to be dynamically equal to the dry sound.

  • EQ

EQ was then applied to cut unwanted frequencies or boost frequencies that needed more presence. Most of the EQ applied was very minor, only cutting or boosting 2 to 5 dB.

For example, the bass track included a low-pass filter that cut everything above 7 kHz, since the actual recording was very plucky and included unwanted metallic noises which clashed with the vocals and the percussion.

An instance where I boosted frequencies was on the vocals and hi-hats, where I used a shelf EQ to boost above 10 kHz to add more brilliance and intelligibility and help them cut through.
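As a rough illustration of those two EQ moves, here is a Python sketch using scipy rather than the plugin EQ that was actually used: a 7 kHz low-pass on the bass, and a shelf-like boost above 10 kHz made by adding back a high-passed copy. File names and the exact boost amount are placeholders.

```python
import numpy as np
import soundfile as sf
from scipy.signal import butter, sosfilt

# Low-pass the bass at 7 kHz to tame the plucky, metallic noise
bass, bass_sr = sf.read("bass_di.wav")            # placeholder file name
lp = butter(4, 7000, btype="lowpass", fs=bass_sr, output="sos")
bass_lp = sosfilt(lp, bass, axis=0)
sf.write("bass_lp.wav", bass_lp, bass_sr)

# Shelf-like "air" boost: add back a high-passed copy above 10 kHz (~ +3 dB)
vocal, vocal_sr = sf.read("vocal_take.wav")       # placeholder file name
hp = butter(2, 10000, btype="highpass", fs=vocal_sr, output="sos")
air = sosfilt(hp, vocal, axis=0)
vocal_bright = vocal + air * (10 ** (3 / 20) - 1)
sf.write("vocal_bright.wav", vocal_bright, vocal_sr)
```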

  • Reverb

Reverb was used to add bulk to the dry sounds and sometimes to artificially increase the perceived loudness of a track instead of a straight level boost. The reverb was set up using a send for the overall reverb of the tracks and a smaller room reverb for the vocals. Rather than washing all of the sounds with the same setting, more precise reverb control was used, with different room sizes and decay times for the sounds that were panned in the centre.

  • Stereo Panning

Panning was necessary to separate the tracks into different stereo pockets in the mix. I went for a typical rock panorama, utilising wide guitars and percussion with the bass and vocals offset slightly from the centre. For the stereo-recorded tracks such as guitars and overheads, extra stereo widening was used to separate and clean up the mix even more and give the guitars their own place relative to the other harmonic content such as the vocals.

Screen Shot 2015-07-23 at 12.59.43 pm
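The widening move can be sketched in Python as a simple mid/side adjustment; this is not the plugin that was used, and the width factor and file name are arbitrary example values.

```python
# Mid/side widening: boost the side signal relative to the mid, fold back to L/R.
import numpy as np
import soundfile as sf

guitars, sr = sf.read("guitar_bus.wav")       # expects a stereo file (frames x 2)
left, right = guitars[:, 0], guitars[:, 1]

mid = (left + right) / 2
side = (left - right) / 2

width = 1.5                                   # >1 widens, <1 narrows, 0 is mono
side *= width

wide = np.column_stack((mid + side, mid - side))
wide /= np.max(np.abs(wide)) + 1e-9           # keep peaks under control after widening
sf.write("guitar_bus_wide.wav", wide, sr)
```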

  • Distortion

Distortion can be described as an alteration of a sound source that adds harmonic content by changing the shape of the waveform, usually by driving its amplitude. Adding harmonic content to the percussion adds overtones which can thicken it up, such as adding punch to a kick.

I've also introduced digital clipping into the sound source as a byproduct of distortion. Clipping is a form of distortion that limits a signal once it exceeds a threshold. Hard clipping results in many high frequency harmonics, which I found desirable when mixing certain percussion such as snares, adding more punch and introducing more energy in the higher frequencies to make them stick through the mix.

For this process I used dynamic distortion, which is usually applied to strong spikes from percussion instruments to give a live-music impact. Distortion was used in a fairly unconventional way here, as a means to add grit and noise to the kicks and snares. Increasing the distortion on the percussion brought the transients out, which in turn brought more energy to the track, especially in the chorus. A little extra low-order harmonic distortion produces a pleasant fullness and depth that has an easy-on-the-ears quality.

Screen Shot 2015-07-23 at 1.00.22 pm
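The two flavours of clipping described above can be sketched in a few lines of Python; the drum file name, threshold and drive values are placeholders rather than the settings used in the session.

```python
# Hard clipping flat-tops anything over the threshold (lots of high-order
# harmonics); tanh soft clipping gives the gentler low-order "fullness".
import numpy as np
import soundfile as sf

drums, sr = sf.read("snare_bus.wav")          # placeholder file name

def hard_clip(x, threshold=0.5):
    """Limit the signal to ±threshold, flattening everything above it."""
    return np.clip(x, -threshold, threshold)

def soft_clip(x, drive=4.0):
    """Tanh saturation: mostly low-order harmonics, rounder than hard clipping."""
    return np.tanh(drive * x) / np.tanh(drive)

sf.write("snare_hard.wav", hard_clip(drums * 2.0), sr)
sf.write("snare_soft.wav", soft_clip(drums), sr)
```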

Parallelhomeaudio.net. (2015). Types of Audio Distortion. Retrieved 27 August 2015, from http://www.parallelhomeaudio.net/TypesAudioDistortion.html

Cave In – Audio Blog Pt 2 – Soundtrack

In this blog I will address the soundtrack we made for the game "Cave In". Similar to our last game project, a brief was given which referenced the mood of each track and where they would like the soundtrack to be placed in the game. The brief can be read here.

https://drive.google.com/folderview?id=0BxXA8Wv4GyDwfk9CM2cydzZPLUp6RU4tWTBidmJHQU4zaTJzbHhNNXE2YVBqZ1NTRVJUMWc&usp=sharing&tid=0BxXA8Wv4GyDwfjdPbWlzbXFIMWRoYURLWm45bFNIcmpvTV9JdnVfNE1mTTdaQVlndDhVUVU

For all our compositions we used Logic Pro X due to its easy MIDI sequencing. All the post-processing was done using Logic's stock plugins, and the work was mainly done in the Raven studio. We also tested the music on several devices such as laptop and computer speakers to make sure it was audible across a range of frequency responses.

Composition 1 – Main Menu

For our first composition we wanted to evoke a feeling of ease, with comfort and warmth as the main emotions. The music mainly featured very simple arpeggios played by a pizzicato violin triggered from Kontakt's main library. The piece lasted three minutes, which was enough time for it to loop, and since it was the main menu theme it was fine for the track to be short, as players were only on the main menu for around 5-30 seconds on average.

Composition 2

The main gameplay theme was divided into two sections. The first section was composed of a droning synth which played for the initial 30 seconds to set the dark tone of the cave-in. The latter section of the composition was mainly a thick synthesised bass line which drove the main gameplay as the players escaped the cave-in. The overall composition was fairly simple because there were already a lot of sounds happening with the foley and the dialogue.

Since the main gameplay lasted an average of 10 minutes, the game developer asked for a soundtrack which lasted just as long. The composition was challenging since we had to sustain a feeling of uneasiness for 10 minutes straight, so we recycled a lot of the MIDI notes but changed the instrumentation every minute.

Composition 3

The last composition was a credits theme to be played during the credits of the game. The music is similar to the first track and is meant to evoke a feeling of freedom and happiness. The soundtrack was done exclusively on piano, with heavy reverb processed afterwards. It was also accompanied by outdoor foley, and towards the end the reverb was cut out to move away from the claustrophobic setting the main game is placed in.

Cave In – Audio Blog Pt 3 – Foley

Foley is the act of reproducing sound effects that are added in post-production to media products such as films or games to improve the audio quality. Most of these sounds are created within a studio environment and often replace diegetic sound captured during the production phase.

In video games, foley is often used to create sounds for certain actions such as footsteps, glass breaking, jumping and so on. When recording foley, certain techniques are used to create sounds that are often not present in real-life scenarios. Foley is added artificially to enhance the gameplay experience and to fully immerse the player in the game in terms of audio.

Foley is often deployed during the post-production stages to obtain a personalised sound that correlates with the aesthetic and mise-en-scène of the game. For example, during the foley stages for "Cave In" we wanted natural sounds that would be expected when entering a cave, along with the amount of spatial reverb that would be expected in one. In character-led first- and third-person games, foley greatly helps with putting the player in the game space and allowing the sound to embody the character the player is playing.

For the initial research we checked out some references on the internet on how certain foley sounds are recorded. For example, we learnt that the microphone should be placed about three feet in front of the foley artist when the scene is outdoors and six to ten feet away when the scene is indoors. We also found a technique for capturing walking sounds using a "heel/toe" action, where you roll your foot from heel to toe to create the illusion of forward movement.
Marblehead.net. (2015). The Art of Foley – Feet. Retrieved 26 August 2015, from http://www.marblehead.net/foley/feet.html

Isaza, M. (2015). Andrew Lackey Special: Foley Sessions for Games | Designing Sound. Designingsound.org. Retrieved 23 August 2015, from http://designingsound.org/2009/12/andrew-lackey-special-foley-sessions-for-games/

For the sound recording we split the task according to what sorts of microphones and recordings we wanted to use. For a full list of the microphones we used in the recording phase, here is a link to our production brief.

https://docs.google.com/document/d/1amYC7oOycCpEVBKySCoj1bxIOj3a4sjGySWA0KmQT8g/edit

Direct Recordings

Dynamic Recording (SM57)

Since most of the sounds for the game involved recording heavy transient hits such as falling rocks or metal rods being clanged, we decided to use a dynamic microphone in order not to damage the more sensitive condensers such as the AKG C414 or the Rode NT2A. Most of the recordings were captured in mono using close miking, 20-50 cm away from the sound source. The preamp gain was also rather high due to dynamic microphones being less sensitive.

DSC_0019

An example of the use of dynamic microphone recording was the pickaxe noise. To achieve the sound we used a metal rod from the foley box, hitting the rock pile in the post-production studio with varying velocities.

photo 1 (1)

Another use of the dynamic microphone was a rock smash sound used to emulate a huge boulder falling inside a cave. Several heavy rocks were placed into an empty container and thrown around. The box itself gave a very thick sound compared to throwing the rocks in free space. The sound was then reverberated, and artificial sub-bass was boosted using EQ to add body.

Since we did these mono recordings first, we did not have a guideline from the developers for how loud they wanted the audio files to be. We also did not know the audio capabilities of the Unity engine the game was being created in, so some troubleshooting occurred when most of our first recordings came back very soft and had to be cranked up. We remedied this issue with some dynamic processing, mainly compression and an adaptive limiter, to make sure the audio would not peak.

Screen Shot 2015-08-16 at 7.54.38 PM

We also got feedback that our recorded sounds were too soft, so the issue was resolved by compressing and limiting the audio to 0 dBFS so that, if needed, the loudness could be reduced within the Unity engine.
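The loudness fix can be sketched as plain peak normalisation in Python; this is not the adaptive limiter that was actually used, and the file name is a placeholder.

```python
# Normalise a foley asset so its peak sits just under 0 dBFS.
import numpy as np
import soundfile as sf

audio, sr = sf.read("pickaxe_hit.wav")        # placeholder file name

peak = np.max(np.abs(audio)) + 1e-9
target = 10 ** (-0.3 / 20)                    # leave ~0.3 dB of headroom
audio = audio * (target / peak)

sf.write("pickaxe_hit_normalised.wav", audio, sr)
```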

Condenser Recording (C414, NT2A)

For the finer, continuous sounds that were requested, we used a condenser microphone to capture a wider frequency range; since these sounds were not loud transient hits, it was appropriate to use them.

Quiet, continuous sounds were all captured using condensers because the sensitivity of these microphones is high enough to pick up tiny, delicate sounds. One use of the condensers was to capture fire crackling noises for the fires being dropped in-game. Small pieces of screwed-up tape were rustled in front of the microphones to emulate the crackling you would expect from an open fire.

 photo 5 (2)

Another interesting use of the condensers was to capture a chair squeak to emulate the sound of climbing a ladder. The sound was then compressed and EQed because the recording was rather quiet and could not be gained up much, since it picked up a lot of background noise and electrical hum from the circuitry.

IMG_1121

These were some other sounds that we recorded using condensers.

Stereo Recording (AKG C-451b)

DSC_0012

For stereo recording we used a pair of AKG C-451B pencil microphones. We tried several experimental placements but used a spaced pair to capture most of our sounds.

photo 4

We used an experimental way of recording miscellaneous noises such as footsteps in stereo. One method was placing the two microphones on the left and right sides of the foley board. I sat down and pitter-pattered on the far left and right sides of the board so that the microphone nearest my feet would capture more sound than the other, which made for an interesting, natural walking sound for the character's walk cycle.

DSC_0014 (1)

We also used the same method to record the jumping noises for the characters in-game. Since there is no physical sound for jumping, we created an artificial sound by swinging a rubber pipe between the two spaced microphones. Since the action of swinging is in a downward direction, we wanted to capture the sound from the initial up-thrust all the way to it dying down at the bottom, so the second microphone was tilted downwards to record the sound once the source was lower than its original position. We also layered a foot stomp at the end of the clip to signify the player landing.

Recording Dialogue

IMG_1122

Recording of the dialogue was mainly done in the Raven studio due to the limitations of the post-production studio.

  • The Raven provided a dead-sounding room, which was great for isolating vocals without being trapped in the tiny foley room, which we found had problems with sound reflecting off the glass doors.
  • The Raven provided ample space for the voice actors to move around and perform comfortably.
  • The lights in the post-production foley room gave off a very buzzy sound which was picked up as noise by the C414 we were using. We couldn't switch off the lights because the voice actors wanted to see each other whilst delivering lines.

The idea of recording dialogue came as an afterthought from the developers; the game was originally not meant to have any dialogue at all due to its sad and sombre tone. However, after we recorded dialogue with the talented voices of Joshua and Jordan anyway, without being asked, the developers quickly took to the idea and included it in their game.

 Screen Shot 2015-08-12 at 5.39.04 pm

Screen Shot 2015-08-12 at 9.09.48 PM

To record the dialogue for Cave In we used the AKG C414 condenser microphone set to a figure-of-eight polar pattern. We wanted figure-of-eight because we wanted the voice actors to face each other while talking, so that it felt more like a natural conversation, and it saved us the time of setting up two cardioid condensers.

Screen Shot 2015-08-12 at 5.30.59 pm

The dialogue was then split into two tracks, one for each character, and reverb was applied in post to emulate the space of being inside a cave. We also asked the game developers how they wanted the sounds to be formatted.

The assets were then shared on Google Drive once again, where the audio was renamed appropriately for ease of use by the developers.

Screen Shot 2015-08-12 at 5.40.57 pm

The dialogue assets can be heard here.

(Terrible hillbilly accent warning)

Field Recording (Zoom Recorder)

IMG_1085

We also implemented field recording into our workflow for sounds that could not be achieved indoors or within a studio workspace. An example was a glass smashing sound, recorded outside using a Zoom recorder with a cardioid polar pattern placed half a metre away from the sound source.

IMG_1082

The sound was then layered with an existing recording we made inside the studio of a bottle being scraped. This provided a very high-end metallic sound which, layered with the lower-frequency thud from the Zoom recording, made the glass breaking sound fuller. The glass shattering noise was further processed with a delay to elongate the sparkly glass sounds and make it more convincing. In-game, the sound was used for a lantern being crushed under a heavy rock, and the sound we composed achieved that brief.

Here is a video of the field recording we did.

https://drive.google.com/drive/folders/0B4rbpZY7giEcfk1SQXgtUHBRN1BqYWIwTnRpVFZiVGNZVkNhSGtFZWYtdjlrWnNzVHNMbW8

Cave In – Audio Blog Pt 1 – Introduction

Screen Shot 2015-08-12 at 9.48.30 PM

For our second major project we were contacted again by Anthony Hope, the developer of the previous game "Your Team", to create another game entitled "Cave In".

The game's premise involves two miners who are at the end of a winding cave, mining away, when a cave-in occurs. The person mining is blocked in and must dig their way through the small wall of rocks. The player with the light isn't blocked in, but as the lights are exploding one by one they must quickly escape before the lights run out, while leaving a trail of light and warnings for the miner to follow.

11738065_10200474323176902_6781812171935986318_n

We were given a brief, shared with us over Google Drive, which included base audio requirements and other in-house deadlines set for the rest of the team. The brief provided a basis for the main sounds the game needed most. It also included the non-diegetic background music the developers wanted, with reference tracks to help us when composing.

https://drive.google.com/folderview?id=0BxXA8Wv4GyDwfk9CM2cydzZPLUp6RU4tWTBidmJHQU4zaTJzbHhNNXE2YVBqZ1NTRVJUMWc&usp=sharing&tid=0BxXA8Wv4GyDwfjdPbWlzbXFIMWRoYURLWm45bFNIcmpvTV9JdnVfNE1mTTdaQVlndDhVUVU

PROJECT GROUP MEMBERS

  • Adib Hussin (Audio)
  • Joshua Graham (Audio)
  • Jordan Forrester (Audio)
  • Anthony Hope (Game Developer)
  • Jarred Gruss (Game Developer)
  • Benjamin Lovegrove (Game Developer)
  • Huxley Dowling (Programmer)
  • Lachlan Murray (Animation)

We created a pre-production plan for the microphones and studios we planned to use for our own recordings. For an in-depth look, the link to the document can be seen here.

https://docs.google.com/document/d/1amYC7oOycCpEVBKySCoj1bxIOj3a4sjGySWA0KmQT8g/edit

The main repository we used to share assets around our team was Google Drive. We also shared the drive with the other members of the game team so that it was easy for them to grab specific assets or view the documents we were following to get a general idea of our work schedules.

Screen Shot 2015-08-12 at 8.55.25 PM

Another form of contact we used for sharing assets and ideas was a Facebook page set up specifically for the project. Since the project had a larger team working on it, an easy and accessible private page was needed to accommodate the game developers, animators and the audio team.

Screen Shot 2015-08-12 at 8.13.32 PM

Audio Assets

For the audio assets needed to fill the game, we split the audio into three main sections corresponding to the type of audio.

player_sound

Foley/FX

  • Provide audio assets that can be played using Unity Engine’s triggering states.
  • Sounds would include short transient hits such as pickaxe noises, jumping noises or glass smashes.
  • Longer, continuous sounds would also be included, such as rocks falling, long fuse sounds or rain.

Soundtrack/Composition

  • Provide three pieces of non-diegetic music to use for the intro, gameplay and outro.
  • Appropriate music to signify the mood that correlates with the current situation of the game such as urgency or tranquility.

Dialogue

  • Provide sound bites that can be triggered via checkpoints within the Unity Engine.
  • Provide voices for the characters so they can portray their emotions and give useful in-game tips to the players.

Production Techniques – Parallel Compression, Haas & Reversing

Over the past few months I have been producing my own personal track, and during the mixing stages I have used different techniques to make the mix more coherent and dynamically appropriate for the genre. All of the processing was done in Logic Pro 9 using only the stock plugins.

Parallel Compression

Parallel compression, or New York compression, is a dynamic range compression technique used in the later stages of mixing to even out the dynamics in stereo percussion buses, electric bass or even vocals. In a nutshell, parallel compression is a form of upward compression, achieved by mixing an unprocessed 'dry' (or lightly compressed) signal with a heavily compressed version of the same signal. In the compressed copy, the highest transient peaks such as kicks and snares are brought down to reduce the dynamic range; blending this back in brings up the softer sounds, adding audible detail to smaller elements such as hi-hats and shakers.

I used parallel compression on my percussion to add an extra layer of thickness to the major transients and the backbone of the 4/4 groove. Here are 12 bars of my own percussion stem without parallel compression. The kick and snare are rather loud compared to the hats when the smaller percussion elements kick in.

Screen Shot 2015-08-10 at 午後3.10.21

For the first step, I duplicated the drum stems I already had and colour coded them for the parallel compression.

Screen Shot 2015-08-10 at 3.20.43 pm

For the heavily compressed track, I used very harsh brickwall compression settings. The knee and ratio are maxed to their respective limits so that any incoming signal is squashed as soon as possible, especially with the extreme ratio. The attack and release are relatively quick so that the main transients are processed quickly, since the track is mainly composed of fast percussive transients.

Screen Shot 2015-08-10 at 3.24.15 pm

After the compression I adjusted the wet signal against the dry signal and created an appropriate balance between the two.

Screen Shot 2015-08-10 at 3.41.09 pm

After applying parallel compression, not only did the other elements of the stem become more dynamically coherent, there were also minor changes to the envelopes of the percussion. The decay of the kicks and snares sounded louder and provided more sustain and body.
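For anyone wanting to try the idea outside of Logic, here is a rough Python sketch of the whole parallel chain: a crude high-ratio compressor on a copy of the drum stem, blended back with the dry signal. The parameters are illustrative, not the plugin settings shown above, and the file name is a placeholder.

```python
import numpy as np
import soundfile as sf

drums, sr = sf.read("drum_stem.wav")
if drums.ndim > 1:
    drums = drums.mean(axis=1)                # work in mono for simplicity

def compress(x, sr, threshold_db=-30, ratio=20, attack_ms=1, release_ms=50):
    """Very simple peak-envelope compressor (high ratio ≈ brickwall)."""
    atk = np.exp(-1 / (sr * attack_ms / 1000))
    rel = np.exp(-1 / (sr * release_ms / 1000))
    env = 0.0
    out = np.zeros_like(x)
    for i, s in enumerate(x):
        level = abs(s)
        coeff = atk if level > env else rel   # fast attack, slower release
        env = coeff * env + (1 - coeff) * level
        level_db = 20 * np.log10(env + 1e-9)
        over = max(0.0, level_db - threshold_db)
        gain_db = -over * (1 - 1 / ratio)     # gain reduction above threshold
        out[i] = s * 10 ** (gain_db / 20)
    return out

squashed = compress(drums, sr)
parallel = drums + 0.5 * squashed             # blend dry with the crushed copy
parallel /= np.max(np.abs(parallel)) + 1e-9
sf.write("drums_parallel.wav", parallel, sr)
```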

Haas Effect

The Haas effect, or precedence effect, is achieved by using a short time delay on a copy of a signal to give it a sense of stereo spaciousness.

It occurs when a sound is followed by a copy of itself separated by a sufficiently short time delay. By creating this artificial lag, listeners perceive a single fused auditory image whose spatial location is determined by which ear the sound reaches first; the later arrivals (within roughly 1-35 ms of the initial sound) give the perception of depth and spaciousness.

To achieve the Haas effect in Logic 9 I used the stock Sample Delay plugin. The plugin is straightforward to use and only features two controllable parameters, for delaying the left or right signal. The delay works in samples rather than milliseconds, but at 44.1 kHz the conversion is simply 1000 ms = 44100 samples. The trick was just to keep the delay short enough not to be heard as an echo, so precise millisecond values were not needed, only subjective choices that fit the track.

Screen Shot 2015-08-10 at 4.23.18 pm.

The vocals for the track before the Haas effect were very mono and sat comfortably in the centre. With the effect applied there is a sense of stereo width filling up the panorama quite well. It also helps glue the vocals to the synths, since both are wide, leaving the percussion to sit in the centre.
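The same trick can be sketched in Python by delaying one channel of a mono source by a few milliseconds; the 15 ms figure and file name are example choices, not the values used in the track.

```python
# Haas widening: duplicate a mono signal and delay one channel slightly.
import numpy as np
import soundfile as sf

vocal, sr = sf.read("vocal_stem.wav")
if vocal.ndim > 1:
    vocal = vocal.mean(axis=1)                # start from a mono source

delay_ms = 15                                 # keep it under ~35 ms so the copies fuse
delay = int(sr * delay_ms / 1000)             # e.g. 15 ms ≈ 661 samples at 44.1 kHz

left = vocal
right = np.concatenate((np.zeros(delay), vocal))[:len(vocal)]

haas = np.column_stack((left, right))
sf.write("vocal_haas.wav", haas, sr)
```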

Reversing

Reversing is a production technique commonly used to create tension and to give transients an artificial slow attack. Reversing can be applied to any audio source to create an interesting effect.

For example, reversing sounds such as white noise or vocal breaths can create a riser or down-riser to provide energy in a track, which is commonly used in high energy genres such as dance music. Another interesting use of reversing is back-masking, the idea of deliberately reversing phonetics to create hidden messages. Elements such as piano can also be reversed to create a swell or an interesting melodic effect with a trippy, uneasy sound.

Going back to risers, here I reversed a crash cymbal to create an artificial swell that builds tension and also acts as a transition into the next section.

Reversing in Logic can be done in the Sample Editor under the Functions tab. Reversing in Logic is destructive, so I duplicated the file and used the copy as my reversed signal. I then inserted the dry crash at the end of the reversed transient to create the effect of a cymbal being scraped.

Screen Shot 2015-08-24 at 5.19.46 pm
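The same reverse-and-append move can be sketched non-destructively in Python; the file name is a placeholder.

```python
# Reversed-crash swell: flip the sample, then append the dry hit so the
# swell resolves into the real transient.
import numpy as np
import soundfile as sf

crash, sr = sf.read("crash.wav")

reversed_crash = crash[::-1]                  # reverse along the time axis
swell = np.concatenate((reversed_crash, crash))

sf.write("crash_swell.wav", swell, sr)
```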

I've also reversed the toms, as well as the crash, for a more interesting effect; these are highlighted in red. The toms acted as a counterpart to the high frequency crash cymbal and provided a low frequency rumble which gradually increased in level. This gave a gradual build in tension and a somewhat dramatic introduction into the actual tom samples.

Screen Shot 2015-08-24 at 5.25.51 pm