The Importance of Sound Editing
Task 1
Why does sound need to be edited?
In TV and film, sound must be edited, treated and cleaned up after recording for multiple reasons. When recording audio in a large room or an outdoor area, there is a high risk of echoes disrupting the sound, and wind or heavy rain can dominate the recording and drown out the person speaking. The audio needs to be altered so that the voices are clear and take priority over any background noise that interrupts the scene.
Many films also rely on a process called ADR (Automated Dialogue Replacement), which has to be considered for multiple scenes in a single film. It may be needed for a number of reasons, one of the main ones being that the background, the ambience or the dialogue itself is too quiet to be rescued in editing. The actor(s) go into a studio with the sound crew and a screen, re-watch the scene, and re-record their dialogue and any other sounds they make in sync with the picture; this new recording replaces the original audio so the scene is clear for the eventual audience.
In a factual environment, such as an interview, there are also many reasons to edit the sound and dialogue. The interviewer or interviewee can repeatedly use filler sounds like “erm” or “uh” while thinking of a question to ask or an answer to give. It is all about keeping the audience’s attention: if people keep stuttering and slurring when speaking, the audience can lose interest, so the audio is edited to make the interview fluent and smooth and to keep the audience gripped on watching it.
In this video, Daisy Ridley uses the filler “erm” before she starts telling a story. This filler was left in to show that her reaction to Graham’s question is genuine and that they aren’t reading from scripts.
In a fictional environment, by contrast, sound is edited heavily. Acting is the performers’ career: they play a character and will inevitably trip over their words or make mistakes, which need to be edited out (and are often stored as “bloopers”). An actor will repeat the same line multiple times until the director finds the perfect take, and the production then continues with the next line or the next scene. Once filming has finished, the sound is edited in properly: the score is placed at the appropriate points in the running time, and sound effects are added using Foley so the audio matches what is happening on screen and sounds more realistic.
During this interview, you can hear the audience behind the cameras laughing, clapping and cheering. These sounds are always quite loud even though the audience sits behind the cameras, which suggests there are hidden microphones in the stands so the audience can be heard clearly despite never really being seen on the show. The audience sounds could also be mixed in after recording, so they can be made a bit clearer and set to the right volume and do not overpower the audio from Graham’s and his guests’ microphones.
In this scene from Ip Man (2008), music has been added to make the scene feel more tense, brutal and dark, showing that the main character is not himself at this point. Foley sounds have been added to convey the character’s power: a piece of meat is used for some of the hit sound effects, and when the last fighter is defeated the repeated blows sound like a hammer driving a nail into wood. In places the sound effects even sound somewhat cartoonish.
The reason the sound effects for each hit might sound like this is to imply the amount of power this character has, with the audio and music also conveying how angry he is: he is using these Japanese fighters as a form of training, stress release and revenge for what they did to his friend. Just as the fighting starts, and again with the last fighter, you can hear a loud, distinct scream from the main character, telling the audience that he has (physically and mentally) lost all patience and calmness, has tapped into his darker side, and is now on the offensive rather than the defensive.
All of the sounds heard throughout this scene are used to create a dramatic effect: the scene is very dark and dramatic, and the audio emphasizes that drama and darkness.
Task 2
Sound crews and audio engineers use a mixer to mix and master music for film, TV and so on. They will also use a visualizer so they can see the frequency content and pitch of the track they are creating. Many use iMacs for sound editing, as Macs are compatible with many pieces of industry software and hardware that help them produce the score and soundtrack for the project they are working on.
PC/Mac
In the industry, there is an ongoing debate between PC and Mac because both have their pros and cons. For instance, PCs have the advantage on price: they are much more affordable than Macs, and their components and specs are also cheaper. The advantage of Macs is that they are closer to the industry standard and are compatible with more industry-standard software, such as Logic Pro X.
Both Mac and PC can run sound editing software; some programs are shared between the two, while others will only run on one particular platform.
Sound editing software for Mac:
- Audacity
- WavePad
- OcenAudio
- PreSonus Studio One Prime
- Avid Pro Tools First
- GarageBand
- Reaper
- Adobe Audition
Sound editing software for PC:
- Adobe Audition
- Audacity
- OcenAudio
- Acoustica
- Amadeus Pro
- Fission
- Hindenburg Journalist
- Sound Forge Audio Studio 12
As we can see, Mac and PC each have their own sound manipulation software (Mac: GarageBand; PC: Acoustica), but they also share a lot of software that can be used on both platforms (Adobe Audition, Audacity, OcenAudio).
There are multiple sound editing techniques used in modern music and sound work. A few examples are:
“Keyframing Levels” – In media production, a keyframe is a point on a timeline that marks the beginning or end of a transition; keyframing levels means setting volume values at keyframes so the level changes automatically between them.
“Ambience/Room Tone” – Ambient sound (also called ambient audio, ambience, atmosphere, atmos or background noise) means the background sounds present in a scene or location. Ambient sound is very important in video and film work.
“Crossfades” – Making one picture or sound appear or be heard gradually as another disappears or becomes silent.
“Panning” – The spread of a monaural signal across a stereo or multi-channel sound field; it is critical to the makeup of the stereo image.
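Two of these techniques can be sketched in a few lines of code. This is a minimal, illustrative Python example (not tied to any particular editing package) showing a linear crossfade between two sample lists and a constant-power pan of a mono signal into left/right channels; the function names and parameters are my own, not from any real audio library.

```python
import math

def crossfade(a, b, fade_len):
    """Fade out `a` while fading in `b` over `fade_len` samples,
    then continue with the remainder of `b`."""
    out = list(a[:len(a) - fade_len])          # part of `a` before the fade
    for i in range(fade_len):
        t = i / fade_len                       # mix ratio: 0.0 -> ~1.0
        out.append(a[len(a) - fade_len + i] * (1 - t) + b[i] * t)
    out.extend(b[fade_len:])                   # part of `b` after the fade
    return out

def pan(mono, position):
    """Constant-power pan: position -1.0 = hard left, 0.0 = centre,
    +1.0 = hard right. Returns (left, right) channel lists."""
    angle = (position + 1) * math.pi / 4       # map [-1, 1] to [0, pi/2]
    left_gain, right_gain = math.cos(angle), math.sin(angle)
    return ([s * left_gain for s in mono],
            [s * right_gain for s in mono])
```

A constant-power pan is used rather than a simple linear one so the overall loudness stays roughly the same as the sound moves across the stereo image.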
All of these sound editing techniques can be noticed in many films and TV shows. A good example is the opening scene of “Saving Private Ryan”, where a grenade detonates near Captain Miller (Tom Hanks): his hearing becomes disorientated and there is a constant ringing, a symbol of the consequences of war and the struggle to survive. The ringing and muffled sound add to the effect and represent the realism of war by showing its mental as well as physical toll on a person who has had to experience it. The scene has been iconic since the film premiered in 1998, and this small moment with Captain Miller is a very good example of sound editing; the disorientated sound right after the explosion was arguably one of the biggest leaps in sound design since the lightsaber in Star Wars.
When editing the sound for a film, the sound crew will add either the Foley sounds or the music first. This is because they need to review the mix to see whether the music should be quietened when there is Foley or dialogue, making sounds quieter or louder depending on the situation on screen and the intensity of what the audience will be seeing.
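Lowering the music whenever dialogue or key effects are present is what mixers call “ducking”. A toy sketch of the idea (illustrative only; a real mix uses smoothed gain envelopes rather than this hard per-sample switch, and the names and threshold values here are made up for the example):

```python
def duck_music(music, dialog, threshold=0.1, duck_gain=0.3):
    """Reduce the music level wherever the dialogue track is active,
    so speech stays on top of the score."""
    out = []
    for m, d in zip(music, dialog):
        # If the dialogue sample is loud enough, drop the music to 30%.
        gain = duck_gain if abs(d) > threshold else 1.0
        out.append(m * gain)
    return out
```

The same principle applies whether the trigger is dialogue, thunder or a punch sound effect: the score steps back for a moment so the key sound reads clearly, then returns to full level.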
A perfect example of this procedure is the final encounter between Neo and Agent Smith in The Matrix Revolutions. In this fight scene the music shifts between loud and quiet when people are speaking or when key sound effects are used, e.g. the weather (rain and thunder) and the effects heard when a character takes a hit or blocks an attack. All three Matrix films share this trait: when something important happens, the music gets louder or quieter to make room for sound effects or Foley work, which adds to the realism and has more of an impact on the audience.