
This essay will explore the vital uses of sound in cinema, specifically analysing scenes from Spielberg, S. (Director). (1998). Saving Private Ryan [Film]. USA: Amblin Entertainment, and Stanton, A. (Director). (2008). WALL-E [Film]. USA: Walt Disney Pictures.



The use of sound in film has expanded considerably since technology first allowed filmmakers such as Eisenstein and Pudovkin to approach sound from a creative stance, leading to ever more innovative sound design. The highly regarded Japanese filmmaker Akira Kurosawa once stated, “The most exciting moment is the moment when I add the sound… At this moment, I tremble” (cited in Bordwell, Thompson and Smith, 2017, p. 265). The first example of the use of sound in film I will be examining is the Normandy landings scene at the beginning of Saving Private Ryan (1998).

I have chosen to examine this scene because I believe the removal of sound in this instance creates an extended sense of immersion in the setting. For the initial landing-boat scenes, I will be using Siegfried Kracauer’s (1997) distinction between two types of sound: synchronous and asynchronous. Synchronous sound corresponds to visual sources on screen, such as a character’s footsteps, while asynchronous sound comes from a source that is off-screen, for example the train whistle heard in place of the woman’s scream in Hitchcock’s The 39 Steps (1935).

Alongside this, I will also be referring to terminology formed by the French composer of experimental music Michel Chion in Audio-Vision: Sound on Screen (Chion, 1994, Columbia University Press).

As the landing boats approach the shore, the synchronous sound of the hull hitting the water and the spray showering the soldiers is chilling, and it does a successful job of establishing the scene alongside the visuals. This relentless weather is then juxtaposed with the faint drone of the engine as Captain Miller (Tom Hanks), his hands shaking, struggles to open his canteen. Accompanied by the silence of the other men, this further highlights the fear that he and all the others feel about what they are approaching. This juxtaposition of sound is successful in allowing the audience to empathise with the soldiers and understand their situation. The following 10-20 seconds are accompanied by the traumatic sounds of explosions and mortar fire surrounding the landing vehicle, followed by the spray of water raining down on the soldiers’ helmets.

This is a use of asynchronous sound, as the viewer cannot see the mortar shells hitting the water yet hears them strike it. The spray landing on the soldiers’ helmets is synchronous, as the sea water can be seen splashing down onto them. When partnered with the visual representations of fear on the soldiers’ faces, this use of sound is incredibly effective at building anticipation and tension about what the beach is going to be like. The next effective use of sound to amplify the horror and terror of the landings comes when the landing doors are lowered. For 10 seconds the viewer is exposed to the synchronous sound of machine-gun fire and bullets tearing through the soldiers. The fact that there is no accompanying score in this scene is unusual, as other major war films place an emphasis on music and score to support the narrative, such as Kubrick, S. (Director). (1987). Full Metal Jacket [Film]. USA: Natant, Harrier Films. The use of only diegetic and synchronous sound is effective at further immersing the viewer in the action and the gritty nature of the scene.

I also believe the absence of any sound bridges between the scenes allows the viewer to be thrown straight into the action rather than slowly introduced to it. There is also a use of sound perspective, evident in the sound of the German machine-gun posts at the top of the beach. When the soldiers try to exit the landing craft, the viewer hears only the whizz of bullets firing at and past them, yet when the point of view shifts to the machine guns, the same gunfire is incredibly loud, which emphasises the terror of the guns and the helplessness of the soldiers on the beach. Another interesting use of sound perspective occurs when the camera goes underwater: the sources of sound have not moved or changed, yet they are heard differently through the water. Again, with no sound bridge to introduce this, it feels like a stark contrast to the audible carnage that was just being portrayed on screen.

I believe this adds a further sense of immersion in the scene, as the audience is audibly placed in the same environment as the soldiers, promoting further empathy. The final example of sound I will be analysing from Saving Private Ryan (1998) is arguably one of the most emotive and impactful scenes of the whole film. As Captain Miller (Tom Hanks) makes his way out of the surf and onto the beach, the vast majority of the synchronous sound that was present fades out, leaving the remaining sound muffled.

It could be argued that this white-noise effect is both asynchronous and non-diegetic: the source could be a nearby explosion off-screen, making it asynchronous, but it could equally be an artistic representation of the fear felt by Captain Miller, making it non-diegetic. Using terminology formed by Chion (1994), the sound in this scene could be described as rendering, defined as using sounds in film to convey or express the feelings associated with the situation on screen. I believe this theory is better suited to analysing the scene, as it is arguably more artistic and accounts for the use of sound more accurately. Even though sound makes up a significant portion of the storytelling in cinema, I also believe that when it is removed it can carry the same significance and still convey a meaningful story.


The second example of sound in film I will be analysing to demonstrate its significance is WALL-E (2008). I have decided to analyse the use of sound in this film because, during production, sound was seen as an incredibly important method of storytelling: there is little dialogue, so the feelings and emotions of the characters had to be amplified through sound design. As it is an animation, there were no set recordings, ambience or wild tracks to work from, meaning that all the sounds in the film had to be made from scratch. An example of the effectiveness of WALL-E’s (Ben Burtt) sound is when EVE (Elissa Knight) is presented with the small flower and then goes into standby mode.

The synchronous dialogue, although not recorded on set, still has to put across a strong portrayal of the stress and confusion felt by WALL-E (Ben Burtt). Furthermore, as he is not a human character, this demonstrates the importance of the sounds he makes in characterising his emotions. The use of music in WALL-E (2008) is also an effective use of sound to assist the narrative. In their book, Corrigan and White explain how it violates the ‘realist effect’ of the film yet we accept it, hence the paradox: “much of what is valued in classical cinema – verisimilitude, cause-and-effect relationships – is completely ignored in even the most admired examples of film music” (Corrigan and White, 2009, p. 205). This highlights that music plays a large role in film, as it can help set the mood of a scene, or even completely change it within seconds. An example within the film to demonstrate this is when WALL-E (Ben Burtt) returns to his trailer to deposit his findings and turns on the television.

This is an example of diegetic music, as he interacts with the source to start the music and can hear it. It could be argued that the music then develops into an underscore for the next few shots, as it becomes much quieter than before; however, because the source is within the narrative, it should remain classed as diegetic music. Contrastingly, an example of non-diegetic music that is still significant to telling the story is when WALL-E (Ben Burtt) takes EVE (Elissa Knight) through a series of date-like scenarios. The music is quite