Monday, 9 December 2013

White Balance and Colour Temperature

White Balance

White Balance is the process of removing colour casts from a shot, so that objects which appear white to our eyes (which are far better at judging colour than a camera) are also rendered white in the final image. The colour temperature (see below) of the light source must be taken into account. Unlike our eyes, digital cameras often struggle when applying auto white balance, and can instead create unsightly blue, orange or green colour casts. A proper understanding of digital white balance helps filmmakers avoid these casts, improving their shots and allowing them to work under a wider range of lighting conditions.

White Balance has been applied to create better lighting conditions.
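To make the idea concrete, here is a minimal sketch (in Python, purely illustrative; real cameras use far more sophisticated methods) of the "grey-world" assumption that many auto white balance algorithms start from: assume the scene should average out to neutral grey, and scale each colour channel accordingly.

```python
# Grey-world auto white balance: assume the scene's average colour
# should be a neutral grey, then scale each channel so its mean
# matches that grey level.
def grey_world_balance(pixels):
    """pixels: list of (r, g, b) tuples with values in 0.0-1.0."""
    n = len(pixels)
    means = [sum(p[c] for p in pixels) / n for c in range(3)]
    grey = sum(means) / 3                # target neutral level
    gains = [grey / m for m in means]    # per-channel correction gains
    return [tuple(min(1.0, p[c] * gains[c]) for c in range(3))
            for p in pixels]

# A scene with a warm (tungsten-style) orange cast: red lifted, blue low.
cast = [(0.9, 0.6, 0.3), (0.6, 0.4, 0.2), (0.3, 0.2, 0.1)]
balanced = grey_world_balance(cast)  # each pixel comes out neutral grey
```

Here the blue channel is boosted and the red channel pulled down, which is essentially what a "tungsten" white balance preset does to cancel an orange cast.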

Colour Temperature

Understanding colour temperature is the key to perfecting the lighting in a shot. Each light source has its own individual colour, or colour temperature, which can vary from red to blue. For example, candles and sunsets give off light that is closer to red, lending a shot a warm look, while a clear blue sky gives a shot a cool look. Cooler colours like blue generally have colour temperatures over 7000K, whereas warmer colours like red and orange sit around the 2000K mark. When setting your camera's white balance manually, you are given a number of pre-set colour temperature options, or you can customise your own settings.

A scale measuring the temperatures of different light sources.
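These ballpark figures can be collected into a small lookup table. The preset values below are typical (exact numbers vary between camera makers), and the warm/cool cut-offs in the helper are purely illustrative:

```python
# Approximate colour temperatures for common white balance presets.
# Exact values vary by manufacturer; these are typical ballpark figures.
PRESETS = {
    "candlelight": 1900,   # very warm, orange light
    "tungsten": 3200,      # standard studio lamp
    "fluorescent": 4000,
    "daylight": 5600,      # midday sun
    "cloudy": 6500,
    "shade": 7500,         # very cool, blue light
}

def describe(kelvin):
    # Counter-intuitively, LOWER Kelvin values look warmer (red/orange)
    # and HIGHER values look cooler (blue). Thresholds are illustrative.
    if kelvin < 3500:
        return "warm"
    if kelvin > 6000:
        return "cool"
    return "neutral"
```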

Colour Gel

Colour gels are transparent, coloured filters used in film to colour light or to apply colour correction. Modern gels are thin sheets of polyester or polycarbonate. These are placed in front of a lighting fixture, in the path of the beam, applying a filter of the selected colour over the scene being filmed. With prolonged use under hot fixtures, however, a gel's colours can fade, or the gel can even melt.
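To a first approximation, a gel acts as a per-channel multiplier on the light passing through it. The sketch below is illustrative only (the transmission values are made up, not measured from a real gel), showing white light picking up a warm cast from an orange, CTO-style gel:

```python
# A colour gel filters light: each RGB component of the source is
# multiplied by the gel's transmission in that channel.
def apply_gel(light_rgb, gel_transmission):
    return tuple(l * t for l, t in zip(light_rgb, gel_transmission))

white_light = (1.0, 1.0, 1.0)
orange_gel = (1.0, 0.6, 0.3)   # passes most red, some green, little blue
warmed = apply_gel(white_light, orange_gel)
```

Stacking a second gel is just another multiplication, which is also why heavily saturated gels cut so much light: every channel can only lose energy, never gain it.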

Lighting Fixtures

Lighting fixtures, such as Arri lights, are electrical devices used to create artificial light by means of the electric lamp inside the fixture. All fixtures like this have a light socket that holds the lamp in place while still allowing for its replacement, and a switch to control the light; on many fixtures the intensity can also be adjusted. Reflectors can be used to direct the beam.
An example of an Arri light in action.

Monday, 2 December 2013

Camera Angles - Instructional Video

Instructional Video



As a group, we put together this instructional video to practically demonstrate the different camera angles: High Angle, Low Angle, Eye Level, Bird's Eye View, Worm's Eye View and Canted Angle shots. We also attempted to demonstrate a Crane Shot and a Trombone Effect. Each of us took a turn both starring in the video and operating the camera, so that we all got some practice at filming these various shots. The video was meant to show not just how each of these angles/shots is filmed, but the effect each one has. The Low Angle, for example, creates the illusion that the actor being filmed is much taller and more imposing than they really are, while the High Angle makes an actor appear smaller and more vulnerable.

Tuesday, 26 November 2013

Camera Angles - Analysis

Camera Angles

Low Angles
A Low Angle shot in Nicolas Winding Refn's Drive (2011)

In this scene from Nicolas Winding Refn's Drive, a fight has broken out between Ryan Gosling's character and another man. The scene follows the struggle between the two, until Gosling's character eventually overpowers his attacker, knocking him to the ground and crushing his skull with his boot. The low angle shot used here shows how Gosling's character has overpowered his enemy, giving him a very strong and intimidating image.

A Low Angle shot from Terrence Malick's The Tree of Life (2011).
Another low angle shot is used in this scene from Terrence Malick's The Tree of Life, as we see a character reaching out to the figure above them. As viewers, we are put in this character's perspective (as though we are reaching out with them), and the mysterious figure above is made to look taller than they may actually be, turning them into a sort of angelic figure who is there to save our character.

High Angle


A High Angled shot from Alfred Hitchcock's Psycho (1960).
In Psycho, Alfred Hitchcock uses a high angled shot during the scene in which Arbogast searches Norman Bates' home, and eventually comes across Mother. The camera tracks Arbogast from a high angle as he climbs the stairs, suggesting that something is lurking around the corner and placing the audience in the character's shoes. Like Arbogast himself, we cannot know what kind of evil presence could be looming just out of sight, building tension until the shocking appearance of the killer.

Eye Level


An Eye Level shot from George Lucas' The Phantom Menace (1999).
This is a common angle in filming. The standard eye-level shot allows audiences to sit comfortably within a scene, as opposed to the purposefully disturbing and unsettling effect that shots like the Worm's Eye and Oblique Angles can create. In this scene from George Lucas' Star Wars prequel The Phantom Menace, a lightsaber duel commences between three characters. The eye-level shot lets audiences keep track of where the characters are positioned within the duel; more complicated shots could easily disorient them.

Bird's Eye View


A Bird's Eye View shot from Stanley Kubrick's The Shining (1980).
In this first shot from Stanley Kubrick's The Shining, a bird's eye view angle is used when Jack Torrance watches Wendy and Danny walking through the hedge maze. Not only does this angle show the audience that these characters are being watched, and give us an idea of the enormity of the hedge maze (setting up the film's climax), but the way the shot zooms in creates the feeling that some sinister force is closing in on Wendy and Danny (the film's chilling soundtrack adds to this frightening feeling).

Worm's Eye Angle


A Worm's Eye View shot from Stanley Kubrick's The Shining (1980).
In this second shot from Stanley Kubrick's The Shining, a Worm's Eye view angle is used (the complete opposite of the angle above) as Jack, having completely lost his mind, taunts Wendy from the other side of the pantry door. This type of shot is used because Jack's behaviour is supposed to disturb and frighten us as an audience. The angle is fairly uncommon, and can potentially confuse and even discomfort viewers. Like many of the shots used in this film, it's something we've never seen before, and adds to the idea that we don't fully understand what's happening in the Overlook Hotel. This is an appropriate effect, as it matches the terror now felt by Wendy as she watches her husband lose his mind.


Oblique Angle


An Oblique Angle from Danny Boyle's Trance (2013).
A shot that is slightly tilted is known as a canted or oblique angle shot (also called a Dutch Angle). The angle is popular with director Danny Boyle, and can be seen many times in his film Trance. In this scene, the camera suddenly shifts to an oblique angle as James McAvoy's character makes a discovery that will have a huge effect on the rest of the film. The shift in camera angle signifies the shift in the film's plot after this discovery. The canted angle, which will appear quite abnormal to viewers, also suggests that something isn't quite right. This is appropriate, as it is later revealed that the entire scene is not real, and is all in the character's head. The use of this angle is a clue that none of this is normal. The Dutch Angle is also very popular in film noir.

Tuesday, 19 November 2013

Music Video - Production Journal

Production Journal

Attached below is the final version of our music video, which uses the track 'I Miss You' by Blink-182.


Pre-Production

Dates: 30th September and 1st October

After studying many different music videos, we were each given the task of generating at least three of our own ideas for a music video of our own. We chose our own track and thought about what visuals could accompany the song, keeping in mind what makes a good music video.
To create an idea that was appropriate for the accompanying track, we had to listen carefully and analyse the song's lyrics. This gave us a better understanding of the song's meaning, and helped us decide on the visuals we could film and the actual plot of our music video.

Dates: 7th and 8th October

When we had finalised our idea for a music video, we pitched it to the rest of the class in a PowerPoint presentation that summarised the video (plot, shots etc.).
Finally, in our production groups, we decided which of the pitched music videos we would work as a group to produce. Once we had all agreed on an idea for a music video, we began pre-production.

Dates: 14th and 15th, 21st and 22nd October

Our tasks in pre-production included creating storyboards and a shot list for our music video. We would also need to create risk assessments, a contingency plan, call sheets and a summary of our plan for filming (where we were filming, when and why). We needed storyboards and a shot list so that, when we finally went out filming, we knew what shots we needed and how they fit into the video, instead of using our filming time to decide this, meaning our time off-site was much more productive.

A risk assessment was required to ensure that we would be filming in an area that was safe for both us and the equipment. We needed a contingency plan so that, if anything were to interrupt our filming (bad weather, absences etc.), we would have a backup ready that would allow us to stay productive during these sessions (e.g. using the time to edit instead of film).

Finally, we needed a call sheet so that we were able to get in contact with each other at any point whilst filming, and so that we had a schedule for us to aim to follow.
Once we had all of these things, we were able to book out the equipment we needed: a DSLR, a Tripod and an SD Card for our recorded footage. Then, we were ready to go off-site and film.

Production

Dates: 4th and 5th November (Full Days)

In the first week of November, we had completed all of our planning and were ready to film. This was when we encountered our first problem, as two members of our group were absent, meaning we would not be able to go off-site and film, as we were shorthanded. This, however, did give us some extra time to plan our music video. Luckily, one member of our group showed up that afternoon. We were also able to find someone who was prepared to volunteer as an actor for our music video. We then had enough crew members, the planning and the equipment to allow us to film.

Our first stop was Castle Gardens, where we filmed the majority of the music video. It was a great place to film as it was fairly quiet, so we weren't often interrupted by passers-by. Trees provided us with some shade and, had it rained, we were near shelter at all times. Luckily, the weather was dry for most of our filming, and the sun was bright, providing perfect lighting for our music video.
Most of the shots we filmed were of our actor, but we also wanted some attractive shots of the location, which the nice weather helped with, as well as some of the city life (passers-by, cars, buildings etc.). As part of our contingency plan, we also filmed some extra shots across York as backup, in case we lost any footage to technical difficulties or lost the SD card itself. These would also be useful if we reached post-production and discovered that we hadn't filmed enough shots for the length of our music video.

One of the only problems we encountered when filming was an issue with continuity. On our second day, we continued filming at Castle Gardens, but the weather was drastically different: that day was dark and cloudy, whilst the day before had been sunny and bright. If we filmed, the weather would vary drastically across our footage, creating a continuity problem in our music video. With no other time to film, we had no choice but to continue and use this footage. Luckily, this was the only major problem we faced during production.

Post-Production

Dates: 11th and 12th, 18th and 19th November

Once we had wrapped up filming, we would need to edit our footage using FinalCutPro, and turn it into the Music Video we wanted it to be. We would do this in Post-Production. We would start by logging and transferring all of our captured footage. This included the footage a member of our group had filmed at home with our actor, as this was a scene that required a location with a garage, meaning one of us would have to take a camera home to film this scene in their own time.

We had filmed more than enough footage for the length of our music video, meaning there were even some shots that we didn't need to use. We had to trim each shot down to an appropriate size to allow it to fit in the music video. Some of the shots were too long, but some also had mistakes in them that we only spotted during post-production (e.g. a crew member in the background).

Once our shots were fit to the music, we added our credits (Director: George Bartlett etc.), exported the video and then uploaded the completed Music Video to YouTube.

Monday, 18 November 2013

Unit 21: Post-Production

Post-Production

This is a step-by-step guide to the workflow behind the post-production process.

Folder

The first step in post-production is to decide where the media will be stored. This means you will be required to create a folder on the drive you are working on (either internal or external).
You can create a new folder in your area using Finder by selecting New Folder from the File menu.
You should also make sure that any additional files that have not been captured through log and transfer have been included in this new folder. Otherwise, these files could be lost. These files could be taken from a DSLR, external audio recorder or downloaded from the internet.

Scratch Disks

You must remember to set your scratch disks in Final Cut Pro, as this is an important part of the editing process. The scratch disk dictates where any captured footage (e.g. from log and transfer) will be stored, along with render files, waveform cache files and autosaves.

Scratch disks can be set under System Settings in Final Cut Pro.
Project and Sequence

The sequence is found on the timeline: it is the series of video/audio clips you are currently working on. You can have multiple sequences within a single project, which can be useful when creating rushes to preview and review footage.

A second sequence can be added by selecting File, New and Sequence.
Exporting Footage

In some cases, you may need your files in a particular format when exporting footage. This may be because the client (the viewer) wants the video for mobile content, DVD or web use, or even uncompressed, which allows for higher quality viewing.


Sequences can be exported by selecting File, Export and then QuickTime Movie.
When exporting footage, Final Cut Pro offers two options:
  1. QuickTime Movie - A high quality compressed version of your current sequence in a .MOV file.
  2. QuickTime Conversion - This allows you to choose from multiple formats, the resolution of the exported version, and the quality of the finished product.


Sunday, 17 November 2013

Soundtrack Production in Film - Analysis

Sound Mixing, Panning and Technicality

Mixing

Audio mixing is the process in which multiple sounds are combined into one or more channels. During the process, the sounds' frequencies, signal levels and dynamics can all be manipulated, and effects (e.g. reverb) can be added. The purpose of audio mixing is to create a sound that is more appealing to viewers and listeners.

Some good examples of audio mixing can be heard throughout Tom Hooper's adaptation of the stage musical Les Miserables (2012). Because this is a musical, sound mixing is used to blend the movie's score, the actors' singing, dialogue and other ambient sounds.

In one example, a character is singing in the rain. This is a pivotal scene in the movie, and it is very important that the song's lyrics are heard, as the character's emotions are conveyed through the song. Audio mixing ensures that the character's voice (and therefore the lyrics) can be heard over the ambient sound of rainfall, by adjusting the levels of the different sounds appropriately.



In another example from Tom Hooper's Les Miserables, audio mixing is used to balance voices so that the stars of the movie can be heard when they should be. In this example, Anne Hathaway is singing with a whole choir, but her voice can be distinctly heard over the others, even though they still sing together and at an appropriate volume (they sound neither too loud nor too quiet).


In both examples, sound mixing is used to create a sound that is much more appealing to audiences and more appropriate to the scene's context.
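The level-balancing described in these examples can be sketched as a toy digital mix. This is purely illustrative (the gain values are invented, not taken from the film's actual mix): each source is scaled by its fader gain, the samples are summed, and the result is clipped to the legal range.

```python
# A bare-bones digital mix: scale each track by its gain (fader level),
# sum the samples, and hard-clip the result to the legal -1.0..1.0 range.
def mix(tracks, gains):
    """tracks: equal-length lists of samples in -1.0..1.0."""
    out = []
    for i in range(len(tracks[0])):
        s = sum(t[i] * g for t, g in zip(tracks, gains))
        out.append(max(-1.0, min(1.0, s)))
    return out

vocals = [0.5, -0.5, 0.5, -0.5]
rain = [0.4, 0.4, -0.4, -0.4]
# Push the vocal up and pull the rain down so the lyrics stay intelligible.
mixed = mix([vocals, rain], [1.0, 0.25])
```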

Panning

Audio panning is used to create the illusion that a sound source (e.g. the image on the screen) is moving from one side to the other. Panning can also be used to create a dual mono signal. This is called "panning up", and means that both the left and right channels are sent equally to the left and right outputs of the mixer.

An example of Audio panning can be seen in Edgar Wright's Shaun of the Dead (2004). During the scene linked below, in which Shaun walks to his local newsagents, oblivious to the zombie apocalypse happening around him, somebody runs past Shaun.

Audio panning is used to create the illusion of the character getting nearer to Shaun, passing him, and then disappearing into the distance. The sound of his footsteps gets closer to Shaun, then more distant as he runs away.



A "control knob" is used to distribute the sound at a particular position. For example, a knob pointed at the 8 o'clock position would mean that sound only appears on the left channel, and all the energy goes exclusively to the left (meaning it will be played through the left speaker).

In this case, to create the illusion of the character running past, the control knob would be moved slowly to the right (towards the 4 o'clock position), making the sound appear in the right channel. After panning, when the whole sound is heard, it will seem to move from left to right in one smooth stroke.
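The knob movement described above can be modelled with an equal-power pan law, a common approach in mixers, sketched here with illustrative values. Pan runs from -1.0 (hard left, roughly the 8 o'clock position) to +1.0 (hard right, the 4 o'clock position); the sine/cosine curve keeps the perceived loudness roughly constant mid-sweep.

```python
import math

# Equal-power pan law: map pan in -1.0..+1.0 to an angle in 0..pi/2,
# then use cosine for the left gain and sine for the right gain so
# that left**2 + right**2 == 1 at every pan position.
def pan_gains(pan):
    angle = (pan + 1.0) * math.pi / 4
    return math.cos(angle), math.sin(angle)

# Sweeping pan from -1.0 to +1.0 over the runner's footsteps would move
# the sound smoothly from the left speaker to the right.
hard_left = pan_gains(-1.0)   # all energy to the left channel
centre = pan_gains(0.0)       # roughly equal energy on both sides
hard_right = pan_gains(1.0)   # all energy to the right channel
```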

Tuesday, 12 November 2013

Unit 38: Foley Sound (Shaun of the Dead)

Soundtrack Production



We have been tasked with producing a soundtrack using Foley Sound for a clip of our choice.

Time Codes

Listed below are all the sounds we will need to produce for this clip from Shaun of the Dead.

00:00 - 00:20 Footsteps on pavement

00:01 Shaun trips on curb

00:05 - 00:08 Shaun yawns

00:07 - 00:09 Footsteps of runner on pavement

00:20 - 00:40 Footsteps on floor tiles

00:26 Shaun opens freezer

00:28 Shaun picks up can of Coke

00:31 Shaun puts can of Coke back

00:32 Shaun picks up can of Diet Coke

00:34 Shaun closes freezer

00:35 Shaun slips on bloody tile

00:40 Shaun opens freezer

00:44 Shaun closes freezer

00:44 Shaun sighs

Now we need to decide how to recreate these sounds through Foley Sound.

Sounds like footsteps, yawns and sighs are ones we can easily create ourselves, but we will have to find some way of creating the more complicated sounds, like the slip, the cans and the freezers.

We also need to choose some music to accompany the clip. This music should be slow, eerie and spooky. It should create an unsettling atmosphere and give the idea that something isn't right.

We will have to use audio panning when creating our Foley sound. From 00:07 - 00:09, when the runner passes Shaun, we want his steps to sound closer and closer, then more and more distant as he comes and goes. This will be difficult but, through panning, we can create this effect. This will mean the sound matches the visuals in the clip and should be more realistic.

Our Finished Product

http://youtu.be/jfNktIcTX74

Evaluation

After being tasked with producing a soundtrack for this clip, I rewatched the full scene from Shaun of the Dead that the clip had been taken from. I took a closer look at the clip, noting down the sounds that could be heard (car alarms, music in the shop etc.) and the visuals on screen, as well as when during the clip each action occurred and how long it lasted. This meant that, when it came to producing these sound effects through Foley art, we knew which sounds we needed to record and how long in seconds each should be. Hearing the sound effects used in the original clip also inspired ideas for the sound effects we produced using Foley.

Next, we had to actually create these sound effects. We worked together to produce these sounds ourselves. Most of these were fairly simple as sounds such as running, yawning and doors opening/closing were all sounds we could easily recreate ourselves. Some sounds, however, were more complicated. One sound we struggled with was the sound of Shaun slipping on the bloody tiles. To produce this sound we needed to make sure we were wearing the right shoes on the right material. When we found the perfect combination, we were able to produce a fitting sound effect.

It was after recording our sounds, whilst editing, that we made use of audio panning. This effect was required at the point in the clip when a runner quickly passes Shaun from the right of the screen to the left. His footsteps needed to be audible before the runner was actually seen, which we achieved by adjusting the audio clip's levels, so the runner can be heard before appearing on screen. Also, by panning the audio from right to left, we created the effect of the runner coming closer and closer, passing Shaun, and then becoming more and more distant.

Overall, the sounds we produced for our clip were appropriate and were edited (cut shorter, made quieter etc.) to fit perfectly with the visuals. We also found a soundtrack to accompany the clip: a slow, eerie piece of music, perfect for the creepy atmosphere of the clip from Shaun of the Dead. This soundtrack and the sound effects we produced worked together to create an eerie, unsettling atmosphere that suited the clip and added another layer of creepiness.

Monday, 11 November 2013

Unit 21: Cables

Cables

Composite Video

Composite video is an analogue signal that carries standard definition video (typically at 480i or 576i resolution). It is used by the standard broadcast formats NTSC, PAL and SECAM. These cables carry the whole picture in a single signal, unlike S-Video or component.

S-Video

This is another analogue cable that carries standard definition video. S-Video separates the luminance (brightness) and chrominance (colour) signals, achieving a better image than composite, but a lower resolution than component video. Each signal pin has its own ground pin.

Component Video

Component video distributes the signal three ways, for example as red, green and blue (RGB). This allows for full HD resolution. Component video cables do not carry audio, so they must be paired with audio cables.

VGA (Video Graphics Array)

This cable is typically used for computer monitors and allows for connections to video cards, HDTVs, Laptops, Netbooks and an array of other equipment. It allows for higher resolutions, ranging from 640x400 all the way up to 2048x1536. VGA cables also do not carry audio.

DVI Cable

This is a digital cable, meaning there is no loss in quality. It is often found on HDTVs and Video cards. There are three main types of DVI cables. These are: DVI-D, DVI-A and DVI-I. It allows for resolutions up to 3840x2400. DVI cables also do not carry audio.

HDMI

HDMI cables are used on many different devices, including computers, cameras, monitors and Blu-ray players. A single HDMI cable carries video and audio digitally, meaning there is no loss of signal quality. Version 1.0 allowed for 1920x1080p, whilst Version 1.4 added support for Ethernet, 3D and 3840x2160p at 30fps. Version 2.0 supports 3840x2160p at 60fps.

RCA Connector

RCA cables are used to carry video signals, and can also carry audio. They are most commonly used for stereo equipment, and are still found on many devices even after the digital switchover; for example, VHS and DVD players and TVs still use RCA connections. However, quality is lost over distance, and RCA cables are susceptible to hum and noise.

1/4" jack and 1/8" jack (3.5mm)

The 1/4" jack is used for audio recording equipment, stereo equipment, and guitar and bass leads. The 3.5mm (1/8") jack is found on phones and DSLRs (these cables are intended for short runs).
XLR Connector
This is the most commonly used connector for microphones. Due to the design of the connector, the microphone won't disconnect of its own accord.

Tuesday, 22 October 2013

Sound Production Animation - Evaluation

Evaluation

We were given the task of producing a soundtrack for a silent animation using Foley sound. We had to create our own sounds appropriate for the animation we chose (Princess Mononoke), and add them in editing. We also chose an appropriate soundtrack (Prelude and Action by Kevin MacLeod) to accompany the video.

What We Did Well

The foley sounds we created were fairly simple (blowing through a pen lid for the effect of arrows zipping through the air, tapping against wood for the effect of the horse's hooves etc.) but worked well when synchronised with the visuals as they were timed perfectly and gave the illusion of being the actual sound.

The soundtrack we chose to accompany the visuals was a piece of music called Prelude and Action by Kevin MacLeod. This is a very fast-paced piece of music that uses drums and woodwind instruments, which fits the tone of the visuals. Because of the location and time period the animation is set in, it wouldn't be appropriate to use music featuring instruments like electric guitars, synthesisers or other modern electronic effects, as these wouldn't work alongside this clip.

The audio we recorded was synchronised well with the visuals (the sound effects of the arrows were placed exactly when the characters fired their bows). Our aim was to sync the recorded sounds with the visuals so perfectly that viewers wouldn't be able to tell these were Foley sounds rather than the actual sounds of horses' hooves, arrows being fired or swords clashing.

What Could Have Been Improved

One problem with our final video was that there were some sounds we could not produce ourselves. For example, we could not find a sound to match the visual of one of the soldiers slashing a woman with his sword. Although we created the sound of the sword swinging through the air, we could not find a way to mimic the sound of the blade cutting her. This would be a complicated sound to create ourselves. Given more time, we might have come up with some creative way of producing it, but we ended up needing that time to edit our video. This is something we would allow time to improve upon if we could repeat the task.


Monday, 21 October 2013

Importing, Exporting and Editing

Importing, Exporting and Editing

Importing a file
By clicking 'File' and selecting the option 'Import', we can then import 'Files', allowing us to select files from the computer that we can import into Final Cut Pro for editing.
Rendering a file
Before it can be viewed and edited in Final Cut Pro, a file requires rendering. Depending on the length of the clip this is usually a quick process, after which the file can be edited.
Cutting a clip (Before)
By selecting the 'Razor Blade Tool', and clicking at a particular point in the clip, a clip can be cut. This means that one continuous clip can be cut and separated into two or more, making the editing process much easier.
Cutting a clip (After)
Now the clip has been cut. Using the 'Razor Blade Tool', the clip has been cut halfway through, turning the one clip into two separate clips. This allows for one section of the whole file to be edited without affecting the other.
Cropping a clip
If a clip is unnecessarily long and requires shortening, this can be done using the cropping tool. Simply click the end or beginning of the clip and drag it to the point where you want the clip to start or finish.
Adding effects (fade)
To allow for smoother transitions between clips, effects can be added. In this example, I have added a fade effect between two clips to smooth the transition. This was done by clicking the 'Effects' tab and looking in the 'Video Transitions' folder.
Exporting a file
A file can be exported by selecting 'File' and 'Export'. I exported it as a 'QuickTime Movie', where the project can be played back.

Monday, 14 October 2013

Production Gear

How to use Production Gear

{This is a link to our instructional video titled 'How To Use Production Gear': http://www.youtube.com/watch?v=_wsm2q5vvmg}

Our task was to create an instructional video that clearly demonstrated to the audience how to use the different pieces of equipment associated with audio and video production (microphone, camera etc.). We explained and demonstrated how each piece of production gear was set up safely and properly, how each was used effectively, and why they specifically were used. We also made sure to emphasise the importance of each piece of equipment by showing the problems we would encounter if just one was taken away. For example, at 1:27 we shut off the lights, and the actor in the video couldn't be seen in the dark, showing how difficult it would be to film without the proper lighting.

What We Did Well

We accomplished our task by clearly demonstrating for the audience how each piece of production gear is used and what their purpose is. For some of the more complicated pieces of equipment like the camera, we used close ups when setting it up as there are many small buttons and switches that cannot be seen from a normal distance.

One of the greatest strengths of our instructional video was the format. The actor (Sam) would introduce a piece of equipment and, step-by-step, would guide the audience through how each one was used and what its main purposes were. The transitions between each demonstration were also smooth and kept the video well-paced. I passed equipment to the actor from out of the shot so that there were no silent pauses and the video's rhythm was never broken. Anytime that there were pauses, we were able to fix this in post-production by adding transitions such as fade in, swipe and dissolve.

What Could Have Been Improved

There were a few problems with the instructional video that we didn't notice until we had uploaded it to YouTube, and that we would improve upon if we were to repeat the task. For example, in the final video, because of the music we used to accompany the video, the actor can sometimes be difficult to hear, as the music is slightly too loud. If we were to repeat the task, we would make sure the actor's voice could still be heard over the music, as the instructions given throughout the video are more important than the video's soundtrack and therefore it is more important that they are heard.

The video also ends quite abruptly which could throw off some audiences as the rest of the video is fairly well-paced. Instead of going straight from the final demonstration to fading out, we could have come up with a better conclusion to our video in which the actor would have thanked the viewer for watching the video and said goodbye, as this would have appeared as a lot less rushed and abrupt than our ending to the final video.

How We Worked As A Group

I believe we worked consistently well as a group throughout the project. Our director had a clear idea of what he wanted to film, but we all had our own input and helped shape the video into a final version that we were very happy with. Our cameramen worked efficiently, and one member of the group also helped write a script for our actor. Our actor was enthusiastic throughout the project and was always open to new ideas to act out. As a group, we worked together to edit the video.

I operated the microphone, as well as assisting the actor by passing him the pieces of equipment he would then demonstrate to camera. I also helped with the editing process, as I was familiar with the software, and had some input into the soundtrack we chose to accompany the video.


Tuesday, 24 September 2013

Unit 38: Soundtrack Production for Animation

Using Foley Sound
Plan

These are the noises that require Foley sound. They are too difficult and/or complicated to create live on set:

Horses' hooves and other horse noises, bows and arrows, shouting and screaming, characters falling over and being hit by arrows

Props

We will be using props to create Foley sounds. These will include:

We can knock coconut halves together in a rhythm for the sound of the horses' hooves

We can blow through a pen lid to create the sound of an arrow zipping through the air

We can twang a ruler for the sound of an arrow hitting the ground or a tree

We can stab materials like cardboard or paper to create the effect of arrows hitting characters

Soundtrack

We will select a piece of music to accompany the animation. Because it is a fast, action-packed sequence, we will be using music that is fast-paced. The animation should fit with the music's rhythm.





We used this piece of music because we thought the quick drum beat fit well with the speed of the sequence and added to the tension.

Monday, 23 September 2013

Unit 21: Understanding Video Technology: Production Equipment

Production Equipment

Camera Setups

History of Cameras

A motion picture camera is a device capable of capturing a series of images in rapid succession. Typically, motion picture cameras capture 24 frames per second, meaning 24 individual images are recorded by the camera in a single second. Played back at that rate, the frames create the illusion of continuous movement. The first movies, however, ran at only 14-20 frames per second, so movement appeared flawed and jumpy: there were simply not enough frames for smooth motion.
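As a quick sketch of the arithmetic above (the frame rates are from the text; the function name is just illustrative), the time each frame stays on screen is simply the reciprocal of the frame rate:

```python
# Milliseconds between successive frames at a given frame rate. At the early
# rates of 14-20 fps the gap between frames is much longer than at 24 fps,
# which is one reason early films look jumpy.
def frame_interval_ms(fps):
    return 1000 / fps

print(round(frame_interval_ms(24), 1))  # ~41.7 ms per frame at the modern standard
print(frame_interval_ms(16))            # 62.5 ms per frame at an early-cinema rate
```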

Louis Le Prince was a French inventor who shot the first moving pictures on the first working motion picture camera. The camera used a single lens, and Le Prince shot on paper film. He was never able to give a public demonstration of his invention, however, as he mysteriously disappeared in 1890. Over time, cameras became smaller and more compact until they were truly portable. It wasn't until the 1950s and 60s, however, that colour cameras came into widespread use.

In the 20th century, VHS, a videotape-based analogue format, was widely used. In the early 2000s it was followed by MiniDV, a digital tape-based format that allowed filming in standard definition (720x576), and then by DVD. Current consumer cameras allow for full high definition (1920x1080). The traditional industry standard for film is 35mm (roughly equivalent to 10-24 megapixels), while the current industry digital standard is 4K. Some films, however, are now being shot in 6K for an even clearer picture.
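To put those standards side by side, a small sketch (the SD and Full HD dimensions are from the text; I have assumed "4K" means the 4096x2160 DCI variant, since consumer UHD "4K" is 3840x2160):

```python
# Total pixels per frame for the video standards mentioned above.
formats = {
    "MiniDV / SD (PAL)": (720, 576),
    "Full HD": (1920, 1080),
    "4K (DCI)": (4096, 2160),
}
for name, (width, height) in formats.items():
    print(f"{name}: {width * height:,} pixels")
# SD is roughly 0.4 megapixels, Full HD about 2.1, and DCI 4K about 8.8.
```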

How does a Camera work?

Film is located inside the camera, which is completely dark at this point. The shutter then opens, allowing light to travel through the lens and expose the film or sensor. Then the shutter closes, and the next piece of film moves into place.



An inside look at a motion picture camera.
The sensor size of a camera has a huge impact on the quality of an image. DSLRs have made a big impact on the film industry because they allow for affordable filmmaking. The general rule of sensors is: the smaller the sensor, the higher the possible frame rate, but the worse it handles dark situations, producing grainy images. The larger the sensor, the better the quality of the image produced and the better it copes in low light.


There are three main functions for controlling how an image is exposed with a camera. These are:
  • Shutter Speed - The amount of time each individual frame is exposed for.
  • Aperture - The size of the lens opening, which controls how much light enters and the depth of field (DoF).
  • ISO - The sensor's sensitivity to light.
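These three controls trade off against each other. As an illustrative sketch (not part of the original post), photographers often summarise the aperture/shutter trade with the exposure value formula EV = log2(N²/t), where N is the f-number and t the shutter time in seconds:

```python
import math

# Exposure value at base ISO: EV = log2(N^2 / t). Settings with the same EV
# let in (nearly) the same amount of light, so aperture and shutter speed
# can be traded against each other.
def exposure_value(f_number, shutter_seconds):
    return math.log2(f_number ** 2 / shutter_seconds)

# f/4 at 1/60 s and f/2.8 at 1/125 s give almost the same exposure:
print(round(exposure_value(4.0, 1 / 60), 2))   # ~9.91
print(round(exposure_value(2.8, 1 / 125), 2))  # ~9.94
```

Opening the aperture one stop while halving the shutter time keeps the exposure roughly constant, which is exactly the trade a camera operator makes when choosing depth of field.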


Lighting Setups

The most common lighting setup is the three-point system.


An example of the three-point system.

Lighting is important for film because it makes an image more dynamic and interesting to look at. It creates depth, which allows for the illusion of a three-dimensional image. The key light is typically placed 45 degrees to either side of the camera. Fill lights are used to fill in the shadows of a scene, whilst back lights add dimension. When shooting outside without access to any lights, a reflector can be used instead.


Sound Setups

How sound is recorded on set is very important. There are many different methods of doing so, using microphones such as:

  • Shotgun Microphone
  • Dynamic Microphone
  • Condenser Microphone

The key to good sound is to ensure the microphone is facing the actor, so that no unnecessary and unwanted background noise is accidentally picked up. It is also important to ensure that sound levels on the recording device are peaking close to, but never hitting, the maximum, as this causes the sound to distort; volume can always be raised in post-production. Shotgun microphones are a great method of recording sound, as they only pick up sound from the direction in which they are pointed.
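The "peaking but not hitting maximum" advice can be sketched numerically (an illustration, not part of the original post): digital recording levels are measured in dBFS, where 0 dBFS is the absolute ceiling and anything that would exceed it clips and distorts:

```python
import math

# Peak level in dBFS (decibels relative to full scale) for a recording whose
# largest sample magnitude is given on a 0.0-1.0 scale. 0 dBFS is the hard
# digital maximum; recording a little below it leaves headroom, and gain can
# always be added later in post-production.
def peak_dbfs(peak_amplitude):
    return 20 * math.log10(peak_amplitude)

print(round(peak_dbfs(0.5), 1))  # -6.0 dBFS: comfortable headroom
print(round(peak_dbfs(1.0), 1))  # 0.0 dBFS: at the ceiling, risking distortion
```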

Tuesday, 17 September 2013

Unit 38: Soundtrack for the Moving Image

Diegetic, Non-Diegetic and Foley Sound

Diegetic
Diegetic sounds are sounds that exist within the world of the film and can be heard by its characters. These are most commonly the actors' lines of dialogue, or any other sound effects simple enough to record live on set (a door slamming, for example), as opposed to being created later in the filming process. They can still be edited later, but diegetic sounds are not artificial: they are real sounds, created in the time and place in which they are seen and heard in the actual movie.

Jack Nicholson and Shelley Duvall's dialogue in this scene from Stanley Kubrick's The Shining (1980) is an example of diegetic sound, as their dialogue was recorded live, as opposed to being created at a later time. It exists in the world we are seeing, and can be heard by all of the characters present.


In this second example from Paul Thomas Anderson's The Master (2012), Freddie is locked in a jail cell after having been arrested. His temper gets the better of him, and he proceeds to smash up his cell as he and Lancaster shout at each other. These sounds (Freddie being dragged through the corridor, the cell door being slammed, Freddie smashing the toilet, and Freddie and Lancaster shouting at each other) are all diegetic sounds. They were recorded as they were actually made, because the film's director wanted to capture Freddie's mental state in a way that felt as realistic as possible to audiences. Creating these sounds within the same space also allows some of them to echo (e.g. the slamming of the cell door), which adds to the scene's realistic feel.


Non-Diegetic
Non-diegetic sounds have no source within the world of the film; they are added later, in the editing process. The most common examples are the soundtrack (score) and narration: both are heard during a scene, but neither exists in the world the characters inhabit. Narration, although heard over the scene, is recorded separately. Sound effects that would be too quiet or too complicated to record live (gunshots, explosions etc.) are also usually created later in post-production.

Morgan Freeman's narration in this scene from The Shawshank Redemption (1994) is an example of non-diegetic sound, as it accompanies the scene but is not actually present in it. It does not exist in the world that we see, and can only be heard by us: the audience.


In this second example from Steven Spielberg's Jurassic Park (1993), after the park's security is shut down, the Tyrannosaurus breaks out of its pen and attacks the park's visitors. This is when we hear the iconic T-Rex roar for the first time. Because the T-Rex used to film this scene was only an animatronic, it could not produce its own sound, so a roar appropriate for the animal had to be created and implemented at a later date. Gary Rydstrom, the film's sound designer, created the T-Rex's roar by combining a number of different animal noises (e.g. whales, elephants, tigers and alligators). The roar, then, is a sound that was engineered after filming rather than recorded on set.



Juxtaposition
This is when two things are placed together with the purpose of conflicting with each other. Sometimes a filmmaker will choose to pair a sound and a scene that contrast with each other, creating juxtaposition. A filmmaker may do this in a horror film as a means of creating a feeling of discomfort and unease in the audience.

In this example from Insidious (2010), James Wan uses juxtaposition to create an eerie atmosphere that unsettles the audience. Because the accompanying soundtrack is so tranquil and relaxed, and yet the images are so terrifying, a contrast is created that discomforts and disorients the audience. It is this terrifying contrast that makes the scene so memorable.



Foley Sound
This is the process of creating a required sound later on, typically during post-production, to further enhance the film's quality. Although Foley effects are not recorded live on set, they usually stand in for diegetic sounds, since their sources (punches, footsteps, explosions) are visible on screen. For example, the sounds of characters' punches during a fight scene, or of an explosion in an action movie, are often effects added in the editing process, as this is much easier than recording these sounds live on set, and they can often be made to sound more realistic.


In this example from The Expendables (2010), Foley sound is used to make the fight scene appear more realistic by creating the appropriate sound effects and adding them to the scene. Punches, bullets hitting flesh and knives cutting flesh are all sounds created through Foley, as they are not sounds that can be made easily or safely on the spot. They require time and effort to make sound as realistic as possible, allowing for a better viewing experience for audiences.

Mood and Meaning

In film and TV, the soundtrack is used to create different moods and meanings and to establish different atmospheres within a text. For example, a soundtrack can build fear and tension within a scene through its use of chords. A frightening, tense atmosphere can be created using dissonant chords: chords that feel unstable and unresolved. They are often used in horror because they reflect the film's panic and amplify the tension. Consonant chords feel much more stable, perfect and complete; they can be heard in children's films, family films and comedies, due to their soft and gentle feel.
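One way to see why consonant chords feel "stable" is through frequency ratios: consonant intervals sit close to simple whole-number ratios, while dissonant ones do not. A small sketch (standard 12-tone equal temperament maths, not from the original post):

```python
# Frequency ratio of an interval of n semitones in 12-tone equal temperament.
def interval_ratio(semitones):
    return 2 ** (semitones / 12)

fifth = interval_ratio(7)    # ~1.498: very close to the simple ratio 3:2 (consonant)
tritone = interval_ratio(6)  # ~1.414: near no simple ratio (dissonant, unstable)
print(round(fifth, 3), round(tritone, 3))
```

The tritone's lack of a simple ratio is part of why it sounds tense, which is exactly the quality horror scores exploit.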

Example: http://www.tcm.com/mediaroom/video/201392/Psycho-Movie-Clip-Drive-Part-2.html
 
In this example from Alfred Hitchcock's Psycho (1960), Bernard Herrmann's score plays as Marion drives through the rainy night, moments before pulling over at the Bates Motel. Herrmann's score is an example of non-diegetic sound, as it is part of the film's soundtrack, recorded at a different time and attached to this scene. The score uses dissonant chords throughout, and the sound of these unstable chords reflects Marion's state of mind at this point in the film. It suggests that Marion could be mentally unstable, which concerns the audience and makes us fear for the character and her safety.

The voices inside Marion's head are also non-diegetic, as they were recorded at a different time and inserted into the scene during the film's editing process. These are the voices of the characters Marion has encountered throughout the film so far (the policeman, the car salesman, her boss etc.), all of whom are discussing their theories about Marion and her intentions. The way these voices echo and grow in volume suggests that Marion is struggling to ignore them. They create a feeling of paranoia in the audience, as we now know that there are people pursuing Marion who are aware of her crimes.

Together, Herrmann's score and the voices in Marion's head give the audience an idea of the character's state of mind. We can tell that she is unstable, that she is paranoid about the people who might suspect her, and that her fear of being caught is only growing, as Herrmann's score speeds up and the voices in her head become louder and louder.

The History of Foley Sound

It was Jack Donovan Foley who began what is now known as Foley art back in 1927. As a member of the sound crew working on Universal Studios' Show Boat, it was Foley's job to produce the required sound effects, as the microphones being used for filming could only pick up dialogue. This meant recording sound effects such as footsteps and doors closing in real time. Jack Foley continued to create sounds for films up until his death in 1967. Although his methods are still used in the industry today, the art of Foley has advanced alongside recording technology: sound effects no longer have to be recorded in real time, and hundreds of props and digital effects are now available for artists to employ.



In the video above, Foley artist Noisyid demonstrates the art of Foley, and the effect it has on our viewing experience, by removing the audio from a clip from the film The Brain That Wouldn't Die (1962) and creating his own sound effects to accompany the muted footage. Some of this is done by simply recreating the required sound (pouring water, opening a bottle etc.). However, at 1:44, we see Noisyid use a different method. To accompany the shot of the lab, where we can see pipes and flasks bubbling, he creates the kind of background noise we would expect to hear in this situation by blowing into some water through a straw, recreating the bubbling sound effect. He has produced the same effect in a much simpler and easier way, but the result is so convincing that it allows the viewer to feel a sense of reality within the scene and be placed comfortably within it.

Monday, 16 September 2013

Unit 21: Understanding Video Technology

Broadcast Systems and Television Standards

There are three different television systems:

NTSC
This is the standard format in the United States, and it was also adopted in other countries such as Japan, South Korea, Taiwan and the Philippines. The first NTSC standard was developed in 1941. In 1953, a modified version was adopted that allowed colour television broadcasts to be received. NTSC remained a widely used broadcast system until it was replaced by digital ATSC. NTSC is a 525-line system playing at roughly 30 frames per second (29.97 in the colour standard) on a 60 Hz supply. Since it was first developed without colour in mind, the later addition of colour has been the system's weakness.

SECAM
Though work began on SECAM as early as 1956, the technology wasn't ready for general use until 1967, after further improvements to compatibility and image quality had been made. First introduced in France in 1967, it was then adopted across much of Eastern Europe. Like PAL, SECAM is a 625-line system playing at 25 frames per second, but it uses a different colour management system from both PAL and NTSC. Countries on the SECAM system include France, Russia, much of Eastern Europe and parts of the Middle East.

PAL (Phase Alternating Line)

PAL was first developed by Walter Bruch at Telefunken, a German radio and television company, which patented it in 1962. The first broadcasts began in the United Kingdom and West Germany in 1967, and as a format PAL is now used across most of Europe. PAL is a 625-line system playing at 25 frames per second on a 50 Hz supply. The signal is interlaced into two fields of 312.5 lines each. The increased number of scan lines allows for a better overall picture; however, because fewer frames are displayed per second, a slight flicker in the image can sometimes be noticed.
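The PAL figures fit together arithmetically (the line count and frame rate are from the text; each interlaced field carries half the lines of a full frame):

```python
# PAL: 625 lines per frame, 25 frames per second, interlaced into two fields.
total_lines = 625
frames_per_second = 25

lines_per_field = total_lines / 2          # 312.5 lines in each field
fields_per_second = frames_per_second * 2  # 50 fields per second, matching 50 Hz mains
print(lines_per_field, fields_per_second)
```

The 50 fields-per-second rate matching the 50 Hz mains frequency is why PAL runs at 25 frames per second, just as NTSC's 30 fps pairs with a 60 Hz supply.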

Three different television systems around the world.
Satellite, Cable and Internet as Broadcast Systems

Satellite

Formed in 1990, Sky is a satellite broadcasting company and the largest pay-TV broadcaster in the UK and Ireland, with over 10 million subscribers. Sky offers a diverse range of channels, including its exclusive channels Sky One, Sky News, Sky Sports and more. It also offers channels in HD and even 3D, but these come at an extra cost.

Sky is a form of satellite television: television programming delivered by means of communications satellites and received by an outdoor antenna. The major disadvantages of satellite television are the extra costs associated with using it to its full potential, such as paying to access extra channels. The price of satellite television also increases as more televisions are connected.


Cable

Formed in 2006, Virgin Media was the first quadruple-play company in the UK, offering television, internet, mobile and landline services.

Virgin Media is a form of cable television: a system of distributing television programmes via radio-frequency signals transmitted through coaxial cables, or as light pulses through fibre-optic cables. This is usually a stable and inexpensive option; however, many areas have only a single cable television provider.

Internet

Internet television is the digital distribution of television via the internet. BBC iPlayer is a form of internet television, allowing free, on-demand viewing of the latest programmes. However, not all forms of internet television offer free access to on-demand content: Sky Go, for example, requires a Sky subscription. Though both of these examples are website-based, they can also be accessed through Android devices, iPhones and even games consoles.