Part 2 of our guide for photographers making the transition to filmmaking.

Introduction
In the second part of our introduction to filmmaking for photographers, we look at more of the terminology that you'll encounter as you dive into the world of digital filmmaking. In part one we picked out some of the most important concepts and systems, and here we dig a little deeper into the technology. As with photography, video is a huge subject and one where, in my experience, you never stop learning. I'm sure we can put together many more parts to this collection of articles, but we hope this fills in a few of your knowledge gaps. If you're interested in reading part one, there's a link at the bottom of the page.

4:4:4
This isn’t a football playing formation, it describes chroma subsampling, a compression scheme where the numbers tell you how brightness and colour are captured and compressed when the light hits the sensor in your camera. The first number relates to luminance (brightness), while the second and third numbers describe how much of the colour (chroma) information is kept. There are variants of this code and each describes a different compression standard. Others you might encounter are 4:2:2, 4:1:1 or 4:2:0. The larger the numbers, the better the quality. Unless the capture is at 4:4:4, the maximum, the compression always loses some quality, but some values offer an acceptable loss, one not noticeable to the human eye. The scheme works at pixel level by throwing away some of the colour information within a small block of pixels (typically 4 wide by 2 high); the numbers indicate how many colour samples are kept in the top and bottom rows of that block. I could try to explain it further but an article on the Adobe site goes into the detail very well. Here’s the link blogs.adobe.com/VideoRoad
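If the arithmetic helps, here's a rough sketch in Python of how much data each scheme stores compared with 4:4:4. The function name and the 4-wide-by-2-high sampling block are just for illustration, not anything a camera manufacturer publishes.

```python
def samples_per_frame(j, a, b, width=1920, height=1080):
    """Rough count of samples stored per frame for a J:a:b scheme.
    Brightness (luma) is kept for every pixel; each of the two colour
    (chroma) channels keeps a samples in the top row and b in the
    bottom row of every block of J x 2 pixels."""
    luma = width * height
    chroma_per_channel = width * height * (a + b) / (2 * j)
    return luma + 2 * chroma_per_channel

for scheme in ((4, 4, 4), (4, 2, 2), (4, 2, 0)):
    share = samples_per_frame(*scheme) / samples_per_frame(4, 4, 4)
    print(scheme, "->", f"{share:.0%} of the data of 4:4:4")
# (4, 4, 4) -> 100%, (4, 2, 2) -> 67%, (4, 2, 0) -> 50%
```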

As you might imagine, capturing as much information as possible about the colour of your scene at the time of recording will give you higher quality footage to work with in the edit. Where these numbers really matter is in recording for TV broadcast or movie productions, where the best reproduction is required at all times. Indeed, for TV programming, if a camera isn’t capable of recording to a certain quality, it can’t be used to make the programme; 4:2:2 is a popular standard for TV work and anything less is unsuitable. For productions destined only for the web the higher numbers are probably unnecessary, but even here there’s a noticeable difference between footage captured at the higher and lower qualities.

Data rates
Look at a camera in technical terms and it’s a device for converting light into data. Any footage captured by any camera is, at its rawest, just data. Data rates reflect the speed at which that data is captured by the camera and laid down onto the media recording the footage. 50Mbit/s would be a typical data rate, and the higher the number, the more data is recorded each second and, ultimately, the better the quality of the footage. An example of a higher data rate might be 150Mbit/s. Data rates work hand in hand with the chroma subsampling we described earlier: a higher quality capture produces more data, so a higher data rate is needed to safely record it.
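To give a feel for what those numbers mean in practice, here's a back-of-the-envelope sketch in Python estimating how long a memory card might last at a given data rate. The card size and the helper name are hypothetical, and real cameras add overhead, so treat the results as rough guides only.

```python
def minutes_per_card(card_gb, data_rate_mbit):
    """Rough recording time for a memory card.
    card_gb is the card size in gigabytes, data_rate_mbit the
    camera's data rate in megabits per second."""
    card_megabits = card_gb * 1000 * 8      # gigabytes -> megabits
    seconds = card_megabits / data_rate_mbit
    return seconds / 60

print(round(minutes_per_card(64, 50)))   # roughly 170 minutes at 50 Mbit/s
print(round(minutes_per_card(64, 150)))  # roughly 57 minutes at 150 Mbit/s
```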

If a camera is capable of capturing at fast speeds, the media used to record the footage must be capable of working at those speeds too. If slower cards are used they won’t be able to keep up with the stream of data and will eventually drop some of it, which is bad news indeed. It’s therefore very important to use media rated at the same or a higher data rate than the camera is going to push out. Sadly, the faster the camera or the media, the higher the price. Annoying, eh?

Bit Depth
This is another low-level system within a camera's programming; it's based on binary numbers and determines the amount of colour a sensor can capture. Typical bit depths are 8-bit or 10-bit. Without going into the detail of binary numbers, 8-bit captures 256 levels for each of the red, green and blue (RGB) colour channels, while 10-bit captures 1,024 levels per channel. As you can see, the higher bit depth records more detail and, as always, is more desirable. The same system is in operation when capturing stills, where the bit depth can go up to 16-bit.
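The binary arithmetic behind those figures is simple enough to show in a few lines of Python; this little sketch just raises 2 to the bit depth and multiplies the three channels together.

```python
def colour_levels(bit_depth):
    """Levels per channel and total possible colours for a bit depth."""
    per_channel = 2 ** bit_depth
    total = per_channel ** 3          # red x green x blue combinations
    return per_channel, total

for bits in (8, 10, 16):
    per_channel, total = colour_levels(bits)
    print(f"{bits}-bit: {per_channel} levels per channel, {total:,} colours")
# 8-bit:  256 levels per channel, 16,777,216 colours
# 10-bit: 1024 levels per channel, 1,073,741,824 colours
```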

Green Screen
Green screen (GS) is the name given to the technique which replaces a green background with another scene, with the actors or presenters usually standing in front of it. It is very much literal in that behind the people is a green screen. Green screen is very popular in movies where, instead of actual sets, the scenery is imagined and added by computer in the edit after the actors have done their thing in front of the GS. When the final film is completed it’s almost impossible to tell that the things we’re looking at aren’t real, apart from the human beings doing the acting.

Green screen works because the software used to edit the footage is able to take anything it finds in the colour green and replace it with the alternative background. Many news and current affairs programmes are now presented in front of a green screen too. The weather is a classic green screen example: the weather presenters are not standing in front of those maps we see on the TV, but a large GS with the maps added digitally.
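To show the idea rather than how any particular editing package does it, here's a toy chroma key in Python using NumPy: it marks pixels where green clearly dominates and swaps in the new background. Real keyers are far more sophisticated, softening edges and removing green "spill", and the threshold value here is purely an assumption for the sketch.

```python
import numpy as np

def chroma_key(foreground, background, threshold=1.3):
    """Very rough chroma key. Both images are (H, W, 3) uint8 arrays
    in RGB order and the same size. Pixels where green is clearly
    brighter than both red and blue are treated as 'screen' and
    replaced with the corresponding background pixels."""
    fg = foreground.astype(np.float32)
    r, g, b = fg[..., 0], fg[..., 1], fg[..., 2]
    mask = (g > r * threshold) & (g > b * threshold)
    result = foreground.copy()
    result[mask] = background[mask]
    return result
```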

Timelapse
Timelapse is a way to show an event faster than it actually took place, so what might have taken hours, days or months to record is played back in seconds or minutes. A timelapse film isn’t a normal film recording; it’s a series of stills which capture the event, placed sequentially to replicate the frames and playback rates of a film.
The secret to a good timelapse is smooth playback, so the technical settings of the final film dictate how the images are captured in the first place. Smoothness is determined by the amount of time between each capture, with shorter intervals looking smoother than longer ones. However, shorter intervals mean more images are captured, and as the playback speed is more or less fixed at normal film rates, which we’ll say is 24fps like a feature film, the timelapse will be longer. (Each captured frame fills one of the 24 frames played back each second.) Length and smoothness are therefore a trade-off against each other, and what’s important has to be decided before filming begins.
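The trade-off boils down to simple arithmetic, sketched here in Python. The 24fps playback rate is the one assumed above, and the sunset example and function names are just made up for illustration.

```python
FPS = 24  # playback frame rate, as in the feature-film example above

def frames_needed(clip_seconds, fps=FPS):
    """How many stills are needed for a clip of the given length."""
    return clip_seconds * fps

def shooting_interval(event_seconds, clip_seconds, fps=FPS):
    """Seconds between shots so an event of event_seconds
    plays back in clip_seconds."""
    return event_seconds / frames_needed(clip_seconds, fps)

# Example: a 2-hour sunset played back as a 10-second clip
print(frames_needed(10))                 # 240 stills
print(shooting_interval(2 * 3600, 10))   # one shot every 30 seconds
```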

The images for a timelapse can be recorded on any camera, but they will have to be amended to fit the dimensions of the film. A typical full HD frame is 1920 pixels wide by 1080 high, while an image from a Canon 5D Mark III can be 5760 wide and 3840 high, so some cropping and resizing will be necessary to fit the film size. Framing the scene is something else to decide before recording, to ensure no important parts of the scene are cropped by the smaller dimensions.
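As a rough illustration of that resizing step, here's a sketch using the Pillow imaging library in Python which centre-crops a still to 16:9 and scales it to full HD. The file paths are hypothetical, and in practice you'd frame the crop deliberately rather than always taking the centre.

```python
from PIL import Image  # assumes the Pillow library is installed

def crop_to_hd(in_path, out_path, width=1920, height=1080):
    """Centre-crop a still to 16:9, then resize it to full HD."""
    img = Image.open(in_path)
    target_ratio = width / height            # 16:9, roughly 1.78
    w, h = img.size
    if w / h > target_ratio:
        # Image is wider than 16:9: trim the sides.
        new_w = int(h * target_ratio)
        left = (w - new_w) // 2
        box = (left, 0, left + new_w, h)
    else:
        # Image is taller than 16:9: trim top and bottom.
        new_h = int(w / target_ratio)
        top = (h - new_h) // 2
        box = (0, top, w, top + new_h)
    img.crop(box).resize((width, height), Image.LANCZOS).save(out_path)

# Hypothetical usage on a 5760 x 3840 still from the camera
crop_to_hd("frame_0001.jpg", "frame_0001_hd.jpg")
```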

If you’re interested in learning more about making timelapses, here’s a link to an article we have on CreativesGo about watching cress grow: link here.

Parfocal and varifocal
The beauty of modern filmmaking is that lots of photographic equipment can be used to make films. Lenses are perfectly capable of serving either photography or filmmaking, but if those lenses are zooms, it’s important to understand the difference between a parfocal and a varifocal lens. Simply put, a varifocal lens loses focus when the zoom is adjusted and a parfocal lens doesn’t. Typically, photographic lenses fall into the varifocal bracket, so any filmmaker zooming with those lenses must take that restriction into account. Zoom lenses designed for filmmaking are built to be parfocal. So photographic zoom lenses can be used for filmmaking, but be aware of that zoom issue.

Gain
In photography we measure sensor sensitivity in ISO and can adjust it up or down depending on the lighting conditions. Filmmakers use gain in the same way, turning the sensitivity of the sensor up or down to brighten or darken the image. Gain is measured in decibels (dB).
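As a rule of thumb, every +6dB of gain roughly doubles the signal, which is about one stop brighter. The Python sketch below shows the standard decibel-to-multiplier conversion; the function name is just for illustration.

```python
import math  # not strictly needed here, but handy for the inverse

def gain_to_multiplier(db):
    """Convert a gain value in dB to a linear signal multiplier.
    Video gain is treated as a voltage-style ratio, so every
    +6 dB roughly doubles the signal (about one stop)."""
    return 10 ** (db / 20)

for db in (0, 6, 12, 18):
    print(db, "dB ->", round(gain_to_multiplier(db), 2), "x")
# 0 dB -> 1.0x, 6 dB -> 2.0x, 12 dB -> 3.98x, 18 dB -> 7.94x
```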

LUTs
LUT stands for Look Up Table, and LUTs are used in various ways to adjust colour. One popular application is to apply them to footage captured ‘flat’, i.e. with little or no in-camera processing. The LUT applies pre-formatted adjustments to the colour in the footage when it’s being edited and graded. LUTs can also be used to calibrate monitors or adjust photographs. Not all software can work with them, but most of the popular packages are designed to handle LUTs.
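At its simplest, a LUT really is just a table you look values up in. Here's a minimal one-dimensional example in Python with NumPy; real grading LUTs are usually three-dimensional, and the "lift the shadows" curve here is invented purely to illustrate the idea.

```python
import numpy as np

def apply_1d_lut(channel, lut):
    """Apply a 1D LUT to one 8-bit colour channel.
    'lut' is a 256-entry array mapping each input value 0-255
    to a new output value; applying it is just a table lookup."""
    return lut[channel]

# A made-up 'lift the shadows' LUT built from a gamma curve.
values = np.arange(256, dtype=np.float32) / 255.0
lut = (values ** 0.8 * 255).astype(np.uint8)

frame_channel = np.random.randint(0, 256, (1080, 1920), dtype=np.uint8)
graded = apply_1d_lut(frame_channel, lut)
```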

Conclusion
Here we have looked at more aspects of filmmaking the photographer needs to be aware of as they learn about the new world of the moving image. In our next article we take a look at the next stage: making a film. If there’s anything you’d like us to cover, please add a comment below.

Article Date - August 2017
