The Art and Science of White Balance

Mixed colour temperatures in “Annabel Lee”

Colour temperature starts with something mysterious called a “black body”, a theoretical object which absorbs all frequencies of electromagnetic radiation and re-emits it according to Planck’s Law. Put simply, Planck’s Law states that as the temperature of such a body increases, the light it emits moves toward the blue end of the spectrum. (Remember from chemistry lessons how the tip of the blue flame was the hottest part of the Bunsen burner?)

Colour temperature is measured in kelvins, a scale of temperature that begins at absolute zero (-273°C), the coldest temperature physically possible in the universe. To convert centigrade to kelvin, simply add 273.
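
As a quick sanity check on that conversion, here is a tiny Python sketch (the function name is my own) using the sun and tungsten figures quoted in the next paragraph:

```python
def celsius_to_kelvin(celsius):
    """Convert degrees Celsius to kelvins using the article's round figure of 273 (the exact offset is 273.15)."""
    return celsius + 273

# Figures from the paragraph below
print(celsius_to_kelvin(5505))  # 5778 - the surface of the sun
print(celsius_to_kelvin(2927))  # 3200 - a tungsten studio lamp filament
```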

Tungsten bulbs emit an orange light – dim them down and it gets even more orangey.

The surface of the sun has a temperature of 5,778K (5,505°C), so it emits a relatively blue light. The filament of a tungsten studio lamp reaches roughly 3,200K (2,927°C), providing more of an orange light. Connect that fixture to a dimmer and bring it down to 50% intensity and you might get a colour temperature of 2,950K, even more orange.

Incandescent lamps and the sun’s surface follow Planck’s Law fairly closely, but not all light sources rely on thermal radiation, and so their colour output is not dependent on temperature alone. This leads us to the concept of “correlated colour temperature”.

Colour temperature chart

The correlated colour temperature of a source is the temperature at which a black body would have to be in order to emit the same colour of light as that source. For example, the earth’s atmosphere isn’t at a temperature of 7,100K, but the light from a clear sky is as blue as a Planckian body glowing at that temperature would be. Therefore a clear blue sky has a correlated colour temperature (CCT) of 7,100K.

LED and fluorescent lights can have their colour cast at least partly defined by CCT, though since CCT is one-dimensional, measuring only the amount of blue versus red, it may give us an incomplete picture. The amounts of green and magenta which LEDs and fluorescents emit vary too, and some parts of the spectrum might be missing altogether, but that’s a whole other can of worms.

The human eye-brain system ignores most differences of colour temperature in daily life, accepting all but the most extreme examples as white light. In professional cinematography, we choose a white balance either to render colours as our eyes perceive them or for creative effect.

6000K HMI lighting photographed at 3200K to give a moonlight feel to “Heretiks”

Most cameras today have a number of white balance presets, such as tungsten, sunny day and cloudy day, and the options to dial in a numerical colour temperature directly or to tell the camera that what it’s currently looking at (typically a white sheet of paper) is indeed white. These work by applying or reducing gain to the red or blue channels of the electronic image.

Interestingly, this means that all cameras have a “native” white balance, a white balance setting at which the least total gain is applied to the colour channels. Arri quotes 5,600K for the Alexa, and indeed the silicon in all digital sensors is inherently less sensitive to blue light than red, making large amounts of blue gain necessary under tungsten lighting. In an extreme scenario – shooting dark, saturated blues in tungsten mode, for example – this might result in objectionable picture noise, but the vast majority of the time it isn’t an issue.

Left: daylight white balance preset (5,600K). Right: tungsten white balance preset (3,200K)

The difficulty with white balance is mixed lighting. A typical example is a person standing in a room with a window on one side of them and a tungsten lamp on the other. Set your camera’s white balance to daylight (perhaps 5,600K) and the window side of their face looks correct, but the other side looks orange. Change the white balance to tungsten (3,200K) and you will correct that side of the subject’s face, but the daylight side will now look blue.

Throughout much of the history of colour cinematography, this sort of thing was considered to be an error. To correct it, you would add CTB (colour temperature blue) gel to the tungsten lamp or perhaps even place CTO (colour temperature orange) gel over the window. Nowadays, of course, we have bi-colour and RGB LED fixtures whose colour temperature can be instantly changed, but more importantly there has been a shift in taste. We’re no longer tied to making all light look white.

A practical light of the “wrong” colour temperature in “Finding Hope”

To give just one example, Suzie Lavelle, award-winning DP of Normal People, almost always shoots at 4,300K, roughly halfway between typical tungsten and daylight colour temperatures. She allows her practical lamps to look warm and cosy, while daylight sources come out as a contrasting blue.

It is important to understand colour temperature as a DP, so that you can plan your lighting set-ups and know what colours will be obtained from different sources. However, the choice of white balance is ultimately a creative one, perhaps made at the monitor, dialling through the kelvins to see what you like, or even changed completely in post-production.


24fps or 25fps, which is best?

The monitor overlays here show how “Annabel Lee” was shot at 24fps with a shutter angle of 172.8° to prevent flickering of non-incandescent light sources, a typical recipe for UK filmmakers today.

An article of mine from 2014 weighing the merits of shooting at 24 vs. 25 frames per second has recently been getting a lot of hits. I’m surprised that there’s still so much uncertainty around this issue, because for me it’s pretty clear-cut these days.

When I started out making films at the turn of the millennium, 25fps (or its interlaced variant, 50i) was the only option for video. The tapes ran at that speed and that was that. Cathode ray tube TVs were similarly inflexible, as was PAL DVD when it emerged.

Film could be shot at 24fps, and generally was for theatrical movies, since most cinema projectors only run at that speed, but film for television was shot at 25fps.

Three big technological shifts occurred in the late noughties: the delivery of video over the internet, flat-screen TVs and tapeless cameras. All of these support multiple frame rates, so gradually we found that we had a choice. At the start of a shoot, as a DP I would have to ask which frame rate to set.

The frame rate and resolution menu from my old Canon 600D, the first time I owned a camera that could shoot 24fps.

Americans and others in NTSC regions are in a different situation. Their TV standard of 30fps has a discernibly different look to the international movie standard of 24fps, so the choice of frame rate is as much creative as it is technical. I don’t think anyone can tell the difference between 24 and 25fps, even on a subconscious level, so in Europe it seems we must decide on a purely technical basis.

But in fact, the decision is as much about what people are used to as anything else. I shot a feature film pilot once on 35mm at 25fps and it really freaked out the lab simply because they weren’t used to it.

I shot the 35mm pilot for “The Dark Side of the Earth” (2008) at 25fps because tapes still played a part in postproduction at that time. Today I would not hesitate to shoot at 24.

And what people seem to be most used to and comfortable with in the UK today is 24fps. It offers the most compatibility with digital cinemas and Blu-ray without needing frame rate conversion. (Some cinemas can play 25fps DCPs, and Blu-rays support 25fps in a 50i wrapper which might not play in a lot of US machines, but 24 is always a safer bet for these formats.)

Historically, flickering of non-incandescent light sources and any TV screens in shot was a problem when shooting 24fps in the UK. These days it’s very easy to set your shutter to 172.8° (if your camera measures it as an angle) or 1/50th (if your camera measures it as an interval). This ensures that every frame – even though there are 24 of them per second – captures 1/50th of a second, in sync with the 50Hz mains supply.
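
If you want to check that recipe for other frame rates or mains frequencies, here is a rough Python sketch of the arithmetic (a helper of my own, not from any camera menu):

```python
def flicker_free_angle(fps, mains_hz=50):
    """Shutter angle giving an exposure of 1/mains_hz seconds at a given frame rate."""
    return 360.0 * fps / mains_hz  # angle/360 = exposure time x frame rate

print(flicker_free_angle(24, 50))  # 172.8 degrees - the UK recipe described above
print(flicker_free_angle(25, 50))  # 180.0 degrees
print(flicker_free_angle(30, 60))  # 180.0 degrees - the NTSC-land equivalent
```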

 

The Times when 25fps is best

There are some situations in which 25fps is still the best or only option though, most notably when you’re shooting something intended primarily for broadcast on a traditional TV channel in the UK or Europe. The same goes if your primary distribution is on PAL DVD, which I know is still the case for certain types of corporate and educational videos.

Once I was puzzled by a director’s monitor not working on a short film shoot, and discovered that it didn’t support 24fps signals, so I had to choose 25 as my frame rate for that film. So it might be worth checking your monitors if you haven’t shot 24fps with them before.

“Finding Hope” was shot at 25fps simply because the director’s monitor wouldn’t accept 24fps signals.

Finally, if your film contains a lot of archive material or stock footage at 25fps, it makes sense to match that frame rate.

Whichever frame rate you ultimately choose, always discuss it with your postproduction team ahead of time to make sure that you’re all on the same page.


Exposure Part 4: ISO

So far in this series we have seen how we can adjust exposure using aperture, which affects depth of field, ND filters, which can help us retain the depth of field we want, and shutter angle, which affects motion blur and flickering of certain light sources. In this final part we’ll look at ISO, perhaps the most misunderstood element of exposure, if indeed we can technically classify it as part of exposure at all!

 

What is ISO?

The acronym stands for International Organization for Standardization, the body which in 1974 combined the old ASA (American Standards Association) units of film speed with the German DIN standard. That’s why you’ll often hear the terms ISO and ASA used interchangeably.

Two different cameras filming the same scene with the same filters, aperture and shutter settings will not necessarily produce an image of equal brightness, because the ways that their electronics convert light into video signals are different. That is why we need ISO, which defines the relationship between the amount of light reaching the sensor (or film) and the brightness of the resulting image.

For example, a common ISO to shoot at today is 800. One way of defining ISO 800 is that it’s the setting required to correctly expose a key-light of 12 foot-candles with a lens set to T2.8 and a 180° shutter at 24fps (1/48th of a second).

If we double the ISO we double the effective sensitivity of the camera, or halve the amount of light it requires. So at ISO 1600 we would only need 6 foot-candles of light (all the other settings being the same), and at ISO 3200 we would need just 3 foot-candles. Conversely, at ISO 400 we would need about 25 foot-candles, or 50 at ISO 200.
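
As a rough illustration, here is a minimal Python sketch of that relationship, anchored to the 12-foot-candle / ISO 800 baseline above (the function is my own and assumes T2.8, a 180° shutter and 24fps throughout):

```python
def key_light_footcandles(iso, ref_iso=800, ref_fc=12):
    """Foot-candles of key light needed at a given ISO, relative to the ISO 800 baseline."""
    return ref_fc * ref_iso / iso  # doubling the ISO halves the light required

for iso in (200, 400, 800, 1600, 3200):
    print(iso, key_light_footcandles(iso))
# 200 -> 48.0, 400 -> 24.0, 800 -> 12.0, 1600 -> 6.0, 3200 -> 3.0 foot-candles
```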

 

A Flawed Analogy

Note that I said “effective” sensitivity. This is an important point. In the photochemical world, ISO indeed denotes the light sensitivity of the film stock. It is tempting to see digital ISO as representing the sensitivity of the sensor, and changing the ISO as analogous to loading a different film stock. But in reality the sensitivity of a digital sensor is fixed, and the ISO only determines the amount of gain applied to the sensor data before it is processed (which may happen in camera if you’re shooting linear or log, or in post if you’re shooting RAW).

So a better analogy is that altering the ISO is like altering how long the lab develops the exposed film negative for. This alters the film’s exposure index (EI), hence some digital cameras using the term EI in their menus instead of ISO or ASA.

We can take this analogy further. Film manufacturers specify a recommended development time, a period designed to produce the optimal image. If you increase (push) or decrease (pull) the development time you will get a lighter or darker image respectively, but the quality of the image will be reduced in various ways. Similarly, digital camera manufacturers specify a native ISO, which is essentially the recommended amount of gain applied to the sensor data to produce what the manufacturer feels is the best image, and if you move away from that native ISO you’ll get a subjectively “lower quality” image.

Compare the graininess/smoothness of the blacks in these images from my 2017 tests. Click to enlarge.

The most obvious side effect of increasing the ISO is more noticeable noise in the image. It’s exactly the same as turning up the volume on an amplifier; you hear more hiss because the noise floor is being boosted along with the signal itself.

I remember the days of Mini-DV cameras, which had gain instead of ISO; my Canon XL1 offered gain settings such as -3dB, +6dB and +12dB. It was exactly the same thing, just with a different name. What the XL1 called 0dB of gain is what we would call the native ISO today.

 

ISO and Dynamic Range

At this point we need to bring in the concept of dynamic range. Let’s take the Arri Alexa as an example. This camera has a dynamic range of 14 stops. At its native ISO of 800, those 14 stops of dynamic range are equally distributed above and below “correct” exposure (known as middle grey), so you can overexpose by up to seven stops, and underexpose by up to seven stops, without losing detail.

If you change the Alexa’s ISO, those limits of under- and overexposure still apply, but they’re shifted around middle grey. For example, at 400 ISO you have eight stops of detail below middle grey, but only six above it. This means that, assuming you adjust your iris, shutter or filters to compensate for the change in ISO, you can trade off highlight detail for shadow detail, or vice versa.
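
Here is a rough sketch of how that shift works, assuming the simplified model above of 14 stops split equally at the native ISO of 800 (real cameras do not divide the range quite this neatly):

```python
import math

def stop_split(iso, native_iso=800, total_stops=14):
    """Approximate stops of detail above and below middle grey at a given ISO."""
    shift = math.log2(iso / native_iso)  # +1 for every doubling, -1 for every halving
    above = total_stops / 2 + shift      # more highlight detail at higher ISOs
    below = total_stops / 2 - shift      # more shadow detail at lower ISOs
    return above, below

print(stop_split(400))   # (6.0, 8.0) - matches the example above
print(stop_split(1600))  # (8.0, 6.0)
```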

Imagine underexposing a shot by one stop and bringing it back up in post. You increase the highlight detail, because you’re letting half as much light through to the sensor and reducing the risk of clipped whites, but you also increase the noise when you boost the image. This is basically what you’re doing when you increase your ISO, except that if you’re recording in linear or log then the restoration of brightness and the increase in gain happen within the camera, rather than in post as with RAW.

Note the increased detail in the bulb at higher ISOs. Click to enlarge.

We can summarise all this as follows:

Doubling the ISO…

  • increases overall brightness by one stop, and
  • increases picture noise.

Then adjusting the exposure to compensate (e.g. closing the iris one stop)…

  • restores overall brightness to its original value,
  • gives you one more stop of detail in the highlights, and
  • gives you one less stop of detail in the shadows.

Alternatively, halving the ISO…

  • decreases overall brightness by one stop, and
  • decreases picture noise.

Then adjusting the exposure to compensate (e.g. opening the iris one stop)…

  • restores overall brightness to its original value,
  • gives you one less stop of detail in the highlights, and
  • gives you one more stop of detail in the shadows.

 

Conclusion

This brings me to the end of my exposure series. We’ve seen that choosing the “correct” exposure is a balancing act, taking into account not just the intended brightness of the image but also the desired depth of field, bokeh, lens flares, motion blur, flicker prevention, noise and dynamic range. I hope this series has helped you to make the best creative decisions on your next production.

See also: “6 Ways to Judge Exposure”


Exposure Part 3: Shutter

In the first two parts of this series we saw how exposure can be controlled using the lens aperture – with side effects including changes to the depth of field – and neutral density (ND) filters. Today we will look at another means of exposure control: shutter angle.

 

The Physical Shutters of Film Cameras

As with aperture, an understanding of what’s going on under the hood is useful, and that begins with celluloid. Let’s imagine we’re shooting on film at 24fps, the most common frame rate. The film can’t move continuously through the gate (the opening behind the lens where the focused light strikes the film) or we would end up recording just a long vertical streak of light. The film must remain stationary long enough to expose an image, before being moved on by a distance of four perforations (the standard height of a 35mm film frame) so that the next frame can be exposed. Crucially, light must not hit the film while it is being moved, or vertical streaking will occur.

Joram van Hartingsveldt, CC BY-SA 3.0

This is where the shutter comes in. The shutter is a portion of a disc that spins in front of the gate. The standard shutter angle is 180°, meaning that the shutter is a semi-circle. We always describe shutter angles by the portion of the disc which is missing, so a 270° shutter (admitting 1.5x the light of a 180° shutter) is a quarter of a circle, and a 90° shutter (admitting half the light of a 180° shutter) is three-quarters.

The shutter spins continuously at the same speed as the frame rate – so at 24fps the shutter makes 24 revolutions per second. With a 180° shutter, each 24th of a second is therefore divided into two halves, i.e. 48ths of a second:

  • During one 48th of a second, the missing part of the shutter is over the gate, allowing the light to pass through and the stationary film to be exposed.
  • During the other 48th of a second, the shutter blocks the gate to prevent light hitting the film as it is advanced. The shutter has a mirrored surface so that light from the lens is reflected up the viewfinder, allowing the camera operator to see what they’re shooting.

 

Intervals vs. Angles

If you come from a stills or ENG background, you may be more used to talking about shutter intervals rather than angles. The two things are related as follows:

Frame rate x (360 ÷ shutter angle) = shutter interval denominator

For example, 24 x (360 ÷ 180) = 48, so a film running at 24fps, shot with a 180° shutter, shows us only a 48th of a second’s worth of light on each frame. This has been the standard frame rate and shutter angle in cinema since the introduction of sound in the late 1920s. The amount of motion blur captured in a 48th of a second is the amount that we as an audience have been trained to expect from motion pictures all our lives.
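
The same formula in a short Python sketch, converting in both directions (the helper names are mine):

```python
def interval_denominator(fps, angle):
    """Frame rate x (360 / shutter angle) = shutter interval denominator."""
    return fps * 360.0 / angle

def shutter_angle(fps, denominator):
    """The inverse: the angle that exposes each frame for 1/denominator seconds."""
    return 360.0 * fps / denominator

print(interval_denominator(24, 180))    # 48.0 -> 1/48th of a second
print(interval_denominator(24, 172.8))  # 50.0 -> 1/50th, flicker-free on 50Hz mains
print(shutter_angle(25, 50))            # 180.0
```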

A greater (larger shutter angle, longer shutter interval) or lesser (smaller shutter angle, shorter shutter interval) amount of motion blur looks unusual to us and thus can be used to creative effect. Saving Private Ryan features one of the best-known examples of a small shutter angle in its D-Day landing sequence, where the lack of motion blur creates a crisp, hyper-real effect that draws you into the horror of the battle. The effect has been endlessly copied since then, to the point that it now feels almost mandatory to shoot action scenes with a small shutter angle.

Large shutter angles are less common, but the extra motion blur can imply a drugged, fatigued or dream-like state.

In today’s digital environment, only the Arri Alexa Studio has a physical shutter. In other cameras, the sensor’s photo-sites are allowed to charge with light over a certain period of time – still referred to as the shutter interval, even though no actual shutter is involved. The same principles apply and the same 180° angle of the virtual shutter is standard. The camera will allow you to select a shutter angle/interval from a number of options, and on some models like the Canon C300 there is a menu setting to switch between displaying the shutter setting as an angle or an interval.

 

When to Change the Shutter Angle

Sometimes it is necessary to change the shutter angle to avoid flickering. Some luminous devices, such as TV screens and monitors, or HMI lighting not set to flicker-free mode, will appear to strobe, pulse or roll on camera. This is because they turn on and off many times per second, in sync with the alternating current of the mains power supply, but not necessarily in sync with the shutter. For example, if you shoot a domestic fluorescent lamp in the UK, where the mains AC cycles at 50Hz, your 1/48th (180° at 24fps) shutter will be out of sync and the lamp will appear to throb or flicker on camera. The solution is to set the shutter to 172.8° (1/50th), which is indeed what most DPs do when shooting features in the UK. Shutter intervals corresponding to exact multiples of the mains frequency, such as 1/100th, will also work.

You may notice that I have barely mentioned exposure so far in this article. This is because, unlike stills photographers, DPs rarely use the shutter as a means of adjusting exposure. An exception is that we may increase the shutter angle when the daylight is fading, to grab an extra shot. By doubling the shutter angle from 172.8° to 345.6° we double the light admitted, i.e. we gain one stop. As long as there isn’t any fast movement, the extra motion blur is likely to go unnoticed by the audience.

One of the hallmarks of amateur cinematography is that sunny scenes have no motion blur, due to the operator (or the camera’s auto mode) decreasing the shutter interval to avoid over-exposure. It is preferable to use ND filters to cut light on bright days, as covered in part two of this series.

For the best results, the 180° (or thereabouts) shutter angle should be retained when shooting slow motion as well. If your camera displays intervals rather than angles, ideally your interval denominator should be double the frame rate. So if you want to shoot at 50fps, set the shutter interval to 1/100th. For 100fps, set the shutter to 1/200th, and so on.

If you do need to change the shutter angle for creative or technical reasons, you will usually want to compensate with the aperture. If you halve the time the shutter is open for, you must double the area of the aperture to maintain the same exposure, and vice versa. For example, if your iris was set to T4 and you change the shutter from 180° to 90° you will need to stop up to T2.8. (Refer back to my article on aperture if you need to refresh your memory about T-stops.)
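
A quick sketch of that compensation, assuming exposure is proportional to the shutter time divided by the square of the T-number, so the new T-stop scales with the square root of the ratio of the new shutter angle to the old one:

```python
import math

def compensated_t_stop(old_t, old_angle, new_angle):
    """T-stop needed to keep exposure constant after a shutter-angle change."""
    # Exposure is proportional to (shutter angle) / (T-number squared) at a fixed
    # frame rate, so halving the angle means dividing the T-number by sqrt(2).
    return old_t * math.sqrt(new_angle / old_angle)

print(round(compensated_t_stop(4.0, 180, 90), 2))       # 2.83 -> stop up to T2.8
print(round(compensated_t_stop(2.8, 172.8, 345.6), 2))  # 3.96 -> close down about a stop
```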

In the final part of this series we’ll get to grips with ISO.

Learn more about exposure in my online course, Cinematic Lighting. Until this Thursday (19/11/20) you can get it for the special price of £15.99 by using the voucher code INSTA90.


Exposure Part 2: Neutral Density (ND) Filters

In the first part of this series, I explained the concepts of f-stops and T-stops, and looked at how aperture can be used to control exposure. We saw that changing the aperture causes side effects, most noticeably altering the depth of field.

How can we set the correct exposure without compromising our depth of field? Well, as we’ll see later in this series, we can adjust the shutter angle and/or ISO, but both of those have their own side effects. More commonly a DP will use neutral density (ND) filters to control the amount of light reaching the lens. These filters get their name from the fact that they block all wavelengths of light equally, so they darken the image without affecting the colour.

 

When to use an ND Filter

Let’s look at an example. Imagine that I want to shoot at T4; this aperture gives a nice depth of field, on the shallow side but not excessively so. My subject is very close to a bright window and my incident light meter is giving me a reading of f/11. (Although I’m aiming for a T-stop rather than an f-stop, I can still use the f-number my meter gives me; in fact if my lens were marked in f-stops then my exposure would be slightly off, because the meter does not know the transmission efficiency of my lens.) Let’s remind ourselves of the f-stop/T-stop series before we go any further:

1      1.4      2      2.8      4      5.6      8      11      16      22     32

By looking at this series, which can be found printed on any lens barrel or permanently displayed on a light meter’s screen, I can see that f/11 (or T11) is three stops down from f/4 (or T4) – because 11 is three numbers to the right of 4 in the series. To achieve correct exposure at T4 I’ll need to cut three stops of light. I can often be seen on set counting the stops like this on my light meter or on my fingers. It is of course possible to work it out mathematically or with an app, but that’s not usually necessary. You quickly memorise the series of stops with practice.

 

What Strength of filter to choose

Some ND filters are marked in stops, so I could simply select a 3-stop ND and slide it into my matte box or screw it onto my lens. Others – the built-in ND filters on the Sony FS7, for example – are defined by the fraction of light they let through. So the FS7’s 1/4 ND cuts two stops; the first stop halves the light – as we saw in part one of this series – and the second stop halves it again, leaving us a quarter of the original amount. The 1/16 setting cuts four stops.

However, most commonly, ND filters are labelled in optical density. The filters made by Tiffen are popular amongst professional cinematographers, and a typical set might be labelled as follows:

.3      .6      .9      1.2

That’s the optical density, a property defined as the base-10 logarithm of the ratio of the quantity of light entering the filter to the quantity of light exiting it on the other side. A .3 ND reduces the light by half because 10 raised to the power of -0.3 is about 0.5, and reducing light by half, as we’ve previously established, means dropping one stop.

If that maths is a bit much for you, don’t worry. All you really need to do is multiply the number of stops you want to cut by 0.3 to find the filter you need. So, going back to my example with the bright window, to get from T11 to T4, i.e. to cut three stops, I’ll pick the .9 ND.

It’s far from intuitive at first, but once you get your head around it, and memorise the f-stops, it’s not too difficult. Trust me!

Here are a couple more examples:

  • Light meter reads f/8 and you want to shoot at T5.6. That’s a one stop difference. (5.6 and 8 are right next to each other in the stop series, as you’ll see if you scroll back to the top.) 1 x 0.3 = 0.3 so you should use the .3 ND.
  • Light meter reads f/22 and you want to shoot at T2.8. That’s a six stop difference (scroll back up and count them), and 6 x 0.3 = 1.8, so you need a 1.8 ND filter. If you don’t have one, you need to stack two NDs in your matte box that add up to 1.8, e.g. a 1.2 and a .6.
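
Those examples can also be worked out programmatically; here is a rough Python sketch covering the stop-counting, the conversion to optical density and the fractional labelling used by cameras like the FS7 (all helper names are mine):

```python
import math

def stops_to_cut(meter_f, target_t):
    """Stops of light to cut to get from the metered f-number to the target T-stop."""
    # The nominal markings (5.6, 11, 22...) are rounded, so round the result.
    return round(2 * math.log2(meter_f / target_t))

def nd_density(stops):
    """Optical density of the ND needed: 0.3 per stop."""
    return 0.3 * stops

def nd_fraction(stops):
    """The same strength expressed as the fraction of light transmitted."""
    return 1 / 2 ** stops

stops = stops_to_cut(11, 4)            # the bright-window example: 3 stops
print(stops, nd_density(stops))        # 3  0.9 -> reach for the .9 ND
print(nd_fraction(2), nd_fraction(4))  # 0.25, 0.0625 -> the FS7's 1/4 and 1/16
```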

 

Variations on a Theme

Variable ND filters are also available. These consist of two polarising filters which can be rotated against each other to progressively lighten or darken the image. They’re great for shooting guerrilla-style with a small crew. You can set your iris where you want it for depth of field, then expose the image by eye simply by turning the filter. On the downside, they’re hard to use with a light meter because there is often little correspondence between the markings on the filter and stops. They can also have a subtle adverse effect on skin tones, draining a person’s apparent vitality, as some of the light which reflects off human skin is polarised.

IR pollution increases with successively stronger ND filters (left to right) used on a Blackmagic Micro Cinema Camera. The blue dyes in this costume evidently reflect a large amount of IR.

Another issue to look out for with ND filters is infra-red (IR). Some filters cut only the visible wavelengths of light, allowing IR to pass through. Some digital sensors will interpret this IR as visible red, resulting in an image with a red colour cast which can be hard to grade out because different materials will be affected to different degrees. Special IR ND filters are available to eliminate this problem.

These caveats aside, ND filters are the best way to adjust exposure (downwards at least) without affecting the image in any other way.

In the next part of this series I’ll look at shutter angles, what they mean, how they affect exposure and what the side effects are.

Learn how to use ND filters practically with my Cinematic Lighting online course. Enter voucher code INSTA90 for an amazing 90% off.


Exposure Part 1: Aperture

This is the first in a series of posts where I will look in detail at the four means of controlling the brightness of a digital video image: aperture, neutral density (ND) filters, shutter angle and ISO. It is not uncommon for newer cinematographers to have only a partial understanding of these topics, enough to get by in most situations; that was certainly the case with me for many years. The aim of this series is to give you an understanding of the underlying mechanics which will enable you to make more informed creative decisions.

You can change any one of the four factors, or any combination of them, to reach your desired level of exposure. However, most of them will also affect the image in other ways; for example, aperture affects depth of field. One of the key responsibilities of the director of photography is to use each of the four factors not just to create the ideal exposure, but to make appropriate use of these “side effects” as well.

 

f-stops and T-stops

The most common way of altering exposure is to adjust the aperture, a.k.a. the iris, sometimes described as changing “the stop”. Just like the pupil in our eyes, the aperture of a photographic lens is a (roughly) circular opening which can be expanded or contracted to permit more or less light through to the sensor.

You will have seen a series of numbers like this printed on the sides of lenses:

1      1.4      2      2.8      4      5.6      8      11      16      22     32

These are ratios – ratios of the lens’ focal length to its iris diameter. So a 50mm lens with a 25mm diameter iris is at f/2. Other lengths of lens would have different iris diameters at f/2 (e.g. 10mm diameter for a 20mm lens) but they would all produce an image of the same brightness. That’s why we use f-stops to talk about iris rather than diameters.

But why not label a lens 1, 2, 3, 4…? Why 1, 1.4, 2, 2.8…? These magic numbers are f-stops. A lens set to f/1.4 will let in twice as much light as (or “one stop more than”) a lens set to f/2, which in turn will let in twice as much as one set to f/2.8, and so on. Conversely, a lens set to f/2.8 will let in half as much light as (or “one stop less than”) a lens set to f/2, and so on. (Note that a number between any of these f-stops, e.g. f/1.8, is properly called an f-number, but not an f-stop.) These doublings or halvings – technically known as a base-2 logarithmic scale – are a fundamental concept in exposure, and mimic our eyes’ response to light.

If you think back to high-school maths and the πr² formula for the area of a circle, the reason for the seemingly random series of numbers will start to become clear. Letting in twice as much light requires twice as much area for those light rays to fall on, and since the f-number is the ratio of the focal length to the iris diameter, doubling that area means multiplying the diameter – and therefore dividing the f-number – by the square root of two. That is why each stop in the series is roughly 1.4 times the previous one, rather than a plain old round number.
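
To see the pattern, here is a tiny sketch that generates the series as successive powers of the square root of two (purely illustrative):

```python
import math

# Each full stop multiplies the f-number by sqrt(2), because doubling the area
# of a circular iris means multiplying its diameter by sqrt(2).
series = [math.sqrt(2) ** n for n in range(11)]
print([round(f, 1) for f in series])
# [1.0, 1.4, 2.0, 2.8, 4.0, 5.7, 8.0, 11.3, 16.0, 22.6, 32.0]
# The engraved values (5.6, 11, 22) are simply rounded versions of these.
```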

If you’re shooting with a cine lens, rather than a stills lens, you’ll see the same series of numbers on the barrel, but here they are T-stops rather than f-stops. T-stops are f-stops adjusted to compensate for the light transmission efficiency. Two different lenses set to, say, f/2 will not necessarily produce equally bright images, because some percentage of light travelling through the elements will always be lost, and that percentage will vary depending on the quality of the glass and the number of elements. A lens with 100% light transmission would have the same f-number and T-number, but in practice the T-number will always be a little bigger than the f-number. For example, Cooke’s 15-40mm zoom is rated at a maximum aperture of T2 or f/1.84.

 

Fast and slow lenses

When buying or renting a lens, one of the first things you will want to know is its maximum aperture. Lenses are often described as being fast (larger maximum aperture, denoted by a smaller f- or T-number like T1.4) or slow (smaller maximum aperture, denoted by a bigger f- or T-number like T4). These terms come from the fact that the shutter speed would need to be faster or slower to capture the same amount of light… but more on that later in the series.

Faster lenses are generally more expensive, but that expense may well be outweighed by the savings made on lighting equipment. Let’s take a simple example, and imagine an interview lit by a 4-bank Kino Flo and exposed at T2.8. If our lens can open one stop wider (known as stopping up) to T2 then we double the amount of light reaching the sensor. We can therefore halve the level of light – by turning off two of the Kino Flo’s tubes or by renting a cheaper 2-bank unit in the first place. If we can stop up further, to T1.4, then we only need one Kino tube to achieve the same exposure.

 

Side effects

One of the first things that budding cinematographers learn is that wider apertures make for a smaller depth of field, i.e. the range of distances within which a subject will be in focus is smaller. In simple terms, the background of the image is blurrier when the depth of field is shallower.

It is often tempting to go for the shallowest possible depth of field, because it feels more cinematic and helps conceal shortcomings in the production design, but that is not the right look for every story. A DP will often choose a stop to shoot at based on the depth of field they desire. That choice of stop may affect the entire lighting budget; if you want to shoot at a very slow T14 like Douglas Slocombe did for the Indiana Jones trilogy, you’re going to need several trucks full of lights!

There is another side effect of adjusting the aperture which is less obvious. Lenses are manufactured to perform best in the middle of their iris range. If you open a lens up to its maximum aperture or close it down to its minimum, the image will soften a little. Therefore another advantage of faster lenses is the ability to get further away from their maximum aperture (and poorest image quality) with the same amount of light.

Finally it is worth noting that the appearance of bokeh (out of focus areas) and lens flares also changes with aperture. The Cooke S4 range, for example, renders out-of-focus highlights as circles when wide open, but as octagons when stopped down. With all lenses, the star pattern seen around bright light sources will be stronger when the aperture is smaller. You should shoot tests – like these I conducted in 2017 – if these image artefacts are a critical part of your film’s look.

Next time we’ll look at how we can use ND filters to control exposure without compromising our choice of stop.

Learn how to use exposure practically with my Cinematic Lighting online course. Enter voucher code INSTA90 for an amazing 90% off.


How is Dynamic Range Measured?

The high dynamic range of the ARRI Alexa Mini allowed me to retain all the sky detail in this shot from “Above the Clouds”.

Recently I’ve been pondering which camera to shoot an upcoming project on, so I consulted the ASC’s comparison chart. Amongst the many specs compared is dynamic range, and I noticed that the ARRI Alexa’s was given as 14+ stops, while the Blackmagic URSA’s is 15. Having used both cameras a fair bit, I can tell you that there’s no way in Hell that the Ursa has a higher dynamic range than the Alexa. So what’s going on here?

 

What is dynamic range?

To put it simply, dynamic range is the level of contrast that an imaging system can handle. To quote Alan Roberts, who we’ll come back to later:

This is normally calculated as the ratio of the exposure which just causes white clipping to the exposure level below which no details can be seen.

A photosite on a digital camera’s sensor outputs a voltage proportional to the amount of light hitting it, but at some point the voltage reaches a maximum, and no matter how much more light you add, it won’t change. At the other end of the scale, a photosite may receive so little light that it outputs no voltage, or at least nothing that’s discernible from the inherent electronic noise in the system. These upper and lower limits of brightness may be narrowed by image processing within the camera, with RAW recording usually retaining the full dynamic range, while linear Rec. 709 severely curtails it.

In photography and cinematography, we measure dynamic range in stops – doublings and halvings of light which I explain fully in this article. One stop is a ratio of 2:1, five stops are 32:1, and thirteen stops are almost 10,000:1.
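
Expressed in code, the conversion from stops to a contrast ratio is just a power of two (a trivial sketch):

```python
for stops in (1, 5, 13, 14):
    print(f"{stops} stops = {2 ** stops}:1")
# 1 stop = 2:1, 5 stops = 32:1, 13 stops = 8192:1 (almost 10,000:1), 14 stops = 16384:1
```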

It’s worth pausing here to point out the difference between dynamic range and latitude, a term which is sometimes regarded as synonymous, but it’s not. The latitude is a measure of how much the camera can be over- or under-exposed without losing any detail, and is dependent on both the dynamic range of the camera and the dynamic range of the scene. (A low-contrast scene will allow more latitude for incorrect exposure than a high-contrast scene.)

 

Problems of Measurement

Before digital cinema cameras were developed, video had a dynamic range of about seven stops. You could measure this relatively easily by shooting a greyscale chart and observing the waveform of the recorded image to see where the highlights levelled off and the shadows disappeared into the noise floor. With today’s dynamic ranges into double digits, simple charts are no longer practical, because you can’t manufacture white enough paper or black enough ink.

For his excellent video on dynamic range, Filmmaker IQ’s John Hess built a device fitted with a row of 1W LEDs, using layers of neutral density gel to make each one a stop darker than its neighbour. For the purposes of his demonstration, this works fine, but as Phil Rhodes points out on RedShark News, you start running into the issue of the dynamic range of the lens.

It may seem strange to think that a lens has dynamic range, and in the past when I’ve heard other DPs talk about certain glass being more or less contrasty, I admit that I haven’t thought much about what that means. What it means is flare, and not the good anamorphic streak kind, but the general veiling whereby a strong light shining into the lens will raise the overall brightness of the image as it bounces around the different elements. This lifts the shadows, producing a certain amount of milkiness. Even with high contrast lenses, ones which are less prone to veiling, the brightest light on your test device will cause some glare over the darkest one, when measuring the kind of dynamic range today’s cameras enjoy.

 

Manufacturer Measurements

Going back to my original query about the Alexa versus the URSA, let’s see exactly what the manufacturers say. ARRI specifically states that its sensor’s dynamic range is over 14 stops “as measured with the ARRI Dynamic Range Test Chart”. So what is this chart and how does it work? The official sales blurb runs thusly:

The ARRI DRTC-1 is a special test chart and analysis software for measurement of dynamic range and sensitivity of digital cameras. Through a unique stray light reduction concept this system is able to accurately measure up to 15.5 stops of dynamic range.

The “stray light reduction” is presumably to reduce the veiling mentioned earlier and provide more accurate results. This could be as simple as covering or turning off the brighter lights when measuring the dimmer ones.

I found a bit more information about the test chart in a 2011 camera shoot-out video, from that momentous time when digital was supplanting film as the cinematic acquisition format of choice. Rather than John Hess’s ND gel technique, the DRTC-1 opts for something else to regulate its light output, as ARRI’s Michael Bravin explains in the video:

There’s a piece of motion picture film behind it that’s checked with a densitometer, and what you do is you set the exposure for your camera, and where you lose detail in the vertical and horizontal lines is your clipping point, and where you lose detail because of noise in the shadow areas is your lowest exposure… and in between you end up finding the number of stops of dynamic range.

Blackmagic Design do not state how they measure the dynamic range of their cameras, but it may be with a DSC Labs Xyla. This illuminated chart boasts a shutter system which “allows users to isolate and evaluate individual steps”, plus a “stepped xylophone shape” to minimise flare problems.

Art Adams, a cinema lens specialist at ARRI, and someone who’s frequently quoted in Blain Brown’s Cinematography: Theory & Practice, told Y.M. Cinema Magazine:

I used to do a lot of consulting with DSC Labs, who make camera test charts, so I own a 20-stop dynamic range chart (DSC Labs Xyla). This is what most manufacturers use to test dynamic range (although not ARRI, because our engineers don’t feel it’s precise enough) and I see what companies claim as usable stops. You can see that they are just barely above the noise floor.

 

Conclusions

Obviously these ARRI folks I keep quoting may be biased. I wanted to find an independent test that measures both Blackmagics and Alexas with the same conditions and methodology, but I couldn’t find one. There is plenty of anecdotal evidence that Alexas have a bigger dynamic range, in fact that’s widely accepted as fact, but quantifying the difference is harder. The most solid thing I could find is this, from a 2017 article about the Blackmagic Ursa Mini 4.6K (first generation):

The camera was measured at just over 14 stops of dynamic range in RAW 4:1 [and 13 stops in ProRes]. This is a good result, especially considering the price of the camera. To put this into perspective Alan measured the Canon C300 mkII at 15 stops of dynamic range. Both the URSA Mini 4.6 and C300 mkII are bettered by the ARRI Alexa and Amira, but then that comes as no surprise given their reputation and price.

The Alan mentioned is Alan Roberts, something of a legend when it comes to testing cameras. It is interesting to note that he is one of the key players behind the TLCI (Television Lighting Consistency Index), a mooted replacement for CRI (Colour Rendering Index). It’s interesting because this whole dynamic range business is starting to remind me of my investigation into CRI, and is leading me to a similar conclusion, that the numbers which the manufacturers give you are all but useless in real-world cinematography.

Whereas CRI at least has a standardised test, there’s no such thing for dynamic range. Therefore, until there is more transparency from manufacturers about how they measure it, I’d recommend ignoring their published values. As always when choosing a camera, shoot your own tests if at all possible. Even the most reliable numbers can’t tell you whether you’re going to like a camera’s look or not, or whether it’s right for the story you want to tell.

When tests aren’t possible, and I know that’s often the case in low-budget land, at least try to find an independent comparison. I’ll leave you with this video from the Slanted Lens, which compares the URSA Mini Pro G2 with the ARRI Amira (which uses the same Alev III sensor as the Alexa). They don’t measure the dynamic range, but you can at least see the images side by side, and in the end it’s the images that matter, not the numbers.


8 Things to Look For When Buying a Cinema Camera

A couple of weeks ago I shared my thoughts about whether a director of photography should own equipment. My conclusion was that it can be useful early in your career, when you’re shooting corporates or tiny films with no hire budget. So what is the best camera for indie cinematography?

I’m not going to answer that, but I will tell you what to look for when investing in a camera. Hopefully these tips will help you choose the one that’s right for you from the huge and ever-changing array of professional cameras on the market, from the humble DSLR to the ubiquitous Reds and everything in between.

 

1. Image quality

Shooting on a Sony FS7 for “Finding Hope”

The quality of the image is of course the most important attribute of any camera. Rather than any technical specifications, I’m talking about the aesthetic quality here: how does it feel? Does it have that elusive “cinematic” quality? Is it “filmic”? Does it remind you of certain kinds of movies?

A good place to start is to look up sample footage on YouTube, or better still Vimeo for less compression muddying the issue. If you can borrow the camera and try it out before you buy, even better. Take away some test footage and try grading it too.

 

2. Resolution

Resolution, the sheer number of pixels a camera can record, is part of image quality, but I include it as a separate point because I see it as more of a technical consideration than an aesthetic one. You should ask yourself what longevity you require from your films – will people still be watching them, say two or three years from now, and if so what sort of resolution might be the norm by then?

Also consider your delivery platform. If everything you shoot is going on YouTube, perhaps you don’t need more than 1080p (standard HD).

 

3. Dynamic Range

Dynamic range is a measure of how much contrast a camera can handle. Too small a dynamic range and you will frequently struggle with bright areas “clipping” – i.e. losing details – or dark areas getting lost in the image noise. Also, the wider the dynamic range, the more flexibility you will have in grading.

For a cinematic image, 12 stops of dynamic range is the absolute minimum, with 14 or more being ideal.

 

4. Maximum ISO

Some ISO tests I conducted on an Arri Alexa Classic in 2017

The ISO (International Organization for Standardization) scale rates the light sensitivity of a camera. The most important thing is the native ISO, the one at which the camera is optimised to give the cleanest image with the most detail. On some cameras, setting an ISO other than the native one reduces the image quality considerably.

The higher the ISO, the less light will be required to expose an image correctly. 800 is typical these days, but many cameras go much higher than that. It is worth thinking about spending more money to get a camera with a higher native ISO, because you may save a lot of money on lighting.

 

5. Lens Mount

This is crucial because you may already have a collection of lenses, or you may intend to hire certain lenses, and you need to be sure that they will fit your new camera’s mount.

The Canon EF mount is extremely common and will open up a huge range of options for stills glass as well as some low-end cinema glass. The smaller MFT (micro four-thirds) mount also has a wide range of lenses.

Top-end cameras have PL mounts which take all the beautiful cinema lenses used on big movies, but only choose this route if you are willing to part with a lot of cash!

 

6. Form Factor

A Blackmagic Micro Cinema Camera, not so micro once it’s rigged with rails, matte box, wireless follow focus, battery, monitor and video transmitter!

When I started in the industry, cameras were all ergonomically designed to sit on your shoulder, with a nice handgrip to the right of the lens and an EVF (electronic viewfinder) to provide a third point of stabilising contact. Nowadays cameras tend to be boxy, heavy and uncomfortable to hold without additional accessories (see below).

Again, try to get your hands on the camera in a shop and see how it feels before you purchase. As well as handheld shooting, consider how easy it will be to rig onto dollies, sliders, gimbals, etc.

 

7. Required Accessories

Buying the camera body itself is unlikely to be the end of your expenditure. You will need lenses, batteries, a battery charger, cards, a card reader and almost certainly some kind of stabilising system, be it a simple shoulder rig or an electronic gimbal.

You may also want an EVF, a tripod, matte box, follow focus – the list can seem endless! Be careful to budget your essential accessories before buying the camera. Some cameras seem like bargains until you add up all the extras. Pay particular attention to the media, and to exactly what speed of media you need in order to shoot at the resolution and frame rate that you require, as this can get very expensive.

 

8. Codec

What file type and codec does the camera shoot? Does your editing system support that format? If not, how time-consuming will it be to convert everything?

What compression ratios does the camera support? How much hard drive space will you need to store an hour of footage at that ratio? What about ten hours, plus back-ups? Often there is a trade-off between a highly compressed format like H.264 which is light on disc space but may need converting before you can edit it, and a lightly compressed format like ProRes which burns through disc space but can be dropped straight into most editing software.


How Colour Works

Colour is a powerful thing. It can identify a brand, imply eco-friendliness, gender a toy, raise our blood pressure, calm us down. But what exactly is colour? How and why do we see it? And how do cameras record it? Let’s find out.

 

The Meaning of “Light”

One of the many weird and wonderful phenomena of our universe is the electromagnetic wave, an electric and magnetic oscillation which travels at 186,000 miles per second. Like all waves, EM radiation has the inversely-proportional properties of wavelength and frequency, and we humans have devised different names for it based on these properties.

The electromagnetic spectrum

EM waves with a low frequency and therefore a long wavelength are known as radio waves or, slightly higher in frequency, microwaves; we use them to broadcast information and heat ready-meals. EM waves with a high frequency and a short wavelength are known as x-rays and gamma rays; we use them to see inside people and treat cancer.

In the middle of the electromagnetic spectrum, sandwiched between infrared and ultraviolet, is a range of frequencies between 430 and 750 terahertz (wavelengths 400-700 nanometres). We call these frequencies “light”, and they are the frequencies which the receptors in our eyes can detect.

If your retinae were instead sensitive to electromagnetic radiation of between 88 and 91 megahertz, you would be able to see BBC Radio 2. I’m not talking about magically seeing into Ken Bruce’s studio, but perceiving the FM radio waves which are encoded with his silky-smooth Scottish brogue. Since radio waves can pass through solid objects though, perceiving them would not help you to understand your environment much, whereas light waves are absorbed or reflected by most solid objects, and pass through most non-solid objects, making them perfect for building a picture of the world around you.

Within the range of human vision, we have subdivided and named smaller ranges of frequencies. For example, we describe light of about 590-620nm as “orange”, and below about 450nm as “violet”. This is all colour really is: a small range of wavelengths (or frequencies) of electromagnetic radiation, or a combination of them.
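
Since wavelength and frequency are linked by the speed of light, converting between them is straightforward; here is a quick sketch using the visible-range figures above (the helper is mine):

```python
SPEED_OF_LIGHT = 299_792_458  # metres per second - the 186,000 miles per second above

def wavelength_nm_to_thz(wavelength_nm):
    """Frequency in terahertz for a given wavelength in nanometres."""
    return SPEED_OF_LIGHT / (wavelength_nm * 1e-9) / 1e12

print(round(wavelength_nm_to_thz(700)))  # ~428 THz - deep red
print(round(wavelength_nm_to_thz(600)))  # ~500 THz - in the orange band
print(round(wavelength_nm_to_thz(400)))  # ~749 THz - violet
```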

 

In the eye of the beholder

Scanning electron micrograph of a retina

The inside rear surfaces of your eyeballs are coated with light-sensitive cells called rods and cones, named for their shapes.

The human eye has about five or six million cones. They come in three types: short, medium and long, referring to the wavelengths to which they are sensitive. Short cones have peak sensitivity at about 420nm, medium at 530nm and long at 560nm, roughly what we call blue, green and red respectively. The ratios of the three cone types vary from person to person, but short (blue) ones are always in the minority.

Rods are far more numerous – about 90 million per eye – and around a hundred times more sensitive than cones. (You can think of your eyes as having dual native ISOs like a Panasonic Varicam, with your rods having an ISO six or seven stops faster than your cones.) The trade-off is that they are less temporally and spatially accurate than cones, making it harder to see detail and fast movement with rods. However, rods only really come into play in dark conditions. Because there is just one type of rod, we cannot distinguish colours in low light, and because rods are most sensitive to wavelengths of 500nm, cyan shades appear brightest. That’s why cinematographers have been painting night scenes with everything from steel grey to candy blue light since the advent of colour film.

The spectral sensitivity of short (blue), medium (green) and long (red) cones

The three types of cone are what allow us – in well-lit conditions – to have colour vision. This trichromatic vision is not universal, however. Many animals have tetrachromatic (four channel) vision, and research has discovered some rare humans with it too. On the other hand, some animals, and “colour-blind” humans, are dichromats, having only two types of cone in their retinae. But in most people, perceptions of colour result from combinations of red, green and blue. A combination of red and blue light, for example, appears as magenta. All three of the primaries together make white.

Compared with the hair cells in the cochlea of your ears, which are capable of sensing a continuous spectrum of audio frequencies, trichromacy is quite a crude system, and it can be fooled. If your red and green cones are triggered equally, for example, you have no way of telling whether you are seeing a combination of red and green light, or pure yellow light, which falls between red and green in the spectrum. Both will appear yellow to you, but only one really is. That’s like being unable to hear the difference between, say, the note D and a combination of the notes C and E. (For more info on these colour metamers and how they can cause problems with certain types of lighting, check out Phil Rhodes’ excellent article on Red Shark News.)

 

Artificial eye

A Bayer filter

Mimicking your eyes, video sensors also use a trichromatic system. This is convenient because it means that although a camera and TV can’t record or display yellow, for example, they can produce a mix of red and green which, as we’ve just established, is indistinguishable from yellow to the human eye.

Rather than using three different types of receptor, each sensitive to different frequencies of light, electronic sensors all rely on separating different wavelengths of light before they hit the receptors. The most common method is a colour filter array (CFA) placed immediately over the photosites, and the most common type of CFA is the Bayer filter, patented in 1976 by an Eastman Kodak employee named Dr Bryce Bayer.

The Bayer filter is a colour mosaic which allows only green light through to 50% of the photosites, only red light through to 25%, and only blue to the remaining 25%. The logic is that green is the colour your eyes are most sensitive to overall, and that your vision is much more dependent on luminance than chrominance.
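
Here is a minimal sketch of that mosaic: it lays out an RGGB Bayer pattern over a sensor-sized grid and counts the photosites per channel (illustrative only; real CFAs and the demosaicing that follows are more involved):

```python
def bayer_pattern(width, height):
    """Return a 2D grid of 'R', 'G' or 'B' laid out as an RGGB Bayer mosaic."""
    grid = []
    for y in range(height):
        row = []
        for x in range(width):
            if y % 2 == 0:
                row.append('R' if x % 2 == 0 else 'G')
            else:
                row.append('G' if x % 2 == 0 else 'B')
        grid.append(row)
    return grid

cfa = bayer_pattern(2880, 1620)  # the sensor size discussed below
counts = {'R': 0, 'G': 0, 'B': 0}
for row in cfa:
    for channel in row:
        counts[channel] += 1
total = 2880 * 1620
for channel in 'RGB':
    print(channel, round(100 * counts[channel] / total), '%')
# G 50 %, R 25 %, B 25 % - half the photosites see green, a quarter each red and blue
```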

A RAW, non-debayered image

The resulting image must be debayered (or more generally, demosaiced) by an algorithm to produce a viewable image. If you’re recording log or linear then this happens in-camera, whereas if you’re shooting RAW it must be done in post.

This system has implications for resolution. Let’s say your sensor is 2880×1620. You might think that’s the number of pixels, but strictly speaking it isn’t. It’s the number of photosites, and due to the Bayer filter no single one of those photosites has more than a third of the necessary colour information to form a pixel of the final image. Calculating that final image – by debayering the RAW data – reduces the real resolution of the image by 20-33%. That’s why cameras like the Arri Alexa or the Blackmagic Cinema Camera shoot at 2.8K or 2.5K, because once it’s debayered you’re left with an image of 2K (cinema standard) resolution.

 

Colour Compression

Your optic nerve can only transmit about one percent of the information captured by the retina, so a huge amount of data compression is carried out within the eye. Similarly, video data from an electronic sensor is usually compressed, be it within the camera or afterwards. Luminance information is often prioritised over chrominance during compression.

Examples of chroma subsampling ratios

You have probably come across chroma subsampling expressed as, for example, 444 or 422, as in ProRes 4444 (the final 4 being transparency information, only relevant to files generated in postproduction) and ProRes 422. The three digits describe the ratios of luminance and colour information: a file with 444 chroma subsampling has no colour compression; a 422 file retains colour information only in every second pixel; a 420 file, such as those on a DVD or Blu-ray, contains one sample of blue-difference and one of red-difference information (the green being derived from those and the luminance) for every four pixels of luma.
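
To see what those ratios mean for data size, here is a rough sketch assuming an uncompressed 1920×1080 frame at 8 bits per sample; the fractions are the resolution each chroma channel keeps relative to luma:

```python
# Uncompressed frame sizes under different chroma subsampling schemes.
width, height = 1920, 1080
luma_samples = width * height
chroma_fraction = {"4:4:4": 1.0, "4:2:2": 0.5, "4:2:0": 0.25}

for scheme, frac in chroma_fraction.items():
    total_samples = luma_samples * (1 + 2 * frac)   # Y + Cb + Cr
    megabytes = total_samples / 1e6                 # one byte per sample at 8 bits
    print(f"{scheme}: {megabytes:.1f} MB per uncompressed frame")
# 4:4:4 ~6.2 MB, 4:2:2 ~4.1 MB, 4:2:0 ~3.1 MB
```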

Whether every pixel, or only a fraction of them, has colour information, the precision of that colour info can vary. This is known as bit depth or colour depth. The more bits allocated to describing the colour of each pixel (or group of pixels), the more precise the colours of the image will be. DSLRs typically record video in 24-bit colour, more commonly described as 8bpc or 8 bits per (colour) channel. Images of this bit depth fall apart pretty quickly when you try to grade them. Professional cinema cameras record 10 or 12 bits per channel, which is much more flexible in postproduction.
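
The difference is easy to quantify; a short sketch of the code values available at each common bit depth shows why 8-bit footage bands and breaks up sooner in the grade:

```python
# Number of tonal levels per channel, and total colours, at each bit depth.
for bits in (8, 10, 12):
    levels = 2 ** bits
    print(f"{bits}-bit: {levels:>5} levels per channel, "
          f"{levels ** 3:,} possible colours")
# 8-bit: 256 levels (~16.8 million colours); 12-bit: 4096 levels (~68.7 billion)
```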

CIE diagram showing the gamuts of three video standards. D65 is the standard for white.

The third attribute of recorded colour is gamut, the breadth of the spectrum of colours. You may have seen a CIE (Commission Internationale de l’Eclairage) diagram, which depicts the range of colours perceptible by human vision. Triangles are often superimposed on this diagram to illustrate the gamut (range of colours) that can be described by various colour spaces. The three colour spaces you are most likely to come across are, in ascending order of gamut size: Rec.709, the established HD standard still used by many monitors; P3, used by digital cinema projectors; and Rec.2020, the standard for ultra-HD. Netflix are already requiring that some of their shows be delivered in Rec.2020, even though monitors capable of displaying its full gamut do not yet exist. Most cinema cameras today can record images in Rec.709 (known as “video” mode on Blackmagic cameras) or a proprietary wide gamut (“film” mode on a Blackmagic, or “log” on others) which allows more flexibility in the grading suite. Note that the two modes also alter the recording of luminance and dynamic range.
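
For a rough sense of the size difference, the sketch below compares the three gamuts using their published CIE 1931 xy primaries and the shoelace formula for triangle area; area in xy is only a crude proxy for perceived gamut, but it makes the ordering clear:

```python
# Relative gamut size in CIE 1931 xy space, using each standard's primaries.
primaries = {
    "Rec.709":  [(0.640, 0.330), (0.300, 0.600), (0.150, 0.060)],
    "P3":       [(0.680, 0.320), (0.265, 0.690), (0.150, 0.060)],
    "Rec.2020": [(0.708, 0.292), (0.170, 0.797), (0.131, 0.046)],
}

def triangle_area(pts):
    (x1, y1), (x2, y2), (x3, y3) = pts
    return abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)) / 2

ref = triangle_area(primaries["Rec.709"])
for name, pts in primaries.items():
    print(f"{name}: {triangle_area(pts) / ref:.2f}x the area of Rec.709")
# Rec.709: 1.00x, P3: ~1.36x, Rec.2020: ~1.89x
```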

To summarise as simply as possible: chroma subsampling is the proportion of pixels which carry colour information, bit depth is the precision of that information, and gamut is the range of colours it can describe.

That’s all for today. In future posts I will look at how some of the above science leads to colour theory and how cinematographers can make practical use of it.



6 Ways to Judge Exposure

Exposing the image correctly is one of the most important parts of a cinematographer’s job. Choosing the T-stop can be a complex technical and creative decision, but fortunately there are many ways we can measure light to inform that decision.

First, let’s remind ourselves of the journey light makes: photons are emitted from a source, they strike a surface which absorbs some and reflects others – creating the impressions of colour and shade; then if the reflected light reaches an eye or camera lens it forms an image. We’ll look at the various ways of measuring light in the order the measurements occur along this light path, which is also roughly the order in which these measurements are typically used by a director of photography.

 

1. Photometrics data

You can use data supplied by the lamp manufacturer to calculate the exposure it will provide, which is very useful in preproduction when deciding what size of lamps you need to hire. There are apps for this, such as the Arri Photometrics App, which allows you to choose one of their fixtures, specify its spot/flood setting and distance from the subject, and then tells you the resulting light level in lux or foot-candles. An exposure table or exposure calculation app will translate that number into a T-stop at any given ISO and shutter interval.
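
Under the hood the arithmetic such an app or table performs is straightforward. The sketch below uses the standard incident-light relationship E = C × N² / (t × S), where E is illuminance in lux, N the stop, t the shutter interval in seconds, S the ISO and C a meter calibration constant (around 250 is typical); the fixture and exposure values are purely illustrative:

```python
import math

# Convert an illuminance figure (e.g. from photometrics data) into a stop.
def suggested_stop(lux, iso, shutter_interval, c=250):
    return math.sqrt(lux * shutter_interval * iso / c)

# A fixture giving 1000 lux at the subject, ISO 800, 180-degree shutter at 24fps:
print(round(suggested_stop(1000, 800, 1 / 48), 1))   # ~8.2, i.e. roughly T8
```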

 

2. Incident meter

Some believe that light meters are unnecessary in today’s digital landscape, but I disagree. Most of the methods listed below require the camera, but the camera may not always be handy – on a location recce, for example. Or during production, it would be inconvenient to interrupt the ACs while they’re rigging the camera onto a crane or Steadicam. This is when having a light meter on your belt becomes very useful.

An incident meter is designed to measure the amount of light reaching the subject. It is recognisable by its white dome, which diffuses and averages the light striking its sensor. Typically it is used to measure the key, fill and backlight levels falling on the talent. Once you have input your ISO and shutter interval, you hold the incident meter next to the actor’s face (or ask them to step aside!) and point it at each source in turn, shading the dome from the other sources with your free hand. You can then decide if you’re happy with the contrast ratios between the sources, and set your lens to the T-stop indicated by the key-light reading, to ensure correct exposure of the subject’s face.
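
Turning a pair of readings into a contrast ratio is simple arithmetic, since illuminance is proportional to the square of the stop the meter reports (assuming both readings were taken at the same ISO and shutter). A quick sketch with illustrative stops:

```python
# Key-to-fill ratio from two incident readings expressed as stops.
def lighting_ratio(key_stop, fill_stop):
    return (key_stop / fill_stop) ** 2

print(lighting_ratio(4.0, 2.0))   # 4.0 -> a 4:1 key-to-fill ratio (two stops)
print(lighting_ratio(2.8, 2.0))   # 1.96 -> roughly 2:1 (one stop)
```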

 

3. Spot meter (a.k.a. reflectance meter)

Now we move along the light path and consider light after it has been reflected off the subject. This is what a spot meter measures. It has a viewfinder with which you target the area you want to read, and it is capable of metering things that would be impractical or impossible to measure with an incident meter. If you had a bright hillside in the background of your shot, you would need to drive over to that hill and climb it to measure the incident light; with a spot meter you would simply stand at the camera position and point it in the right direction. A spot meter can also be used to measure light sources themselves: the sky, a practical lamp, a flame and so on.

But there are disadvantages too. A spot meter indicates the stop that would render whatever it is reading as middle grey, so the reading needs interpreting. If you spot meter a Caucasian face, you will get a stop that results in underexposure, because a Caucasian face reflects quite a lot more light than middle grey. Conversely, if you spot meter an African face, you will get a stop that results in overexposure, because an African face reflects relatively little light. For this reason a spot meter is most commonly used to check whether areas of the frame other than the subject – a patch of sunlight in the background, for example – will blow out.
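
The sketch below shows the arithmetic behind this, assuming a meter calibrated to render whatever it reads as middle grey (about 18% reflectance); the reflectance figures are illustrative rather than measured:

```python
import math

# How far a metered surface sits from middle grey, in stops.
def stops_from_middle_grey(reflectance, grey=0.18):
    return math.log2(reflectance / grey)

print(round(stops_from_middle_grey(0.36), 1))   # +1.0: following the meter would
                                                # put a bright face a stop under
print(round(stops_from_middle_grey(0.09), 1))   # -1.0: following the meter would
                                                # put a dark face a stop over
```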

Your smartphone can be turned into a spot meter with a suitable app, such as Cine Meter II, though you will need to configure it using a traditional meter and a grey card. With the addition of a Luxiball attachment for your phone’s camera, it can also become an incident meter.

The remaining three methods of judging exposure which I will cover all use the camera’s sensor itself to measure the light. They therefore take into account any filters you’re using, as well as transmission loss within the lens – which can be an issue when shooting on stills glass, where the marked f-stops, unlike T-stops, don’t factor in that loss.
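
The relationship between the two is simple enough to sketch: the T-number is the marked f-number divided by the square root of the lens’s transmittance. The 85% figure below is an assumption for illustration, not a measurement of any particular lens:

```python
import math

# Effective T-stop of a lens given its marked f-stop and light transmittance.
def t_stop(marked_f_stop, transmittance):
    return marked_f_stop / math.sqrt(transmittance)

print(round(t_stop(2.8, 0.85), 2))   # ~3.04: an "f/2.8" stills lens behaving
                                     # more like T3
```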

 

4. Monitors and viewfinders

The letter. Photo: Amy Nicholson

In the world of digital image capture, it can be argued that the simplest and best way to judge exposure is to just observe the picture on the monitor. The problem is, not all screens are equal. Cheap monitors can misrepresent the image in all kinds of ways, and even a high-end OLED can deceive you, displaying shadows blacker than any cinema or home entertainment system will ever match. There are only really two scenarios in which you can reliably judge exposure from the image itself: if you’ve owned a camera for a while and you’ve become very familiar with how the images in the viewfinder relate to the finished product; or if the monitor has been properly calibrated by a DIT (Digital Imaging Technician) and the screen is shielded from light.

Most cameras and monitors have built-in tools which graphically represent the luminance of the image in a much more accurate way, and we’ll look at those next. Beware that if you’re monitoring a log or RAW image in Rec.709, these tools will usually take their data from the Rec.709 image.

 

5. Waveforms and histograms

These are graphs which show the prevalence of different tones within the frame. Histograms are the simplest and most common. In a histogram, the horizontal axis represents luminance and the vertical axis shows the number of pixels which have that luminance. It makes it easy to see at a glance whether you’re capturing the greatest possible amount of detail, making best use of the dynamic range. A “properly” exposed image, with a full range of tones, should show an even distribution across the width of the graph, with nothing hitting the two sides, which would indicate clipped shadows and highlights. A night exterior would have a histogram crowded towards the left (darker) side, whereas a bright, low contrast scene would be crowded on the right.

A waveform plots luminance on the vertical axis, with the horizontal axis matching the horizontal position of those luminance values within the frame. The density of the plotting reveals the prevalence of the values. A waveform that was dense in the bottom left, for example, would indicate a lot of dark tones on the lefthand side of frame. Since the vertical (luminance) axis represents IRE (Institute of Radio Engineers) values, waveforms are ideal when you need to expose to a given IRE, for example when calibrating a system by shooting a grey card. Another common example would be a visual effects supervisor requesting that a green screen be lit to 50 IRE.
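
Both tools boil down to counting luminance values. The sketch below builds a frame-wide histogram and a crude column-by-column waveform, with random 8-bit data standing in for a real luma channel:

```python
import numpy as np

# Fake 8-bit luma channel standing in for a real frame.
luma = np.random.randint(0, 256, size=(1080, 1920), dtype=np.uint8)

# Histogram: how many pixels sit at each luminance level, frame-wide.
# Pile-ups in the first or last bin suggest clipped shadows or highlights.
hist, _ = np.histogram(luma, bins=256, range=(0, 256))
print("pixels at black:", hist[0], "pixels at white:", hist[255])

# Waveform: the same counting, but done column by column so the horizontal
# axis still corresponds to position across the frame.
waveform = np.stack(
    [np.histogram(luma[:, x], bins=64, range=(0, 256))[0]
     for x in range(luma.shape[1])],
    axis=1,
)
print(waveform.shape)   # (64 luminance bins, 1920 columns)
```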

 

6. Zebras and false colours

Almost all cameras have zebras, a setting which superimposes diagonal stripes on parts of the image which are over a certain IRE, or within a certain range of IREs. By digging into the menus you can find and adjust what those IRE levels are. Typically zebras are used to flag up highlights which are clipping (theoretically 100 IRE), or close to clipping.

Exposing an image correctly is not just about controlling highlight clipping, however; it’s about balancing the whole range of tones – which brings us to false colours. A false colour overlay looks a little like a weather forecaster’s temperature map, with a code of colours assigned to various luminance values. Clipped highlights are typically red, while bright areas still retaining detail (known as the “knee” or “shoulder”) are yellow. Middle grey is often represented by green, while pink indicates the ideal level for Caucasian skin tones (usually around 55 IRE). At the bottom end of the scale, blue represents the “toe” – the darkest area that still has detail – while purple is underexposed. The advantage of zebras and false colours over waveforms and histograms is that the former two show you exactly where the problem areas are in the frame.
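
A false-colour pass (and a zebra overlay, which is just a single band of it) is essentially a lookup from luminance bands to flag colours. The sketch below uses percentages of full scale rather than true IRE, and its band boundaries are illustrative; every camera manufacturer chooses its own:

```python
# Simplified false-colour lookup: map a luminance level to a flag colour.
BANDS = [
    (0,   5,   "purple: underexposed"),
    (5,   15,  "blue: toe - darkest detail"),
    (38,  48,  "green: around middle grey"),
    (50,  60,  "pink: typical skin-tone level"),
    (90,  99,  "yellow: knee - brightest detail"),
    (99,  101, "red: clipped"),
]

def flag(level):
    for low, high, label in BANDS:
        if low <= level < high:
            return label
    return "unflagged"

print(flag(97))   # "yellow: knee - brightest detail"
print(flag(45))   # "green: around middle grey"
```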

I hope this article has given you a useful overview of the tools available for judging exposure. Some DPs have a single tool they rely on at all times, but many will use all of these methods at one time or another to produce an image that balances maximising detail with creative intent. I’ll leave you with a quote from the late, great Douglas Slocombe BSC, who ultimately used none of the above six methods!

I used to use a light meter – I used one for years. Through the years I found that, as schedules got tighter and tighter, I had less and less time to light a set. I found myself not checking the meter until I had finished the set and decided on the proper stop. It would usually say exactly what I thought it should. If it didn’t, I wouldn’t believe it, or I would hold it in such a way as to make it say my stop. After a time I decided this was ridiculous and stopped using it entirely. The “Raiders” pictures were all shot without a meter. I just got used to using my eyes.
