It is night. We Steadicam into a moonlit bedroom, drifting across a window – where a raven is visible on the outside ledge, tapping at the glass with its beak – and land on a sleeping couple. The woman, Annabel, wakes up and goes to the window, causing the bird to flee. Crossing over to her far shoulder, we rest on Annabel’s reflection for a moment, before racking focus to another woman outside, maybe 200ft away, running towards a cliff. All in one shot.
Such was the action required in a scene from Annabel Lee, the most ambitious short I’ve ever been involved with. Based on Edgar Allan Poe’s poem, the film was the brainchild of actor Angel Parker, who plays the titular character. It was directed by Amy Coop, who had already decided to shoot on an Alexa Mini with Cooke Anamorphics before I was even hired.
Working with animals has its own difficulties, but for me as director of photography the challenges of this particular shot were:
Making the bedroom appear moonlit by the single window, without any lamps being visible at any point in the Steadicam move.
Lighting the view outside.
Ensuring the live raven read on camera even though the shot was quite wide.
Making Annabel bright enough that her reflection would read, without washing out the rest of the scene.
Blocking the camera in concert with Annabel’s move so that its reflection would not be seen.
I left that last one in the capable hands of Steadicam op Rupert Peddle, along with Angel and Amy. What they ended up doing was timing Angel’s move so that she would block the window from camera at the moment that the camera’s reflection would have appeared.
Meanwhile, I put my head together with gaffer Bertil Mulvad to tackle the other four challenges. We arrived at a set-up using only three lights:
A LiteMat 1 above the window (indoors) which served to light Annabel and her reflection, as well as reaching to the bed.
Another LED source outside the window to one side, lighting the raven.
A nine-light Maxibrute on a cherry-picker, side-lighting the woman outside and the cliffs. This was gelled with CTB to match the daylight LEDs.
Unfortunately the outside LED panel backlit the window glass, which was old and kept fogging up, obscuring the raven. With hindsight that panel might have been better on the other side of the window (left rather than right, but still outside), even though it would have created some spill problems inside. (To be honest, this would have made the lighting direction more consistent with the Maxibrute “moonlight” as well. It’s so easy to see this stuff after the fact!)
Everything else worked very well, but editor Jim Page did have to cut in a close-up of the raven, without which you’d never have known it was there.
So far in this series we have seen how we can adjust exposure using aperture, which affects depth of field; ND filters, which can help us retain the depth of field we want; and shutter angle, which affects motion blur and the flickering of certain light sources. In this final part we’ll look at ISO, perhaps the most misunderstood element of exposure, if indeed we can technically classify it as part of exposure at all!
What is I.S.O.?
The acronym stands for International Organization for Standardization, the body which in 1974 combined the old ASA (American Standards Association) units of film speed with the German DIN standard. That’s why you’ll often hear the terms ISO and ASA used interchangeably.
Two different cameras filming the same scene with the same filters, aperture and shutter settings will not necessarily produce an image of equal brightness, because the ways that their electronics convert light into video signals are different. That is why we need ISO, which defines the relationship between the amount of light reaching the sensor (or film) and the brightness of the resulting image.
For example, a common ISO to shoot at today is 800. One way of defining ISO 800 is that it’s the setting required to correctly expose a key-light of 12 foot-candles with a lens set to T2.8 and a 180° shutter at 24fps (1/48th of a second).
If we double the ISO we double the effective sensitivity of the camera, or halve the amount of light it requires. So at ISO 1600 we would only need 6 foot-candles of light (all the other settings being the same), and at ISO 3200 we would need just 3 foot-candles. Conversely, at ISO 400 we would need about 25 foot-candles, or 50 at ISO 200.
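This inverse relationship is easy to sketch in code. The following Python snippet is purely an illustration of the arithmetic above, using the 12-foot-candle / ISO 800 figures as its baseline (the function name is my own):

```python
def required_footcandles(iso, baseline_iso=800, baseline_fc=12):
    """Key-light level (in foot-candles) needed at a given ISO, with all
    other settings (T2.8, 180-degree shutter, 24fps) held constant.
    Doubling the ISO halves the light required."""
    return baseline_fc * baseline_iso / iso

print(required_footcandles(1600))  # 6.0
print(required_footcandles(3200))  # 3.0
print(required_footcandles(400))   # 24.0 (the "about 25" mentioned above)
```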
A Flawed Analogy
Note that I said “effective” sensitivity. This is an important point. In the photochemical world, ISO indeed denotes the light sensitivity of the film stock. It is tempting to see digital ISO as representing the sensitivity of the sensor, and changing the ISO as analogous to loading a different film stock. But in reality the sensitivity of a digital sensor is fixed, and the ISO only determines the amount of gain applied to the sensor data before it is processed (which may happen in camera if you’re shooting linear or log, or in post if you’re shooting RAW).
So a better analogy is that altering the ISO is like altering how long the lab develops the exposed film negative for. This alters the film’s exposure index (EI), hence some digital cameras using the term EI in their menus instead of ISO or ASA.
We can take this analogy further. Film manufacturers specify a recommended development time, an arbitrary period designed to produce the optimal image. If you increase (push) or decrease (pull) the development time you will get a lighter or darker image respectively, but the quality of the image will be reduced in various ways. Similarly, digital camera manufacturers specify a native ISO, which is essentially the recommended amount of gain applied to the sensor data to produce what the manufacturer feels is the best image, and if you move away from that native ISO you’ll get a subjectively “lower quality” image.
The most obvious side effect of increasing the ISO is more noticeable noise in the image. It’s exactly the same as turning up the volume on an amplifier; you hear more hiss because the noise floor is being boosted along with the signal itself.
I remember the days of Mini-DV cameras, which instead of ISO had gain; my Canon XL1 had gain settings of -3dB, +6dB and +12dB. It was the exact same thing, just with a different name: every 6dB of gain is one stop, i.e. a doubling of ISO. What the XL1 called 0dB of gain was what we call the native ISO today.
I.S.O. and Dynamic Range
At this point we need to bring in the concept of dynamic range. Let’s take the Arri Alexa as an example. This camera has a dynamic range of 14 stops. At its native ISO of 800, those 14 stops of dynamic range are equally distributed above and below “correct” exposure (known as middle grey), so you can overexpose by up to seven stops, and underexpose by up to seven stops, without losing detail.
If you change the Alexa’s ISO, those limits of under- and overexposure still apply, but they’re shifted around middle grey. For example, at ISO 400 you have eight stops of detail below middle grey, but only six above it. This means that, assuming you adjust your iris, shutter or filters to compensate for the change in ISO, you can trade off highlight detail for shadow detail, or vice versa.
Imagine underexposing a shot by one stop and bringing it back up in post. You increase the highlight detail, because you’re letting half the light through to the sensor, reducing the risk of clipped whites, but you also increase the noise when you bring it up in post. This is basically what you’re doing when you increase your ISO, except that if you’re recording in linear or log then the restoration of brightness and increase in gain happen within the camera, rather than in post with RAW.
We can summarise all this as follows:
Doubling the ISO…
increases overall brightness by one stop, and
increases picture noise.
Then adjusting the exposure to compensate (e.g. closing the iris one stop)…
restores overall brightness to its original value,
gives you one more stop of detail in the highlights, and
gives you one less stop of detail in the shadows.
Alternatively, halving the ISO…
decreases overall brightness by one stop, and
decreases picture noise.
Then adjusting the exposure to compensate (e.g. opening the iris one stop)…
restores overall brightness to its original value,
gives you one less stop of detail in the highlights, and
gives you one more stop of detail in the shadows.
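Under the simplified model above (14 stops of dynamic range, split evenly at the native ISO), the shifting latitude can be sketched in a few lines of Python. The function name and the exact 7/7 split are illustrative assumptions for this article’s example, not manufacturer specifications:

```python
import math

def latitude_split(iso, native_iso=800, total_stops=14):
    """Stops of detail above and below middle grey, assuming an even
    split at the native ISO (a simplification, as noted above).
    Each doubling of ISO moves one stop from shadows to highlights."""
    shift = math.log2(iso / native_iso)
    above = total_stops / 2 + shift
    below = total_stops / 2 - shift
    return above, below

print(latitude_split(800))   # (7.0, 7.0)
print(latitude_split(400))   # (6.0, 8.0) - more shadow detail
print(latitude_split(1600))  # (8.0, 6.0) - more highlight detail
```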
This brings me to the end of my exposure series. We’ve seen that choosing the “correct” exposure is a balancing act, taking into account not just the intended brightness of the image but also the desired depth of field, bokeh, lens flares, motion blur, flicker prevention, noise and dynamic range. I hope this series has helped you to make the best creative decisions on your next production.
In the first two parts of this series we saw how exposure can be controlled using the lens aperture – with side effects including changes to the depth of field – and neutral density (ND) filters. Today we will look at another means of exposure control: shutter angle.
The Physical Shutters of Film Cameras
As with aperture, an understanding of what’s going on under the hood is useful, and that begins with celluloid. Let’s imagine we’re shooting on film at 24fps, the most common frame rate. The film can’t move continuously through the gate (the opening behind the lens where the focused light strikes the film) or we would end up recording just a long vertical streak of light. The film must remain stationary long enough to expose an image, before being moved on by a distance of four perforations (the standard height of a 35mm film frame) so that the next frame can be exposed. Crucially, light must not hit the film while it is being moved, or vertical streaking will occur.
This is where the shutter comes in. The shutter is a portion of a disc that spins in front of the gate. The standard shutter angle is 180°, meaning that the shutter is a semi-circle. We always describe shutter angles by the portion of the disc which is missing, so a 270° shutter (admitting 1.5x the light of a 180° shutter) is a quarter of a circle, and a 90° shutter (admitting half the light of a 180° shutter) is three-quarters.
The shutter spins continuously at the same speed as the frame rate – so at 24fps the shutter makes 24 revolutions per second. So with a 180° shutter, each 24th of a second is divided into two halves, i.e. 48ths of a second:
During one 48th of a second, the missing part of the shutter is over the gate, allowing the light to pass through and the stationary film to be exposed.
During the other 48th of a second, the shutter blocks the gate to prevent light hitting the film as it is advanced. The shutter has a mirrored surface so that light from the lens is reflected up the viewfinder, allowing the camera operator to see what they’re shooting.
Intervals vs. Angles
If you come from a stills or ENG background, you may be more used to talking about shutter intervals rather than angles. The two are related like this: the shutter interval is 1/N of a second, where N = frame rate × (360 ÷ shutter angle).
For example, 24 x (360 ÷ 180) = 48 so a film running at 24fps, shot with a 180° shutter, shows us only a 48th of a second’s worth of light on each frame. This has been the standard frame rate and shutter angle in cinema since the introduction of sound in the late 1920s. The amount of motion blur captured in a 48th of a second is the amount that we as an audience have been trained to expect from motion pictures all our lives.
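The conversion both ways can be written as a couple of one-liners; this is just the relationship above expressed in Python, with hypothetical helper names:

```python
def interval_denominator(fps, angle):
    """The shutter interval is 1/N of a second, where N = fps x (360 / angle)."""
    return fps * 360.0 / angle

def angle_from_interval(fps, denominator):
    """The shutter angle equivalent to a 1/denominator-second interval."""
    return 360.0 * fps / denominator

print(interval_denominator(24, 180))  # 48.0 -> a 1/48th-second interval
print(angle_from_interval(24, 50))    # 172.8
```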
A greater (larger shutter angle, longer shutter interval) or lesser (smaller shutter angle, shorter shutter interval) amount of motion blur looks unusual to us and thus can be used to creative effect. Saving Private Ryan features one of the best-known examples of a small shutter angle in its D-day landing sequence, where the lack of motion blur creates a crisp, hyper-real effect that draws you into the horror of the battle. The effect has been endlessly copied since then, to the point that it now feels almost mandatory to shoot action scenes with a small shutter angle.
Large shutter angles are less common, but the extra motion blur can imply a drugged, fatigued or dream-like state.
In today’s digital environment, only the Arri Alexa Studio has a physical shutter. In other cameras, the sensor’s photo-sites are allowed to charge with light over a certain period of time – still referred to as the shutter interval, even though no actual shutter is involved. The same principles apply and the same 180° angle of the virtual shutter is standard. The camera will allow you to select a shutter angle/interval from a number of options, and on some models like the Canon C300 there is a menu setting to switch between displaying the shutter setting as an angle or an interval.
When to Change the Shutter Angle
Sometimes it is necessary to change the shutter angle to avoid flickering. Some luminous devices, such as TV screens and monitors, or HMI lighting not set to flicker-free mode, will appear to strobe, pulse or roll on camera. This is due to them turning on and off multiple times per second, in sync with the alternating current of the mains power supply, but not necessarily in sync with the shutter. For example, if you shoot a domestic fluorescent lamp in the UK, where the mains AC cycles at 50Hz, your 1/48th (180° at 24fps) shutter will be out of sync and the lamp will appear to throb or flicker on camera. The solution is to set the shutter to 172.8° (1/50th), which is indeed what most DPs do when shooting features in the UK. Intervals matching other multiples of the AC frequency, such as 1/100th, will also work.
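To find all the flicker-safe angles at a given frame rate and mains frequency, you can look for exposure times that are whole multiples of the lamp’s flicker period (lamps on AC power pulse at twice the mains frequency, so 100 times a second on a 50Hz supply). This helper is my own illustration, not a standard tool:

```python
def flicker_safe_angles(fps, mains_hz, max_angle=360):
    """Shutter angles whose exposure time is a whole multiple of the
    lamp flicker period (lamps pulse at twice the mains frequency)."""
    flicker_hz = 2 * mains_hz
    angles = []
    k = 1
    while 360.0 * fps * k / flicker_hz <= max_angle:
        angles.append(360.0 * fps * k / flicker_hz)
        k += 1
    return angles

print(flicker_safe_angles(24, 50))  # [86.4, 172.8, 259.2, 345.6]
```

Note that 172.8° and 345.6° from the article both appear in the list.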
You may notice that I have barely mentioned exposure so far in this article. This is because, unlike stills photographers, DPs rarely use the shutter as a means of adjusting exposure. An exception is that we may increase the shutter angle when the daylight is fading, to grab an extra shot. By doubling the shutter angle from 172.8° to 345.6° we double the light admitted, i.e. we gain one stop. As long as there isn’t any fast movement, the extra motion blur is likely to go unnoticed by the audience.
One of the hallmarks of amateur cinematography is that sunny scenes have no motion blur, due to the operator (or the camera’s auto mode) decreasing the shutter interval to avoid over-exposure. It is preferable to use ND filters to cut light on bright days, as covered in part two of this series.
For the best results, the 180° (or thereabouts) shutter angle should be retained when shooting slow motion as well. If your camera displays intervals rather than angles, ideally your interval denominator should be double the frame rate. So if you want to shoot at 50fps, set the shutter interval to 1/100th. For 100fps, set the shutter to 1/200th, and so on.
If you do need to change the shutter angle for creative or technical reasons, you will usually want to compensate with the aperture. If you halve the time the shutter is open for, you must double the area of the aperture to maintain the same exposure, and vice versa. For example, if your iris was set to T4 and you change the shutter from 180° to 90° you will need to stop up to T2.8. (Refer back to my article on aperture if you need to refresh your memory about T-stops.)
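As a sanity check of that arithmetic, here is the compensation expressed in Python. This is an illustrative sketch (the function name is my own); each halving of the shutter’s open time means opening up one stop, which divides the T-number by √2:

```python
import math

def compensated_t_stop(t_stop, old_angle, new_angle):
    """T-stop needed to keep exposure constant after a shutter-angle
    change. One stop gained or lost per doubling/halving of the angle;
    each stop divides or multiplies the T-number by sqrt(2)."""
    stops_lost = math.log2(old_angle / new_angle)  # positive = less light
    return t_stop / math.sqrt(2) ** stops_lost

print(round(compensated_t_stop(4, 180, 90), 2))  # 2.83, i.e. T2.8
```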
In the final part of this series we’ll get to grips with ISO.
Learn more about exposure in my online course, Cinematic Lighting. Until this Thursday (19/11/20) you can get it for the special price of £15.99 by using the voucher code INSTA90.
In the first part of this series, I explained the concepts of f-stops and T-stops, and looked at how aperture can be used to control exposure. We saw that changing the aperture causes side effects, most noticeably altering the depth of field.
How can we set the correct exposure without compromising our depth of field? Well, as we’ll see later in this series, we can adjust the shutter angle and/or ISO, but both of those have their own side effects. More commonly a DP will use neutral density (ND) filters to control the amount of light reaching the lens. These filters get their name from the fact that they block all wavelengths of light equally, so they darken the image without affecting the colour.
When to use an N.D. Filter
Let’s look at an example. Imagine that I want to shoot at T4; this aperture gives a nice depth of field, on the shallow side but not excessively so. My subject is very close to a bright window and my incident light meter is giving me a reading of f/11. (Although I’m aiming for a T-stop rather than an f-stop, I can still use the f-number my meter gives me; in fact if my lens were marked in f-stops then my exposure would be slightly off because the meter does not know the transmission efficiency of my lens.) Let’s remind ourselves of the f-stop/T-stop series before we go any further:
1 1.4 2 2.8 4 5.6 8 11 16 22 32
By looking at this series, which can be found printed on any lens barrel or permanently displayed on a light meter’s screen, I can see that f/11 (or T11) is three stops down from f/4 (or T4) – because 11 is three numbers to the right of 4 in the series. To achieve correct exposure at T4 I’ll need to cut three stops of light. I can often be seen on set counting the stops like this on my light meter or on my fingers. It is of course possible to work it out mathematically or with an app, but that’s not usually necessary. You quickly memorise the series of stops with practice.
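The finger-counting can also be done with logarithms, since each stop is a doubling of light and the f-number scales with the square root of the light ratio. A quick sketch, for illustration only:

```python
import math

def stops_between(meter_f, target_t):
    """Approximate number of stops between a meter reading and the stop
    you want to shoot at (positive = you need to cut light). Results are
    near-integers because marked stops are rounded (11 is really ~11.3)."""
    return 2 * math.log2(meter_f / target_t)

print(round(stops_between(11, 4)))   # 3 stops to cut
print(round(stops_between(8, 5.6)))  # 1
```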
What Strength of Filter to Choose
Some ND filters are marked in stops, so I could simply select a 3-stop ND and slide it into my matte box or screw it onto my lens. Others – the built-in ND filters on the Sony FS7, for example – are defined by the fraction of light they let through. So the FS7’s 1/4 ND cuts two stops; the first stop halves the light – as we saw in part one of this series – and the second stop halves it again, leaving us a quarter of the original amount. The 1/16 setting cuts four stops.
However, most commonly, ND filters are labelled in optical density. A popular range of ND filters amongst professional cinematographers are those made by Tiffen, and a typical set might be labelled as follows:
.3 .6 .9 1.2
That’s the optical density, a property defined as the base-10 logarithm of the ratio of the quantity of light entering the filter to the quantity of light exiting it on the other side. A .3 ND reduces the light by half because 10 raised to the power of -0.3 is about 0.5, and reducing light by half, as we’ve previously established, means dropping one stop.
If that maths is a bit much for you, don’t worry. All you really need to do is multiply the number of stops you want to cut by 0.3 to find the filter you need. So, going back to my example with the bright window, to get from T11 to T4, i.e. to cut three stops, I’ll pick the .9 ND.
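Both conversions follow from density = stops × log10(2) ≈ stops × 0.3. Sketched in Python (the helper names are hypothetical):

```python
import math

def nd_for_stops(stops):
    """Optical density needed to cut a given number of stops."""
    return round(stops * math.log10(2), 1)  # log10(2) is about 0.301

def stops_cut(density):
    """Stops of light a filter of the given optical density cuts."""
    return density / math.log10(2)

print(nd_for_stops(3))        # 0.9 -> the .9 ND from the example above
print(round(stops_cut(1.8)))  # 6
```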
It’s far from intuitive at first, but once you get your head around it, and memorise the f-stops, it’s not too difficult. Trust me!
Here are a couple more examples:
Light meter reads f/8 and you want to shoot at T5.6. That’s a one stop difference. (5.6 and 8 are right next to each other in the stop series, as you’ll see if you scroll back to the top.) 1 x 0.3 = 0.3 so you should use the .3 ND.
Light meter reads f/22 and you want to shoot at T2.8. That’s a six stop difference (scroll back up and count them), and 6 x 0.3 = 1.8, so you need a 1.8 ND filter. If you don’t have one, you need to stack two NDs in your matte box that add up to 1.8, e.g. a 1.2 and a .6.
Variations on a Theme
Variable ND filters are also available. These consist of two polarising filters which can be rotated against each other to progressively lighten or darken the image. They’re great for shooting guerrilla-style with a small crew. You can set your iris where you want it for depth of field, then expose the image by eye simply by turning the filter. On the downside, they’re hard to use with a light meter because there is often little correspondence between the markings on the filter and stops. They can also have a subtle adverse effect on skin tones, draining a person’s apparent vitality, as some of the light which reflects off human skin is polarised.
Another issue to look out for with ND filters is infra-red (IR). Some filters cut only the visible wavelengths of light, allowing IR to pass through. Some digital sensors will interpret this IR as visible red, resulting in an image with a red colour cast which can be hard to grade out because different materials will be affected to different degrees. Special IR ND filters are available to eliminate this problem.
These caveats aside, ND filters are the best way to adjust exposure (downwards at least) without affecting the image in any other way.
In the next part of this series I’ll look at shutter angles, what they mean, how they affect exposure and what the side effects are.
Learn how to use ND filters practically with my Cinematic Lighting online course. Enter voucher code INSTA90 for an amazing 90% off.
This is the first in a series of posts where I will look in detail at the four means of controlling the brightness of a digital video image: aperture, neutral density (ND) filters, shutter angle and ISO. It is not uncommon for newer cinematographers to have only a partial understanding of these topics, enough to get by in most situations; that was certainly the case with me for many years. The aim of this series is to give you an understanding of the underlying mechanics which will enable you to make more informed creative decisions.
You can change any one of the four factors, or any combination of them, to reach your desired level of exposure. However, most of them will also affect the image in other ways; for example, aperture affects depth of field. One of the key responsibilities of the director of photography is to use each of the four factors not just to create the ideal exposure, but to make appropriate use of these “side effects” as well.
F-stops and T-stops
The most common way of altering exposure is to adjust the aperture, a.k.a. the iris, sometimes described as changing “the stop”. Just like the pupil in our eyes, the aperture of a photographic lens is a (roughly) circular opening which can be expanded or contracted to permit more or less light through to the sensor.
You will have seen a series of numbers like this printed on the sides of lenses:
1 1.4 2 2.8 4 5.6 8 11 16 22 32
These are ratios – ratios of the lens’ focal length to its iris diameter. So a 50mm lens with a 25mm diameter iris is at f/2. Other lengths of lens would have different iris diameters at f/2 (e.g. 10mm diameter for a 20mm lens) but they would all produce an image of the same brightness. That’s why we use f-stops to talk about iris rather than diameters.
But why not label a lens 1, 2, 3, 4…? Why 1, 1.4, 2, 2.8…? These magic numbers are f-stops. A lens set to f/1.4 will let in twice as much light as (or “one stop more than”) a lens set to f/2, which in turn will let in twice as much as one set to f/2.8, and so on. Conversely, a lens set to f/2.8 will let in half as much light as (or “one stop less than”) a lens set to f/2, and so on. (Note that a number between any of these f-stops, e.g. f/1.8, is properly called an f-number, but not an f-stop.) These doublings or halvings – technically known as a base-2 logarithmic scale – are a fundamental concept in exposure, and mimic our eyes’ response to light.
If you think back to high-school maths and the πr² formula for the area of a circle, the reason for the seemingly random series of numbers starts to become clear. Letting in twice as much light requires twice as much iris area for those light rays to fall on, and since area scales with the square of the diameter, each stop multiplies the diameter by √2 ≈ 1.4. Remember that the f-number is the ratio of the focal length to the iris diameter, so each stop divides the f-number by √2, which is why f-stops aren’t just plain old round numbers.
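You can generate the familiar series yourself by repeatedly multiplying by √2. Note that the numbers engraved on lenses are rounded conventions: 5.6 is really about 5.66, and 11 about 11.3. A quick illustration:

```python
import math

# Each full stop multiplies the iris diameter by sqrt(2), which doubles
# its area (area is proportional to diameter squared), doubling the light.
f_stops = [round(math.sqrt(2) ** n, 1) for n in range(11)]
print(f_stops)  # [1.0, 1.4, 2.0, 2.8, 4.0, 5.7, 8.0, 11.3, 16.0, 22.6, 32.0]
```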
If you’re shooting with a cine lens, rather than a stills lens, you’ll see the same series of numbers on the barrel, but here they are T-stops rather than f-stops. T-stops are f-stops adjusted to compensate for the light transmission efficiency. Two different lenses set to, say, f/2 will not necessarily produce equally bright images, because some percentage of light travelling through the elements will always be lost, and that percentage will vary depending on the quality of the glass and the number of elements. A lens with 100% light transmission would have the same f-number and T-number, but in practice the T-number will always be a little bigger than the f-number. For example, Cooke’s 15-40mm zoom is rated at a maximum aperture of T2 or f/1.84.
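The relationship between the two numbers can be sketched as follows. The 85% transmission figure here is my own back-of-envelope estimate inferred from the Cooke numbers quoted above, not a published specification:

```python
import math

def t_number(f_number, transmission):
    """T-number for a lens with the given f-number and overall light
    transmission (0 to 1). With 100% transmission, T would equal f."""
    return f_number / math.sqrt(transmission)

# Roughly reproduces the Cooke 15-40mm example: f/1.84 at ~85% transmission
print(round(t_number(1.84, 0.85), 2))  # 2.0
```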
Fast and Slow Lenses
When buying or renting a lens, one of the first things you will want to know is its maximum aperture. Lenses are often described as being fast (larger maximum aperture, denoted by a smaller f- or T-number like T1.4) or slow (smaller maximum aperture, denoted by a bigger f- or T-number like T4). These terms come from the fact that the shutter speed would need to be faster or slower to capture the same amount of light… but more on that later in the series.
Faster lenses are generally more expensive, but that expense may well be outweighed by the savings made on lighting equipment. Let’s take a simple example, and imagine an interview lit by a 4-bank Kino Flo and exposed at T2.8. If our lens can open one stop wider (known as stopping up) to T2 then we double the amount of light reaching the sensor. We can therefore halve the level of light – by turning off two of the Kino Flo’s tubes or by renting a cheaper 2-bank unit in the first place. If we can stop up further, to T1.4, then we only need one Kino tube to achieve the same exposure.
One of the first things that budding cinematographers learn is that wider apertures make for a smaller depth of field, i.e. the range of distances within which a subject will be in focus is smaller. In simple terms, the background of the image is blurrier when the depth of field is shallower.
It is often tempting to go for the shallowest possible depth of field, because it feels more cinematic and helps conceal shortcomings in the production design, but that is not the right look for every story. A DP will often choose a stop to shoot at based on the depth of field they desire. That choice of stop may affect the entire lighting budget; if you want to shoot at a very slow T14 like Douglas Slocombe did for the Indiana Jones trilogy, you’re going to need several trucks full of lights!
There is another side effect of adjusting the aperture which is less obvious. Lenses are manufactured to perform best in the middle of their iris range. If you open a lens up to its maximum aperture or close it down to its minimum, the image will soften a little. Therefore another advantage of faster lenses is the ability to get further away from their maximum aperture (and poorest image quality) with the same amount of light.
Finally it is worth noting that the appearance of bokeh (out of focus areas) and lens flares also changes with aperture. The Cooke S4 range, for example, renders out-of-focus highlights as circles when wide open, but as octagons when stopped down. With all lenses, the star pattern seen around bright light sources will be stronger when the aperture is smaller. You should shoot tests – like these I conducted in 2017 – if these image artefacts are a critical part of your film’s look.
Next time we’ll look at how we can use ND filters to control exposure without compromising our choice of stop.
Learn how to use exposure practically with my Cinematic Lighting online course. Enter voucher code INSTA90 for an amazing 90% off.
Where do you start, as a director of photography lighting a set? What should be the first brushstroke when you’re painting with light?
I believe the answer is backlight, and I think many DPs would agree with me.
Let’s take the example of a night exterior in a historical fantasy piece, as featured in my online course, Cinematic Lighting. The main source of light in such a scene would be the moon. Where am I going to put it? At the back.
The before image is lit by an LED panel serving purely as a work-light while we rehearsed. It’s not directly above the camera, but off to the right, so the lighting isn’t completely flat, but there is very little depth in the image. Beyond the gate is a boring black void.
The after image completely transforms the viewer’s understanding of the three-dimensional space. We get the sense of a world beyond the gate, an intriguing world lighter than the foreground, with a glimpse of trees and space. Composing the brazier in the foreground has added a further plane, again increasing the three-dimensional impression.
Here is the lighting diagram for the scene. (Loads more diagrams like this can be seen on my Instagram feed.)
The “moon” is a 2.5KW HMI fresnel way back amongst the trees, hidden from camera by the wall on the right. This throws the gate and the characters into silhouette, creating a rim of light around their camera-right sides.
To shed a little light on Ivan’s face as he looks camera-left, I hid a 4×4′ Kino Flo behind the lefthand wall, again behind the actors.
The LED from the rehearsal, a Neewer 480, hasn’t moved, but now it has an orange gel and is dimmed very low to subtly enhance the firelight. Note how the contrasting colours in the frame add to the depth as well.
So I’ll always go into a scene looking at where to put a big backlight, and then seeing if I need any additional sources. Sometimes I don’t, like in this scene from the Daylight Interior module of the course.
Backlighting interior scenes is different from night exteriors: you cannot simply put the light where you want it. You must work with the position of the windows. When I’m prepping interiors, I always work with the director to try to block the scene so that we can face towards the window as much as possible, making it our backlight. If a set is being built, I’ll talk to the production designer at the design stage to get windows put in to backlight the main camera positions whenever possible.
In the above example, lit by just the 2.5K HMI outside the window, I actually blacked out windows behind camera so that they would not fill in the nice shadows created by the backlight.
Daylight exteriors are different again. I never use artificial lights outdoors in daytime any more. I prefer to work with the natural light and employ reflectors, diffusion or negative fill to mould it where necessary.
So it’s very important to block the scene with the camera facing the sun whenever possible. Predicting the sun path may take a little work, but it will always be worth it.
Here I’ve shot south, towards the low November sun, and didn’t need to modify the light at all.
Shooting in the opposite direction would have looked flat and uninteresting, not to mention causing potential problems with the cast squinting in the sunlight, and boom and camera shadows being cast on them.
You can learn much more about the principles and practice of cinematic lighting by taking my online course on Udemy. Currently you can get an amazing 90% off using the voucher code INSTA90 until November 19th.
Many productions are up and running again, and a recent ScreenSkills seminar revealed how two high-end series were amongst the first to tackle TV-making during a global pandemic.
Death in Paradise is a long-running crime drama about fish-out-of-water British detectives – the latest played by Ralf Little – heading murder investigations on the fictional Caribbean island of Saint Marie. Production of the show’s tenth season, originally scheduled for April, commenced instead in late July.
The Pursuit of Love is a mini-series based on the novel by Nancy Mitford, set between the two world wars. Lily James and Emily Beecham star as women in quest of husbands, in an adaptation written and directed by Emily Mortimer. Filming again began in late July, in South West England.
What both productions have in common, and a key reason why they were able to start up ahead of so many others, is that their insurance was already in place before lockdown hit. The policies include producer’s indemnity, covering costs outside of the production’s control.
Co-executive producer Alex Jones of Red Planet Pictures explained that Death in Paradise had a few other things going for it too. Most obvious of these was the location, the French archipelago of Guadeloupe, which formed a natural bubble. All cast and crew were tested for Covid-19 before flying out, then again seven days after arrival and at the start of each filming block. Having been around for ten years made adapting the production easier than starting one from scratch, Jones believes.
Ian Hogan, line producer of The Pursuit of Love, did not have the advantage of an established machine. He said that a full-time health and safety adviser with a background in location management spent weeks working out Coronavirus protocols for the period drama. Crew members each received a copy of these, and were required to agree that they would not go out in their spare time except for exercise and essential shopping. Every day they must declare remotely that they have no symptoms of Covid-19 before they can receive a green pass which allows them through location security. They must then take a temperature test before accessing the set.
Both producers insist that age and underlying health problems are not a barrier to work. Cast and crew who are particularly vulnerable to Covid-19 are given a personalised risk assessment with mitigation steps to follow.
Death in Paradise chose to film using the “one metre plus” social distancing rule common to both France and England. A former assistant director was hired as a Covid supervisor, a role which sometimes involved helping to re-block scenes to avoid physical proximity.
But for The Pursuit of Love, as the title suggests, intimacy was crucial. The producers opted for a close-contact system, dividing personnel into cohorts. A mobile testing lab with a capacity of 70 a day is always on location, and everyone is checked at least once a week. The Director’s Cohort – consisting of Mortimer, the cast, and key on-set crew like the DP, boom op and focus puller – are tested twice a week.
A monitor signal is distributed wirelessly around the set to production iPads and personal devices, to prevent a crowded video village. The DIT sends this camera feed via a local wifi network using Qtake.
Both productions require face-coverings. At least one director of Death in Paradise switched from a mask to a visor so that their cast and crew could read their facial expressions, so important when giving notes.
Visors are also used for close-contact work like make-up and costume, the two departments perhaps most affected by the pandemic. Hogan hired extra make-up trucks so that the chairs could be sufficiently spaced, and both productions expanded their crews to obviate the need for dailies. Instead, extra MUAs and dressers might be engaged for eight weeks out of 12, but on an exclusive basis so that they don’t risk spreading the virus to or from other sets.
Wardrobe fitting for supporting artists is much more involved than usual, as the same costume cannot be tried on multiple people without cleaning in-between. Greater numbers of costumes must be hired, and measurements that are taken remotely are much more important.
All of this is expensive, of course. Jones estimates it has added 15 per cent to Death in Paradise’s budget, covered fortunately by the insurance. The pace of filming has slowed, but not as much as might be expected, with just two extra filming days per block, and slightly less coverage recorded than before.
Both Jones and Hogan praised the responsibility and enthusiasm with which their crews returned to work. They are positive about the future of TV production. While there have been fears that Coronavirus would shrink crews, Jones’s has actually grown, with a larger off-set support staff. “Our industry is booming,” he concluded, “and it will continue to boom when this is all over.”
Thanks in no small part to the excellent “making of” book by Don Shay and Jody Duncan, Jurassic Park was a formative experience for the 13-year-old Neil Oseman, setting me irrevocably on the path to filmmaking as a career. So let me take you back in time and behind the scenes of an iconic piece of popcorn fodder.
Man creates dinosaurs
Even before author Michael Crichton delivered the manuscript of his new novel in May 1990, Steven Spielberg had expressed an interest in adapting it. A brief bidding war between studios saw Joe Dante (Gremlins), Tim Burton (Batman) and Richard Donner (Superman) in the frame to direct, but Spielberg and Universal Pictures were the victors.
The screenplay went through several drafts, first by Crichton himself, then by Malia Scotch Marmo and finally by David Koepp, who would go on to script Mission: Impossible, Spider-Man and Panic Room. Pre-production began long before Koepp finished writing, with Spielberg generating storyboards based directly on scenes from the book so that his team could figure out how they were going to bring the dinosaurs to life.
Inspired by a life-size theme park animatronic of King Kong, Spielberg initially wanted all the dinosaurs to be full-scale physical creatures throughout. This was quickly recognised as impractical, and instead Stan Winston Studio, creators of the Terminator endoskeleton, the Predator make-up and the fifteen-foot-tall Alien queen, focused on building full-scale hydraulically-actuated dinosaurs that would serve primarily for close-ups and mids.
Meanwhile, to accomplish the wider shots, Spielberg hired veteran stop-motion animator Phil Tippett, whose prior work included ED-209 in RoboCop, the tauntaun and AT-AT walkers in The Empire Strikes Back, and perhaps most relevantly, the titular creature from Dragonslayer. After producing some beautiful animatics – to give the crew a clearer previsualisation of the action than storyboards could provide – Tippett shot test footage of the “go-motion” process he intended to employ for the real scenes. Whilst this footage greatly improved on traditional stop-motion by incorporating motion blur, it failed to convince Spielberg.
At this point, Dennis Muren of Industrial Light and Magic stepped in. Muren was the visual effects supervisor behind the most significant milestones in computer-generated imagery up to that point: the stained-glass knight in Young Sherlock Holmes (1985), the water tendril in The Abyss (1989) and the liquid metal T-1000 in Terminator 2: Judgment Day (1991). When Spielberg saw his test footage – initially just skeletons running in a black void – the fluidity of the movement immediately grabbed the director’s attention. Further tests, culminating in a fully-skinned tyrannosaur stalking a herd of gallimimuses, had Spielberg completely convinced. On seeing the tests himself, Tippett famously quipped: “I think I’m extinct.”
Tippett continued to work on Jurassic Park, however, ultimately earning a credit as dinosaur supervisor. Manipulating a custom-built armature named the Dinosaur Input Device, Tippett and his team were able to have their hands-on techniques recorded by computer and used to drive the CG models.
Building on his experiences working with the E.T. puppet, Spielberg pushed for realistic animal behaviours, visible breathing, and bird-like movements reflecting the latest paleontological theories, all of which would lend credibility to the dinosaurs. Effects co-supervisor Mark Dippe stated: “We used to go outdoors and run around and pretend we were gallimimuses or T-Rexes hunting each other, and shoot [reference] film.”
Dinosaurs eat man
Production began in August 1992 with three weeks on the Hawaiian island of Kauai. Filming progressed smoothly until the final day on location, which had to be scrubbed due to Hurricane Iniki (although shots of the storm made it into the finished film). After a brief stint in the Mojave Desert, the crew settled into the stages at Universal Studios and Warner Brothers to record the bulk of the picture.
The most challenging sequence to film would also prove to be the movie’s most memorable: the T-Rex attack on the jeeps containing Sam Neill’s Dr. Grant, Jeff Goldblum’s Ian Malcolm, lawyer Gennaro and the children, Lex and Tim. It was the ultimate test for Stan Winston’s full-scale dinosaurs.
The main T-Rex puppet weighed over six tonnes and was mounted on a flight simulator-style platform that had to be anchored into the bedrock under the soundstage. Although its actions were occasionally pre-programmed, the animal was mostly puppeteered live using something similar to the Dinosaur Input Device.
But the torrential rain in which the scene takes place was anathema to the finely tuned mechanics and electronics of the tyrannosaur. “As [the T-Rex] would get rained on,” Winston explained, “his skin would soak up water, his weight would change, and in the middle of the day he would start having the shakes and we would have to dry him down.”
Although hints of this shaking can be detected by an eagle-eyed viewer, the thrilling impact of the overall sequence was clear to Spielberg, who recognised that the T-Rex was the star of his picture. He hastily rewrote the ending to bring the mighty creature back, relying entirely on CGI for the new climax in which it battles raptors in the visitor centre’s rotunda.
Woman inherits the earth
After wrapping 12 days ahead of schedule, Jurassic Park hit US cinemas on June 11th, 1993. It became the highest-grossing film of all time, a title which it would hold until Titanic’s release four years later. 1994’s Oscar ceremony saw the prehistoric blockbuster awarded not only Best Visual Effects but also Best Sound Editing and Best Sound Mixing. Indeed, Gary Rydstrom’s contribution to the film – using everything from a dolphin/walrus combination for the raptors’ calls, to the sound of his own dog playing with a rope toy for the T-Rex – cannot be overstated.
Jurassic Park has spawned four sequels to date (with a fifth on the way), and its impact on visual effects was enormous. For many years afterwards, blockbusters were filled with CGI that was unable to equal, let alone surpass, the quality of Jurassic Park’s. Watching it today, the CGI is still impressive if a little plasticky in texture, but I believe that the full-size animatronics which form the lion’s share of the dinosaurs’ screen time are what truly give the creatures their memorable verisimilitude. The film may be 27 years old, but it’s still every bit as entertaining as it was in 1993.
Firelight adds colour and dynamism to any lighting set-up, not to mention being essential for period and fantasy films. But often it’s not practical to use real firelight as your source. Even if you could do it safely, continuity could be a problem.
A production that can afford an experienced SFX crew might be able to employ fishtails, V-shaped gas outlets that produce a highly controllable bar of flame, as we did on Heretiks. If such luxuries are beyond your budget, however, you might need to think about simulating firelight. As my gaffer friend Richard Roberts once said while operating an array of flickering tungsten globes (method no. 3), “There’s nothing like a real fire… and this is nothing like a real fire.”
1. Waving Hands
The simplest way to fake firelight is to wave your hands in front of a light source. This will work for any kind of source, hard or soft; just experiment with movements and distances and find out what works best for you. A layer of diffusion on the lamp, another in a frame, and the waving hands in between, perhaps?
One of my favourite lighting stories involves a big night exterior shot from The First Musketeer which was done at the Chateau de Fumel in the Lot Valley, France. We were just about to turn over when a bunch of automatic floodlights came on, illuminating the front of the chateau and destroying the period illusion of our scene. We all ran around for a while, looking for the off switch, but couldn’t find it. In the end I put orange gel on the floodlights and had someone crouch next to each one, wiggling their hands like a magician, and suddenly the chateau appeared to be lit by burning braziers.
2. Gold Reflector
All you need is a collapsible reflector with a gold side, and an open-face tungsten fixture. Simply point the latter at the former and wobble the reflector during the take to create the flickering effect.
3. Tungsten Array
If you want to get more sophisticated, you can create a rig of tungsten units hooked up to a dimmer board. Electronic boxes exist to create a flame-like dimming pattern, but you can also just do it by pushing the sliders up and down randomly. I’ve done this a lot with 100W tungsten globes in simple pendant fittings, clipped to parts of the set or to wooden battens. You can add more dynamics by gelling the individual lamps with different colours – yellows, oranges and reds.
Larger productions tend to use Brutes, a.k.a. Dinos, a.k.a. 9-lights, which are banks of 1K pars. The zenith of this technique is the two megawatt rig built by gaffer John Higgins for Roger Deakins, CBE, BSC, ASC on 1917.
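At heart, those electronic flicker boxes are just random-pattern generators driving a dimmer channel. As a rough illustration only – not any particular product’s algorithm – here is a smoothed random walk in Python that produces a similar drift-and-gutter pattern of DMX levels (0–255); the `base`, `depth` and `smoothing` parameters are my own invention:

```python
import random

def flame_flicker(frames, base=180, depth=60, smoothing=0.7, seed=None):
    """Generate a flame-like sequence of DMX dimmer levels (0-255).

    A bounded random walk, low-pass filtered so the level drifts and
    gutters rather than snapping frame to frame - loosely mimicking
    what a hardware flicker box sends to a dimmer channel.
    """
    rng = random.Random(seed)
    level = float(base)
    out = []
    for _ in range(frames):
        # Pick a new random target around the base intensity...
        target = base + rng.uniform(-depth, depth)
        # ...and ease towards it instead of jumping straight there
        level = smoothing * level + (1 - smoothing) * target
        out.append(max(0, min(255, round(level))))
    return out
```

Pushing dimmer-board sliders up and down by hand does much the same thing, just with a human instead of a random number generator.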
4. Programmed LED
Technological advances in recent years have provided a couple of new methods of simulating firelight. One of these is the emergence of LED fixtures with built-in effects programmes like police lights, lightning and flames. These units come in all shapes, sizes and price-ranges.
On War of the Worlds: The Attack last year, gaffer Callum Begley introduced me to Astera tubes, and we used their flame effect for a campfire scene in the woods when we were having continuity problems with the real fire. For the more financially challenged, domestic fire-effect LED bulbs are cheap and screw into standard sockets. Philip Bloom had a few of these on goose-neck fittings which we used extensively in the fireplaces of Devizes Castle when shooting a filmmaking course for Mzed.
5. LED Screen
A logical extension of an LED panel or bulb that crudely represents the pattern of flames is an LED screen that actually plays video footage of a fire. The oil rig disaster docu-drama Deepwater Horizon and Christopher Nolan’s Dunkirk are just two films that have used giant screens to create the interactive light of off-camera fires. There are many other uses for LED screens in lighting, which I’ve covered in detail before, with the ultimate evolution being Mandalorian-style virtual volumes.
You don’t necessarily need a huge budget to try this technique. What about playing one of those festive YouTube videos of a crackling log fire on your home TV? For certain shots, especially given the high native ISOs of some cameras today, this might make a pretty convincing firelight effect. For a while now I’ve been meaning to try fire footage on an iPad as a surrogate candle. There is much here to explore.
So remember, there may be no smoke without fire, but there can be firelight without fire.
As I write this, I’ve just got back from my first trip to the cinema in six months. Although they have been allowed to reopen in England since July 4th, the higher operating costs in the pandemic kept many cinemas dark well into August. On Friday the 21st, my local branch of the Light here in Cambridge finally opened its doors, and I went along to experience post-Covid cinema.
Studios have been shifting their release dates throughout the lockdown, with some films giving up on theatrical exhibition altogether, so the Light, like its competitors, has filled its screens with classics for now. I selected Jurassic Park, which I haven’t seen on the big screen since its original release in 1993.
When I arrived, the lobby was dark and almost empty. Like most public spaces, it had sprouted new signage and a one-way system since March, and it took me a couple of attempts to find the right lane. Once inside the main corridor though, little had changed except the odd hand sanitiser dispenser on the wall.
I found my screen and took a seat. As with everything from trains to swimming pools, pre-booking is now strongly recommended, due to the diminished capacity caused by social distancing. When you pick your seat, the website makes you leave two empties between your party and the next. You can even pre-purchase your popcorn and bucket of cola.
I needn’t have booked, however. In a screen of about 100 seats, exactly ten were occupied. It will take the general public a while to cotton on to the fact that cinema-going is an option again, even before they decide whether they feel comfortable doing so.
As I sat masked and expectant, my hands sticky from sanitiser that refused to evaporate, I was treated to a rare sight: a cinema employee inside the auditorium. He announced that they didn’t have any ads or trailers yet, so they would delay starting the film to give everyone a chance to arrive.
A few minutes later, the man reappeared and asked us all to decamp to the corridor. Apparently they had installed a new sound system, and they needed to test it, which could be very loud. Why they couldn’t have checked the system for eardrum bursting at some point in the last six months is beyond me.
The ten of us duly waited in the corridor. A snatch of the Imperial March from an adjacent screen betokened another classic being wheeled out. A woman with a spray bottle and a cloth, masked like all of her colleagues, worked her way down the corridor, cleaning the door handles. A group next to me (but, I hasten to add, appropriately distant) cracked jokes about the sex appeal of Jeff Goldblum’s Ian Malcolm. Another group, evidently missing the trailers, watched one on a phone. (If that doesn’t sum up the existential crisis facing cinema, I don’t know what does.)
At last we were readmitted. The lights dimmed, the sounds of a jungle faded up on the brand new sound system, and the Universal logo appeared. But the trademark globe looked like a deflated football. The film was being projected in the wrong aspect ratio. And not just slightly. It was almost unwatchably stretched, like the flat 1.85:1 images were being shown through a 2:1 anamorphic lens.
By the time the first scene was dissolving away to Bob Peck’s cries of “Shoot her!” the problem hadn’t been corrected, so I stepped out to find a member of staff. The senior person on duty claimed that the problem lay with the file supplied by the distributor, not with the projection. “There’s nothing I can do,” he insisted, while I goggled over my mask in disbelief.
At this point, had I not had this article to write, I would have gone home and watched the film on Netflix, or even on DVD. (There’s that existential crisis again.) But I persevered, trying not to imagine Dean Cundey weeping tears of frustration into his beard.
Fortunately, Jurassic Park is such a great film that it could be appreciated even in the face of such technical incompetence. A larger audience to enjoy the scares and humour with would have been nice, though since screaming and laughing project dangerous droplets further, perhaps that’s less than ideal these days.
Overall, I must say that I found the experience of going to the cinema less altered than many other aspects of life. I’ve got used to wearing a mask, so much so that I was halfway home before I remembered to take it off, and I normally avoid peak times so the emptiness didn’t feel too unusual.
But with the rise in streaming subscriptions during lockdown, and the understandable caution that many feel about going out, cinemas will need to work much harder to get bums back on flip-up seats. The kind of technical troubles that the Light suffered tonight will only strengthen the case for staying at home, mask-free and pyjama-clad, where you can control both the virus and the aspect ratio.
A week after writing this, I went to a Showcase to see Tenet. The member of staff who took our tickets unequivocally told us that the printed screen number was wrong, and that we should go to another one. We did so. The ads and trailers finally started, fifteen minutes late. We were just wondering why they were trailing such kid-friendly movies when another member of staff came in and told us that Tenet was showing in the original screen after all, and by the way, you’ve missed the first couple of minutes.