Where do you start, as a director of photography lighting a set? What should be the first brushstroke when you’re painting with light?
I believe the answer is backlight, and I think many DPs would agree with me.
Let’s take the example of a night exterior in a historical fantasy piece, as featured in my online course, Cinematic Lighting. The main source of light in such a scene would be the moon. Where am I going to put it? At the back.
The before image is lit by an LED panel serving purely as a work-light while we rehearsed. It’s not directly above the camera, but off to the right, so the lighting isn’t completely flat, but there is very little depth in the image. Beyond the gate is a boring black void.
The after image completely transforms the viewer’s understanding of the three-dimensional space. We get the sense of a world beyond the gate, an intriguing world lighter than the foreground, with a glimpse of trees and space. Composing the brazier in the foreground has added a further plane, again increasing the three-dimensional impression.
Here is the lighting diagram for the scene. (Loads more diagrams like this can be seen on my Instagram feed.)
The “moon” is a 2.5KW HMI fresnel way back amongst the trees, hidden from camera by the wall on the right. This throws the gate and the characters into silhouette, creating a rim of light around their camera-right sides.
To shed a little light on Ivan’s face as he looks camera-left, I hid a 4×4′ Kino Flo behind the lefthand wall, again behind the actors.
The LED from the rehearsal, a Neewer 480, hasn’t moved, but now it has an orange gel and is dimmed very low to subtly enhance the firelight. Note how the contrasting colours in the frame add to the depth as well.
So I’ll always go into a scene looking at where to put a big backlight, and then seeing if I need any additional sources. Sometimes I don’t, like in this scene from the Daylight Interior module of the course.
Backlight for daylight interiors works differently to night exteriors: you cannot simply put it where you want it. You must work with the position of the windows. When I’m prepping interiors, I always work with the director to try to block the scene so that we can face towards the window as much as possible, making it our backlight. If a set is being built, I’ll talk to the production designer at the design stage to get windows put in to backlight the main camera positions whenever possible.
The above example was lit by just the 2.5K HMI outside the window; I actually blacked out the windows behind camera so that they would not fill in the nice shadows created by the backlight.
Daylight exteriors are different again. I never use artificial lights outdoors in daytime any more. I prefer to work with the natural light and employ reflectors, diffusion or negative fill to mould it where necessary.
So it’s very important to block the scene with the camera facing the sun whenever possible. Predicting the sun path may take a little work, but it will always be worth it.
Here I’ve shot south, towards the low November sun, and didn’t need to modify the light at all.
Shooting in the opposite direction would have looked flat and uninteresting, not to mention causing potential problems with the cast squinting in the sunlight, and boom and camera shadows being cast on them.
You can learn much more about the principles and practice of cinematic lighting by taking my online course on Udemy. Currently you can get an amazing 90% off using the voucher code INSTA90 until November 19th.
Firelight adds colour and dynamism to any lighting set-up, not to mention being essential for period and fantasy films. But often it’s not practical to use real firelight as your source. Even if you could do it safely, continuity could be a problem.
A production that can afford an experienced SFX crew might be able to employ fishtails, V-shaped gas outlets that produce a highly controllable bar of flame, as we did on Heretiks. If such luxuries are beyond your budget, however, you might need to think about simulating firelight. As my gaffer friend Richard Roberts once said while operating an array of flickering tungsten globes (method no. 3), “There’s nothing like a real fire… and this is nothing like a real fire.”
1. Waving Hands
The simplest way to fake firelight is to wave your hands in front of a light source. This will work for any kind of source, hard or soft; just experiment with movements and distances and find out what works best for you. A layer of diffusion on the lamp, another in a frame, and the waving hands in between, perhaps?
One of my favourite lighting stories involves a big night exterior shot from The First Musketeer which was done at the Chateau de Fumel in the Lot Valley, France. We were just about to turn over when a bunch of automatic floodlights came on, illuminating the front of the chateau and destroying the period illusion of our scene. We all ran around for a while, looking for the off switch, but couldn’t find it. In the end I put orange gel on the floodlights and had someone crouch next to each one, wiggling their hands like a magician, and suddenly the chateau appeared to be lit by burning braziers.
2. Wobbling Reflector
All you need is a collapsible reflector with a gold side, and an open-face tungsten fixture. Simply point the latter at the former and wobble the reflector during the take to create the flickering effect.
3. Tungsten Array
If you want to get more sophisticated, you can create a rig of tungsten units hooked up to a dimmer board. Electronic boxes exist to create a flame-like dimming pattern, but you can also just do it by pushing the sliders up and down randomly. I’ve done this a lot with 100W tungsten globes in simple pendant fittings, clipped to parts of the set or to wooden battens. You can add more dynamics by gelling the individual lamps with different colours – yellows, oranges and reds.
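For anyone curious what “pushing the sliders up and down randomly” looks like as an algorithm, here is a minimal Python sketch of a flame-like dimming pattern: a random walk with occasional sharper dips. The function name and all the numbers are my own invention for illustration; no real flicker box or DMX protocol is modelled here.

```python
import random

def flame_flicker(frames, base=0.6, depth=0.35, seed=None):
    """Generate a flame-like dimming pattern as brightness values in [0, 1].

    A random walk with occasional sharper dips, loosely imitating a board op
    pushing faders up and down. All parameters are invented for illustration.
    """
    rng = random.Random(seed)
    level = base
    pattern = []
    for _ in range(frames):
        # Small random drift, with an occasional larger "gust" downwards
        step = rng.uniform(-0.05, 0.05)
        if rng.random() < 0.1:
            step -= rng.uniform(0.0, depth)
        # Clamp between full brightness and the dimmest flicker level
        level = min(1.0, max(base - depth, level + step))
        pattern.append(round(level, 3))
    return pattern

# e.g. feed each value to a dimmer channel at 25 values per second
print(flame_flicker(5, seed=1))
```

Gelling individual lamps differently, as described above, is the analogue equivalent of running several of these walks out of phase with each other.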
Larger productions tend to use Brutes, a.k.a. Dinos, a.k.a. 9-lights, which are banks of 1K pars. The zenith of this technique is the two megawatt rig built by gaffer John Higgins for Roger Deakins, CBE, BSC, ASC on 1917.
4. Programmed LED
Technological advances in recent years have provided a couple of new methods of simulating firelight. One of these is the emergence of LED fixtures with built-in effects programmes like police lights, lightning and flames. These units come in all shapes, sizes and price-ranges.
On War of the Worlds: The Attack last year, gaffer Callum Begley introduced me to Astera tubes, and we used their flame effect for a campfire scene in the woods when we were having continuity problems with the real fire. For the more financially challenged, domestic fire-effect LED bulbs are cheap and screw into standard sockets. Philip Bloom had a few of these on goose-neck fittings which we used extensively in the fireplaces of Devizes Castle when shooting a filmmaking course for MZed.
5. LED Screen
A logical extension of an LED panel or bulb that crudely represents the pattern of flames is an LED screen that actually plays video footage of a fire. The oil rig disaster docu-drama Deepwater Horizon and Christopher Nolan’s Dunkirk are just two films that have used giant screens to create the interactive light of off-camera fires. There are many other uses for LED screens in lighting, which I’ve covered in detail before, with the ultimate evolution being Mandalorian-style virtual volumes.
You don’t necessarily need a huge budget to try this technique. What about playing one of those festive YouTube videos of a crackling log fire on your home TV? For certain shots, especially given the high native ISOs of some cameras today, this might make a pretty convincing firelight effect. For a while now I’ve been meaning to try fire footage on an iPad as a surrogate candle. There is much here to explore.
So remember, there may be no smoke without fire, but there can be firelight without fire.
Last week, Greig Fraser, ASC, ACS and Baz Idoine were awarded the Emmy for Outstanding Cinematography for a Single-camera Series (Half-hour) for The Mandalorian. I haven’t yet seen this Star Wars TV series, but I’ve heard and read plenty about it, and to call it a revolution in filmmaking is not hyperbole.
Half of the series was not shot on location or on sets, but on something called a volume: a stage with walls and ceiling made of LED screens, 20ft tall, 75ft across and encompassing 270° of the space. I’ve written before about using large video screens to provide backgrounds in limited ways, outside of train windows for example, and using them as sources of interactive light, but the volume takes things to a whole new level.
In the past, the drawback of the technology has been one of perspective; it’s a flat, two-dimensional screen. Any camera movement revealed this immediately, because of the lack of parallax. So these screens tended to be kept to the deep background, with limited camera movement, or with plenty of real people and objects in the foreground to draw the eye. The footage shown on the screens was pre-filmed or pre-rendered, just video files being played back.
The Mandalorian‘s system, run by multiple computers simultaneously, is much cleverer. Rather than a video clip, everything is rendered in real time from a pre-built 3D environment known as a load, running on software developed for the gaming industry called Unreal Engine. Around the stage are a number of witness cameras which use infra-red to monitor the movements of the cinema camera in the same way that an actor is performance-captured for a film like Avatar. The data is fed into Unreal Engine, which generates the correct shifts in perspective and sends them to the video walls in real time. The result is that the flat screen appears, from the cinema camera’s point of view, to have all the depth and distance required for the scene.
The loads are created by CG artists working to the production designer’s instructions, and textured with photographs taken at real locations around the world. In at least one case, a miniature set was built by the art department and then digitised. The scene is lit with virtual lights by the DP – all this still during preproduction.
The volume’s 270° of screens, plus two supplementary, moveable screens in the 90° gap behind camera, are big enough and bright enough that they provide most or all of the illumination required to film under. The advantages are obvious. “We can create a perfect environment where you have two minutes to sunset frozen in time for an entire ten-hour day,” Idoine explains. “If we need to do a turnaround, we merely rotate the sky and background, and we’re ready to shoot!”
Traditional lighting fixtures are used minimally on the volume, usually for hard light, which the omni-directional pixels of an LED screen can never reproduce. If the DPs require soft sources beyond what is built into the load, the technicians can turn any off-camera part of the video screens into an area of whatever colour and brightness are required – a virtual white poly-board or black solid, for example.
A key reason for choosing the volume technology was the reflective nature of the eponymous Mandalorian’s armour. Had the series been shot on a green-screen, reflections in his shiny helmet would have been a nightmare for the compositing team. The volume is also much more actor- and filmmaker-friendly; it’s better for everyone when you can capture things in-camera, rather than trying to imagine what they will look like after postproduction. “It gives the control of cinematography back to the cinematographer,” Idoine remarks. VR headsets mean that he and the director can even do a virtual recce.
The Mandalorian shoots on the Arri Alexa LF (large format), giving a shallow depth of field which helps to avoid moiré problems with the video wall. To ensure accurate chromatic reproduction, the wall was calibrated to the Alexa LF’s colour filter array.
Although the whole system was expensive to set up, once up and running it’s easy to imagine how quickly and cheaply the filmmakers can shoot on any given “set”. The volume has limitations, of course. If the cast need to get within a few feet of a wall, for example, or walk through a door, then that set-piece has to be real. If a scene calls for a lot of direct sunlight, then the crew move outside to the studio backlot. But undoubtedly this technology will improve rapidly, so that it won’t be long before we see films and TV episodes shot entirely on volumes. Perhaps one day it could overtake traditional production methods?
I’m certainly glad you could join me today. It’s a fantastic day here and I hope it is wherever you’re at. Are you ready to read a fantastic little blog post? Good, then let’s get started.
For twelve years, across 400 episodes, Bob Ross entertained all generations of Americans with his public television series, The Joy of Painting. Although he floated up to join the happy little clouds in 1995, in recent years YouTube and Twitch have brought his shows to a new audience, of which I am a humble member. Bob’s hypnotic, soft-spoken voice, his unfailingly positive attitude, and the magical effects of his wet-on-wet oil-painting technique make his series calming, comforting and captivating in equal measure.
Having watched every episode at least twice now, I’ve noticed several nuggets of Bob Ross wisdom that apply just as well to cinematography as they do to painting.
1. “The more planes you have in your painting, the more depth it has… and that’s what brings the happy buck.”
Bob always starts with the background of his scene and paints forward: first the sky with its happy little clouds; then often some almighty mountains; then the little footy hills; some trees way in the distance, barely more than scratches on the canvas; then perhaps a lake, its reflections springing forth impossibly from Bob’s brush; the near bank; and some detailed trees and bushes in the foreground, with a little path winding through them.
Just as with landscape painting, depth is tremendously important in cinematography. Creating a three-dimensional world with a monoscopic camera is a big part of a DP’s job, which starts with composition – shooting towards a window, for example, rather than a wall – and continues with lighting. Depth increases production value, which makes for a happy producer and a happy buck for you when you get hired again.
2. “As things get further away from you in a landscape, they get lighter in value.”
Regular Joy of Painting viewers soon notice that the more distant layers of Bob’s paintings use a lot more Titanium White than the closer ones. Bob frequently explains that each layer should be darker and more detailed than the one behind it, “and that’s what creates the illusion of depth”.
Distant objects seem lighter and less contrasty because of a phenomenon called aerial perspective, basically atmospheric scattering of light. As a DP, you can simulate this by lighting deeper areas of your frame brightly, and keeping closer areas dark. This might be achieved by setting up a flag to provide negative fill to an object in the foreground, or by placing a battery-powered LED fixture at the end of a dark street. The technique works for night scenes and small interiors, just as well as daytime landscapes, even though aerial perspective would never occur there in real life. The viewer’s brain will subconsciously recognise the depth cue and appreciate the three-dimensionality of the set much more.
3. “Don’t kill the little misty area; that’s your separator.”
After completing each layer, particularly hills and mountains, Bob takes a clean, dry brush and taps gently along the bottom of it. This has a blurring and fading effect, giving the impression that the base of the layer is dissolving into mist. When he paints the next layer, he takes care to leave a little of this misty area showing behind it.
We DPs can add atmos (smoke) to a scene to create separation. Because there will be more atmos between the lens and a distant object than between the lens and a close object, it really aids the eye in identifying different planes. That makes the image both clearer and more aesthetically pleasing. Layers can also be separated with backlight, or a differentiation of tones or colours.
4. “You need the dark in order to show the light.”
Hinting at the tragedy in his own life, Bob often underlines the importance of playing dark tones against light ones. “It’s like in life. Gotta have a little sadness once in a while so you know when the good times come,” he wisely remarks, as he taps away at the canvas with his fan-brush, painting in the dark rear leaves of a tree. Then he moves onto the lighter foreground leaves, “but don’t kill your dark areas,” he cautions.
If there’s one thing that makes a cinematic image, it’s contrast. It can be very easy to over-light a scene, and it’s often a good idea to try turning a fixture or two off to see if the mood is improved. However bright or dark your scene is, where you don’t put light is just as important as where you do. Flagging a little natural light, blacking out a window, or removing the bubble from a practical can often add a nice bit of shape to the image.
5. “Maybe… maybe… maybe… Let’s DROP in an almighty tree.”
As the end of the episode approaches, and the painting seems complete, Bob has a habit of suddenly adding a big ol’ tree down one or both sides of the canvas. Since this covers up background layers that have been carefully constructed earlier in the show, Bob often gets letters complaining that he has spoilt a lovely painting. “Ruined!” is the knowing, light-hearted comment of the modern internet viewer.
The function of these trees is to provide a foreground framing element which anchors the side of the image. I discussed this technique in my article on composing a wide shot. A solid, close object along the side or base of the frame makes the image much stronger. It gives a reason for the edge of the frame to be there rather than somewhere else. As DPs, we may not be able to just paint a tree in, but there’s often a fence, a pillar, a window frame, even a supporting artist that we can introduce to the foreground with a little tweaking of the camera position.
The ol’ clock on the wall tells me it’s time to go, so until next time: happy filming, and God bless, my friend.
Like many of us, I’ve watched a lot of streaming shows this year. One of the best was Chernobyl, the HBO/Sky Atlantic mini-series about the nuclear power plant disaster of 1986, which I cheekily binged during a free trial of Now TV.
In July, Chernobyl deservedly scooped multiple honours at the Virgin Media British Academy Television (Craft) Awards. In addition to claiming the Bafta for best mini-series, lead actor Jared Harris, director Johan Renck, director of photography Jakob Ihre, production designers Luke Hull and Claire Levinson-Gendler, costume designer Odile Dicks-Mireaux, editors Simon Smith and Jinx Godfrey, composer Hildur Gudnadóttir, and the sound team all took home the awards in their respective fiction categories.
I use the phrase “took home” figuratively, since no-one had left home in the first place. The craft awards ceremony was a surreal, socially-distanced affair, full of self-filmed, green-screened celebrities. Comedian Rachel Parris impersonated writer/actor Jessica Knappett, and the two mock-argued to present the award for Photography & Lighting: Fiction. Chernobyl’s DP Jakob Ihre, FSF gave his acceptance speech in black tie, despite being filmed on a phone in his living room. In it he thanked his second unit DP Jani-Petteri Passi as well as creator/writer Craig Mazin, one of the few principal players not to receive an award.
Mazin crafted a tense and utterly engrossing story across five hour-long instalments, a story all the more horrifying for its reality. Beginning with the suicide of Harris’ Valery Legasov on the second anniversary of the disaster, the series shifts back to 1986 and straight into the explosion of the No. 4 reactor at the Chernobyl Nuclear Power Plant in the Soviet Ukraine. Legasov, along with Boris Shcherbina (Stellan Skarsgård) and the fictional, composite character Ulana Khomyuk (Emily Watson), struggles to contain the meltdown while simultaneously investigating its cause. Legions of men are sacrificed to the radiation, wading through coolant water in dark, labyrinthine tunnels to shut off valves, running across what remains of the plant’s rooftop to collect chunks of lethal graphite, and mining in sweltering temperatures beneath the core to install heat exchangers that will prevent another catastrophic explosion.
For Swedish-born NFTS (National Film and Television School) graduate Jakob Ihre, Chernobyl was a first foray into TV. His initial concept for the show’s cinematography was to reflect the machinery of the Soviet Union. He envisaged a heavy camera package representing the apparatus of the state, comprised of an Alexa Studio, with its mechanical shutter, plus anamorphic lenses. “After another two or three months of preproduction,” he told the Arri Channel, “we realised maybe that’s the wrong way to go, and we should actually focus on the characters, on the human beings, the real people who this series is about.”
Sensitivity and respect for the people and their terrible circumstances ultimately became the touchstone for both Ihre and his director. The pair conducted a blind test of ten different lens sets, and both independently selected Cooke Panchros. “We did a U-turn and of course we went for spherical lenses, which in some way are less obtrusive and more subtle,” said Ihre. For the same reason, he chose the Alexa Mini over its big brother. A smaller camera package like this is often selected when filmmakers wish to distract and overwhelm their cast as little as possible, and is believed by many to result in more authentic performances.
When it came to lighting, “We were inspired by the old Soviet murals, where you see the atom, which is often symbolised as a sun with its rays, and you see the workers standing next to that and working hand in hand with the so-called ‘friendly’ atom.” Accordingly, Ihre used light to represent gamma radiation, with characters growing brighter and over-exposed as they approach more dangerous areas.
Ihre thought of the disaster as damaging the fabric of the world, distorting reality. He strove to visualise this through dynamic lighting, with units on dimmers or fitted with remote-controlled shutters. He also allowed the level of atmos (smoke) in a scene to vary – normally a big no-no for continuity. The result is a series in which nothing feels safe or stable.
The DP shot through windows and glass partitions wherever possible, to further suggest a distorted world. Working with Hull and Levinson-Gendler, he tested numerous transparent plastics to find the right one for the curtains in the hospital scenes. In our current reality, filled with perspex partitions (and awards ceremonies shot on phones), such imagery of isolation is eerily prescient.
The subject of an invisible, society-changing killer may have become accidentally topical, but the series’ main theme was more deliberately so. “What is the cost of lies?” asks Legasov. “It’s not that we’ll mistake them for the truth. The real danger is that if we hear enough lies, then we no longer recognise the truth at all.” In our post-truth world, the disinformation, denial and delayed responses surrounding the Chernobyl disaster are uncomfortably familiar.
Recently I’ve been pondering which camera to shoot an upcoming project on, so I consulted the ASC’s comparison chart. Amongst the many specs compared is dynamic range, and I noticed that the ARRI Alexa’s was given as 14+ stops, while the Blackmagic URSA’s is 15. Having used both cameras a fair bit, I can tell you that there’s no way in Hell that the URSA has a higher dynamic range than the Alexa. So what’s going on here?
What is dynamic range?
To put it simply, dynamic range is the level of contrast that an imaging system can handle. To quote Alan Roberts, who we’ll come back to later:
This is normally calculated as the ratio of the exposure which just causes white clipping to the exposure level below which no details can be seen.
A photosite on a digital camera’s sensor outputs a voltage proportional to the amount of light hitting it, but at some point the voltage reaches a maximum, and no matter how much more light you add, it won’t change. At the other end of the scale, a photosite may receive so little light that it outputs no voltage, or at least nothing that’s discernible from the inherent electronic noise in the system. These upper and lower limits of brightness may be narrowed by image processing within the camera, with RAW recording usually retaining the full dynamic range, while linear Rec. 709 severely curtails it.
In photography and cinematography, we measure dynamic range in stops – doublings and halvings of light which I explain fully in this article. One stop is a ratio of 2:1, five stops are 32:1, and thirteen stops are 8192:1, almost 10,000:1.
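To make the arithmetic concrete, here is a short Python sketch (my own illustration, not any manufacturer’s tooling) converting between stops and contrast ratios, and computing dynamic range from a clipping point and a noise floor in the sense of Alan Roberts’ definition above:

```python
import math

def ratio_from_stops(stops):
    """Contrast ratio for a given number of stops (each stop doubles the light)."""
    return 2 ** stops

def dynamic_range_stops(clip_level, noise_floor):
    """Dynamic range in stops between the clipping point and the noise floor."""
    return math.log2(clip_level / noise_floor)

print(ratio_from_stops(1))           # 2, i.e. 2:1
print(ratio_from_stops(5))           # 32, i.e. 32:1
print(ratio_from_stops(13))          # 8192, almost 10,000:1
print(dynamic_range_stops(8192, 1))  # 13.0
```

The same logarithm is what a greyscale or LED step chart is physically performing: each patch a stop apart, counted off between where the highlights clip and where the shadows vanish into noise.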
It’s worth pausing here to point out the difference between dynamic range and latitude, a term sometimes treated as synonymous with it, though the two are distinct. Latitude is a measure of how much the camera can be over- or under-exposed without losing any detail, and it depends on both the dynamic range of the camera and the dynamic range of the scene. (A low-contrast scene will allow more latitude for incorrect exposure than a high-contrast one.)
Problems of Measurement
Before digital cinema cameras were developed, video had a dynamic range of about seven stops. You could measure this relatively easily by shooting a greyscale chart and observing the waveform of the recorded image to see where the highlights levelled off and the shadows disappeared into the noise floor. With today’s dynamic ranges into double digits, simple charts are no longer practical, because you can’t manufacture white enough paper or black enough ink.
For his excellent video on dynamic range, Filmmaker IQ’s John Hess built a device fitted with a row of 1W LEDs, using layers of neutral density gel to make each one a stop darker than its neighbour. For the purposes of his demonstration, this works fine, but as Phil Rhodes points out on RedShark News, you start running into the issue of the dynamic range of the lens.
It may seem strange to think that a lens has dynamic range, and in the past when I’ve heard other DPs talk about certain glass being more or less contrasty, I admit that I haven’t thought much about what that means. What it means is flare, and not the good anamorphic streak kind, but the general veiling whereby a strong light shining into the lens will raise the overall brightness of the image as it bounces around the different elements. This lifts the shadows, producing a certain amount of milkiness. Even with high contrast lenses, ones which are less prone to veiling, the brightest light on your test device will cause some glare over the darkest one, when measuring the kind of dynamic range today’s cameras enjoy.
Going back to my original query about the Alexa versus the URSA, let’s see exactly what the manufacturers say. ARRI specifically states that its sensor’s dynamic range is over 14 stops “as measured with the ARRI Dynamic Range Test Chart”. So what is this chart and how does it work? The official sales blurb runs thusly:
The ARRI DRTC-1 is a special test chart and analysis software for measurement of dynamic range and sensitivity of digital cameras. Through a unique stray light reduction concept this system is able to accurately measure up to 15.5 stops of dynamic range.
The “stray light reduction” is presumably to reduce the veiling mentioned earlier and provide more accurate results. This could be as simple as covering or turning off the brighter lights when measuring the dimmer ones.
I found a bit more information about the test chart in a 2011 camera shoot-out video, from that momentous time when digital was supplanting film as the cinematic acquisition format of choice. Rather than John Hess’s ND gel technique, the DRTC-1 opts for something else to regulate its light output, as ARRI’s Michael Bravin explains in the video:
There’s a piece of motion picture film behind it that’s checked with a densitometer, and what you do is you set the exposure for your camera, and where you lose detail in the vertical and horizontal lines is your clipping point, and where you lose detail because of noise in the shadow areas is your lowest exposure… and in between you end up finding the number of stops of dynamic range.
Blackmagic Design do not state how they measure the dynamic range of their cameras, but it may be a DSC Labs Xyla. This illuminated chart boasts a shutter system which “allows users to isolate and evaluate individual steps”, plus a “stepped xylophone shape” to minimise flare problems.
I used to do a lot of consulting with DSC Labs, who make camera test charts, so I own a 20-stop dynamic range chart (DSC Labs Xyla). This is what most manufacturers use to test dynamic range (although not ARRI, because our engineers don’t feel it’s precise enough) and I see what companies claim as usable stops. You can see that they are just barely above the noise floor.
Obviously these ARRI folks I keep quoting may be biased. I wanted to find an independent test that measures both Blackmagics and Alexas with the same conditions and methodology, but I couldn’t find one. There is plenty of anecdotal evidence that Alexas have a bigger dynamic range, in fact that’s widely accepted as fact, but quantifying the difference is harder. The most solid thing I could find is this, from a 2017 article about the Blackmagic Ursa Mini 4.6K (first generation):
The camera was measured at just over 14 stops of dynamic range in RAW 4:1 [and 13 stops in ProRes]. This is a good result, especially considering the price of the camera. To put this into perspective Alan measured the Canon C300 mkII at 15 stops of dynamic range. Both the URSA Mini 4.6 and C300 mkII are bettered by the ARRI Alexa and Amira, but then that comes as no surprise given their reputation and price.
The Alan mentioned is Alan Roberts, something of a legend when it comes to testing cameras. It is interesting to note that he is one of the key players behind the TLCI (Television Lighting Consistency Index), a mooted replacement for CRI (Colour Rendering Index). It’s interesting because this whole dynamic range business is starting to remind me of my investigation into CRI, and is leading me to a similar conclusion, that the numbers which the manufacturers give you are all but useless in real-world cinematography.
Whereas CRI at least has a standardised test, there’s no such thing for dynamic range. Therefore, until there is more transparency from manufacturers about how they measure it, I’d recommend ignoring their published values. As always when choosing a camera, shoot your own tests if at all possible. Even the most reliable numbers can’t tell you whether you’re going to like a camera’s look or not, or whether it’s right for the story you want to tell.
When tests aren’t possible, and I know that’s often the case in low-budget land, at least try to find an independent comparison. I’ll leave you with this video from the Slanted Lens, which compares the URSA Mini Pro G2 with the ARRI Amira (which uses the same Alev III sensor as the Alexa). They don’t measure the dynamic range, but you can at least see the images side by side, and in the end it’s the images that matter, not the numbers.
We’re all familiar with the “good/fast/cheap” triangle. You can pick any two, but never all three. When it comes to lighting films, I would posit that there is a slightly different triangle of truth labelled “beautiful/realistic/cheap”. When you’re working to a tight budget, a DP often has to choose between beautiful or realistic lighting, where a better-funded cinematographer can have both.
I first started thinking about this in 2018 when I shot Annabel Lee. Specifically it was when we were shooting a scene from this short period drama – directed by Amy Coop – in a church. Our equipment package was on the larger side for a short, but still far from ideal for lighting up a building of that size. Our biggest instrument was a Nine-light Maxi Brute, which is a grid of 1KW par globes, then we had a couple of 2.5K HMIs and nothing else of any significant power.
The master shot for the scene was a side-on dolly move parallel to the central aisle, with three large stained-glass windows visible in the background. My choices were to put a Maxi Brute or an HMI outside each window, to use only natural light, or to key the scene from somewhere inside the building. The first option was beautiful but not realistic, as I shall explain; the second would have been realistic but not beautiful (and probably under-exposed); and the third would have been neither.
I went with the hard source outside of each window. I could not diffuse or bounce the light because that would have reduced the intensity to pretty much nothing. (Stained-glass windows don’t transmit a lot of light through them.) For the same reason, the lamps had to be pretty close to the glass.
The result is that, during this dolly shot, each of the three lamps is visible at one time or another. You can’t tell they’re lamps – the blown-out panes of glass disguise them – but the fact that there are three of them rather gives away that they are not the sun! (There is also the issue that contiguous scenes outside the church have overcast light, but that is a discontinuity I have noticed in many other films and series.)
I voiced my concerns to Amy at the time – trying to shirk responsibility, I suppose! Fortunately she found it beautiful enough to let the realism slide.
But I couldn’t help thinking that, with a larger budget and thus larger instruments, I could have had both beauty and realism. If I had had three 18K HMIs, for example, plus the pre-rig time to put them on condors or scaffolding towers, they could all have been high enough and far enough back from the windows that they wouldn’t have been seen. I would still have got the same angle of light and the nice shafts in the smoke, but they would have passed much more convincingly as a single sun source. Hell, if I’d had the budget for a 100KW SoftSun then I really could have done it with one source!
There have been many other examples of the beauty/realism problem throughout my career. One that springs to mind is Above the Clouds, where the 2.5K HMI which I was using as a backlight for a night exterior was in an unrealistic position. The ground behind the action sloped downwards, so the HMI on its wind-up stand threw shafts of light upwards. With the money for a cherry-picker, a far more moon-like high-angle could have been achieved. Without such funds, my only alternative was to sacrifice the beauty of a backlight altogether, which I was not willing to do.
The difference between that example and Annabel Lee is that Clouds director Leon Chambers was unable to accept the unrealistic lighting, and ended up cutting around it. So I think it’s quite important to get on the same page as your director when you’re lighting with limited means.
I remember asking Paul Hyett when we were prepping Heretiks, “How do you feel about shafts of ‘sunlight’ coming into a room from two different directions?” He replied that “two different directions is fine, but not three.” That was a very nice, clear drawing of the line between beauty (or at least stylisation) and realism, which helped me enormously during production.
The beauty/realism/cost triangle is one we all have to navigate. Although it might sometimes give us regrets about what could have been, as long as we’re on the same page as our directors we should still get results we can all live with.
Lately, having run out of interesting series, I’ve found myself watching a lot of nineties blockbusters: Outbreak, Twister, Dante’s Peak, Backdraft, Daylight. Whilst eighties movies were the background to my childhood, and will always have a place in my heart, it was the cinema of the nineties that I was immersed in as I began my own amateur filmmaking. So, looking back on those movies now, while certain clichés stand out like sore thumbs, they still feel to me like solid examples of how to make a summer crowd-pleaser.
Let’s get those clichés out of the way first. The lead character always has a failed marriage. There’s usually an opening scene in which they witness the death of a spouse or close relative, before the legend “X years later” fades up. The dog will be saved, but the crotchety elderly character will die nobly. Buildings instantly explode towards camera when touched by lava, hurricanes, floods or fires. A stubborn senior authority figure will refuse to listen to the disgraced lead character who will ultimately be proven correct, to no-one’s surprise.
There’s an intensity to nineties action scenes, born of the largely practical approach to creating them. The decade was punctuated by historic advances in digital effects: the liquid metal T-1000 in Terminator 2 (1991), digital dinosaurs in Jurassic Park (1993), motion-captured passengers aboard the miniature Titanic (1997), Bullet Time in The Matrix (1999). Yet these techniques remained expensive and time-consuming, and could not match traditional methods of creating explosions, floods, fire or debris. The result was that the characters in jeopardy were generally surrounded by real set-pieces and practical effects, a far more nerve-wracking experience for the viewer than today, when we can tell that our heroes are merely imagining their peril on a green-screen stage.
One thing I was looking out for during these movie meanders down memory lane was lens selection. A few weeks back, a director friend had asked me to suggest examples of films that preferred long lenses. He had mentioned that such lenses were more in vogue in the nineties, which I’d never thought about before.
As soon as I started to consider it, I realised how right my friend was. And how much that long-lens look had influenced me. When I started out making films, I was working with the tiny sensors of Mini-DV cameras. I would often try to make my shots look more cinematic by shooting on the long end of the zoom. This was partly to reduce the depth of field, but also because I instinctively felt that the compressed perspective was more in keeping with what I saw at the cinema.
I remember being surprised by something that James Cameron said in his commentary on the Aliens DVD:
I went to school on Ridley [Scott]’s style of photography, which was actually quite a bit different from mine, because he used a lot of long lenses, much more so than I was used to working with.
I had assumed that Cameron used long lenses too, because I felt his films looked incredibly cinematic, and because I was so sure that cinematic meant telephoto. I’ve discussed in the past what I think people tend to mean by the term “cinematic”, and there’s hardly a definitive answer, but I’m now sure that lens length has little to do with it.
And yet… are those nineties films influencing me still? I have to confess, I struggle with short lenses to this day. I find it hard to make wide-angle shots look as good. On Above the Clouds, to take just one example, I frequently found that I preferred the wide shots on a 32mm rather than a 24mm. Director Leon Chambers agreed; perhaps those same films influenced him?
A deleted scene from Ren: The Girl with the Mark ends with some great close-ups shot on my old Sigma 105mm still lens, complete with the slight wobble of wind buffeting the camera, which to my mind only adds to the cinematic look! On a more recent project, War of the Worlds: The Attack, I definitely got a kick from scenes where we shot the heroes walking towards us down the middle of the street on a 135mm.
Apart from the nice bokeh, what does a long lens do for an image? I’ve already mentioned that it compresses perspective, and because this is such a different look to human vision, it arguably provides a pleasing unreality. You could describe it as doing for the image spatially what the flicker of 24fps (versus high frame rates) does for it temporally. Perhaps I shy away from short lenses because they look too much like real life, they’re too unforgiving, like many people find 48fps to be.
The compression applies to people’s faces too. Dustin Hoffman is not known for his small nose, yet it appears positively petite in the close-up below from Outbreak. While this look flatters many actors, others benefit from the rounding of their features caused by a shorter lens.
Perhaps the chief reason to be cautious of long lenses is that they necessitate placing the camera further from the action, and the viewer will sense this, if only on a subconscious level. A long lens, if misused, can rob a scene of intimacy, and if overused could even cause the viewer to disengage with the characters and story.
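You can put rough numbers on that compression effect with the thin-lens approximation (image size is roughly focal length times object size divided by distance). The scenario below (a 1.8m-tall subject framed 12mm tall on the sensor, with a 5m tree standing 3m behind them) is invented purely for illustration:

```python
def image_height(focal_mm: float, object_height_m: float, distance_m: float) -> float:
    """Thin-lens approximation: image height on the sensor in mm."""
    return focal_mm * object_height_m / distance_m

def subject_distance(focal_mm: float, object_height_m: float, image_mm: float) -> float:
    """Camera distance needed to render the object at a given image height."""
    return focal_mm * object_height_m / image_mm

SUBJECT_H, TREE_H, SEPARATION, FRAMING = 1.8, 5.0, 3.0, 12.0  # metres, metres, metres, mm

for focal in (24, 135):
    d_subj = subject_distance(focal, SUBJECT_H, FRAMING)   # keep the subject's framing identical
    tree_mm = image_height(focal, TREE_H, d_subj + SEPARATION)
    print(f"{focal}mm: camera at {d_subj:.1f} m, background tree renders {tree_mm:.1f} mm tall")
```

With the subject framed identically, the 24mm puts the camera about 3.6m away and the tree renders around 18mm tall, while the 135mm puts the camera about 20m away and the tree renders around 29mm tall. The background looms much larger relative to the subject, which is the compression, and it comes from the camera distance, not from any optical property of the glass itself.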
I’ll leave you with some examples of long-lens shots from the nineties classics I mentioned at the start of this post. Make no mistake, these films employed shorter lenses too, but it certainly looks to me like they used longer lenses on average than contemporary movies.
White walls are the bane of a DP’s existence. They bounce light around everywhere, killing the mood, and they look cheap and boring in the background of your shot. Nonetheless, with so many contemporary buildings decorated this way, it’s a challenge we all have to face. Today I’m going to look back on two short films I’ve photographed, and explain the different approaches I took to get the white-walled locations looking nice.
Finding Hope is a moving drama about a couple grieving for the baby they have lost. It was shot largely at the home of the producer, Jean Maye, on a Sony FS7 with Sigma and Pentax stills glass.
Exit Eve is a non-linear narrative about the dehumanisation of an au pair by her wealthy employers. With a fairly respectable budget for a short, this production shot in a luxurious Battersea townhouse on an Arri Alexa Classic with Ultra Primes.
“Crown”-inspired colour contrast
It was January 2017 when we made Finding Hope, and I’d recently been watching a lot of The Crown. I liked how that series punctuated its daylight interior frames with pools of orange light from practicals. We couldn’t afford much of a lighting package, and I thought that pairing existing pracs with dimmers and tungsten bulbs would be a cheap and easy way to break up the white walls and bring some warmth – perhaps a visual representation of the titular hope – into the heavy story.
I shot all the daylight interiors at 5600K to get that warmth out of the pracs. Meanwhile I shaped the natural light as far as possible with the existing curtains, and beefed it up with a 1.2K HMI where I could. I used no haze or lens diffusion on the film because I felt it needed the unforgiving edges.
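The size of that warm shift can be quantified in mireds, the standard scale for colour-temperature differences (a mired is one million divided by the Kelvin value). The 2900K figure below is just my assumed colour temperature for a dimmed household tungsten bulb:

```python
def mired(kelvin: float) -> float:
    """Micro reciprocal degrees: the standard scale for colour-temperature shifts."""
    return 1_000_000 / kelvin

camera_wb = 5600        # daylight white balance, as used on Finding Hope
household_bulb = 2900   # assumed value for a dimmed tungsten practical

shift = mired(household_bulb) - mired(camera_wb)
print(f"Warm shift: +{shift:.0f} mired")  # roughly +166 mired
```

A shift in the region of +166 mired is in the same ballpark as a full CTO gel, which is why dimmed tungsten pracs read as such a strong orange when the camera is balanced for daylight.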
For close-ups, I often cheated the pracs a little closer and tweaked the angle, but I chose not to supplement them with movie lamps. The FS7’s native ISO of 2500 helped a lot, especially in a nighttime scene where the grieving parents finally let each other in. Director Krysten Resnick had decided that there would be tea-lights on the kitchen counter, and I asked art director Justine Arbuthnot to increase the number as much as she dared. They became the key-light, and again I tweaked them around for the close-ups.
My favourite scene in Finding Hope is another nighttime one, in which Crystal Leaity sits at a piano while Kevin Leslie watches from the doorway. I continued the theme of warm practicals, bouncing a bare 100W globe off the wall as Crystal’s key, and shaping the existing hall light with some black wrap, but I alternated that with layers of contrasting blue light: the HMI’s “moonlight” coming in through the window, and the flicker of a TV in the deep background. This latter was a blue-gelled 800W tungsten lamp bounced off a wobbling reflector.
When I saw the finished film, I was very pleased that the colourist had leant into the warm/cool contrast throughout the piece, even teasing it out of the daylight exteriors.
Trapped in a stark white townhouse
I took a different approach to colour in Exit Eve. Director Charlie Parham already knew that he wanted strong red lighting in party scenes, and I felt that this would be most effective if I kept colour out of the lighting elsewhere. As the film approaches its climax, I did start to bring in the orange of outside streetlamps, and glimpses of the party’s red, but otherwise I kept the light stark and white.
Converted from a Victorian schoolhouse, the location had high ceilings, huge windows and multiple floors, so I knew that I would mostly have to live with whatever natural light did or didn’t shine in. We were shooting during the heatwave of 2018, with many long handheld takes following lead actor Thalissa Teixeira from room to room and floor to floor, so even the Alexa’s dynamic range struggled to cope with the variations in light level.
For a night scene in the top floor bedroom, I found that the existing practicals were perfectly placed to provide shape and backlight. I white-balanced to 3600K to keep most of the colour out of them, and rigged black solids behind the camera to prevent the white walls from filling in the shadows.
(Incidentally, the night portions of this sequence were shot as one continuous take, despite comprising two different scenes set months apart. The actors did a quick-change and the bed was redressed by the art department while it was out of frame, but sadly this tour de force was chopped up in the final cut.)
I had most control over the lighting when it came to the denouement in the ground floor living area. Here I was inspired by the work of Bradford Young, ASC to backlight the closed blinds (with tungsten units gelled to represent streetlights) and allow the actors inside to go a bit dim and murky. For a key moment we put a red gel on one of the existing spotlights in the living room and let the cast step into it.
So there we have it, two different approaches to lighting in a white-walled location: creating colour contrast with dimmed practicals, or embracing the starkness and saving the colour for dramatic moments. How will you tackle your next magnolia-hued background?
For another example of how I’ve tackled white-walled locations, see my Forever Alone blog.
Over the ten weeks of lockdown to date, I have accumulated four rolls of 35mm film to process. They may have to wait until it is safe for me to visit my usual darkroom in London, unless I decide to invest in the equipment to process film here at home. As this is something I’ve been seriously considering, I thought this would be a good time to remind myself of the science behind it all, by describing how film and the negative process work.
Black and White
The first thing to understand is that the terminology is full of lies. There is no celluloid involved in film – at least not any more – and there never has been any emulsion.
However, the word “film” itself is at least accurate; it is quite literally a strip of plastic backing coated with a film of chemicals, even if that plastic is not celluloid and those chemicals are not an emulsion. Celluloid (cellulose mononitrate) was phased out in the mid-twentieth century due to its rampant inflammability, and a variety of other flexible plastics have been used since.
As for “emulsion”, it is in fact a suspension of silver halide crystals in gelatine. The bigger the crystals, the grainier the film, but the more light-sensitive too. When the crystals are exposed to light, tiny specks of metallic silver are formed. This is known as the latent image. Even if we could somehow view the film at this stage without fogging it completely, we would see no visible image as yet.
For that we need to process the film, by bathing it in a chemical developer. Any sufficiently large specks of silver will react with the developer to turn the entire silver halide crystal into black metallic silver. Thus areas that were exposed to light turn black, while unlit areas remain transparent; we now have a negative image.
Before we can examine the negative, however, we must use a fixer to turn the unexposed silver halide crystals into a light-insensitive, water-soluble compound that we can wash away.
Now we can dry our negative. At this stage it can be scanned for digital manipulation, or printed photo-chemically. This latter process involves shining light through the negative onto a sheet of paper coated with more photographic emulsion, then processing and fixing that paper as with the film. (As the paper’s emulsion is not sensitive to the full spectrum of light, this procedure can be carried out under dim red illumination from a safe-light.) Crystals on the paper turn black when exposed to light – as they are through the transparent portions of the negative, which you will recall correspond to the shadows of the image – while unexposed crystals again remain transparent, allowing the white of the paper to show through. Thus the negative is inverted and a positive image results.
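The same double inversion is easy to see in digital terms. This little sketch is my own analogy, not part of the photo-chemical process: it inverts 8-bit pixel values the way printing inverts a negative:

```python
def invert(value: int) -> int:
    """Invert an 8-bit pixel value, as printing inverts a negative."""
    return 255 - value

# A bright sky (exposed crystals turning black on the negative) comes back
# bright in the print; a deep shadow stays dark.
negative_sky, negative_shadow = invert(230), invert(20)
print(negative_sky, negative_shadow)                   # the negative: sky dark, shadow light
print(invert(negative_sky), invert(negative_shadow))   # the positive: back to the original values
```

Inverting twice returns exactly the original value, which is the whole trick of the negative-positive process: two light-to-dark reversals cancel out.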
Colour

Things are a little more complicated with colour, as you might expect. I’ve never processed colour film myself, and I currently have no intention of trying!
The main difference is that the film itself contains multiple layers of emulsion, each sensitive to different parts of the spectrum, and separated by colour filters. When the film is developed, the by-products of the chemical reaction combine with colour couplers to create colour dyes.
An additional processing step is introduced between the development and the fixing: the bleach step. This converts the silver back to silver halide crystals which are then removed during fixing. The colour dyes remain, and it is these that form the image.
Many cinematographers will have heard of a process called bleach bypass, used on such movies as 1984 and Saving Private Ryan. You can probably guess now that this process means skipping or reducing the bleach step, so as to leave the metallic silver in the negative. We’ve seen that this metallic silver forms the entire image in black-and-white photography, so by leaving it in a colour negative you are effectively combining colour and black-and-white images in the same frame, resulting in low colour saturation and increased contrast.
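Digitally, that look is often approximated by blending an image with its own black-and-white version and then boosting the contrast. Here’s a crude, hypothetical sketch of the idea for a single pixel; the silver weight and the 1.3 contrast factor are arbitrary choices of mine, not anyone’s actual grading recipe:

```python
def luminance(r: float, g: float, b: float) -> float:
    """Approximate luminance using the Rec. 709 luma weights."""
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def bleach_bypass(r: float, g: float, b: float, silver: float = 0.6) -> list[float]:
    """Crude digital approximation: mix each channel with the pixel's own
    luminance (the retained 'silver' layer), then stretch contrast about mid-grey."""
    y = luminance(r, g, b)
    mixed = [(1 - silver) * c + silver * y for c in (r, g, b)]
    return [max(0.0, min(255.0, 128 + (c - 128) * 1.3)) for c in mixed]

# A saturated red comes back duller (channels pulled towards each other)
# but with its brightness pushed further from mid-grey.
print(bleach_bypass(200, 80, 60))
```

The blend towards luminance is the desaturation from the silver layer, and the stretch around mid-grey is the added contrast; together they mimic, very roughly, a colour and a black-and-white image sitting in the same frame.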
Colour printing paper also contains colour couplers and is likewise processed with a bleach step. Because of their spectral sensitivity, colour papers must be printed and processed in complete darkness or under a very weak amber light.
In future posts I will cover black-and-white processing and printing from a much more practical standpoint, guiding you through it step by step. I will also look at the creative possibilities of the enlargement process, and we’ll discover where the Photoshop “dodge” and “burn” tools had their origins. For those of you who aren’t Luddites, I’ll delve into how digital sensors capture and process images too!