How Film Works

Over the ten weeks of lockdown to date, I have accumulated four rolls of 35mm film to process. They may have to wait until it is safe for me to visit my usual darkroom in London, unless I decide to invest in the equipment to process film here at home. As this is something I’ve been seriously considering, I thought this would be a good time to remind myself of the science behind it all, by describing how film and the negative process work.

 

Black and White

The first thing to understand is that the terminology is full of lies. There is no celluloid involved in film – at least not any more – and there never has been any emulsion.

However, the word “film” itself is at least accurate; it is quite literally a strip of plastic backing coated with a film of chemicals, even if that plastic is not celluloid and those chemicals are not an emulsion. Celluloid (cellulose nitrate) was phased out in the mid-twentieth century due to its dangerous flammability, and a variety of other flexible plastics have been used since.

As for “emulsion”, it is in fact a suspension of silver halide crystals in gelatine. The bigger the crystals, the grainier the film, but the more light-sensitive too. When the crystals are exposed to light, tiny specks of metallic silver are formed. This is known as the latent image. Even if we could somehow view the film at this stage without fogging it completely, we would see no visible image as yet.

For that we need to process the film, by bathing it in a chemical developer. Any sufficiently large specks of silver will react with the developer to turn the entire silver halide crystal into black metallic silver. Thus areas that were exposed to light turn black, while unlit areas remain transparent; we now have a negative image.

Before we can examine the negative, however, we must use a fixer to turn the unexposed silver halide crystals into a light-insensitive, water-soluble compound that we can wash away.

Now we can dry our negative. At this stage it can be scanned for digital manipulation, or printed photo-chemically. This latter process involves shining light through the negative onto a sheet of paper coated with more photographic emulsion, then processing and fixing that paper as with the film. (As the paper’s emulsion is not sensitive to the full spectrum of light, this procedure can be carried out under dim red illumination from a safe-light.) Crystals on the paper turn black when exposed to light – as they are through the transparent portions of the negative, which you will recall correspond to the shadows of the image – while unexposed crystals again remain transparent, allowing the white of the paper to show through. Thus the negative is inverted and a positive image results.
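The double inversion described above can be sketched numerically. This is a toy model of my own, not any real densitometry: brightness runs from 0.0 (black) to 1.0 (white), the negative flips the scene's tones, and the print flips them back.

```python
# Toy model of the negative/positive process.
# Brightness values run from 0.0 (black) to 1.0 (white).
# Real film response is non-linear; this sketch only shows the inversions.

def expose(scene):
    """Film negative: bright areas of the scene turn dark (metallic silver)."""
    return [1.0 - value for value in scene]

def print_positive(negative):
    """Printing inverts the negative again, restoring the original tones."""
    return [1.0 - value for value in negative]

scene = [0.0, 0.25, 0.5, 0.75, 1.0]   # shadows through to highlights
negative = expose(scene)               # [1.0, 0.75, 0.5, 0.25, 0.0]
positive = print_positive(negative)    # matches the original scene
```

The shadows of the scene (0.0) become the clear areas of the negative (1.0), which in turn print as black on the paper, just as described above.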

 

Colour

Things are a little more complicated with colour, as you might expect. I’ve never processed colour film myself, and I currently have no intention of trying!

The main difference is that the film itself contains multiple layers of emulsion, each sensitive to different parts of the spectrum, and separated by colour filters. When the film is developed, the by-products of the chemical reaction combine with colour couplers to create colour dyes.

An additional processing step is introduced between the development and the fixing: the bleach step. This converts the silver back to silver halide crystals which are then removed during fixing. The colour dyes remain, and it is these that form the image.

Many cinematographers will have heard of a process called bleach bypass, used on such movies as 1984 and Saving Private Ryan. You can probably guess now that this process means skipping or reducing the bleach step, so as to leave the metallic silver in the negative. We’ve seen that this metallic silver forms the entire image in black-and-white photography, so by leaving it in a colour negative you are effectively combining colour and black-and-white images in the same frame, resulting in low colour saturation and increased contrast.

“1984” (DP: Roger Deakins CBE, ASC, BSC)

Colour printing paper also contains colour couplers, and its processing likewise includes a bleach step. Because of its broader spectral sensitivity, colour paper must be printed and processed in complete darkness or under a very weak amber safe-light.

 

Coming Up

In future posts I will cover the black-and-white processing and printing process from a much more practical standpoint, guiding you through it, step by step. I will also look at the creative possibilities of the enlargement process, and we’ll discover where the Photoshop “dodge” and “burn” tools had their origins. For those of you who aren’t Luddites, I’ll delve into how digital sensors capture and process images too!


Cinematic Lighting: An Online Course Available Now

My online course, Cinematic Lighting, is available now on Udemy. It’s an advanced and in-depth guide to arguably the most important part of a director of photography’s job: designing the illumination.

The course is aimed at cinematography students, camera operators looking to move up to DP, corporate/industrial filmmakers looking to move into drama, and indie filmmakers looking to increase their production values.

Rather than demonstrating techniques in isolation in a studio, the course takes place entirely on location. The intent is to show the realities of creating beautiful lighting while dealing with the usual challenges of real independent film production, like time, weather and equipment, as well as meeting the requirements of the script.

Cinematic Lighting consists of four hour-long modules: Day Exterior, Day Interior, Night Interior and Night Exterior. Each module follows the blocking, lighting and shooting of a short scripted scene (inspired by the fantasy web series Ren: The Girl with the Mark) with two actors in full costume. Watch me and my team set up all the fixtures, control the light with flags and rags, and make adjustments when the camera moves around for the coverage. Every step of the way, I explain what I’m doing and why, as well as the alternatives you could consider for your own films. Each module concludes with the final edited scene so that you can see the end result.

Students should already have a grasp of basic cinematography concepts like white balance and depth of field. A familiarity with the principle of three-point lighting will be useful, but not essential.

You will learn:

  • how to create depth and contrast in your shots;
  • how to light for both the master shot and the coverage;
  • how and when to use HMI, fluorescent, LED and traditional tungsten lighting;
  • how to use natural light to your advantage, and how to mould it;
  • how to use a light meter and false colours to correctly expose your image;
  • how to use smoke or haze to create atmosphere, and
  • how to simulate sunlight, moonlight and firelight.

Get lifetime access to Cinematic Lighting now.

Below is a full breakdown of the course content.

 

MODULE 1: DAY EXTERIOR

Learn how to block your scene to make the most of the natural light, and how to modify that light with flags, bounce and diffusion, as well as how to expose your image correctly.

1.1 Principles & Prep

  • What to look for on a recce/scout
  • How to predict the sun path using apps or a compass
  • How to block action relative to the sun
  • Three-point lighting
  • The importance of depth in cinematography
  • When to shoot in cross-light vs. backlight
  • How to get rippling reflections off water

1.2 Blocking for Success

  • Observing a rehearsal with actors Kate and Ivan
  • What to look for in the blocking
  • How to get reflections off a blade
  • When to shoot the master shot
  • How to choose what order to shoot your coverage in
  • Using a white poly/bead-board as bounce

1.3 Exposure

  • Why light meters are still important
  • Dynamic range and log recording
  • How to use an incident meter and a spot reflectance meter
  • The f-stop series
  • How to use false colours
  • How to arrive at the right exposure from all this information
  • How to select the appropriate ND (neutral density) filter
  • Shooting the wide shot

1.4 Shaping the Singles

  • Short- and broad-key lighting
  • Types of reflector
  • Positioning a reflector
  • Paying attention to eye reflections
  • Negative fill
  • How to use 4×4 floppy flags
  • Shooting Ivan’s close-up
  • Using a trace frame
  • “Health bounce”
  • Shooting Kate’s close-up
  • Summary
  • The final edited scene

MODULE 2: DAY INTERIOR

This module introduces some common lighting instruments, demonstrates how to imitate natural light entering a room, and how to create depth and contrast with black-out and smoke.

2.1 Scouting & Equipment

  • Identifying light sources in the room
  • Using apps or a compass to predict how sun will enter through the windows
  • The principle of dark-to-light depth
  • Using curtains to modify interior light
  • Introduction to some common lighting instruments: Dedolights, Kino Flos, an HMI and a Rayzr MC LED panel

2.2 Lighting through a Window

  • Observing the blocking with actors Kate and Ivan
  • Direct lighting using an HMI
  • Controlling contrast with black-out
  • Diffusing the light with a trace frame
  • Bouncing the light off poly/bead-board
  • Bouncing the light off parts of the set
  • Use of a light meter and false colours to set the correct exposure

2.3 Atmosphere

  • Use of smoke or haze to add atmosphere to the scene
  • Reasons to add atmosphere
  • The concept of aerial perspective
  • Shooting the master shot
  • Comparison of the final shot to the other versions demonstrated in 2.2 and 2.3

2.4 Lighting the Reverse

  • Use of viewfinder apps to find a frame and select a lens
  • Challenges of front-light
  • Adjusting the window light to highlight certain areas
  • Demonstrating a “window wrap” using a Kino Flo
  • Using light readings and ND filters to arrive at the correct exposure
  • Shooting the reverse
  • Summary
  • The final edited scene

MODULE 3: NIGHT INTERIOR

Create a moody night-time look indoors using practical sources, toplight, and simulated moonlight and firelight.

3.1 Internal vs. External Light

  • Observing the blocking with actors Kate and Ivan
  • Approaches to lighting a night interior scene
  • Lighting from outside with an HMI “moon”
  • Working with bounced “moonlight” inside the room
  • Choosing an overhead source as the key light

3.2 Working with Toplight

  • Time and safety considerations of working with top-light
  • Rigging a top-light safely
  • Controlling top-light spill on the set walls
  • Using unbleached muslin to soften and warm up the light
  • The inverse square law

3.3 Firelight & Moonlight

  • Working with practical candles
  • Reinforcing candles with a hidden LED fixture
  • Simulating an off-camera fireplace
  • Lighting the view outside the window
  • Bringing moonlight into the room to add colour contrast and depth
  • Shooting the master shot

3.4 Tweaking for the Coverage

  • Checking the blocking for the first single
  • Filling in shadows using additional unbleached muslin
  • Flagging the top-light to control the background
  • Adjusting the external light to maintain colour contrast
  • Shooting Kate’s single
  • Adjusting the fireplace effect to work for a close-up
  • Shooting Ivan’s single
  • Summary
  • The final edited scene

MODULE 4: NIGHT EXTERIOR

Paint with light on the blank canvas of night; set up an artificial moon; create depth, contrast and colour contrast; and use shadows to your advantage.

4.1 Setting the Moon

  • Observing the blocking with actors Kate and Ivan
  • Principles of night exterior lighting
  • Creating believable moonlight
  • Features of HMI lighting
  • Choosing a position and height for the HMI “moon”

4.2 Finessing the Master

  • Use of a practical fire source
  • Reinforcing a practical fire with an LED fixture
  • Colour contrast
  • Using a Kino Flo as an additional soft source
  • Tackling a difficult shadow
  • Reading and adjusting lighting ratios using an incident meter
  • Working with smoke/atmos outdoors
  • Shooting the master shot

4.3 Shooting the Singles

  • Adjusting the existing sources to work for a close-up
  • Shooting Ivan’s single
  • Diffusing the HMI
  • Monitoring exposure using false colours
  • Shooting Kate’s single

4.4 Lighting the Reverse

  • The pros and cons of flipping the backlight
  • Example of cheating the moonlight around
  • Using established sources to your advantage
  • Adding diffusion vs. a gobo to the HMI
  • Creating a “branch-a-loris”
  • Shooting the reverse
  • Summary
  • The final edited scene

APPENDICES

Useful links, a full kit list and a deleted scene

Get lifetime access to Cinematic Lighting now.


Where to Place Your Horizon

I once had an argument with a director about the composition of a wide shot. I wanted to put the horizon nearer the top of the frame than the bottom, and he felt that this was the wrong way around. In reality there is no right and wrong in composition, only a myriad of possibilities that are all valid and can all make your viewers feel different ways. In this article I will take a metaphorical ramble through these possibilities, and ponder their effects.

You would think that the most natural position for the horizon would be in the vertical centre of the frame. After all, in our day-to-day life, when we look straight ahead, this is where it appears to be.

In practice, a central horizon is not a popular choice. This article by Art Wolfe, for example, argues that it robs the image of dynamism, sending the eye straight to the horizon rather than letting it wander around the frame. The technique is also at odds with the Rule of Thirds, though as I’ve written before, that’s not a rule I place much stock in.

The talented photographer and vlogger Arian Vila, however, describes the merits of a central horizon when composing for a square aspect ratio. And this is an excellent reminder that the horizon does not exist in a vacuum; like anything else, it must be judged in the context of the aspect ratio and the other compositional elements of the frame.

For Leon Chambers’ Above the Clouds (out now on Amazon Prime and other platforms!), I placed the horizon centrally several times:

This was a deliberate echo of the painting “Above the Clouds” which appears in the film and provides the thematic backbone.

A year or two after shooting Clouds, I came across Photograms of the Year: 1949 in a charity shop. Amongst its pages I found another diptych, one created by the book designer rather than the two entirely separate photographers:

These two images, perfectly paired, demonstrate contrasting horizon placement. At Grey Dawn emphasises the sky by placing the horizon low in the frame, creating a sense of space. Meanwhile, Homeward Bound positions its horizon somewhere beyond the trees near the top of frame, drawing attention to the sand and the wheel ruts and indeed to the figures of the caravan itself, rather than to the destination or surroundings.

Horizon placement is closely tied to two other creative choices: headroom, which I dedicated a whole article to, and lens height, i.e. is this a low or a high angle? Even a GCSE media student can tell you that a low angle imbues power while a high angle implies vulnerability, but these are terms most applicable to closer shots. When we think of horizon placement we are probably concerned with big wides, where creating a mood for the scene or setting is more important than visualising a character’s power or lack thereof.

Breaking Bad is an example of a series that predominantly chooses low horizons to show off the big skies of its New Mexico locations. “[Showrunner] Vince [Gilligan] is a student of cinema and knows movies like the back of his hand,” says DP Michael Slovis, ASC. “It was always in his mind that this was a Western in the style of Sergio Leone and the Italian Neo-realists.”

Incidentally, there’s an amazing desert scene in the episode “Crawl Space” where a sunlit close-up cuts to a big wide. The wide holds as clouds roll over the sun. The action continues and the shot still holds, the line between sunlight and shade visible as it rolls away across the desert, until finally a new line slides under the camera and sun breaks over the actors once more. Only then are we permitted to see the scene up close again.

This creative choice, to set the character’s small concerns against the vast immutability of nature, comes from the same place as the choice to put the horizon low in frame.

Returning to Photograms of the Year: 1949, my eyes light upon another pair of contrasting images:

Despite its title, Towards the Destination shows us little of where the sailors are heading, by placing the horizon high in the frame and focusing on the water and the reflections therein. Rendezvous at Chincoteague, by placing its horizon low in the frame, radiates a feeling of isolation that is in contrast to the meeting of the title.

As we consider the figures in these photographs I am forced to concede that the argument I alluded to in the introduction may have been less about the position of the horizon and more about the position of the actor. I think the director felt that it was unnatural for a person to appear in the top half of the frame rather than the bottom half.

I can see his point. The vision of our naked eyes is definitely framed along the bottom by the ground, while the top remains open and unlimited – outdoors, at least. So if a person is standing on the ground, we naturally expect them to appear low down in an image.

But this – like nose room, the Rule of Thirds, the 180° Rule, short-key lighting, so many things in cinematography – is merely a guideline. There are times when it just isn’t helpful, when it can lead to wasted opportunities.

Here is a shot of mine from The Gong Fu Connection (dir. Ted Duran) where the horizon and the cast are placed in the upper half of frame:

Would it really have been better to frame them lower, losing out on the reflections and the foreground rushes, and gaining just empty sky? I think not. This composition was especially important to me, because the film’s titular connection is all about man and the natural world. By showing the water and greenery, we root the characters in it. A composition with more sky might have made them seem dwarfed by nature, lost in it.

This article has been something of a stream of consciousness, but the point I’m trying to make is this: always consider the content and meaning of your shot; reflecting those in your composition is infinitely more important than adhering to any guidelines.

If you enjoyed this, you may be interested in some of my other articles on composition:


How Analogue Photography Can Make You a Better Cinematographer

With many of us looking for new hobbies to see us through the zombie apocalypse Covid-19 lockdown, analogue photography may be the perfect one for an out-of-work DP. While few of us may get to experience the magic and discipline of shooting motion picture film, stills film is accessible to all. With a range of stocks on the market, bargain second-hand cameras on eBay, seemingly no end of vintage glass, and even home starter kits for processing your own images, there’s nothing to stop you giving it a go.

Since taking them up again in 2018, I’ve found that 35mm and 120 photography have had a positive impact on my digital cinematography. Here are five ways in which I think celluloid photography can help you too sharpen your filmmaking skills.

 

1. Thinking before you click

When you only have 36 shots on your roll and that roll cost you money, you suddenly have a different attitude to clicking the shutter. Is this image worthy of a place amongst those 36? If you’re shooting medium or large-format then the effect is multiplied. In fact, given that we all carry phone cameras with us everywhere we go, there has to be a pretty compelling reason to lug an SLR or view camera around. That’s bound to raise your game, making you think longer and harder about composition and content, to make every frame of celluloid a minor work of art.

 

2. Judging exposure

I know a gaffer who can step outside and tell you what f-stop the light is, using only his naked eye. This is largely because he is a keen analogue photographer. You can expose film by relying on your camera’s built-in TTL (through the lens) meter, but since you can’t see the results until the film is processed, analogue photographers tend to use other methods as well, or instead, to ensure a well-exposed negative. Rules like “Sunny Sixteen” (on a sunny day, set the aperture to f/16 and the shutter speed reciprocal to match the ISO, e.g. 1/200th of a second at ISO 200) and the use of handheld incident meters make you more aware of the light levels around you. A DP with this experience can get their lighting right more quickly.
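The arithmetic behind Sunny Sixteen can be sketched in a few lines. This is my own illustration (the function and list names are invented): at f/16 in full sun the shutter-speed denominator equals the ISO, and each full stop you open the aperture calls for a shutter one stop faster.

```python
# Sunny Sixteen sketch: shutter speed for a given ISO and aperture
# on a sunny day. Apertures follow the standard full-stop series;
# each step wider than f/16 admits one stop more light, so the
# shutter must be one stop faster to compensate.

F_STOPS = [16, 11, 8, 5.6, 4, 2.8, 2, 1.4]  # full stops, f/16 first

def sunny_16_shutter(iso, f_stop):
    """Return the shutter-speed denominator, e.g. 200 for 1/200 s."""
    stops_open = F_STOPS.index(f_stop)      # stops wider than f/16
    return iso * 2 ** stops_open

print(sunny_16_shutter(200, 16))   # 200 -> 1/200 s at f/16, as in the rule
print(sunny_16_shutter(200, 8))    # 800 -> 1/800 s two stops wider
```

Running the numbers like this a few times is a good way to internalise the reciprocal relationship between aperture and shutter speed.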

 

3. Pre-visualising results

We digital DPs can fall into the habit of not looking at things with our eyes, always going straight to the viewfinder or the monitor to judge how things look. Since the optical viewfinder of an analogue camera tells you little more than the framing, you tend to spend less time looking through the camera and more using your eye and your mind to visualise how the image will look. This is especially true when it comes to white balance, exposure and the distribution of tones across a finished print, none of which are revealed by an analogue viewfinder. Exercising your mind like this gives you better intuition and increases your ability to plan a shoot, through storyboarding, for example.

 

4. Grading

If you take your analogue ethic through to post production by processing and printing your own photographs, there is even more to learn. Although detailed manipulation of motion pictures in post is relatively new, people have been doctoring still photos pretty much since the birth of the medium in the mid-19th century. Discovering the low-tech origins of Photoshop’s dodge and burn tools to adjust highlights and shadows is a pure joy, like waving a magic wand over your prints. More importantly, although the printing process is quick, it’s not instantaneous like Resolve or Baselight, so you do need to look carefully at your print, visualise the changes you’d like to make, and then execute them. As a DP, this makes you more critical of your own work and as a colourist, it enables you to work more efficiently by quickly identifying how a shot can be improved.

 

5. Understanding

Finally, working with the medium which digital was designed to imitate gives you a better understanding of that imitation. It was only when I learnt about push- and pull-processing – varying the development time of a film to alter the brightness of the final image – that my understanding of digital ISO really clicked. Indeed, some argue that electronic cameras don’t really have ISO, that it’s just a simulation to help users from an analogue background to understand what’s going on. If all you’ve ever used is the simulation (digital), then you’re unlikely to grasp the concepts in the same way that you would if you’ve tried the original (analogue).
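As a rough sketch of that analogy (my own simplification — real push-processing also raises contrast and grain): each stop of push effectively doubles the ISO you can rate the film at, just as digital gain doubles per stop.

```python
# Sketch of the push-processing analogy: each stop of push doubles
# the effective ISO, just as a stop of digital gain does. Assumes a
# plain doubling; real film behaviour is more complicated.

def effective_iso(box_speed, push_stops):
    """ISO you can rate the film at for a given number of push stops."""
    return box_speed * 2 ** push_stops

print(effective_iso(400, 0))   # 400: normal development
print(effective_iso(400, 1))   # 800: pushed one stop
print(effective_iso(400, 2))   # 1600: pushed two stops
```

A pull would use a negative stop count, halving the rating instead.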


Secondary Grades are Nothing New

Last week I posted an article I wrote a while back (originally for RedShark News), entitled “Why You Can’t Relight Footage in Post”. You may detect that this article comes from a slightly anti-colourist place. I have been, for most of my career, afraid of grading – afraid of colourists ruining my images, indignant that my amazing material should even need grading. Arrogance? Ego? Delusion? Perhaps, but I suspect all DPs have felt this way from time to time.

I think I have finally started to let go of this fear and to understand the symbiotic relationship betwixt DP and colourist. As I mentioned a couple of weeks ago, one of the things I’ve been doing to keep myself occupied during the Covid-19 lockdown is learning to grade. This is so that I can grade the dramatic scenes in my upcoming lighting course, but also an attempt to understand a colourist’s job better. The course I’m taking is this one by Matthew Falconer on Udemy. At 31 hours, it takes some serious commitment to complete, commitment I fear I lack. But I’ve got through enough to have learnt the ins and outs of DaVinci Resolve, where to start when correcting an image, the techniques of primary and secondary grades, and how to use the scopes and waveforms. I would certainly recommend the course if you want to learn the craft.

As I worked my way through grading the supplied demo footage, I was struck by two similarities. Firstly, as I tracked an actor’s face and brightened it up, I felt like I was in the darkroom dodging a print. (Dodging involves blocking some of the light reaching a certain part of the image when making an enlargement from a film negative, resulting in a brighter patch.) Subtly lifting the brightness and contrast of your subject’s face can really help draw the viewer’s eye to the right part of the image, but digital colourists were hardly the first people to recognise this. Photographers have been dodging – and the opposite, burning – prints pretty much since the invention of the negative process almost 200 years ago.
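A digital dodge can be sketched as a lift applied through a soft mask — a much-simplified stand-in for a tracked window in a grading package. The function name, mask shape and numbers below are my own illustration, not how Resolve works internally.

```python
# Digital "dodge": lift brightness inside a soft circular mask.
# Pixel values run from 0.0 (black) to 1.0 (white).

def dodge(image, centre, radius, lift):
    """Brighten pixels near `centre`, feathering off with distance."""
    cx, cy = centre
    out = []
    for y, row in enumerate(image):
        new_row = []
        for x, pixel in enumerate(row):
            dist = ((x - cx) ** 2 + (y - cy) ** 2) ** 0.5
            weight = max(0.0, 1.0 - dist / radius)   # soft fall-off to zero
            new_row.append(min(1.0, pixel + lift * weight))
        out.append(new_row)
    return out

face = [[0.4] * 5 for _ in range(5)]   # flat mid-grey patch as a stand-in
lifted = dodge(face, centre=(2, 2), radius=2, lift=0.2)
# Centre pixel rises to about 0.6; the corners, outside the mask, stay at 0.4.
```

The feathered fall-off is what keeps the adjustment invisible to the viewer, exactly as a gently waved dodging tool does under the enlarger.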

The second similarity struck me when I was drawing a power curve around an actor’s shirt in order to adjust its colour separately from the rest of the image. I was reminded of this image from Painting with Light, John Alton’s seminal 1949 work on cinematography…

 

The chin scrim is a U-shaped scrim… used to cut the light off hot white collars worn with black dinner jackets.

It’s hard for a modern cinematographer to imagine blocking static enough for such a scrim to be useful, or indeed a schedule generous enough to permit the setting-up of such an esoteric tool. But this was how you did a power window in 1949: in camera.

Sometimes I’ve thought that modern grading techniques, particularly secondaries (which target only specific areas of the image), are unnecessary; after all, we got through a century of cinema just fine without them. But in a world where DPs don’t have the time to set up chin scrims, and can’t possibly expect a spark to follow an actor around with one, adding one in post is a great solution. Our cameras might have more dynamic range than 1940s film stock, meaning that that white collar probably won’t blow out, but we certainly don’t want it distracting the eye in the final grade.

Like I said in my previous post, what digital grading does so well are adjustments of emphasis. This is not to belittle the process at all. Those adjustments of emphasis make a huge difference. And while the laws of physics mean that a scene can’t feasibly be relit in post, they also mean that a chin scrim can’t feasibly follow an actor around a set, and you can’t realistically brighten an actor’s face with a follow spot.

What I’m trying to say is, do what’s possible on set, and do what’s impossible in post. This is how lighting and grading work in harmony.


Why You Can’t Re-light Footage in Post

The concept of “re-lighting in post” is one that has enjoyed a popularity amongst some no-budget filmmakers, and which sometimes gets bandied around on much bigger sets as well. If there isn’t the time, the money or perhaps simply the will to light a scene well on the day, the flexibility of RAW recording and the power of modern grading software mean that the lighting can be completely changed in postproduction, so the idea goes.

I can understand why it’s attractive. Lighting equipment can be expensive, and setting it up and finessing it is one of the biggest consumers of time on any set. The time of a single wizard colourist can seem appealingly cost-effective – especially on an unpaid, no-budget production! – compared with the money pit that is a crew, cast, location, catering, etc, etc. Delaying the pain until a little further down the line can seem like a no-brainer.

There’s just one problem: re-lighting footage is fundamentally impossible. To even talk about “re-lighting” footage demonstrates a complete misunderstanding of what photographing a film actually is.

This video, captured at a trillion frames per second, shows the transmission and reflection of light.

The word “photography” comes from Greek, meaning “drawing with light”. This is not just an excuse for pompous DPs to compare themselves with the great artists of the past as they “paint with light”; it is a concise explanation of what a camera does.

A camera can’t record a face. It can’t record a room, or a landscape, or an animal, or objects of any kind. The only thing a camera can record is light. All photographs and videos are patterns of light which the viewer’s brain reverse-engineers into a three-dimensional scene, just as our brains reverse-engineer the patterns of light on the retinae every moment of every day, to make sense of our surroundings.

The light from this object gets gradually brighter then gradually darker again – therefore it is a curved surface. There is light on the top of that nose but not on the underneath, so it must be sticking out. These oval surfaces are absorbing all the red and blue light and reflecting only green, so it must be plant life. Such are the deductions made continuously by the brain’s visual centre.

A compound lens for a prototype light-field camera by Adobe

To suggest that footage can be re-lit is to suggest that recorded light can somehow be separated from the underlying physical objects off which that light reflected. Now of course that is within the realms of today’s technology; you could analyse a filmed scene and build a virtual 3D model of it to match the footage. Then you could “re-light” this recreated scene, but it would be a hell of a lot of work and would, at best, occupy the Uncanny Valley.

Some day, perhaps some day quite soon, artificial intelligence will be clever enough to do this for us. Feed in a 2D video and the computer will analyse the parallax and light shading to build a moving 3D model to match it, allowing a complete change of lighting and indeed composition.

Volumetric capture is already a functioning technology, currently using a mix of infrared and visible-light cameras in an environment lit as flatly as possible for maximum information – like log footage pushed to its inevitable conclusion. By surrounding the subject with cameras, a moving 3D image results.

Sir David Attenborough getting his volume captured by Microsoft

Such rigs are a type of light-field imaging, a technology that reared its head a few years ago in the form of Lytro, with viral videos showing how depth of field and even camera angle (to a limited extent) could be altered with this seemingly magical system. But even Lytro was capturing light, albeit in a way that allowed for much more digital manipulation.

Perhaps movies will eventually be captured with some kind of Radar-type technology, bouncing electromagnetic waves outside the visible spectrum off the sets and actors to build a moving 3D model. At that point the need for light will have been completely eliminated from the production process, and the job of the director of photography will be purely a postproduction one.

While I suspect most DPs would prefer to be on a physical set than hunched over a computer, we would certainly make the transition if that was the only way to retain meaningful authorship of the image. After all, most of us are already keen to attend grading sessions to ensure our vision survives postproduction.

The Lytro Illum 2015 CP+ by Morio – own work, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=38422894

But for the moment at least, lighting must be done on set; re-lighting after the fact is just not possible in any practical way. This is not to take away from the amazing things that a skilled colourist can do, but the vignettes, the split-toning, the power windows, the masking and the tracking – these are adjustments of emphasis.

A soft shadow can be added, but without 3D modelling it can never fall and move as a real shadow would. A face can be brightened, but the quality of light falling on it can’t be changed from soft to hard. The angle of that light can’t be altered. Cinematographers refer to a key-light as the “modelling” light for a reason: because it defines the 3D model which your brain reverse-engineers when it sees the image.

So if you’re ever tempted to leave the job of lighting to postproduction, remember that your footage is literally made of light. If you don’t take the time to get your lighting right, you might as well not have any footage at all.

Why You Can’t Re-light Footage in Post

Cinematic Lighting Course: Coming Soon

I’m using my time in Covid-19 lockdown for a few different things, some more worthy than others. Lie-ins are a big one. Watching all of Lost again. Exercising more. But also I’ve got a big editing project to complete, a project of my own, the perfect task to get me through the long days at home.

Last November I shot Cinematic Lighting, a four-hour online course. It’s something I’d been thinking about for a while, especially over the last year or so as my Instagram following has sky-rocketed (about 33,000 at the time of writing). DPs and cinematography students follow my feed for the lighting diagrams I post every Friday, showing exactly how a lighting set-up was achieved, but some commenters had started to ask why I made certain creative decisions. So the idea for the course was born.

Lighting diagram for the Night Exterior module’s master shot

Cinematic Lighting consists of four modules: day exterior, day interior, night interior and night exterior. In each module I light and shoot a half-page scene with two actors, Kate Madison and Ivan Moy, with behind-the-scenes cameras following my every move. As I do it – with the assistance of gaffer Jeremy Dawson and spark Gareth Neal – I explain how and why I’m doing it. Sometimes I demonstrate alternative options I could have chosen. I talk about characterisation and how to match it with lighting. I quote John Alton and Christopher Nolan. I show clips from other productions I’ve shot and tell the stories behind them. I explain how to use a light meter and get your head around f-stops, T-stops and ND filters. I demonstrate the power of smoke. But most importantly I lay my creative process bare as I work.

Setting up a single in the Night Interior module. Photo: Ashram Maharaj

The original intention was for the course to be a reward on the Kickstarter campaign for Ren: The Girl with the Mark‘s second season, but sadly that campaign was unsuccessful. Over the next couple of months I’ll be investigating my options for releasing this course, and rest assured that I’ll let you know as soon as it’s available.

Meanwhile, postproduction work continues on it. The main thing left to do is the grading of the finished dramatic scenes; each module concludes with a polished edit of the scene which I’ve shot. Rather than hire a colourist, I’ve decided it’s time to finally learn a few things about grading myself. To that end, I’ve purchased a Udemy course and am currently learning how to do fancy secondaries in Davinci Resolve – another good use of my lockdown time, I feel. More on this in a future post.

Meanwhile, stay safe and REMAIN INDOORS.

Filming the introduction to the Day Exterior module. Photo: Colin Ramsay


The Rise of Anamorphic Lenses in TV

Each month I get a digital copy of American Cinematographer in my inbox, filled with illuminating (pun intended) articles about the lighting and lensing of the latest theatrical releases. As a rule of thumb, I only read the articles if I’ve seen the films. Trouble is, I didn’t go to the cinema much any more… even before Coronavirus put a stop to all that.

Why? TV is better, simple as that. Better writing, better cinematography, better value for money. (Note: I include streaming services like Netflix and Amazon under the umbrella of “TV” here.) But whereas I can turn to AC to discover the why and how of the cinematography of a movie, there is no equivalent for long-form content. I would love to see a magazine dedicated to the beautiful cinematography of streaming shows, but until then I’ll try to plug the gap myself.

I’d like to start with a look at the increasing use of anamorphic lenses for the small screen. Let’s look at a few examples and try to discover what anamorphic imaging adds to a project.

Lenses with an anamorphic element squeeze the image horizontally, allowing a wider field of view to be captured. The images are restored to their correct proportions in postproduction, but depth of field, bokeh (out-of-focus areas), barrel distortion and lens flare all exhibit different characteristics from those obtained with traditional spherical lenses.
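The geometry is easy to sketch in a couple of lines of Python. This is my own illustration, not from the article, and the Super 35 gate dimensions are approximate:

```python
# Illustrative sketch: a 2x anamorphic lens compresses the horizontal axis
# onto the sensor; desqueezing in post stretches it back, multiplying the
# native sensor aspect ratio by the squeeze factor.

def desqueezed_aspect(sensor_width_mm: float, sensor_height_mm: float,
                      squeeze: float = 2.0) -> float:
    """Aspect ratio of the restored image for a given anamorphic squeeze."""
    return (sensor_width_mm * squeeze) / sensor_height_mm

# A 4-perf Super 35 gate (roughly 24.9 x 18.7 mm) with a 2x anamorphic
# yields approximately the classic 2.66:1 frame, cropped in practice
# to the 2.39:1 widescreen standard.
print(round(desqueezed_aspect(24.9, 18.7), 2))  # → 2.66
```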

 

The Cinematic look

“Doctor Who: The Woman Who Fell to Earth”, DP: Denis Crossan

The venerable Doctor Who, which started off shooting on 405-line black-and-white videotape more than half a century ago, has employed Arri Alexas and Cooke Anamorphic/i glass since the introduction of Jodie Whittaker’s 13th Doctor. “[Director Jamie Childs] suggested we shoot on anamorphic lenses to give it a more filmic look,” says DP Denis Crossan. “You get really nice background falloff and out of focus ellipses on light sources.”

While most viewers will not be able to identify these visual characteristics specifically, they will certainly be aware of a more cinematic feel to the show overall. This is because we associate anamorphic images – even if we do not consciously know them as such – with the biggest of Hollywood blockbusters, everything from Die Hard to Star Trek Beyond.

It’s not just the BBC who are embracing anamorphic. DP Ollie Downey contrasted spherical glass with vintage anamorphics to deliberate effect in “The Commuter”, an episode of the Channel 4/Amazon sci-fi anthology series Electric Dreams.

The story revolves around Ed (Timothy Spall) whose mundane but difficult life turns upside down when he discovers Macon Heights, a town that seems to exist in an alternate reality. “Tim Spall’s character is torn between his real life and the fantastical world of Macon Heights,” Downey explains on his Instagram feed. “We shot Crystal Express Anamorphics for his regular life, and Zeiss Super Speed Mk IIs for Macon Heights.”

The anamorphic process was invented as a way to get a bigger image from the same area of 35mm negative, but in today’s world of ultra-high-resolution digital sensors there is no technical need for anamorphics, only an aesthetic one. In fact, they can actually complicate the process, as Downey notes: “We had to shoot 8K on the Red to be able to punch in to our Crystal Express to extract 16:9 and still deliver 4K to Amazon.”
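A rough sketch of the arithmetic behind Downey’s point, assuming (my assumption, not stated in the quote) a 2x squeeze and an 8K sensor 4320 pixels tall:

```python
# Why 8K capture is needed for a true 4K delivery when extracting 16:9
# from a 2x anamorphic frame: only part of the sensor width survives
# the extraction.

def sensor_pixels_used(delivery_w: int, delivery_h: int,
                       sensor_h: int, squeeze: float = 2.0) -> float:
    """Native horizontal sensor pixels that land inside the 16:9 extraction."""
    # Width of the desqueezed image at full sensor height, cropped to
    # the delivery aspect ratio...
    desqueezed_w = sensor_h * delivery_w / delivery_h
    # ...came from this many real pixels before the horizontal stretch.
    return desqueezed_w / squeeze

# For a UHD (3840 x 2160) delivery from a sensor 4320 px tall, the
# extraction uses exactly 3840 native horizontal pixels - just enough
# for a genuine 4K-wide master, with nothing to spare.
print(int(sensor_pixels_used(3840, 2160, 4320)))  # → 3840
```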

“Electric Dreams: The Commuter”, DP: Ollie Downey

 

Evoking a period

Back at the BBC, last year’s John le Carré adaptation The Little Drummer Girl uses anamorphic imaging to cement its late 1970s setting. The mini-series revolves around Charmian, an actress who is recruited by Israeli intelligence via the mysterious agent Becker. The truth is distorted throughout, just as the wide anamorphic lenses distort every straight line into a curve.

Reviewing the show for The Independent, Ed Cumming notes that director Park Chan-wook “does not aim to be invisible but to remind you constantly that what you are seeing is a creation. Take the scene at a beachside taverna in Greece, where Charmian and Becker start talking properly to each other. The camera stays still, the focus snaps between him and her.” Such focus pulls are more noticeable in anamorphic because the subject stretches vertically as it defocuses.


The Little Drummer Girl is slavish in its recreation of the period, in camera style as well as production design. Zooms are used frequently, their two-dimensional motion intricately choreographed with the actors who step in and out of multiple planes in the image. Such shots were common in the 70s, but have since fallen very much out of fashion. Where once they would have passed unnoticed, a standard part of film grammar, they now draw attention.

“The Little Drummer Girl”, DP: Woo-Hyung Kim

 

Separating worlds

Chilling Adventures of Sabrina, a Netflix Original, also draws attention with its optics. Charting the trials and tribulations of a teenaged witch, the show uses different makes of lenses to differentiate two worlds, just like “The Commuter”.

According to DP David Lazenberg’s website, he mixed modern Panavision G series anamorphics with “Ultragolds”. Information on the latter is hard to find, but they may be related to the Isco Ultra Star adapters which some micro-budget filmmakers have adopted as a cheap way of shooting anamorphic.

The clean, sharp G series glass is used to portray Sabrina’s ordinary life as a small-town teenager, while the Ultragolds appear to be used for any scenes involving witchcraft and magic. Such scenes display extreme blur and distortion at the edges of the frame, making characters squeeze and stretch as the camera pans over them.

“Chilling Adventures of Sabrina: Chapter Ten: The Witching Hour”, DP: Stephen Maier

Unlike the anamorphic characteristics of Doctor Who or “The Commuter”, which are subtle, adding to the stories on a subconscious level, the distortion in Sabrina is extreme enough to be widely noticed by its audience. “Numerous posts on Reddit speak highly of Chilling Adventures of Sabrina’s content and cinematography,” reports Andy Walker, editor of memeburn.com, “but a majority have a collective disdain for the unfocused effect.”

“I hate that blurry s*** on the side of the screen in Sabrina,” is the more blunt appraisal of Twitter user @titanstowerr. Personally I find the effect daring and beautiful, but it certainly distracted me just as it has distracted others, which forces me to wonder if it takes away more from the story than it adds.

And that’s what it all comes down to in the end: are the technical characteristics of the lens facilitating or enhancing the storytelling? DPs today, in both cinema and long-form series, have tremendous freedom to use glass to enhance the viewers’ experience. Yes, that freedom will sometimes result in experiments that alienate some viewers, but overall it can only be a good thing for the expressiveness of the art form.

For more on this topic, see my video test and analysis of some anamorphic lenses.


5 Steps to Lighting a Forest at Night

EXT. FOREST - NIGHT

A simple enough slug line, and fairly common, but amongst the most challenging for a cinematographer. In this article I’ll break down into five manageable steps my process of lighting woodlands at night.

 

1. Set up the moon.

Forests typically have no artificial illumination, except perhaps practical torches carried by the cast. This means that the DP will primarily be simulating moonlight.

Your “moon” should usually be the largest HMI that your production can afford, as high up and far away as you can get it. (If your production can’t afford an HMI, I would advise against attempting night exteriors in a forest.) Ideally this would be a 12K or 18K on a cherry-picker, but in low-budget land you’re more likely to be dealing with a 2.5K on a triple wind-up stand.

Why is height important? Firstly, it’s more realistic. Real moonlight rarely comes from 15ft off the ground! Secondly, it’s hard to keep the lamp out of shot when you’re shooting towards it. A stand might seem quite tall when you’re right next to it, but as soon as you put it far away, it comes into shot quite easily. If you can use the terrain to give your HMI extra height, or acquire scaffolding or some other means of safely raising your light up, you’ll save yourself a lot of headaches.
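A quick bit of trigonometry shows why a distant stand sneaks into frame so easily. The heights and distance here are hypothetical figures of my own, purely for illustration:

```python
import math

def elevation_deg(lamp_height_ft: float, camera_height_ft: float,
                  distance_ft: float) -> float:
    """Angle of the lamp head above the camera's horizon, in degrees."""
    return math.degrees(math.atan2(lamp_height_ft - camera_height_ft,
                                   distance_ft))

# A lamp head at 15 ft with the camera at 5 ft, 150 ft away, sits under
# 4 degrees above the horizon - comfortably inside the vertical field of
# view of most lenses, so it will appear in shot unless flagged or raised.
print(round(elevation_deg(15, 5, 150), 1))  # → 3.8
```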

In this shot from “The Little Mermaid” (dir. Blake Harris), a 12K HMI on a cherry-picker creates the shafts of moonlight, while another HMI through diffusion provides the frontlight. (This frontlight was orange to represent sunrise, but the scene was altered in the grade to be pure night.)

The size of the HMI is of course going to determine how large an area you can light to a sufficient exposure to record a noise-free image. Using a good low-light camera is going to help you out here. I shot a couple of recent forest night scenes on a Blackmagic Pocket Cinema Camera, which has dual native ISOs, the higher being 3200. Combined with a Speedbooster, this camera required only 1 or 2 foot-candles of illuminance, meaning that our 2.5K HMI could be a good 150 feet away from the action. (See also: “How Big a Light do I Need?”)
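The trade-off between lamp size, distance and camera sensitivity comes down to the inverse square law. A minimal sketch, using a made-up (but plausible) intensity figure for a 2.5K HMI rather than anything measured on set:

```python
# Illuminance from a roughly point-like source falls off with the square
# of distance: foot-candles = candela / (distance in feet)^2.

def illuminance_fc(intensity_cd: float, distance_ft: float) -> float:
    """Illuminance in foot-candles at a given distance from the source."""
    return intensity_cd / distance_ft ** 2

# Assuming a 2.5K HMI focused to about 45,000 candela (a hypothetical
# spot figure), it delivers roughly 2 foot-candles at 150 ft - workable
# on a camera rated at ISO 3200, hopeless on a less sensitive one.
print(round(illuminance_fc(45_000, 150), 1))  # → 2.0
```

Halving the distance quadruples the exposure, which is why a smaller unit brought closer can sometimes substitute for a bigger one, at the cost of a harder fall-off across the scene.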

 

2. Plan for the reverse.

A fake moon looks great as a backlight, but what happens when it comes time to shoot the reverse? Often the schedule is too tight to move the HMI all the way around to the other side, particularly if it’s rigged up high, so you may need to embrace it as frontlight.

Frontlight is generally flat and undesirable, but it can be interesting when it’s broken up with shadows, and that’s exactly what the trees of a forest will do. Sometimes the pattern of light and dark is so strong and camouflaging that it can be hard to pick out your subject until they move. One day I intend to try this effect in a horror film as a way of concealing a monster.

One thing to look out for with frontlight is unwanted shadows, i.e. those of the camera and boom. Again, the higher up your HMI is, the less of an issue this will be.

If you can afford it, a second HMI set up in the opposite direction is an ideal way to maintain backlight; just pan one off and strike up the other. I’ve known directors to complain that this breaks continuity, but arguably it does the opposite. Frontlight and backlight look very different, especially when smoke is involved (and I’ll come to that in a minute). Isn’t it smoother to intercut two backlit shots than a backlit one and a frontlit one? Ultimately it’s a matter of opinion.

An example of cheated moonlight directions in “His Dark Materials” – DP: David Luther

 

3. Consider ground lights.

One thing I’ve been experimenting with lately is ground lights. For this you need a forest that has at least a little undulation in its terrain. You set up lights directly on the ground, pointed towards camera but hidden from it behind mounds or ridges in the deep background.

Detail from one of my 35mm stills: pedestrians backlit by car headlights in mist. Shot on Ilford Delta 3200

I once tried this with an HMI and it just looked weird, like there was a rave going on in the next field, but with soft lights it is much more effective. Try fluorescent tubes, long LED panels or even rows of festoon lights. When smoke catches them they create a beautiful glow in the background. Use a warm colour to suggest urban lighting in the distance, or leave it cold and it will pass unquestioned as ambience.

Put your cast in front of this ground glow and you will get some lovely silhouettes. Very effective silhouettes can also be captured in front of smoky shafts of hard light from your “moon”.

 

4. Fill in the faces.

All of the above looks great, but sooner or later the director is going to want to see the actors’ faces. Such is the cross a DP must bear.

On one recent project I relied on practical torches – sometimes bounced back to the cast with silver reflectors – or a soft LED ball on a boom pole, following the cast around.

Big-budget movies often rig some kind of soft toplight over the entire area they’re shooting in, but this requires a lot of prep time and money, and I expect it’s quite vulnerable to wind.

A recipe that I use a lot for all kinds of night exteriors is a hard backlight and a soft sidelight, both from the same side of camera. You don’t question where the sidelight is coming from when it’s from the same general direction as the “moon” backlight. In a forest you just have to be careful not to end up with very hot, bright trees near the sidelight, so have flags and nets at the ready.

This shot (from a film not yet released, hence the blurring) is backlit by a 2.5K HMI and side-lit by a 1×1 Aladdin LED with a softbox, both from camera right.

 

5. Don’t forget the smoke.

Finally, as I’ve already hinted, smoke is very important for a cinematic forest scene. The best options are a gas-powered smoke gun called an Artem or a “Tube of Death”. This latter is a plastic tube connected to a fan and an electric smoke machine. The fan forces smoke into the tube and out of little holes along its length, creating an even spread of smoke.

A Tube of Death in action on the set of “The Little Mermaid”

All smoke is highly susceptible to changes in the wind. An Artem is easier to pick up and move around when the wind changes, and it doesn’t require a power supply, but you will lose time waiting for it to heat up and for the smoke and gas canisters to be changed. Whichever one you pick though, the smoke will add a tremendous amount of depth and texture to the image.

Overall, nighttime forest scenes may be challenging, but they offer some of the greatest opportunities for moody and creative lighting. Just don’t forget your thermals and your waterproofs!


Kickstarter Now Live for Ren: The Girl with the Mark

Ren: The Girl with the Mark, the fantasy web series with over 8 million views and 14 international awards on which I’m the DP, is crowdfunding new episodes right now. Of all the projects I’ve ever worked on, there are few that I’m as proud of as Ren.

Please head on over to the Kickstarter page and contribute if you can. You can back for as little as £1 (or the equivalent in your local currency – Kickstarter automatically converts it). If you’re not able to back it, please tell others about it, share the social media posts, like and comment. You can find the Ren Facebook page here, the Twitter feed here and the Instagram feed here.

There are many awesome rewards on offer, including DVDs, downloads, collectables and unique on-set experiences.

One of the rewards is getting access to a new Cinematic Lighting course which I’ve made. Across four one-hour modules, I set up, light and shoot little scenes inspired by Ren on real locations with real actors, along the way explaining every decision I make.

Another reward is the Cinematographer Experience: spend a day on set with me, picking my brains and learning from everyone in the camera and lighting departments, get access to the lighting plans and shot lists, and take part in a group chat with me after we’ve wrapped.

Other filmmaking-related rewards are the Director Experience and the Downloads & Vlogs package which includes access to exclusive behind-the-scenes video blogs from every day of the shoot.

So check out the Kickstarter now at: https://www.kickstarter.com/projects/mythica/ren2
