Cinematic Lighting: An Online Course Available Now

My online course, Cinematic Lighting, is available now on Udemy. It’s an advanced and in-depth guide to arguably the most important part of a director of photography’s job: designing the illumination.

The course is aimed at cinematography students, camera operators looking to move up to DP, corporate/industrial filmmakers looking to move into drama, and indie filmmakers looking to increase their production values.

Rather than demonstrating techniques in isolation in a studio, the course takes place entirely on location. The intent is to show the realities of creating beautiful lighting while dealing with the usual challenges of real independent film production, like time, weather and equipment, as well as meeting the requirements of the script.

Cinematic Lighting consists of four hour-long modules: Day Exterior, Day Interior, Night Interior and Night Exterior. Each module follows the blocking, lighting and shooting of a short scripted scene (inspired by the fantasy web series Ren: The Girl with the Mark) with two actors in full costume. Watch me and my team set up all the fixtures, control the light with flags and rags, and make adjustments when the camera moves around for the coverage. Every step of the way, I explain what I’m doing and why, as well as the alternatives you could consider for your own films. Each module concludes with the final edited scene so that you can see the end result.

Students should already have a grasp of basic cinematography concepts like white balance and depth of field. A familiarity with the principle of three-point lighting will be useful, but not essential.

You will learn:

  • how to create depth and contrast in your shots;
  • how to light for both the master shot and the coverage;
  • how and when to use HMI, fluorescent, LED and traditional tungsten lighting;
  • how to use natural light to your advantage, and how to mould it;
  • how to use a light meter and false colours to correctly expose your image;
  • how to use smoke or haze to create atmosphere; and
  • how to simulate sunlight, moonlight and firelight.

Get lifetime access to Cinematic Lighting now.

Below is a full breakdown of the course content.


MODULE 1: DAY EXTERIOR

Learn how to block your scene to make the most of the natural light, and how to modify that light with flags, bounce and diffusion, as well as how to expose your image correctly.

1.1 Principles & Prep

  • What to look for on a recce/scout
  • How to predict the sun path using apps or a compass
  • How to block action relative to the sun
  • Three-point lighting
  • The importance of depth in cinematography
  • When to shoot in cross-light vs. backlight
  • How to get rippling reflections off water

1.2 Blocking for Success

  • Observing a rehearsal with actors Kate and Ivan
  • What to look for in the blocking
  • How to get reflections off a blade
  • When to shoot the master shot
  • How to choose what order to shoot your coverage in
  • Using a white poly/bead-board as bounce

1.3 Exposure

  • Why light meters are still important
  • Dynamic range and log recording
  • How to use an incident meter and a spot reflectance meter
  • The f-stop series
  • How to use false colours
  • How to arrive at the right exposure from all this information
  • How to select the appropriate ND (neutral density) filter
  • Shooting the wide shot

1.4 Shaping the Singles

  • Short- and broad-key lighting
  • Types of reflector
  • Positioning a reflector
  • Paying attention to eye reflections
  • Negative fill
  • How to use 4×4 floppy flags
  • Shooting Ivan’s close-up
  • Using a trace frame
  • “Health bounce”
  • Shooting Kate’s close-up
  • Summary
  • The final edited scene

MODULE 2: DAY INTERIOR

This module introduces some common lighting instruments, demonstrates how to imitate natural light entering a room, and how to create depth and contrast with black-out and smoke.

2.1 Scouting & Equipment

  • Identifying light sources in the room
  • Using apps or a compass to predict how sun will enter through the windows
  • The principle of dark-to-light depth
  • Using curtains to modify interior light
  • Introduction to some common lighting instruments: Dedolights, Kino Flos, an HMI and a Rayzr MC LED panel

2.2 Lighting through a Window

  • Observing the blocking with actors Kate and Ivan
  • Direct lighting using an HMI
  • Controlling contrast with black-out
  • Diffusing the light with a trace frame
  • Bouncing the light off poly/bead-board
  • Bouncing the light off parts of the set
  • Use of a light meter and false colours to set the correct exposure

2.3 Atmosphere

  • Use of smoke or haze to add atmosphere to the scene
  • Reasons to add atmosphere
  • The concept of aerial perspective
  • Shooting the master shot
  • Comparison of the final shot to the other versions demonstrated in 2.2 and 2.3

2.4 Lighting the Reverse

  • Use of viewfinder apps to find a frame and select a lens
  • Challenges of front-light
  • Adjusting the window light to highlight certain areas
  • Demonstrating a “window wrap” using a Kino Flo
  • Using light readings and ND filters to arrive at the correct exposure
  • Shooting the reverse
  • Summary
  • The final edited scene

MODULE 3: NIGHT INTERIOR

Create a moody night-time look indoors using practical sources, toplight, and simulated moonlight and firelight.

3.1 Internal vs. External Light

  • Observing the blocking with actors Kate and Ivan
  • Approaches to lighting a night interior scene
  • Lighting from outside with an HMI “moon”
  • Working with bounced “moonlight” inside the room
  • Choosing an overhead source as the key light

3.2 Working with Toplight

  • Time and safety considerations of working with toplight
  • Rigging a toplight safely
  • Controlling toplight spill on the set walls
  • Using unbleached muslin to soften and warm up the light
  • The inverse square law

3.3 Firelight & Moonlight

  • Working with practical candles
  • Reinforcing candles with a hidden LED fixture
  • Simulating an off-camera fireplace
  • Lighting the view outside the window
  • Bringing moonlight into the room to add colour contrast and depth
  • Shooting the master shot

3.4 Tweaking for the Coverage

  • Checking the blocking for the first single
  • Filling in shadows using additional unbleached muslin
  • Flagging the top-light to control the background
  • Adjusting the external light to maintain colour contrast
  • Shooting Kate’s single
  • Adjusting the fireplace effect to work for a close-up
  • Shooting Ivan’s single
  • Summary
  • The final edited scene

MODULE 4: NIGHT EXTERIOR

Paint with light on the blank canvas of night; set up an artificial moon; create depth, contrast and colour contrast; and use shadows to your advantage.

4.1 Setting the Moon

  • Observing the blocking with actors Kate and Ivan
  • Principles of night exterior lighting
  • Creating believable moonlight
  • Features of HMI lighting
  • Choosing a position and height for the HMI “moon”

4.2 Finessing the Master

  • Use of a practical fire source
  • Reinforcing a practical fire with an LED fixture
  • Colour contrast
  • Using a Kino Flo as an additional soft source
  • Tackling a difficult shadow
  • Reading and adjusting lighting ratios using an incident meter
  • Working with smoke/atmos outdoors
  • Shooting the master shot

4.3 Shooting the Singles

  • Adjusting the existing sources to work for a close-up
  • Shooting Ivan’s single
  • Diffusing the HMI
  • Monitoring exposure using false colours
  • Shooting Kate’s single

4.4 Lighting the Reverse

  • The pros and cons of flipping the backlight
  • Example of cheating the moonlight around
  • Using established sources to your advantage
  • Adding diffusion vs. a gobo to the HMI
  • Creating a “branch-a-loris”
  • Shooting the reverse
  • Summary
  • The final edited scene

APPENDICES

Useful links, a full kit list and a deleted scene

Get lifetime access to Cinematic Lighting now.


Where to Place Your Horizon

I once had an argument with a director about the composition of a wide shot. I wanted to put the horizon nearer the top of the frame than the bottom, and he felt that this was the wrong way around. In reality there is no right and wrong in composition, only a myriad of possibilities that are all valid and can all make your viewers feel different ways. In this article I will take a metaphorical ramble through these possibilities, and ponder their effects.

You would think that the most natural position for the horizon would be in the vertical centre of the frame. After all, in our day-to-day life, when we look straight ahead, this is where it appears to be.

In practice, a central horizon is not a popular choice. This article by Art Wolfe, for example, argues that it robs the image of dynamism, sending the eye straight to the horizon rather than letting it wander around the frame. The technique is also at odds with the Rule of Thirds, though as I’ve written before, that’s not a rule I place much stock in.

The talented photographer and vlogger Arian Vila, however, describes the merits of a central horizon when composing for a square aspect ratio. And this is an excellent reminder that the horizon does not exist in a vacuum; like anything else, it must be judged in the context of the aspect ratio and the other compositional elements of the frame.

For Leon Chambers’ Above the Clouds (out now on Amazon Prime and other platforms!), I placed the horizon centrally several times:

This was a deliberate echo of the painting “Above the Clouds” which appears in the film and provides the thematic backbone.

A year or two after shooting Clouds, I came across Photograms of the Year: 1949 in a charity shop. Amongst its pages I found another diptych, one created by the book designer rather than the two entirely separate photographers:

These two images, perfectly paired, demonstrate contrasting horizon placement. At Grey Dawn emphasises the sky by placing the horizon low in the frame, creating a sense of space. Meanwhile, Homeward Bound positions its horizon somewhere beyond the trees near the top of frame, drawing attention to the sand and the wheel ruts and indeed to the figures of the caravan itself, rather than to the destination or surroundings.

Horizon placement is closely tied to two other creative choices: headroom, which I dedicated a whole article to, and lens height, i.e. is this a low or a high angle? Even a GCSE media student can tell you that a low angle imbues power while a high angle implies vulnerability, but these are terms most applicable to closer shots. When we think of horizon placement we are probably concerned with big wides, where creating a mood for the scene or setting is more important than visualising a character’s power or lack thereof.

Breaking Bad is an example of a series that predominantly chooses low horizons to show off the big skies of its New Mexico locations. “[Showrunner] Vince [Gilligan] is a student of cinema and knows movies like the back of his hand,” says DP Michael Slovis, ASC. “It was always in his mind that this was a Western in the style of Sergio Leone and the Italian Neo-realists.”

Incidentally, there’s an amazing desert scene in the episode “Crawl Space” where a sunlit close-up cuts to a big wide. The wide holds as clouds roll over the sun. The action continues and the shot still holds, the line between sunlight and shade visible as it rolls away across the desert, until finally a new line slides under the camera and the sun breaks over the actors once more. Only then are we permitted to see the scene up close again.

This creative choice, to set the character’s small concerns against the vast immutability of nature, comes from the same place as the choice to put the horizon low in frame.

Returning to Photograms of the Year: 1949, my eyes light upon another pair of contrasting images:

Despite its title, Towards the Destination shows us little of where the sailors are heading, by placing the horizon high in the frame and focusing on the water and the reflections therein. Rendezvous at Chincoteague, by placing its horizon low in the frame, radiates a feeling of isolation that is in contrast to the meeting of the title.

As we consider the figures in these photographs I am forced to concede that the argument I alluded to in the introduction may have been less about the position of the horizon and more about the position of the actor. I think the director felt that it was unnatural for a person to appear in the top half of the frame rather than the bottom half.

I can see his point. The vision of our naked eyes is definitely framed along the bottom by the ground, while the top remains open and unlimited – outdoors, at least. So if a person is standing on the ground, we naturally expect them to appear low down in an image.

But this – like nose room, the Rule of Thirds, the 180° Rule, short-key lighting, so many things in cinematography – is merely a guideline. There are times when it just isn’t helpful, when it can lead to wasted opportunities.

Here is a shot of mine from The Gong Fu Connection (dir. Ted Duran) where the horizon and the cast are placed in the upper half of frame:

Would it really have been better to frame them lower, losing out on the reflections and the foreground rushes, and gaining just empty sky? I think not. This composition was especially important to me, because the film’s titular connection is all about man and the natural world. By showing the water and greenery, we root the characters in it. A composition with more sky might have made them seem dwarfed by nature, lost in it.

This article has been something of a stream of consciousness, but the point I’m trying to make is this: always consider the content and meaning of your shot; reflecting those in your composition is infinitely more important than adhering to any guidelines.


How Analogue Photography Can Make You a Better Cinematographer

With many of us looking for new hobbies to see us through the zombie apocalypse Covid-19 lockdown, analogue photography may be the perfect one for an out-of-work DP. While few of us may get to experience the magic and discipline of shooting motion picture film, stills film is accessible to all. With a range of stocks on the market, bargain second-hand cameras on eBay, seemingly no end of vintage glass, and even home starter kits for processing your own images, there’s nothing to stop you giving it a go.

Since taking them up again in 2018, I’ve found that 35mm and 120 photography have had a positive impact on my digital cinematography. Here are five ways in which I think celluloid photography can help you sharpen your filmmaking skills.


1. Thinking before you click

When you only have 36 shots on your roll and that roll cost you money, you suddenly have a different attitude to clicking the shutter. Is this image worthy of a place amongst those 36? If you’re shooting medium or large-format then the effect is multiplied. In fact, given that we all carry phone cameras with us everywhere we go, there has to be a pretty compelling reason to lug an SLR or view camera around. That’s bound to raise your game, making you think longer and harder about composition and content, to make every frame of celluloid a minor work of art.


2. Judging exposure

I know a gaffer who can step outside and tell you what f-stop the light is, using only his naked eye. This is largely because he is a keen analogue photographer. You can expose film by relying on your camera’s built-in TTL (through the lens) meter, but since you can’t see the results until the film is processed, analogue photographers tend to use other methods as well, or instead, to ensure a well-exposed negative. Rules like “Sunny Sixteen” (on a sunny day, set the aperture to f/16 and the shutter speed reciprocal to match the ISO, e.g. 1/200th of a second at ISO 200) and the use of handheld incident meters make you more aware of the light levels around you. A DP with this experience can get their lighting right more quickly.
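The Sunny Sixteen arithmetic is simple enough to sketch in code. This is only an illustration of the rule as described above; the table of apertures for duller conditions is a common hand-rule extension I've added, not something from the article:

```python
# A minimal sketch of the "Sunny Sixteen" rule: on a sunny day, set the
# aperture to f/16 and the shutter speed to the reciprocal of the ISO.
# The non-sunny apertures are a widely quoted rule-of-thumb extension
# (an assumption here): open up as the light gets duller.
APERTURES = {
    "sunny": 16,
    "slightly overcast": 11,
    "overcast": 8,
    "heavily overcast": 5.6,
    "open shade": 4,
}

def sunny_sixteen(iso, conditions="sunny"):
    """Return (f-number, shutter-speed denominator) for film of the given ISO."""
    return APERTURES[conditions], iso  # shutter speed is 1/ISO seconds

f_number, shutter = sunny_sixteen(200)
print(f"f/{f_number} at 1/{shutter}s")  # → f/16 at 1/200s
```

So ISO 200 stock on a sunny day lands at f/16 and 1/200th of a second, exactly as described above.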


3. Pre-visualising results

We digital DPs can fall into the habit of not looking at things with our eyes, always going straight to the viewfinder or the monitor to judge how things look. Since the optical viewfinder of an analogue camera tells you little more than the framing, you tend to spend less time looking through the camera and more using your eye and your mind to visualise how the image will look. This is especially true when it comes to white balance, exposure and the distribution of tones across a finished print, none of which are revealed by an analogue viewfinder. Exercising your mind like this gives you better intuition and increases your ability to plan a shoot, through storyboarding, for example.


4. Grading

If you take your analogue ethic through to post production by processing and printing your own photographs, there is even more to learn. Although detailed manipulation of motion pictures in post is relatively new, people have been doctoring still photos pretty much since the birth of the medium in the mid-19th century. Discovering the low-tech origins of Photoshop’s dodge and burn tools to adjust highlights and shadows is a pure joy, like waving a magic wand over your prints. More importantly, although the printing process is quick, it’s not instantaneous like Resolve or Baselight, so you do need to look carefully at your print, visualise the changes you’d like to make, and then execute them. As a DP, this makes you more critical of your own work and as a colourist, it enables you to work more efficiently by quickly identifying how a shot can be improved.


5. Understanding

Finally, working with the medium which digital was designed to imitate gives you a better understanding of that imitation. It was only when I learnt about push- and pull-processing – varying the development time of a film to alter the brightness of the final image – that my understanding of digital ISO really clicked. Indeed, some argue that electronic cameras don’t really have ISO, that it’s just a simulation to help users from an analogue background to understand what’s going on. If all you’ve ever used is the simulation (digital), then you’re unlikely to grasp the concepts in the same way that you would if you’ve tried the original (analogue).


How to Make a Zoetrope for 35mm Contact Prints

Are you an analogue photographer looking for a different way to present your images? Have you ever thought about shooting a sequence of stills and reanimating them in a zoetrope, an optical device from the Victorian era that pre-figured cinema? That is exactly what I decided to do as a project to occupy myself during the zombie apocalypse Covid-19 lockdown. Contact prints are aesthetically pleasing in themselves, and I wanted to tap into the history of the zoetrope by creating a movie-like continuous filmstrip of sequential images and bringing them to life.

In the first part of my blog about this project, I covered the background and setting up a time-lapse of my cherry tree as content for the device. This weekend I shot the final image of the time-lapse, the last of the blossom having dropped. No-one stole my camera while it sat in my front garden for three weeks, and I was blessed with consistently sunny weather until the very last few days, when I was forced to adjust the exposure time to give me one or two extra stops. I’ll be interested to see how the images have come out, once I can get into the darkroom.

Meanwhile, I’ve been constructing the zoetrope itself, following this excellent article on Reframing Photography. Based on this, I’ve put together my own instructions specifically for making a device that holds 18 frames of contact-printed 35mm film. I chose a frame count of 18 for a few reasons:

  1. The resultant diameter, 220mm, seemed like a comfortable size, similar to a table lamp.
  2. Two image series of 18 frames fit neatly onto a 36-exposure film.
  3. Negatives are commonly cut into strips of six frames for storage and contact-printing, so a number divisible by six makes constructing the image loop a little more convenient.
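Since the drum diameter scales directly with the frame count, you can check how big another configuration would come out before committing. Here's a quick sketch, taking the 38mm width of a bordered 35mm contact print as a given:

```python
import math

FRAME_WIDTH_MM = 38  # width of one 35mm contact print including one border

def loop_dimensions(n_frames):
    """Circumference and approximate diameter (mm) of the image loop."""
    circumference = FRAME_WIDTH_MM * n_frames
    return circumference, round(circumference / math.pi)

circumference, diameter = loop_dimensions(18)
print(circumference, diameter)  # → 684 218
```

Note that this is the size of the image loop itself; the drum is built slightly larger so that the loop fits inside it.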


You Will Need

  • Contact sheet containing 18 sequential 35mm images across three rows
  • A1 sheet of 300gsm card, ideally black
  • PVA glue
  • Ruler (the longer the better)
  • Set square
  • Compass
  • Pencil & eraser
  • Scissors
  • Craft knife or Stanley knife
  • Paper clips or clothes pegs for clamping while glue dries
  • Rotating stand like a lazy susan or record player


Making the image loop

First, cut out the three rows of contact prints, leaving a bit of blank paper at one end of each row for overlap. Now glue them together into one long strip of 18 sequential images. The strip should measure 684mm plus overlap, because a 35mm negative or contact print measures 38mm in width including the border on one side: 38×18=684.

Glue the strip together into a loop with the images on the inside. This loop should have a diameter of 218mm. Note that we must make our zoetrope’s drum to a slightly bigger diameter, or the image loop won’t fit inside it. We’ll use our image loop to check the size of the drum; that’s why we’ve made it first. (If you don’t have your images ready yet, use an old contact sheet – as I did – or any strip of paper or light card of the correct size, 35mmx684mm.)


Making the side wall

Cut a strip of the black card measuring 723x90mm. This will be the side wall of your drum. Wrap this strip around your image loop, as tightly as you can without distorting the circular shape of the image loop. Mark where the card strip overlaps itself to find the circumference of the drum, which will be slightly bigger than the 684mm circumference of the image loop. In my case the drum circumference was 688mm – as illustrated in the diagram above. (You can click on it to enlarge it.)

Now we can measure and cut out the slots. We need one slot per image, and Reframing Photography recommends a 1/8″ width, which we’ll round to 3mm. As with making a pinhole, a smaller slot means a sharper but darker image, while a bigger slot means a brighter but blurrier one.

So our slots will be 3x35mm (the same height as the images), but how far apart should they be? They need to be evenly spaced around the circumference, so in my case 688÷18=38.2mm, i.e. a gap of 35.2mm between each slot and then 3mm for the slot itself. If your drum circumference is different to mine, you’ll have to do your own maths to work out the spacing.

(It was impossible to measure 38.2mm accurately, but I made a spreadsheet to give me values for the cumulative slot positions to the nearest millimetre: 38, 76, 115, 153, 191, 229, 268, 306, 344, 382, 420, 459, 497, 535, 573, 612, 650 and 688.)
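That spreadsheet is a one-liner in code, if you'd rather do it that way. This sketch reproduces the cumulative positions above for my 688mm circumference:

```python
drum_circumference_mm = 688  # measured by wrapping the card around the image loop
n_slots = 18

spacing = drum_circumference_mm / n_slots  # 38.2mm per slot-plus-gap
# Cumulative position of each slot, rounded to the nearest millimetre
positions = [round(spacing * i) for i in range(1, n_slots + 1)]
print(positions)
# → [38, 76, 115, 153, 191, 229, 268, 306, 344, 382, 420, 459,
#    497, 535, 573, 612, 650, 688]
```

Change `drum_circumference_mm` to whatever you measure on your own drum and the list adjusts itself.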

Mark out your 18 slots, positioning them 15mm from the top of the side wall and 40mm from the bottom, then cut them out carefully using a knife and a ruler.

Now you can glue your side wall into a loop, using paper clips or clothes pegs to hold it while the glue dries. I recommend double-checking that your image loop fits inside beforehand. (Do not glue your image loop into the drum; this way you can swap it out for another image series whenever you like.)


Making the connector

The connector, as the name suggests, will connect the side wall to the base of the drum. (When I made a prototype, I tried skipping this stage, simply building the connecting teeth into the side wall, but this made it much harder to keep the drum a neat circle.)

Go back to your black card and cut another strip measuring 725x60mm. Score it all the way along the middle (i.e. 30mm from the edge) so that it can be folded in two, long-ways. Now cut triangular teeth into one half of the strip. Each triangle should have a 30mm base along the scored line.

As with the side wall, you should check the circumference of the connector to ensure that it will fit around the side wall and image loop, and adjust it if necessary. My connector’s circumference, as shown on the diagram above, was 690mm.

Glue the strip into a loop, clamping it with clips or pegs while it dries. Again, it doesn’t hurt to double-check that it still fits around the side wall first.


Making the base

Use a compass to draw a circle of 220mm in diameter on your remaining card, and cut it out. (If your connector is significantly different in circumference to mine, divide that circumference by pi [3.14] to find the diameter that will work for you.)

Now you can glue the connector to the base. I suggest starting with a single tooth, putting a bottle of water or something heavy on it to keep it in place while it dries, then do the tooth directly opposite. Once that’s dry, do the ones at 90° and so on. This way you should prevent distortions creeping into the shape of the circle as you go around.

When that’s all dry, apply glue all around the inside of the upright section of the connector. Squish your side wall into a kidney bean shape to fit it inside the connector, then allow it to expand to its usual shape. If you have made it a tight enough fit, it will naturally press against the glue and the connector.


Making it Spin

The critical part of your zoetrope, the drum, is now complete. But to animate the images, you need to make it spin. There are a few ways you can do this:

  • Mount it on an old record player, making a hole in the centre of the base for the centre spindle.
  • Mount it on a rotating cake decoration stand or lazy susan.
  • Make your own custom stand.

I chose the last option, ordering some plywood discs cut to size, an unfinished candlestick and a lazy susan bearing, then assembling and varnishing them before gluing my drum to the top.


Secondary Grades are Nothing New

Last week I posted an article I wrote a while back (originally for RedShark News), entitled “Why You Can’t Relight Footage in Post”. You may detect that this article comes from a slightly anti-colourist place. I have been, for most of my career, afraid of grading – afraid of colourists ruining my images, indignant that my amazing material should even need grading. Arrogance? Ego? Delusion? Perhaps, but I suspect all DPs have felt this way from time to time.

I think I have finally started to let go of this fear and to understand the symbiotic relationship betwixt DP and colourist. As I mentioned a couple of weeks ago, one of the things I’ve been doing to keep myself occupied during the Covid-19 lockdown is learning to grade. This is so that I can grade the dramatic scenes in my upcoming lighting course, but also an attempt to understand a colourist’s job better. The course I’m taking is this one by Matthew Falconer on Udemy. At 31 hours, it takes some serious commitment to complete, commitment I fear I lack. But I’ve got through enough to have learnt the ins and outs of DaVinci Resolve, where to start when correcting an image, the techniques of primary and secondary grades, and how to use the scopes and waveforms. I would certainly recommend the course if you want to learn the craft.

As I worked my way through grading the supplied demo footage, I was struck by two similarities. Firstly, as I tracked an actor’s face and brightened it up, I felt like I was in the darkroom dodging a print. (Dodging involves blocking some of the light reaching a certain part of the image when making an enlargement from a film negative, resulting in a brighter patch.) Subtly lifting the brightness and contrast of your subject’s face can really help draw the viewer’s eye to the right part of the image, but digital colourists were hardly the first people to recognise this. Photographers have been dodging – and the opposite, burning – prints pretty much since the invention of the negative process almost 200 years ago.

The second similarity struck me when I was drawing a power curve around an actor’s shirt in order to adjust its colour separately from the rest of the image. I was reminded of this image from Painting with Light, John Alton’s seminal 1949 work on cinematography…


The chin scrim is a U-shaped scrim… used to cut the light off hot white collars worn with black dinner jackets.

It’s hard for a modern cinematographer to imagine blocking static enough for such a scrim to be useful, or indeed a schedule generous enough to permit the setting-up of such an esoteric tool. But this was how you did a power window in 1949: in camera.

Sometimes I’ve thought that modern grading, particularly secondaries (which target only specific areas of the image), is unnecessary; after all, we got through a century of cinema just fine without it. But in a world where DPs don’t have the time to set up chin scrims, and can’t possibly expect a spark to follow an actor around with one, adding one in post is a great solution. Our cameras might have more dynamic range than 1940s film stock, meaning that that white collar probably won’t blow out, but we certainly don’t want it distracting the eye in the final grade.

Like I said in my previous post, what digital grading does so well are adjustments of emphasis. This is not to belittle the process at all. Those adjustments of emphasis make a huge difference. And while the laws of physics mean that a scene can’t feasibly be relit in post, they also mean that a chin scrim can’t feasibly follow an actor around a set, and you can’t realistically brighten an actor’s face with a follow spot.

What I’m trying to say is, do what’s possible on set, and do what’s impossible in post. This is how lighting and grading work in harmony.


Why You Can’t Re-light Footage in Post

The concept of “re-lighting in post” is one that has enjoyed a certain popularity amongst some no-budget filmmakers, and which sometimes gets bandied around on much bigger sets as well. If there isn’t the time, the money or perhaps simply the will to light a scene well on the day, the flexibility of RAW recording and the power of modern grading software mean that the lighting can be completely changed in postproduction, or so the idea goes.

I can understand why it’s attractive. Lighting equipment can be expensive, and setting it up and finessing it is one of the biggest consumers of time on any set. The time of a single wizard colourist can seem appealingly cost-effective – especially on an unpaid, no-budget production! – compared with the money pit that is a crew, cast, location, catering, etc, etc. Delaying the pain until a little further down the line can seem like a no-brainer.

There’s just one problem: re-lighting footage is fundamentally impossible. To even talk about “re-lighting” footage demonstrates a complete misunderstanding of what photographing a film actually is.

This video, captured at a trillion frames per second, shows the transmission and reflection of light.

The word “photography” comes from Greek, meaning “drawing with light”. This is not just an excuse for pompous DPs to compare themselves with the great artists of the past as they “paint with light”; it is a concise explanation of what a camera does.

A camera can’t record a face. It can’t record a room, or a landscape, or an animal, or objects of any kind. The only thing a camera can record is light. All photographs and videos are patterns of light which the viewer’s brain reverse-engineers into a three-dimensional scene, just as our brains reverse-engineer the patterns of light on the retinae every moment of every day, to make sense of our surroundings.

The light from this object gets gradually brighter then gradually darker again – therefore it is a curved surface. There is light on the top of that nose but not on the underneath, so it must be sticking out. These oval surfaces are absorbing all the red and blue light and reflecting only green, so it must be plant life. Such are the deductions made continuously by the brain’s visual centre.

A compound lens for a prototype light-field camera by Adobe

To suggest that footage can be re-lit is to suggest that recorded light can somehow be separated from the underlying physical objects off which that light reflected. Now of course that is within the realms of today’s technology; you could analyse a filmed scene and build a virtual 3D model of it to match the footage. Then you could “re-light” this recreated scene, but it would be a hell of a lot of work and would, at best, occupy the Uncanny Valley.

Some day, perhaps some day quite soon, artificial intelligence will be clever enough to do this for us. Feed in a 2D video and the computer will analyse the parallax and light shading to build a moving 3D model to match it, allowing a complete change of lighting and indeed composition.

Volumetric capture is already a functioning technology, currently using a mix of infrared and visible-light cameras in an environment lit as flatly as possible for maximum information – like log footage pushed to its inevitable conclusion. By surrounding the subject with cameras, a moving 3D image results.

Sir David Attenborough getting his volume captured by Microsoft

Such rigs are a type of light-field imaging, a technology that reared its head a few years ago in the form of Lytro, with viral videos showing how depth of field and even camera angle (to a limited extent) could be altered with this seemingly magical system. But even Lytro was capturing light, albeit in a way that allowed for much more digital manipulation.

Perhaps movies will eventually be captured with some kind of Radar-type technology, bouncing electromagnetic waves outside the visible spectrum off the sets and actors to build a moving 3D model. At that point the need for light will have been completely eliminated from the production process, and the job of the director of photography will be purely a postproduction one.

While I suspect most DPs would prefer to be on a physical set than hunched over a computer, we would certainly make the transition if that was the only way to retain meaningful authorship of the image. After all, most of us are already keen to attend grading sessions to ensure our vision survives postproduction.

The Lytro Illum 2015 CP+ by Morio – own work, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=38422894

But for the moment at least, lighting must be done on set; re-lighting after the fact is just not possible in any practical way. This is not to take away from the amazing things that a skilled colourist can do, but the vignettes, the split-toning, the power windows, the masking and the tracking – these are adjustments of emphasis.

A soft shadow can be added, but without 3D modelling it can never fall and move as a real shadow would. A face can be brightened, but the quality of light falling on it can’t be changed from soft to hard. The angle of that light can’t be altered. Cinematographers refer to a key-light as the “modelling” light for a reason: because it defines the 3D model which your brain reverse-engineers when it sees the image.

So if you’re ever tempted to leave the job of lighting to postproduction, remember that your footage is literally made of light. If you don’t take the time to get your lighting right, you might as well not have any footage at all.

Why You Can’t Re-light Footage in Post

Shooting a Time-lapse for a Zoetrope

Two years ago I made Stasis, a series of photographs that explored the confluence of time, space and light. Ever since then I’ve been meaning to follow it up with another photography project along similar lines, but haven’t got around to it. Well, with Covid-19 there’s not much excuse for not getting around to things any more.

Example of a zoetrope

So I’ve decided to make a zoetrope – a Victorian optical device which produces animation inside a spinning drum. The user looks through slits in the side of the drum at a series of images arranged around the inside. When the drum is set spinning – usually by hand – the images appear to merge into a single moving picture. The slits passing rapidly through the user’s vision serve the same purpose as a shutter in a film projector, intermittently blanking out the image so that the persistence of vision effect kicks in.

Typically zoetropes contain drawn images, but they have been known to contain photographed images too. Eadweard Muybridge, the father of cinema, reanimated some of his groundbreaking image series using zoetropes (though he favoured his proprietary zoopraxiscope) in the late nineteenth century. The device is thus rich with history and a direct antecedent of all movie projectors and the myriad devices capable of displaying moving images today.

This history, its relevance to my profession, and the looping nature of the animation all struck a chord with me. Stasis was to some extent about history repeating, so a zoetrope project seemed like it would sit well alongside it. Here though, history would repeat on a very small scale. Such a time loop, in which nothing can ever progress, feels very relevant under Covid-19 lockdown!

With that in mind, I decided that the first sequence I would shoot for the zoetrope would be a time-lapse of the cherry tree outside my window.  I chose a camera position at the opposite end of the garden, looking back at my window and front door – my lockdown “prison” – through the branches of the tree. (The tree was just about to start blooming.)

The plan is to shoot one exposure every day for at least the next 18 days, maybe more if necessary to capture the full life of the blossom. Ideally I want to record the blossom falling so that my sequence will loop neatly, although the emergence of leaves may interfere with that.

To make the whole thing a little more fun and primitive, I decided to shoot using the pinhole I made a couple of years ago. Since I plan to mount contact prints inside the zoetrope rather than enlargements, that’ll mean I’ve created and exhibited a motion picture without ever once putting the image through a lens.

I’m shooting on Ilford HP5+, a black-and-white stock with a published ISO of 400. My girlfriend bought me five rolls for Christmas, which means I can potentially make ten 18-frame zoetrope inserts. I won’t be able to develop or print any of them until the lockdown ends, but that’s okay.

My first image was shot last Wednesday, a sunny day. The Sunny 16 rule tells me that at f/16 on a sunny day, my shutter time should be the reciprocal of my ISO, i.e. 1/400th of a second for ISO 400. My pinhole has an aperture of f/365, which I calculated when I made it, so it’s about nine stops slower than f/16. Therefore I need to multiply that 1/400th of a second exposure time by two to the power of nine (512), which gives 1.28 seconds – call it one second for simplicity. (I used my Sekonic incident/reflected-light meter to check the exposure, because it’s always wise to be sure when you haven’t got the fall-back of a digital monitor.)

One second is the longest exposure my Pentax P30t can shoot without switching to Bulb mode and timing it manually. It’s also about the longest exposure that HP5+ can do without the dreaded reciprocity failure kicking in. So all round, one second was a good exposure time to aim for.

The camera is facing roughly south, meaning that the tree is backlit and the wall of the house (which fills the background) is in shadow. This should make the tree stand out nicely. Every day may not be as sunny as today, so the light will inevitably change from frame to frame of the animation. I figured that maintaining a consistent exposure on the background wall would make the changes less jarring than trying to keep the tree’s exposure consistent.

I’ve been taking spot readings every day, and keeping the wall three-and-a-half stops under key, while the blossoms are about one stop over. I may well push the film – i.e. give it extra development time – if I end up with a lot of cloudy days where the blossoms are under key, but so far I’ve managed to catch the sun every time.

All this exposure stuff is great practice for the day when I finally get to shoot real motion picture film, should that day ever come, and it’s pretty useful for digital cinematography too.

Meanwhile, I’ve also made a rough prototype of the zoetrope itself, but more on that in a future post. Watch this space.


Cinematic Lighting Course: Coming Soon

I’m using my time in Covid-19 lockdown for a few different things, some more worthy than others. Lying in is a big one. Watching all of Lost again. Exercising more. But also I’ve got a big editing project to complete, a project of my own, the perfect task to get me through the long days at home.

Last November I shot Cinematic Lighting, a four-hour online course. It’s something I’d been thinking about for a while, especially over the last year or so as my Instagram following has sky-rocketed (about 33,000 at the time of writing). DPs and cinematography students follow my feed for the lighting diagrams I post every Friday, showing exactly how a lighting set-up was achieved, but some commenters had started to ask why I made certain creative decisions. So the idea for the course was born.

Lighting diagram for the Night Exterior module’s master shot

Cinematic Lighting consists of four modules: day exterior, day interior, night interior and night exterior. In each module I light and shoot a half-page scene with two actors, Kate Madison and Ivan Moy, with behind-the-scenes cameras following my every move. As I do it – with the assistance of gaffer Jeremy Dawson and spark Gareth Neal – I explain how and why I’m doing it. Sometimes I demonstrate alternative options I could have chosen. I talk about characterisation and how to match it with lighting. I quote John Alton and Christopher Nolan. I show clips from other productions I’ve shot and tell the stories behind them. I explain how to use a light meter and get your head around f-stops, T-stops and ND filters. I demonstrate the power of smoke. But most importantly I lay my creative process bare as I work.

Setting up a single in the Night Interior module. Photo: Ashram Maharaj

The original intention was for the course to be a reward on the Kickstarter campaign for Ren: The Girl with the Mark‘s second season, but sadly that campaign was unsuccessful. Over the next couple of months I’ll be investigating my options for releasing this course, and rest assured that I’ll let you know as soon as it’s available.

Meanwhile, postproduction work continues on it. The main thing left to do is the grading of the finished dramatic scenes; each module concludes with a polished edit of the scene which I’ve shot. Rather than hire a colourist, I’ve decided it’s time to finally learn a few things about grading myself. To that end, I’ve purchased a Udemy course and am currently learning how to do fancy secondaries in DaVinci Resolve – another good use of my lockdown time, I feel. More on this in a future post.

Meanwhile, stay safe and REMAIN INDOORS.

Filming the introduction to the Day Exterior module. Photo: Colin Ramsay

The Rise of Anamorphic Lenses in TV

Each month I get a digital copy of American Cinematographer delivered to my inbox, filled with illuminating (pun intended) articles about the lighting and lensing of the latest theatrical releases. As a rule of thumb, I only read the articles if I’ve seen the films. Trouble is, I don’t go to the cinema much any more… and that was the case even before coronavirus put a stop to it altogether.

Why? TV is better, simple as that. Better writing, better cinematography, better value for money. (Note: I include streaming services like Netflix and Amazon under the umbrella of “TV” here.) But whereas I can turn to AC to discover the why and how of the cinematography of a movie, there is no equivalent for long-form content. I would love to see a magazine dedicated to the beautiful cinematography of streaming shows, but until then I’ll try to plug the gap myself.

I’d like to start with a look at the increasing use of anamorphic lenses for the small screen. Let’s look at a few examples and try to discover what anamorphic imaging adds to a project.

Lenses with an anamorphic element squeeze the image horizontally, allowing a wider field of view to be captured. The images are restored to their correct proportions in postproduction, but depth of field, bokeh (the out-of-focus areas), barrel distortion and lens flare all take on different characteristics from those obtained with traditional spherical lenses.

 

The cinematic look

“Doctor Who: The Woman Who Fell to Earth”, DP: Denis Crossan

The venerable Doctor Who, which started off shooting on 405-line black-and-white videotape more than half a century ago, has employed Arri Alexas and Cooke Anamorphic/i glass since the introduction of Jodie Whittaker’s 13th Doctor. “[Director Jamie Childs] suggested we shoot on anamorphic lenses to give it a more filmic look,” says DP Denis Crossan. “You get really nice background falloff and out of focus ellipses on light sources.”

While most viewers will not be able to identify these visual characteristics specifically, they will certainly be aware of a more cinematic feel to the show overall. This is because we associate anamorphic images – even if we do not consciously know them as such – with the biggest of Hollywood blockbusters, everything from Die Hard to Star Trek Beyond.

It’s not just the BBC who are embracing anamorphic. DP Ollie Downey contrasted spherical glass with vintage anamorphics to deliberate effect in “The Commuter”, an episode of the Channel 4/Amazon sci-fi anthology series Electric Dreams.

The story revolves around Ed (Timothy Spall) whose mundane but difficult life turns upside down when he discovers Macon Heights, a town that seems to exist in an alternate reality. “Tim Spall’s character is torn between his real life and the fantastical world of Macon Heights,” Downey explains on his Instagram feed. “We shot Crystal Express Anamorphics for his regular life, and Zeiss Super Speed Mk IIs for Macon Heights.”

The anamorphic process was invented as a way to get a bigger image from the same area of 35mm negative, but in today’s world of ultra-high-resolution digital sensors there is no technical need for anamorphics, only an aesthetic one. In fact, they can actually complicate the process, as Downey notes: “We had to shoot 8K on the Red to be able to punch in to our Crystal Express to extract 16:9 and still deliver 4K to Amazon.”
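The resolution arithmetic behind that is worth sketching. Using hypothetical round numbers – an 8192×4320 sensor and the 2x squeeze typical of anamorphics – a 16:9 extraction is limited by the sensor’s height, and only half of its desqueezed width corresponds to real photosites:

```python
# Hypothetical figures: an 8K sensor and a 2x anamorphic squeeze factor.
sensor_w, sensor_h = 8192, 4320
squeeze = 2.0

# Desqueezing doubles the effective display width of the captured frame.
desqueezed_w = sensor_w * squeeze  # 16384 display pixels

# A 16:9 crop of the desqueezed frame is limited by the sensor height...
crop_w_display = sensor_h * 16 / 9       # 7680 display pixels wide
# ...and each horizontal display pixel came from half a real photosite.
crop_w_sensor = crop_w_display / squeeze # only 3840 photosites wide

print(int(crop_w_display), int(crop_w_sensor))  # 7680 3840
```

In other words, even starting from 8K, the 16:9 extraction draws on only 3840 real horizontal photosites – just enough for a 4K UHD delivery.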

“Electric Dreams: The Commuter”, DP: Ollie Downey

 

Evoking a period

Back at the BBC, last year’s John le Carré adaptation The Little Drummer Girl uses anamorphic imaging to cement its late 1970s setting. The mini-series revolves around Charmian, an actress who is recruited by Israeli intelligence via the mysterious agent Becker. The truth is distorted throughout, just as the wide anamorphic lenses distort every straight line into a curve.

Reviewing the show for The Independent, Ed Cumming notes that director Park Chan-wook “does not aim to be invisible but to remind you constantly that what you are seeing is a creation. Take the scene at a beachside taverna in Greece, where Charmian and Becker start talking properly to each other. The camera stays still, the focus snaps between him and her.” Such focus pulls are more noticeable in anamorphic because the subject stretches vertically as it defocuses.


The Little Drummer Girl is slavish in its recreation of the period, in camera style as well as production design. Zooms are used frequently, their two-dimensional motion intricately choreographed with the actors who step in and out of multiple planes in the image. Such shots were common in the 70s, but have since fallen very much out of fashion. When once they would have passed unnoticed, a standard part of film grammar, they now draw attention.

“The Little Drummer Girl”, DP: Woo-Hyung Kim

 

Separating worlds

Chilling Adventures of Sabrina, a Netflix Original, also draws attention with its optics. Charting the trials and tribulations of a teenaged witch, the show uses different makes of lenses to differentiate two worlds, just like “The Commuter”.

According to DP David Lanzenberg’s website, he mixed modern Panavision G series anamorphics with “Ultragolds”. Information on the latter is hard to find, but they may be related to the Isco Ultra Star adapters which some micro-budget filmmakers have adopted as a cheap way of shooting anamorphic.

The clean, sharp G series glass is used to portray Sabrina’s ordinary life as a small-town teenager, while the Ultragolds appear to be used for any scenes involving witchcraft and magic. Such scenes display extreme blur and distortion at the edges of the frame, making characters squeeze and stretch as the camera pans over them.

“Chilling Adventures of Sabrina: Chapter Ten: The Witching Hour”, DP: Stephen Maier

Unlike the anamorphic characteristics of Doctor Who or “The Commuter”, which are subtle, adding to the stories on a subconscious level, the distortion in Sabrina is extreme enough to be widely noticed by its audience. “Numerous posts on Reddit speak highly of Chilling Adventures of Sabrina’s content and cinematography,” reports Andy Walker, editor of memeburn.com, “but a majority have a collective disdain for the unfocused effect.”

“I hate that blurry s*** on the side of the screen in Sabrina,” is the more blunt appraisal of Twitter user @titanstowerr. Personally I find the effect daring and beautiful, but it certainly distracted me just as it has distracted others, which forces me to wonder if it takes away more from the story than it adds.

And that’s what it all comes down to in the end: are the technical characteristics of the lens facilitating or enhancing the storytelling? DPs today, in both cinema and long-form series, have tremendous freedom to use glass to enhance the viewers’ experience. Yes, that freedom will sometimes result in experiments that alienate some viewers, but overall it can only be a good thing for the expressiveness of the art form.

For more on this topic, see my video test and analysis of some anamorphic lenses.


5 Steps to Lighting a Forest at Night

EXT. FOREST - NIGHT

A simple enough slug line, and fairly common, but amongst the most challenging for a cinematographer. In this article I’ll break down into five manageable steps my process of lighting woodlands at night.

 

1. Set up the moon.

Forests typically have no artificial illumination, except perhaps practical torches carried by the cast. This means that the DP will primarily be simulating moonlight.

Your “moon” should usually be the largest HMI that your production can afford, as high up and far away as you can get it. (If your production can’t afford an HMI, I would advise against attempting night exteriors in a forest.) Ideally this would be a 12K or 18K on a cherry-picker, but in low-budget land you’re more likely to be dealing with a 2.5K on a triple wind-up stand.

Why is height important? Firstly, it’s more realistic. Real moonlight rarely comes from 15ft off the ground! Secondly, it’s hard to keep the lamp out of shot when you’re shooting towards it. A stand might seem quite tall when you’re right next to it, but as soon as you put it far away, it comes into shot quite easily. If you can use the terrain to give your HMI extra height, or acquire scaffolding or some other means of safely raising your light up, you’ll save yourself a lot of headaches.

In this shot from “The Little Mermaid” (dir. Blake Harris), a 12K HMI on a cherry-picker creates the shafts of moonlight, while another HMI through diffusion provides the frontlight. (This frontlight was orange to represent sunrise, but the scene was altered in the grade to be pure night.)

The size of the HMI is of course going to determine how large an area you can light to a sufficient exposure to record a noise-free image. Using a good low-light camera is going to help you out here. I shot a couple of recent forest night scenes on a Blackmagic Pocket Cinema Camera, which has dual native ISOs, the higher being 3200. Combined with a Speedbooster, this camera required only 1 or 2 foot-candles of illuminance, meaning that our 2.5K HMI could be a good 150 feet away from the action. (See also: “How Big a Light do I Need?”)
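The inverse square law governs how far away that lamp can be. As a rough sketch – using a hypothetical candela figure, since the real number depends on the fixture, the lens and the flood/spot setting, and should come from the manufacturer’s photometric data – you can estimate the maximum throw like this:

```python
import math

# Inverse square law, in imperial units:
# illuminance (foot-candles) = intensity (candela) / distance (feet) ** 2
intensity_cd = 45_000  # ASSUMED output for a 2.5K HMI at one flood setting --
                       # check your fixture's photometric data sheet
required_fc = 2        # what the camera needs for a clean exposure

# Rearranged for the maximum distance at which the target level is met:
max_distance_ft = math.sqrt(intensity_cd / required_fc)
print(round(max_distance_ft))  # 150 ft
```

Halving the required foot-candles – by using a faster camera or a Speedbooster – lets the same lamp sit roughly 1.4 times further away, which is exactly why low-light sensors are such a gift on night exteriors.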

 

2. Plan for the reverse.

A fake moon looks great as a backlight, but what happens when it comes time to shoot the reverse? Often the schedule is too tight to move the HMI all the way around to the other side, particularly if it’s rigged up high, so you may need to embrace it as frontlight.

Frontlight is generally flat and undesirable, but it can be interesting when it’s broken up with shadows, and that’s exactly what the trees of a forest will do. Sometimes the pattern of light and dark is so strong and camouflaging that it can be hard to pick out your subject until they move. One day I intend to try this effect in a horror film as a way of concealing a monster.

One thing to look out for with frontlight is unwanted shadows, i.e. those of the camera and boom. Again, the higher up your HMI is, the less of an issue this will be.

If you can afford it, a second HMI set up in the opposite direction is an ideal way to maintain backlight; just pan one off and strike up the other. I’ve known directors to complain that this breaks continuity, but arguably it does the opposite. Frontlight and backlight look very different, especially when smoke is involved (and I’ll come to that in a minute). Isn’t it smoother to intercut two backlit shots than a backlit one and frontlit one? Ultimately it’s a matter of opinion.

An example of cheated moonlight directions in “His Dark Materials” – DP: David Luther

 

3. Consider ground lights.

One thing I’ve been experimenting with lately is ground lights. For this you need a forest that has at least a little undulation in its terrain. You set up lights directly on the ground, pointed towards camera but hidden from it behind mounds or ridges in the deep background.

Detail from one of my 35mm stills: pedestrians backlit by car headlights in mist. Shot on Ilford Delta 3200

I once tried this with an HMI and it just looked weird, like there was a rave going on in the next field, but with soft lights it is much more effective. Try fluorescent tubes, long LED panels or even rows of festoon lights. When smoke catches them they create a beautiful glow in the background. Use a warm colour to suggest urban lighting in the distance, or leave it cold and it will pass unquestioned as ambience.

Put your cast in front of this ground glow and you will get some lovely silhouettes. Very effective silhouettes can also be captured in front of smoky shafts of hard light from your “moon”.

 

4. Fill in the faces.

All of the above looks great, but sooner or later the director is going to want to see the actors’ faces. Such is the cross a DP must bear.

On one recent project I relied on practical torches – sometimes bounced back to the cast with silver reflectors – or a soft LED ball on a boom pole, following the cast around.

Big-budget movies often rig some kind of soft toplight over the entire area they’re shooting in, but this requires a lot of prep time and money, and I expect it’s quite vulnerable to wind.

A recipe that I use a lot for all kinds of night exteriors is a hard backlight and a soft sidelight, both from the same side of camera. You don’t question where the sidelight is coming from when it’s from the same general direction as the “moon” backlight. In a forest you just have to be careful not to end up with very hot, bright trees near the sidelight, so have flags and nets at the ready.

This shot (from a film not yet released, hence the blurring) is backlit by a 2.5K HMI and side-lit by a 1×1 Aladdin LED with a softbox, both from camera right.

 

5. Don’t forget the smoke.

Finally, as I’ve already hinted, smoke is very important for a cinematic forest scene. The best options are a gas-powered smoke gun called an Artem or a “Tube of Death”. This latter is a plastic tube connected to a fan and an electric smoke machine. The fan forces smoke into the tube and out of little holes along its length, creating an even spread of smoke.

A Tube of Death in action on the set of “The Little Mermaid”

All smoke is highly susceptible to changes in the wind. An Artem is easier to pick up and move around when the wind changes, and it doesn’t require a power supply, but you will lose time waiting for it to heat up and for the smoke and gas canisters to be changed. Whichever one you pick though, the smoke will add a tremendous amount of depth and texture to the image.

Overall, nighttime forest scenes may be challenging, but they offer some of the greatest opportunities for moody and creative lighting. Just don’t forget your thermals and your waterproofs!
