Large Format

Recently I shot my second online cinematography course, provisionally titled The Secrets of Cinematography, which will be out in the spring. In it I shoot on a Z Cam E2-F6, a full-frame camera. As far as I can remember, the only other time I’ve shot full-frame was for the miniature map in Above the Clouds, captured on a Canon 5D Mk III. So I thought this would be a good time to post a few facts about large-format, and a good excuse to get some more use out of this graphic I created for the course…

First of all, some definitions:

  • Super-35, about 24x14mm, has been the standard sensor size since digital cinematography took off in the noughties. It’s based on an analogue film standard which has its own complex history that I won’t go into here.
  • A large-format digital cinema camera is any that has a sensor larger than Super-35. It’s not to be confused with large-format still photography, which uses much bigger sensors/film than any currently existing for moving images.
  • Full-frame is a subset of large-format. Confusingly, this term does come from still photography, where it is used to identify digital sensors that are the same size as a frame of 35mm stills film: 36x24mm.

These are the differences you will notice shooting in large-format versus Super-35:

  • Lenses will have a wider field of view. (You’ll need to make sure your chosen lenses have a large enough image circle to cover your sensor.)
  • If you increase your focal length to get the same field of view you would have had on Super-35, perspective will be rendered exactly the same as it was on Super-35…
  • … but the depth of field will be shallower…
  • … and you may see more imperfections at the edges of frame where the lens is working harder. (A Super-35 sensor would crop these imperfections out.)
  • Picture noise will probably be finer and less noticeable due to the photosites being larger and more sensitive.

In the case of full-frame, the crop factor is 1.4. This means you should multiply your Super-35 focal length by 1.4 to find the lens that will give you the same field of view on a full-frame camera. (Some examples are given in the graphic above.) It also means that you can multiply your Super-35 T-stop by 1.4 to find the full-frame T-stop to match the depth of field.
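If it helps to see that arithmetic written down, here is a quick Python sketch of the two conversions (the 50mm/T2 lens values are purely illustrative):

```python
CROP_FACTOR = 1.4  # Super-35 to full-frame, as quoted above

def full_frame_equivalent(s35_focal_length, s35_t_stop):
    """Find the full-frame focal length and T-stop that match the
    field of view and depth of field of a Super-35 setup."""
    return s35_focal_length * CROP_FACTOR, s35_t_stop * CROP_FACTOR

# e.g. a 50mm at T2 on Super-35 becomes roughly a 70mm at T2.8
focal, stop = full_frame_equivalent(50, 2.0)
print(f"{focal:.0f}mm at T{stop:.1f}")  # -> 70mm at T2.8
```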

For a detailed comparison of Super-35 and full-frame, check out this test by Manuel Luebbers.


What I Learnt from DSLRs

RedShark News recently published an article called “The DSLR is now dead”, based on the fact that the Canon 1D X Mark III will be the last flagship DSLR from the company and that mirrorless cameras are now first choice for most photographers. This prompted me to reflect on some of the things I learnt when I bought my first (and only) DSLR.

It was 2011, and I documented some of the challenges my new Canon 600D created for me in this blog post. But what the DSLR did really well was to introduce me to a workflow very similar in many ways to the bigger productions I’m working on now. Previously I had shot everything on prosumer camcorders, so the following things were new to me with DSLRs and have been constant ever since.

 

Shallow Depth of Field

I had been used to everything being in focus, so I never really thought about my aperture setting; I just turned the iris dial until the exposure looked right. My Canon 600D set me on a journey of understanding f-stops, and eventually of choosing a target stop to shoot at for focus reasons, then using lighting or ND filters to achieve that stop.

 

Prime Lenses

My lenses

Although for several years I owned a Canon XL1-S, which had interchangeable lenses, I only ever had a couple of zooms for it. As far as I’m aware, no prime lenses were ever made to fit the XL1-S’s proprietary mount, so primes were completely new to me when I got my 600D. As with aperture, they forced me to think about what field of view and degree of perspective or compression I wanted, select the appropriate lens, and then place the camera accordingly, rather than lazily zooming to get the desired framing.

 

Dual-System Sound

It’s weird now to think that I used to be tethered to the sound recordist before I switched to DSLR shooting. At the time I was doing most of my own editing as well, so syncing the sound was a pain in the arse, but it was a valuable introduction to this industry-standard way of working. It’s also weird to think that clapperboards were optional for me before this.

 

Building a Camera Rig

All my cameras before the 600D had a built-in viewfinder, handgrip, shoulder mount (if the camera was large enough to need one) and lens (except the XL1-S), and there was no need to add an external battery plate or a follow-focus. The idea that a camera rig needed to be built, and that it could be customised to suit different operators and situations, was a novel one to me. I have to say that I still prefer cameras that have more things built in, like the Alexa Classic. A good part of the reason I rarely use Reds is that they don’t come with viewfinders. Why anyone ever thinks a viewfinder is an optional part of a camera is utterly beyond me. It’s an important point of stabilising contact for handheld work, and your face shields it completely from extraneous light, unlike a monitor.

 

Tapeless Recording

The 600D was my first camera to record to memory cards rather than magnetic tape. It was certainly scary to have to wipe the cards during a shoot, being careful to back everything up a couple of times first. Data wrangling was a tricky thing to deal with on the kind of tiny-crewed productions I was usually doing back then, but of course now it’s completely normal. Just last week I shot my new cinematography course and had the fun of staying up until 2:30am after a long day of shooting, to make sure all the footage was safely ingested! More on that course soon.


5 Things You Didn’t Know About the Iris in Your Lens

Inside a lens, amongst the various glass elements, is an ingenious mechanism which we call the iris. Just like your biological iris, it controls the amount of light passing through the pupil to form an image. I’ve written about the iris’s use to control exposure before, and its well-known side effect of controlling depth of field. But here are five things that aren’t so commonly known about irises.

 

1. f-stops and the entrance pupil

This image shows the exit pupil because it’s seen through the rear element of the lens. A view through the front element would show the entrance pupil.

The f-number of a lens is the ratio of the focal length to the diameter of the aperture, but did you know that it isn’t the actual diameter of the aperture that’s used in this calculation? It’s the apparent diameter as viewed through the front of the lens. A lens might have a magnifying front element, causing the aperture to appear larger than its physical size, or a reducing one, causing it to appear smaller. Either way, it’s this apparent aperture – known as the entrance pupil – which is used to find the f-number.
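As a simple illustration of that ratio (the 100mm focal length and 25mm pupil here are hypothetical numbers, not any particular lens):

```python
def f_number(focal_length_mm, entrance_pupil_mm):
    """f-number = focal length / apparent (entrance pupil) diameter."""
    return focal_length_mm / entrance_pupil_mm

# A 100mm lens whose aperture appears 25mm wide through the front element:
print(f"f/{f_number(100, 25):.0f}")  # -> f/4
```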

 

2. No-parallax point

The no-parallax point of a lens is located at its entrance pupil. Sometimes called the nodal point, although that’s technically something different, this is the point around which the camera must pan and tilt if you want to eliminate all parallax. This is important for forced perspective work, for panoramas stitched together from multiple shots, and other types of VFX.

 

3. Focus

If you need to check your focal distance with a tape measure, many cameras have a handy Phi symbol on the side indicating where the sensor plane is located so that you can measure from that point. But technically you should be measuring to the entrance pupil. The sensor plane marker is just a convenient shortcut because the entrance pupil is in a different place for every lens and changes when the lens is refocused or zoomed. In most cases the depth of field is large enough for the shortcut to give perfectly acceptable results, however.

 

4. Bokeh shape

The bokeh of a 32mm Cooke S4 wide open at T2 (left) and stopped down to T2.8 (right). Note also the diffraction spikes visible in the right-hand image.

The shape of the entrance pupil determines the shape of the image’s bokeh (out of focus areas), most noticeable in small highlights such as background fairy lights. The pupil’s shape is determined both by the number of iris blades and the shape of their edges. The edges are often curved to approximate a circle when the iris is wide open, but form more of a polygon when stopped down. For example, a Cooke S4 produces octagonal bokeh at most aperture settings, indicating eight iris blades. Incidentally, an anamorphic lens has a roughly circular aperture like any other lens, but the entrance pupil (and hence the bokeh) is typically oval because of the anamorphosing effect of the front elements.

 

5. Diffraction spikes

When the edge of an iris blade is straight or roughly straight, it spreads out the light in a perpendicular direction, creating a diffraction spike. The result is a star pattern around bright lights, typically most visible at high f-stops. Every blade produces a pair of spikes in opposite directions, so the number of points in the star is equal to twice the number of iris blades – as long as that number is odd. If the number of blades is even, diffraction spikes from opposite sides of the iris overlap, so the number of apparent spikes is the same as the number of blades, as in the eight-pointed Cooke diffraction pictured above right.
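That odd/even rule is simple enough to write down as a little sketch:

```python
def visible_spikes(blade_count):
    """Diffraction spikes seen around a bright light.
    Each blade throws a pair of spikes in opposite directions; with an
    even number of blades the opposite pairs land on top of each other."""
    return blade_count if blade_count % 2 == 0 else 2 * blade_count

print(visible_spikes(8))  # Cooke S4's eight blades -> 8-pointed star
print(visible_spikes(9))  # nine blades -> 18-pointed star
```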


Using Depth of Field Creatively

“The Handmaid’s Tale: Offred” (2017, DP: Colin Watkinson, ASC, BSC)

When DSLR video exploded onto the indie filmmaking scene a decade ago, film festivals were soon awash with shorts sporting ultra-blurry backgrounds. Now that we have some distance from that first novelty of large-sensor cinematography, we can think more intelligently about how depth of field – be it shallow or deep – is best used to help tell our stories.

First, let’s recap the basics. Depth of field is the distance between the nearest and farthest points from camera that are in focus. The smaller the depth of field, the less the subject has to move before they go out of focus, and the blurrier any background and foreground objects appear. On the other hand, a very large depth of field may make everything from the foreground to infinity acceptably sharp.

Depth of field varying with aperture
Everyone’s favourite time machine at f/5 (left) and f/1.8 (right)

Depth of field is affected by four things: sensor (or film) size, focal length (i.e. lens length), focal distance, and aperture. In the days of tiny Mini-DV sensors, I was often asked by a director to zoom in (increase the focal length) to decrease the depth of field, but sometimes that was counter-productive because it meant moving the camera physically further away, thus increasing the focal distance, thus increasing the depth of field.

It was the large 35mm sensors of DSLRs, compared with the smaller 1/3” or 2/3” chips of traditional video cameras, that made them so popular with filmmakers. Suddenly the shallow depth of field seen in a Super-35 movie could be achieved on a micro-budget. It is worth noting for the purists, however, that a larger sensor technically makes for a deeper depth of field. The shallower depth of field associated with larger sensors is actually a product of the longer lenses required to obtain the same field of view.
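To see how those four factors interact, here is a rough Python depth-of-field calculator based on the standard hyperfocal-distance approximation. Note that the 0.025mm circle of confusion is a commonly quoted Super-35 value, but it is a judgement call rather than a universal constant, and all the numbers below are illustrative only:

```python
def depth_of_field(focal_mm, f_number, focus_dist_mm, coc_mm=0.025):
    """Near and far limits of acceptable focus (hyperfocal approximation)."""
    hyperfocal = focal_mm ** 2 / (f_number * coc_mm) + focal_mm
    near = focus_dist_mm * (hyperfocal - focal_mm) / (
        hyperfocal + focus_dist_mm - 2 * focal_mm)
    if focus_dist_mm >= hyperfocal:
        return near, float("inf")  # everything sharp to infinity
    far = focus_dist_mm * (hyperfocal - focal_mm) / (hyperfocal - focus_dist_mm)
    return near, far

# A 50mm at f/2.8 focused at 3m: roughly 2.77m to 3.27m is sharp
near, far = depth_of_field(50, 2.8, 3000)
print(f"{near / 1000:.2f}m to {far / 1000:.2f}m")
```

Try halving the f-number or doubling the focal length and you can watch the window of sharpness shrink.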

Once a camera is selected and filming is underway, aperture is the main tool that DPs tend to use to control depth of field. A small aperture (large f- or T-number) gives a large depth of field; a large aperture (small f- or T-number) gives a narrow depth of field. What all those early DSLR filmmakers, high on bokeh, failed to notice is that aperture is, and always has been, a creative choice. Plenty of directors and DPs throughout the history of cinema have chosen deep focus when they felt it was the best way of telling their particular story.

“Citizen Kane” (1941, DP: Gregg Toland, ASC)

One of the most famous deep-focus films is 1941’s Citizen Kane, frequently voted the greatest movie ever made. First-time director Orson Welles came from a theatre background, and instructed DP Gregg Toland to keep everything in focus so that the audience could choose what to look at just as they could in a theatre. “What if they don’t look at what they’re supposed to look at?” Welles was apparently asked. “If that happens, I would be a very bad director,” was his reply.

Stanley Kubrick was also fond of crisp backgrounds. The infamous f/0.7 NASA lenses used for the candlelight scenes in Barry Lyndon were a rare and extreme exception, born of low-light necessity. A typical Kubrick shot has a formal, symmetrical composition with single-point perspective and everything in focus right into the distance. Take the barracks in Full Metal Jacket, for example, where the background soldiers are just as sharp as the foreground ones. Like Welles’s, Kubrick’s reasons may have lain in a desire to emulate traditional art forms – in this case paintings, where nothing is ever blurry.

“Full Metal Jacket” (1987, DP: Douglas Milsome, ASC, BSC)

The Indiana Jones trilogy was shot at a surprisingly slow stop by the late, great Douglas Slocombe. “I prefer to work in the aperture range of T14-T14.5 when I am shooting an anamorphic film like Raiders,” he said at the time. “The feeling of depth contributed to the look.” Janusz Kamiński continued that deep-focus look, shooting at T8-T11 when he inherited the franchise for Kingdom of the Crystal Skull.

At the other end of the aperture scale, the current Hulu series The Handmaid’s Tale makes great creative use of a shallow depth of field, creating a private world for the oppressed protagonist which works in tandem with voiceovers to put the viewer inside her head, the only place where she is free.

A director called James Reynolds had a similar idea in mind when I shot his short film, Exile Incessant. He wanted to photograph closed-minded characters with shallow focus, and show the more tolerant characters in deep focus, symbolising their openness and connection with the world. (Unfortunately the tiny lighting budget made deep focus impossible, so we instead achieved the symbolism by varying the harshness of the lighting.)

“Ren: The Girl with the Mark” (2016, DP: Neil Oseman)

One production where I did vary the depth of field was Ren: The Girl with the Mark, where I chose f/4 as my standard working stop, but reduced it to as little as f/1.4 when the lead character was bonding with the mysterious spirit inside her. It was the same principle again of separating the subject from the world around her.

Depth of field is a fantastic creative tool, and one which we are lucky to have so much control over with today’s cameras. But it will always be most effective when it’s used expressively, not just aesthetically.


Exposure Part 1: Aperture

This is the first in a series of posts where I will look in detail at the four means of controlling the brightness of a digital video image: aperture, neutral density (ND) filters, shutter angle and ISO. It is not uncommon for newer cinematographers to have only a partial understanding of these topics, enough to get by in most situations; that was certainly the case with me for many years. The aim of this series is to give you an understanding of the underlying mechanics which will enable you to make more informed creative decisions.

You can change any one of the four factors, or any combination of them, to reach your desired level of exposure. However, most of them will also affect the image in other ways; for example, aperture affects depth of field. One of the key responsibilities of the director of photography is to use each of the four factors not just to create the ideal exposure, but to make appropriate use of these “side effects” as well.

 

f-stops and T-stops

The most common way of altering exposure is to adjust the aperture, a.k.a. the iris, sometimes described as changing “the stop”. Just like the pupil in our eyes, the aperture of a photographic lens is a (roughly) circular opening which can be expanded or contracted to permit more or less light through to the sensor.

You will have seen a series of numbers like this printed on the sides of lenses:

1      1.4      2      2.8      4      5.6      8      11      16      22     32

These are ratios – ratios of the lens’s focal length to its iris diameter. So a 50mm lens with a 25mm-diameter iris is at f/2. Other focal lengths would have different iris diameters at f/2 (e.g. a 10mm diameter for a 20mm lens) but they would all produce an image of the same brightness. That’s why we talk about the iris in terms of f-stops rather than diameters.

But why not label a lens 1, 2, 3, 4…? Why 1, 1.4, 2, 2.8…? These magic numbers are f-stops. A lens set to f/1.4 will let in twice as much light as (or “one stop more than”) a lens set to f/2, which in turn will let in twice as much as one set to f/2.8, and so on. Conversely, a lens set to f/2.8 will let in half as much light as (or “one stop less than”) a lens set to f/2, and so on. (Note that a number between any of these f-stops, e.g. f/1.8, is properly called an f-number, but not an f-stop.) These doublings and halvings – technically known as a base-2 logarithmic scale – are a fundamental concept in exposure, and mimic our eyes’ response to light.

If you think back to high-school maths and the πr² formula for calculating the area of a circle, the reason for the seemingly random series of numbers starts to become clear. Letting in twice as much light requires twice as much area for those light rays to fall on, and doubling a circle’s area means multiplying its diameter by the square root of two (about 1.4). Since the f-number is the ratio of the focal length to the iris diameter, each whole stop is the previous f-number multiplied by √2 – which is why f-stops aren’t just plain old round numbers.
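Two lines of Python reproduce the engraved series from that relationship:

```python
import math

# Each whole stop multiplies the f-number by sqrt(2), halving the light.
series = [math.sqrt(2) ** i for i in range(11)]
print(", ".join(f"{n:.2f}" for n in series))
# -> 1.00, 1.41, 2.00, 2.83, 4.00, 5.66, 8.00, 11.31, 16.00, 22.63, 32.00
# The familiar engraved values (1.4, 2.8, 5.6, 11, 22...) are simply
# conventional roundings of these.
```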

If you’re shooting with a cine lens, rather than a stills lens, you’ll see the same series of numbers on the barrel, but here they are T-stops rather than f-stops. T-stops are f-stops adjusted to compensate for the light transmission efficiency. Two different lenses set to, say, f/2 will not necessarily produce equally bright images, because some percentage of light travelling through the elements will always be lost, and that percentage will vary depending on the quality of the glass and the number of elements. A lens with 100% light transmission would have the same f-number and T-number, but in practice the T-number will always be a little bigger than the f-number. For example, Cooke’s 15-40mm zoom is rated at a maximum aperture of T2 or f/1.84.
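In formula terms, the T-number is the f-number divided by the square root of the lens’s transmittance. A quick sketch (the 84.6% figure is back-calculated from the Cooke numbers above, not a published spec):

```python
import math

def t_number(f_number, transmittance):
    """T-number = f-number corrected for the fraction of light the
    glass actually passes (a transmittance of 1.0 would be lossless)."""
    return f_number / math.sqrt(transmittance)

# Roughly the Cooke 15-40mm example above: f/1.84 at ~84.6% transmission
print(f"T{t_number(1.84, 0.846):.1f}")  # -> T2.0
```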

 

Fast and slow lenses

When buying or renting a lens, one of the first things you will want to know is its maximum aperture. Lenses are often described as being fast (larger maximum aperture, denoted by a smaller f- or T-number like T1.4) or slow (smaller maximum aperture, denoted by a bigger f- or T-number like T4). These terms come from the fact that the shutter speed would need to be faster or slower to capture the same amount of light… but more on that later in the series.

Faster lenses are generally more expensive, but that expense may well be outweighed by the savings made on lighting equipment. Let’s take a simple example, and imagine an interview lit by a 4-bank Kino Flo and exposed at T2.8. If our lens can open one stop wider (known as stopping up) to T2 then we double the amount of light reaching the sensor. We can therefore halve the level of light – by turning off two of the Kino Flo’s tubes or by renting a cheaper 2-bank unit in the first place. If we can stop up further, to T1.4, then we only need one Kino tube to achieve the same exposure.
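The general rule behind that example: the difference between two T-numbers, measured in stops, is 2 × log₂ of their ratio, and each stop halves or doubles the light required. Here is a sketch using the Kino Flo numbers from above (treating each tube as an equal quarter of the 4-bank’s output, which is a simplification):

```python
import math

def stops_gained(from_t, to_t):
    """Stops of extra light gained by opening up from one T-stop to a
    wider (smaller-numbered) one: 2 * log2 of the ratio."""
    return 2 * math.log2(from_t / to_t)

gain = stops_gained(2.8, 2.0)                 # open up from T2.8 to T2
print(f"{gain:.1f} stop gained")              # -> 1.0 stop gained
print(f"tubes needed: {4 / 2 ** gain:.0f}")   # 4-bank Kino -> 2 tubes
```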

 

Side effects

One of the first things that budding cinematographers learn is that wider apertures make for a smaller depth of field, i.e. the range of distances within which a subject will be in focus is smaller. In simple terms, the background of the image is blurrier when the depth of field is shallower.

It is often tempting to go for the shallowest possible depth of field, because it feels more cinematic and helps conceal shortcomings in the production design, but that is not the right look for every story. A DP will often choose a stop to shoot at based on the depth of field they desire. That choice of stop may affect the entire lighting budget; if you want to shoot at a very slow T14 like Douglas Slocombe did for the Indiana Jones trilogy, you’re going to need several trucks full of lights!

There is another side effect of adjusting the aperture which is less obvious. Lenses are manufactured to perform best in the middle of their iris range. If you open a lens up to its maximum aperture or close it down to its minimum, the image will soften a little. Therefore another advantage of faster lenses is the ability to get further away from their maximum aperture (and poorest image quality) with the same amount of light.

Finally it is worth noting that the appearance of bokeh (out of focus areas) and lens flares also changes with aperture. The Cooke S4 range, for example, renders out-of-focus highlights as circles when wide open, but as octagons when stopped down. With all lenses, the star pattern seen around bright light sources will be stronger when the aperture is smaller. You should shoot tests – like these I conducted in 2017 – if these image artefacts are a critical part of your film’s look.

Next time we’ll look at how we can use ND filters to control exposure without compromising our choice of stop.

Learn how to use exposure practically with my Cinematic Lighting online course. Enter voucher code INSTA90 for an amazing 90% off.


What Does “Cinematic” Mean?

Earlier this year I undertook a personal photography project called Stasis. I deliberately set out to do something different to my cinematography work: shooting in portrait, taking the paintings of the seventeenth-century Dutch masters as my inspiration, and eschewing traditional lighting fixtures in favour of practical sources. I was therefore a little disappointed when I began showing the images to people and they described them as “cinematic”.

An image from “Stasis”

This experience made me wonder just what people mean by that word, “cinematic”. It’s a term I’ve heard – and used myself – many times during my career. We all seem to have some vague idea of what it means, but few of us are able to define it. 

Dictionaries are not much help either, with the Oxford English Dictionary defining it simply as “relating to the cinema” or “having qualities characteristic of films”. But what exactly are those qualities?

Shallow depth of field is certainly a quality that has been widely described as cinematic. Until the late noughties, shallow focus was the preserve of “proper” movies. The size of a 35mm frame (or of the digital cinema sensors which were then emerging) meant that backgrounds could be thrown way out of focus while the subject remained crisp and sharp. The formats which lower-budget productions had hitherto been shot on – 2/3” CCDs and Super-16 film – could not achieve such an effect.

Then the DSLR revolution happened, putting sensors as big as – or bigger than – those of Hollywood movies into the hands of anyone with a few hundred pounds to spare. Suddenly everyone could get that “cinematic” depth of field. 

My first time utilising the shallow depth of field of a DSLR, on a never-completed feature back in 2011.

Before long, of course, ultra-shallow depth of field became more indicative of a low-budget production trying desperately to look bigger than of something truly cinematic. Gradually young cinematographers started to realise that their idols chose depth of field for storytelling reasons, rather than simply using it because they could. Douglas Slocombe, OBE, BSC, ASC, cinematographer of the original Indiana Jones trilogy, was renowned for his deep depth of field, typically shooting at around T14, while Janusz Kamiński, ASC, when shooting Kingdom of the Crystal Skull, stopped down as far as T11.

There was also a time when progressive scan – the recording of discrete frames rather than alternately odd and even horizontal lines to make an interlaced image – was considered cinematic. Now it is standard in most types of production, although deviations from the norm of 24 or 25 frames per second, such as the high frame rate of The Hobbit, still make audiences think of reality TV or news, rejecting it as “uncinematic”.

Other distinctions in shooting style between TV/low-budget film and big-budget film have slipped away too. The grip equipment that enables “cinematic” camera movement – cranes, Steadicams and other stabilisers – is accessible now in some form to most productions. Meanwhile the multi-camera shooting which was once the preserve of TV, looked down upon by filmmakers, has spread into movie production.

A direct comparison may help us drill to the core of what is “cinematic”. Star Trek: Generations, the seventh instalment in the sci-fi film franchise, went into production in spring 1994, immediately after the final TV season of Star Trek: The Next Generation wrapped. The movie shot on the same sets, with the same cast and even the same acquisition format (35mm film) as the TV series. It was directed by David Carson, who had helmed several episodes of the TV series, and whose CV contained no features at that point.

Yet despite all these constants, Star Trek: Generations is more cinematic than the TV series which spawned it. The difference lies with the cinematographer, John A. Alonzo, ASC, one of the few major crew members who had not worked on the TV show, and whose experience was predominantly in features. I suspect he was hired specifically to ensure that Generations looked like a movie, not like TV.

The main thing that stands out to me when comparing the film and the series is the level of contrast in the images. The movie is clearly darker and moodier than the TV show. In fact I can remember my schoolfriend Chris remarking on this at the time – something along the lines of, “Now it’s a movie, they’re in space but they can only afford one 40W bulb to light the ship.” 

The bridge of the Enterprise D as seen on TV (top) and in the “Generations” movie (bottom).

It was a distinction born of technical limitations. Cathode ray tube TVs could only handle a dynamic range of a few stops, requiring lighting with low contrast ratios, while a projected 35mm print could reproduce much more subtlety.

Today, film and TV are shot on the same equipment, and both are viewed on a range of devices which are all good at dealing with contrast (at least compared with CRTs). The result is that, with contrast as with depth of field, camera movement and progressive scan, the distinction between the cinematic and the uncinematic has narrowed.

The cinematography of “Better Call Saul” owes much to film noir.

In fact, I’d argue that it’s flipped around. To my eye, many of today’s TV series – and admittedly I’m thinking of high-end ones like The Crown, Better Call Saul or The Man in the High Castle, not Eastenders – look more cinematic than modern movies. 

As my friend Chris had realised, the flat, high-key look of Star Trek: The Next Generation was actually far more realistic than that of its cinema counterpart. And now movies seem to have moved towards realism in the lighting, which is less showy and not so much moody for the sake of being moody, while TV has become more daring and stylised.

A typically moody and contrasty shot from “The Crown”

The Crown, for example, blasts a 50kW Soft Sun through the window in almost every scene, bathing the monarchy in divine light to match its supposed divine right, while Better Call Saul paints huge swathes of rich, impenetrable black across the screen to represent the rotten soul of its antihero.

Film lighting today seems to strive for naturalism for the most part. Top DPs like recent Oscar-winner Roger Deakins, CBE, ASC, BSC, talk about relying heavily on practicals and using fewer movie fixtures, while fellow nominee Rachel Morrison, ASC, despite using a lot of movie fixtures, goes to great lengths to make the result look unlit. Could it be that film DPs feel they can be more subtle in the controlled darkness of a cinema, while TV DPs choose extremes to make their vision clear no matter what device it’s viewed on or how much ambient light contaminates it?

“Mudbound”, shot by Rachel Morrison, ASC

Whatever the reason, contrast does seem to be the key to a cinematic look. Even though that look may no longer be exclusive to movies released in cinemas, the perception of high contrast being linked to production value persists. The high contrast of the practically-lit scenes in my Stasis project is – as best I can tell – what makes people describe it as cinematic.

What does all of this mean for a filmmaker? Simply pumping up the contrast in the grade is not the answer. Contrast should be built into the lighting, and used to reveal and enhance form and depth. The importance of good production design, or at least good locations, should not be overlooked; shooting in a friend’s white-walled flat will kill your contrast and your cinematic look stone dead. 

A shot of mine from “Forever Alone”, a short film where I was struggling to get a cinematic look out of the white-walled location.

Above all, remember that story – and telling that story in the most visually appropriate way – is the essence of cinema. In the end, that is what makes a film truly cinematic.



Lensing Ren – episode 2

Here’s my video breaking down the cinematography of episode two of Ren: The Girl with the Mark. This week I discuss lighting Ren’s house, tweaking wide-shot lighting for close-ups, and depth of field.

Here is the lighting plan for Ren’s house:


And here is a video blog from the set of Ren’s house:

Check out the article I wrote during the shoot about lighting Ren and Dagron’s house if you’re still hungry for details.

If you want to know more about using Kino Flos as indirect window light, have a look at Lighting Technique #3: The Window Wrap.

Want to know more about depth of field? This post will give you the basics.

Check back next Saturday for another instalment of Lensing Ren, and meanwhile watch the next episode of Ren itself from Tuesday at 8pm GMT at rentheseries.com


Giving Yourself Somewhere to Go

On the recce for The Second Shepherds’ Play. Photo: Douglas Morse

As a cinematographer, it can often be tempting to make your shots look as slick and beautiful as possible. But that’s not always right for the story. And sometimes it can leave you nowhere to go.

Currently I’m shooting The Second Shepherds’ Play, a medieval comedy adaptation, for director Douglas Morse. The story starts in the mud and drizzle of three shepherds’ daily drudge, and in a Python-esque twist ends up in the nativity. The titular trio develop from a base, selfish, almost animalistic state to something much more divine.

So, much as my instincts while filming the opening scenes yesterday were to use a shallow depth of field and bounce boards everywhere to put a sparkle in the shepherds’ eyes, this wouldn’t have been right for this stage of the film. We had to have somewhere to go, so I shot at around f/9 all day with unmodified natural, overcast light. As we get towards the end of the story – we’re shooting roughly in story order – I’ll start to use eyelight and more sculpted illumination and reduce the depth of field, as well as switching from handheld to sticks.

Grading episode one of Ren

Similarly, grading episode one of Ren the other day, it was important to keep things bright and cheerful, so that later episodes could be colder and darker by comparison when things go wrong for our heroes. And playing the long game, I lit Ren herself with soft, shadowless light for most of the first season, so that as she develops from innocence to more of an action heroine in later seasons, her lighting can get harder and moodier.

Like all heads of department on a production, DPs are storytellers, and it all comes down to doing what’s right for the story, and what’s right for that moment in the story.
