Luna 3: Photographing the Far Side of the Moon without Digital Technology

The far side of the moon (frame 29) as shot by Luna 3

It is 1959. Just two years have passed since the launch of the USSR’s Sputnik 1 satellite blew the starting whistle for the Space Race. Sputnik 2, carrying poor Laika the dog, and the American satellite Explorer 1 swiftly followed. Crewed spaceflight is still a couple of years away, but already the eyes of the world’s superpowers have turned to Earth’s nearest neighbour: the moon.

Early attempts at sending probes to the moon were disastrous, with the first three of America’s Pioneer craft crashing back to Earth, while a trio of Soviet attempts exploded on launch. Finally the USSR’s Luna 1 – intended to crash-land on the surface – at least managed a fly-by. Luna 2 reached its target, becoming the first man-made object on the moon in September 1959.

The stage is now set for Luna 3. Its mission: to photograph the far side of the moon.

Luna 3

Our planet and its natural satellite are in a state known as tidal locking, meaning that the moon takes the same length of time to circle the Earth as it does to rotate around its own axis. The result is that the same side of the moon always faces us here on Earth. Throughout all of human history, the far side has been hidden from us.

But how do you take a photograph a quarter of a million miles away and return that image to Earth with 1950s technology?

At this point in time, television has been around for twenty years or so. But the images are transient, each frame dancing across the tube of a TV camera at, say, Alexandra Palace, oscillating through the air as VHF waves, zapping down a wire from an aerial, and ultimately driving the deflecting coils of a viewer’s cathode ray tube to paint that image on the phosphorescent screen for a 50th of a second. And then it’s gone forever.

For a probe on the far side of the moon, with 74 million million million tonnes of rock between it and the earthbound receiving station, live transmission is not an option. The image must somehow be captured and stored.

Video tape recorders have been invented by 1959, but the machines are enormous and expensive. At the BBC, most non-live programmes are still recorded by pointing a film camera at a live TV monitor.

And it is film that will make Luna 3’s mission possible. Enemy film in fact, which the USSR recovered, unexposed, from a CIA spy balloon. Resistant to radiation and extremes of temperature, the 35mm isochromatic stock is chosen by Soviet scientists to be loaded into Luna 3’s AFA-Ye1 camera, part of its Yenisey-2 imaging system.

Luna 3 launches on October 4th, 1959 from Baikonur Cosmodrome in what will one day be Kazakhstan. A modified R-7 rocket inserts the probe into a highly elliptical Earth orbit which, after some over-heating and communications issues are resolved, brings it within range of the moon three days later.

The mission has been timed so that the far side of the moon is in sunlight when Luna 3 reaches it. A pioneering three-axis stabilisation system points the craft (and thus the camera, which cannot pan independently) at the side of the moon which no-one has seen before. A photocell detects the bright surface and triggers the Yenisey-2 system. Alternating between 200mm f/5.6 and 500mm f/9.5 lenses, the camera exposes 29 photographs on the ex-CIA film.

The AFA-Ye1 camera

Next that film must be processed, and Luna 3 can’t exactly drop it off at Snappy Snaps. In fact, the Yenisey-2 system contains a fully automated photo lab which develops, fixes and dries the film, all inside a 1.3 × 1m cylinder tumbling through the vacuum of space at thousands of miles per hour.

Now what? Returning a spacecraft safely to Earth is beyond human ability in 1959, though the following year’s Vostok missions will change all that. Once Luna 3 has swung around the moon and has line of sight to the receiving stations on Earth, the photographic negatives must be converted to radio broadcasts.

To that end, Yenisey-2 incorporates a cathode ray tube which projects a beam of light through the negative, scanning it at a 1,000-line resolution. A photocell on the other side receives the beam, producing a voltage inversely proportional to the density of the negative. This voltage frequency-modulates a radio signal in the same way that fax machines use frequency-modulated audio to send images along phone lines.
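
As a toy illustration of that scanning scheme (the numbers and scaling here are invented; the real Yenisey-2 parameters aren’t given here), each scanline can be modelled as density samples driving a photocell voltage, which in turn sets the instantaneous carrier frequency:

```python
def scan_line(densities, f_centre=1000.0, f_dev=500.0, v_max=1.0):
    """Toy model of the Yenisey-2 readout: the photocell voltage falls
    as negative density rises, and that voltage frequency-modulates
    a radio carrier. All parameters are illustrative, not historical."""
    voltages = [v_max / (1.0 + d) for d in densities]       # denser film -> less light -> lower voltage
    frequencies = [f_centre + f_dev * v for v in voltages]  # voltage sets instantaneous frequency
    return voltages, frequencies

# A dark (dense) patch of negative transmits less light and so is sent
# at a lower frequency than a clear patch.
volts, freqs = scan_line([0.0, 1.0, 3.0])
```

The receiving station simply runs the mapping in reverse: measure the frequency, recover the voltage, and paint the corresponding brightness onto the reconstructed image.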

Attempts to transmit the photographs begin on October 8th, and after several failures, 17 images are eventually reconstructed by the receiving stations in Crimea and Kamchatka. They are noisy, they are blocky, they are monochromatic, but they show a sight that has been hidden from human eyes since the dawn of time. Featuring many more craters and mountains and many fewer “seas” than the side we’re used to, Luna 3’s pictures prompt a complete rethink of the moon’s history.

Its mission accomplished, the probe spirals in a decaying orbit until it finally burns up in Earth’s atmosphere. In 1961, Yuri Gagarin’s historic flight will capture the public imagination, and unmanned space missions will suddenly seem much less interesting.

But next time you effortlessly WhatsApp a photo to a friend, spare a thought for the remarkable engineering that one day sent never-before-seen photographs across the gulf of space without the aid of digital imaging.

One of the 500mm exposures

24fps or 25fps, which is best?

The monitor overlays here show how “Annabel Lee” was shot at 24fps with a shutter angle of 172.8° to prevent flickering of non-incandescent light sources, a typical recipe for UK filmmakers today.

An article of mine from 2014 weighing the merits of shooting at 24 vs. 25 frames per second has recently been getting a lot of hits. I’m surprised that there’s still so much uncertainty around this issue, because for me it’s pretty clear-cut these days.

When I started out making films at the turn of the millennium, 25fps (or its interlaced variant, 50i) was the only option for video. The tapes ran at that speed and that was that. Cathode ray tube TVs were similarly inflexible, as was PAL DVD when it emerged.

Film could be shot at 24fps, and generally was for theatrical movies, since most cinema projectors only run at that speed, but film for television was shot at 25fps.

Three big technological shifts occurred in the late noughties: the delivery of video over the internet, flat-screen TVs and tapeless cameras. All of these support multiple frame rates, so gradually we found that we had a choice. At the start of a shoot, as a DP I would have to ask which frame rate to set.

The frame rate and resolution menu from my old Canon 600D, the first time I owned a camera that could shoot 24fps.

Americans and others in NTSC regions are in a different situation. Their TV standard of 30fps has a discernibly different look to the international movie standard of 24fps, so the choice of frame rate is as much creative as it is technical. I don’t think anyone can tell the difference between 24 and 25fps, even on a subconscious level, so in Europe it seems we must decide on a purely technical basis.

But in fact, the decision is as much about what people are used to as anything else. I shot a feature film pilot once on 35mm at 25fps and it really freaked out the lab simply because they weren’t used to it.

I shot the 35mm pilot for “The Dark Side of the Earth” (2008) at 25fps because tapes still played a part in postproduction at that time. Today I would not hesitate to shoot at 24.

And what people seem to be most used to and comfortable with in the UK today is 24fps. It offers the most compatibility with digital cinemas and Blu-ray without needing frame rate conversion. (Some cinemas can play 25fps DCPs, and Blu-rays support 25fps in a 50i wrapper which might not play in a lot of US machines, but 24 is always a safer bet for these formats.)

Historically, flickering of non-incandescent light sources and any TV screens in shot was a problem when shooting 24fps in the UK. These days it’s very easy to set your shutter to 172.8° (if your camera measures it as an angle) or 1/50th (if your camera measures it as an interval). This ensures that every frame – even though there are 24 of them per second – captures 1/50th of a second, in sync with the 50Hz mains supply.
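
The arithmetic behind that equivalence is simple enough to sanity-check in a few lines of Python (a toy calculation, not anything from camera firmware):

```python
def exposure_time(fps, shutter_angle_deg):
    """Exposure per frame: the fraction of the frame period (out of 360
    degrees) during which the shutter is open."""
    return (shutter_angle_deg / 360.0) / fps

# 172.8 degrees at 24fps and 180 degrees at 25fps both expose each frame
# for 1/50th of a second, which is why both behave well under 50Hz mains
# lighting.
uk_24 = exposure_time(24, 172.8)  # 0.02s, i.e. 1/50th
uk_25 = exposure_time(25, 180.0)  # 0.02s, i.e. 1/50th
```

The same function shows why a straight 180° shutter at 24fps (1/48th) drifts out of step with the mains and can flicker.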


The times when 25fps is best

There are some situations in which 25fps is still the best or only option though, most notably when you’re shooting something intended primarily for broadcast on a traditional TV channel in the UK or Europe. The same goes if your primary distribution is on PAL DVD, which I know is still the case for certain types of corporate and educational videos.

Once I was puzzled by a director’s monitor not working on a short film shoot, and discovered that it didn’t support 24fps signals, so I had to choose 25 as my frame rate for that film. So it might be worth checking your monitors if you haven’t shot 24fps with them before.

“Finding Hope” was shot at 25fps simply because the director’s monitor wouldn’t accept 24fps signals.

Finally, if your film contains a lot of archive material or stock footage at 25fps, it makes sense to match that frame rate.

Whichever frame rate you ultimately choose, always discuss it with your postproduction team ahead of time to make sure that you’re all on the same page.


“Superman II” Retrospective

At Christmas 1978, when Superman: The Movie opened to enthusiastic reviews and record-breaking box office, it was no surprise that a sequel was in the works. What was unusual was that the majority of that sequel had already been filmed, and stranger still, much of it would be re-filmed before Superman II hit cinemas two years later.

Jerry Siegel and Joe Shuster’s comic-book icon had made several superhuman leaps to the screen by the 1970s, but Superman: The Movie was the first big-budget feature film. Producer Pierre Spengler and executive producer father/son team Alexander and Ilya Salkind purchased the rights from DC Comics in 1974 and made a deal to finance not one but two Superman movies on the understanding that Warner Bros. would buy the finished products. Salkind senior had unintentionally pioneered back-to-back shooting the previous year when he decided to split The Three Musketeers – originally intended as a three-hour epic – into two shorter films.

After packaging Superman I and II with A-listers Marlon Brando (as Kryptonian patriarch Jor-El) and Gene Hackman (as the villainous Lex Luthor), the producers hired The Omen director Richard Donner to helm the massive production. Donner cast the unknown Christopher Reeve in the title role, while John Williams was signed to compose what would prove to be one of the most famous soundtracks in cinematic history. Like many big genre productions of the time – Star Wars and Alien to name but two – Superman set up camp in England, with cameras rolling for the first time on March 24th, 1977.

Tom Mankiewicz (creative consultant), Marlon Brando (Jor-El), Richard Donner (director), Pierre Spengler (producer). Brando is dressed in black to isolate his head for the Fortress of Solitude hologram effects.

“We were shooting scenes from the two films simultaneously, according to production conveniences,” explained creative consultant Tom Mankiewicz in a 2001 documentary. “So when we had Gene Hackman we were shooting scenes from II and scenes from I, or when we were in the Daily Planet we were shooting scenes from both pictures in the Daily Planet, while you were in that set.”

Today – largely thanks to Peter Jackson’s Lord of the Rings trilogy – we are used to enormous, multi-year productions with crew numbers in four figures, but the scale of the dual Superman shoot was unprecedented at the time, eventually reaching nineteen months in duration. It was originally scheduled for eight.

“Dick [Donner] never in the course of the picture got a budget; he never got a schedule,” claimed Mankiewicz. “He was constantly told that he was over schedule, way over budget, but nobody told him what that budget was or how much he was over that budget.”

Given that overspends were funded by Warner Bros. in return for more distribution rights, Spengler and the Salkinds were watching the value of their huge investment trickle away. So despite Donner’s popularity with the rest of the cast and crew, his relationship with the producers became ever more strained, to the point where they weren’t even on speaking terms.

Richard Lester directed iconic Swinging Sixties films like “A Hard Day’s Night” and “The Knack… and How to Get It”.

Ilya Salkind suggested bringing in The Three Musketeers director Richard Lester, who agreed on condition that he would be paid monies still owed to him from that earlier film. By some accounts his role on Superman was that of a mediator between the director and the producers, by others he was a co-producer, second unit director or even a back-up director in case Donner cracked under the pressure of the endless shoot. “Where does this leave… Donner?” asked a newspaper report of the time. “‘Nervous,’ a cast member says.”

Eventually, with the first movie’s release date looming, the filmmakers decided on a change of plan. Superman II would be placed on the back burner in order to prioritise finishing Superman: The Movie – and get it earning money as quickly as possible. At this point, three quarters of the sequel was already in the can, including all scenes featuring Brando and Hackman, both of whom had had contractual wrap dates to meet.

Superman: The Movie was a hit, but Donner would not direct the remainder of its sequel. “They have to want me to do it,” he said of the producers at the time. “It has to be on my terms and I don’t mean financially, I mean control.” Of Spengler specifically, Donner was reported to bluntly state, “If he’s on it – I’m not.”

And indeed Donner was not. The Salkinds had no intention of acceding to his demands. Instead, the former mediator Richard Lester was hired to complete Superman II, and Donner received a telegram telling him that his services were no longer required. “I was ready to get on an airplane and kill,” he recalled years later, “because they were taking my baby away from me.”

Master of miniatures Derek Meddings wets down the New York street. Many miniature effects were reshot simply so Lester could claim directorship of them.

Meanwhile Brando was trying (unsuccessfully) to sue the producers over royalties, and demanded a significant cut of the box office gross from the sequel. Rather than pay this, the producers elected to re-film his scenes, replacing Jor-El with Superman’s mother Lara, as played by Susannah York.

It was far from the only reshooting of Superman II footage that took place. Ironically, given the earlier budget concerns, Lester was permitted to redo large chunks of Donner’s material with a rewritten script in order to earn a credit as director under guild rules. Major changes included a new opening sequence on the Eiffel Tower, Lois Lane’s realisation of Clark Kent’s true identity after he trips and falls into a fireplace, and a different ending in which a magic kiss from Clark erases that realisation from her memory.

Some of the reshoots included Lex Luthor material, but Hackman declined to return out of loyalty to Donner; the result is the fairly obvious use of a double in the climactic Fortress of Solitude scene. The deaths of Geoffrey Unsworth and John Barry, plus creative differences between Lester and John Williams, meant that the sequel team also featured a new DP (Robert Paynter), production designer (Peter Murton) and composer (Ken Thorne) respectively, although significant contributions from all of the original HODs remain in the finished film.

Comparing his own directing style with Donner’s, Lester told interviewers, “I think that Donner was emphasising a kind of grandiose myth… There was a type of epic quality which isn’t in my nature… I’m more quirky and I play around with slightly more unexpected silliness.” Indeed his material is characterised by visual gags and a generally less serious approach, which he would continue into Superman III (1983).

Although some of the unused Donner scenes were incorporated into TV screenings over the years, it was not until the 2001 DVD restoration of the first movie that interest began to build in a release for the full, unseen version of the sequel. When Brando’s footage was rediscovered a few years later, it could finally become a reality.

Footage from Margot Kidder’s 35mm screen test was incorporated into the Donner Cut to show Lois Lane discovering Clark Kent’s true identity. Although it introduces some minor continuity errors, the scene is far more intelligent than Lester’s rug-tripping revelation.

“I don’t think there is [another] film that had so much footage shot and not used,” remarked editor Michael Thau. A vast cataloguing and restoration effort was undertaken to make useable the footage which had been sitting in Technicolor’s London vault for a quarter of a century. Donner and Mankiewicz returned to oversee and approve the process, which used only the minimum of Lester material necessary to tell a complete story, plus footage from Reeve’s and Margot Kidder’s 35mm screen tests.

Released on DVD in 2006, the Donner Cut suffers from the odd cheap visual effect used to plug plot holes, and a familiar turning-back-time ending which was originally scripted for the sequel but moved to the first film at the last minute. However, for fans of Superman: The Movie, this version of Superman II is much closer in tone and ties in much better in story terms too. The Donner Cut is also less silly than the theatrical version, though it must be said that Lester’s humour contributed in no small part to the sequel’s original success.

Whichever version you prefer, 40 years on from its first release, Superman II is still a fun and thrilling adventure with impressive visuals and an utterly believable central performance from the late, great Christopher Reeve.


Finding the Positive in 2020

This year hasn’t been great for anyone. It certainly hasn’t for me. Even as I’m writing this I’m hearing the news that, in a staggeringly foreseeable U-turn, the Christmas bubble rules have been severely restricted. So how to wrap up this stinker of a year?

I considered making this article about the pandemic’s impact on the film and TV industries and speculating about which changes might be permanent. But who needs more doom and gloom? Not me.

Instead, here are six positive things that I accomplished this year:

  1. We shot the final block of War of the Worlds: The Attack in February/March, and I was recently shown a top-secret trailer which is very exciting. There is plenty of post work still to do on this modern-day reimagining of the H.G. Wells classic, but hopefully it will see the light of day in a year or so.
  2. After a couple of lax years, I got back to blogging regularly. This site now has a staggering 1,250 posts!
  3. I completed and released my first online course, Cinematic Lighting, which has proven very popular. It currently has over 1,000 students and a star rating which has consistently hovered around 4.5 out of 5.
  4. I made a zoetrope and shot several 35mm timelapses and animations for it, which was a nice lockdown project. Even though the animations didn’t come out that well they were fun to do, and the zoetrope itself is just a cool object to keep on the shelf.
  5. I wrote my first article for British Cinematographer, the official magazine of the BSC, which will be published on January 15th. In the process I got to interview (albeit by email) several DPs I’ve admired for a while including David Higgs BSC, Colin Watkinson ASC, BSC and Benedict Spence.
  6. The lockdown gave me the time to update my showreel. Who knows if I’ll ever work again as a DP, but if not, at least I can look back with pride at some of the images I’ve captured over the years.

Despite the restrictions, I hope all my readers manage to find some joy, love and comfort over the festive period. And if not, just consume a lot of mulled wine and chocolate; it’s the next best thing.

In a tradition I’ve neglected for a few years, I’ll leave you with a rundown of my ten favourite blog posts from 2020.

  1. “The Rise of Anamorphic Lenses in TV” – a look at some of the shows embracing oval bokeh and horizontal flares
  2. “5 Steps to Lighting a Forest at Night” – breaking down how to light a place that realistically shouldn’t have any light
  3. “Above the Clouds: The Spoiler Blogs” – including how we faked a plane scene with a tiny set-piece in the director’s living room
  4. “Working with White Walls” – analysing a couple of short films where I found ways to make the white-walled locations look more cinematic
  5. “10 Clever Camera Tricks in Aliens” – the genius of vintage James Cameron
  6. “The Cinematography of Chernobyl” – how DP Jakob Ihre used lighting and lensing to tell this horrifying true story
  7. “5 Things Bob Ross Can Teach Us About Cinematography” – who knew that the soft-spoken painter had so much movie-making wisdom?
  8. “5 Ways to Fake Firelight” – a range of ways to simulate the dynamics of flames, from the low-tech to the cutting edge
  9. “A Post-lockdown Trip to the Cinema” – an account of the projection sacrilege committed against a classic movie in my first fleapit trip of the Covid era
  10. “Exposure” (four-part series) – in-depth explanations of aperture, ND filters, shutter angles and ISO

A Cinematographer’s Guide to Looking Good on a Webcam

This night shot is lit by a desk-lamp bounced off the wall behind and to the left of camera, plus the monitor light and the Christmas lights you can see at frame left. The background is lit by a reading lamp bounced off the ceiling (just out of frame right) and more Christmas lights.

It may be the beginning of the end for Covid-19, but it doesn’t look like home working and video calls are going away any time soon. We’re very lucky that we have such technology in the midst of a global pandemic, but let’s be honest: webcams don’t always make us look our best. Having lit and photographed movies for 20 years, I’d like to share a few tips to improve your image on Zoom, WhatsApp, Google Meet or whatever your video call software of choice is.

Firstly, low camera angles are not flattering to many people. Wherever possible, set up your webcam so that it’s at eye-level or a little above. If you’re using a laptop, this might mean stacking a few books under the device. Consider investing in a laptop stand that will raise the monitor and camera up if you’re going to be doing this a lot.

Avoid placing the camera too close to yourself. A medium close-up works best for most video calls, head and shoulders at the closest, or down to your waist if you like to gesticulate a lot. Follow the classic rules of composition and make the most of your camera’s resolution by framing your head near the top of the shot, rather than leaving a lot of empty headroom above yourself.

It’s important to be aware of automatic exposure if you want to look your best on a webcam. Your camera and/or software continually assess the average luminance in the frame and alter the shutter speed or electronic gain to achieve what they think is the correct exposure. Since webcams have very poor dynamic range – they can’t handle a great deal of contrast within the frame – you should think carefully about what elements in your shot could sway the auto-exposure.

For example, a bright white wall, window or table lamp in the background will cause the camera to reduce its exposure, darkening the overall image and perhaps turning you into a silhouette. Even the colour of top you’re wearing can be a factor. If you have a pale skin tone and you’re wearing a black top – prompting the camera to increase its exposure – you might well find that your face bleaches out.
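
To make the silhouette and bleach-out effects concrete, here is a toy model of a mean-luminance auto-exposure loop. The 30/70 face-to-background split, the target level and the pixel values are all invented for illustration; real webcams use more sophisticated metering:

```python
def auto_exposure_gain(pixels, target=0.45):
    """Scale factor a naive auto-exposure would apply to bring the
    frame's mean luminance (on a 0.0-1.0 scale) to the target level."""
    mean = sum(pixels) / len(pixels)
    return target / mean

face = 0.5  # mid-tone skin luminance
bright_window_scene = [face] * 30 + [0.95] * 70  # large bright background
dark_hoodie_scene   = [face] * 30 + [0.05] * 70  # large dark background

# The bright background drags the mean up, so the gain drops below 1 and
# the face sinks towards silhouette; the dark background does the
# opposite, pushing the face above full white.
gain_bright = auto_exposure_gain(bright_window_scene)
gain_dark = auto_exposure_gain(dark_hoodie_scene)
```

In the bright-window case the face ends up rendered around half its true brightness; in the dark-hoodie case it is pushed past clipping, which is exactly the bleached-out look in the comparison photos below.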

The black hoodie causes the automatic exposure to rise, bleaching out my face.
The lighter tone of this t-shirt enables the automatic exposure to produce a balanced image.

This brings us to lighting. Most of us are used to lighting our homes and workspaces so that we can see what we’re doing comfortably, rather than worrying about how the light is falling on our own faces.

The clearest and most flattering type of lighting is generally a large, soft source roughly in front of and slightly above us, so if possible position your computer or webcam in front of a window. If direct sunlight comes in through this window, that is less ideal; try to cut it off with your curtains. The indirect light of sky and clouds is much softer and less likely to confuse the auto-exposure.

If you have little or no natural light to work with, the main source of light on your face might well be the monitor you’re looking at. In this case, what you have on your screen can make a huge difference. A blank white Word document is going to light you much more effectively than a paused Netflix frame from a horror movie.

Monitor light can leave you looking blue and ghostly, so consider placing a strategic window of pale orange colour on your virtual desktop to warm up your skin tone. Try adjusting the monitor’s brightness or switching to a darker desktop theme if your monitor is bleaching your face out completely.

Of course, your screen is not just a light source. You need to be able to use it for actually viewing things too, so a better solution is not to rely on it for light. Instead, create another soft source in front of and slightly above you by pointing a desk-lamp at the wall above your monitor. (If the wall is a dark or saturated colour, pin up something white to reflect the light.) The larger the patch of wall the lamp illuminates, the more softly your face will be lit.

You may find that your background now looks very dim, because little of the light from your monitor – or bouncing off the wall behind your monitor – is reaching it. Worse still, the auto-exposure might react to this dim background by over-exposing your face. In this case, use a second lamp to illuminate the background.

Often the room’s main ceiling light will do the job here, though it will likely result in an image that has an overall flat look to it. That might be just what you need for a professional video call, but if not, feel free to get creative with your background. Use table lamps to pick out certain areas, string up fairy lights, or whatever you feel best reflects your personality and profession.

The main thing is to get your “key light” right first – that’s the soft source in front of you that keeps you lit nicely. Everything after that is just icing on the cake.

This moodier shot has much the same set-up as the image at the top of this post, but with a brighter light in the background and a dimmer light in the foreground.

“A Cliché for the End of the World”

Photo: Catherine Ashenden

In August 2019 Jonnie Howard, director of The Knowledge, approached me about shooting an unusual short film with him. A Cliché for the End of the World is only two minutes long, but Jonnie wanted to shoot it as two unbroken takes which would be presented side by side. Each take would follow one character, starting and ending with them next to each other, but separating in the middle.

My first thought was that the two takes would have to be shot concurrently, but to squeeze two cameras into the small location and keep each out of the other’s frame would have been impossible. Instead, we settled on shooting with a single camera. After capturing 18 takes of the first side, Jonnie reviewed the footage with his editor Kat and selected one to use. We then shot the other side, with Kat calling out cues that would keep the actors in sync with the selected “master” take. (It took 18 takes to get this side in the can as well, partly because of getting the cues right and partly because of the difficulties Steadicam op Luke Oliver had in manoeuvring up the narrow staircase.)

The film had to be lit in a way that worked for both sides, with the camera starting in the living room looking towards the kitchen, moving up the stairs, through the landing and into the bedroom.

The HMI skips off the floor (left); Jeremy creates the dynamic look of TV light (right)

Working as usual to the general principle of lighting from the back, I set up a 2.5K HMI outside the kitchen window to punch a shaft of sunlight into the room. I angled this steeply so that it would not reach the actors directly, but instead bounce off the floor and light them indirectly. (See my article on lighting through windows.)

Gaffer Jeremy Dawson blacked out the living room windows to keep the foreground dark. He used an LED panel set to 6,600K (versus our camera’s white balance of 5,600K) to simulate an off-screen TV, waving a piece of black wrap in front of it to create dynamics.
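
The size of that 6,600K-versus-5,600K offset is easier to judge in mireds, the scale that colour-correction gels are rated in. A quick sketch, with those two temperatures as the only inputs taken from the shoot:

```python
def mired(kelvin):
    """Mired value: one million divided by colour temperature in kelvin."""
    return 1_000_000 / kelvin

def shift(source_k, white_balance_k):
    """Mired shift of a source relative to the camera's white balance.
    Negative = the source renders cooler (bluer) than neutral."""
    return mired(source_k) - mired(white_balance_k)

tv_shift = shift(6600, 5600)  # roughly -27 mireds: a gentle blue cast
```

A shift of around -27 mireds is modest but clearly visible, which is what gives the fake TV light its cool cast against the 5,600K-balanced image.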

The HMI outside (left); the diffused Dedo in the loft (right)

Next we needed to bring up the light levels for the actor’s journey up the stairs, which were naturally darker. Jeremy and spark Gareth Neal opened the loft hatch on the landing and rigged an LED Dedo inside, aimed at the darkest part of the staircase. They diffused this with what I think was a net curtain.

To brighten the landing we set up a diffused 2×4 Kino Flo in the spare room and partially closed the door to give the light some shape. Both this and the loft Dedo were a couple of stops under key so as not to look too artificial.

Luke Oliver balances Jonnie’s C200 on his Steadicam rig.

All that remained was the bedroom. The characters were to end up sitting on the bed facing the window. Originally the camera in both takes was to finish facing them, with the window behind it, but this would have meant shadowing the actors, not to mention that the space between the bed and the window was very limited. After some discussion between me, Jonnie, Luke, the cast, and production designer Amanda Stekly, we ended up moving the bed so that the camera could shoot the actors from behind, looking towards the window. This of course made for much more interesting and dimensional lighting.

The window looked out onto the street, and with a narrow pavement and no permission from the council, rigging a light outside was out of the question. Furthermore, we knew that the sun was going to shine right into that window later in the day, seriously messing with our continuity. Unfortunately all we could do was ask Amanda to dress the window in a net curtain. This took the worst of the harshness out of any direct sun and hopefully disguised the natural changes in light throughout the day at least a little.

When the sun did blast in through the window at about 6pm, we added a layer of unbleached muslin behind the net curtain to soften it further. We doubled this as the angle of the sun got more straight-on, then removed it entirely when the sun vanished behind the rooftops opposite at 7pm. About 20 minutes later we rigged a daylight LED panel in the room, bouncing off the ceiling, as a fill to counteract the diminishing natural light. We wrapped just as it was becoming impossible to match to earlier takes.

We were shooting in RAW on a Canon C200, which should give some grading latitude to help match takes from different times of day. The split-screen nature of the film means that the match needs to be very close though!

As I write this, the film is still in postproduction, and I very much look forward to seeing how it comes out. I’ll leave you with the start and end frames from slate 2, take 17, with a very quick and dirty grade.


The Hardest Shot I’ve Ever Done

It is night. We Steadicam into a moonlit bedroom, drifting across a window – where a raven is visible on the outside ledge, tapping at the glass with its beak – and land on a sleeping couple. The woman, Annabel, wakes up and goes to the window, causing the bird to flee. Crossing over to her far shoulder, we rest on Annabel’s reflection for a moment, before racking focus to another woman outside, maybe 200ft away, running towards a cliff. All in one shot.

Such was the action required in a scene from Annabel Lee, the most ambitious short I’ve ever been involved with. Based on Edgar Allan Poe’s poem, the film was the brainchild of actor Angel Parker, who plays the titular character. It was directed by Amy Coop, who had already decided to shoot on an Alexa Mini with Cooke Anamorphics before I was even hired.

Working with animals has its own difficulties, but for me as director of photography the challenges of this particular shot were:

  1. Making the bedroom appear moonlit by the single window, without any lamps being visible at any point in the Steadicam move.
  2. Lighting the view outside.
  3. Ensuring the live raven read on camera even though the shot was quite wide.
  4. Making Annabel bright enough that her reflection would read, without washing out the rest of the scene.
  5. Blocking the camera in concert with Annabel’s move so that its reflection would not be seen.

I left that last one in the capable hands of Steadicam op Rupert Peddle, along with Angel and Amy. What they ended up doing was timing Angel’s move so that she would block the window from camera at the moment that the camera’s reflection would have appeared.

Meanwhile, I put my head together with gaffer Bertil Mulvad to tackle the other four challenges. We arrived at a set-up using only three lights:

  1. A LiteMat 1 above the window (indoors) which served to light Annabel and her reflection, as well as reaching to the bed.
  2. Another LED source outside the window to one side, lighting the raven.
  3. A nine-light Maxibrute on a cherry-picker, side-lighting the woman outside and the cliffs. This was gelled with CTB to match the daylight LEDs.

Unfortunately the outside LED panel backlit the window glass, which was old and kept fogging up, obscuring the raven. With hindsight that panel might have been better on the other side of the window (left rather than right, but still outside), even though it would have created some spill problems inside. (To be honest, this would have made the lighting direction more consistent with the Maxibrute “moonlight” as well. It’s so easy to see this stuff after the fact!)

Everything else worked very well, but editor Jim Page did have to cut in a close-up of the raven, without which you’d never have known it was there.


Exposure Part 4: ISO

So far in this series we have seen how we can adjust exposure using aperture, which affects depth of field, ND filters, which can help us retain the depth of field we want, and shutter angle, which affects motion blur and flickering of certain light sources. In this final part we’ll look at ISO, perhaps the most misunderstood element of exposure, if indeed we can technically classify it as part of exposure at all!


What is ISO?

The acronym stands for the International Organization for Standardization, the body whose 1974 film speed standard combined the old ASA (American Standards Association) units with the German DIN standard. That’s why you’ll often hear the terms ISO and ASA used interchangeably.

Two different cameras filming the same scene with the same filters, aperture and shutter settings will not necessarily produce an image of equal brightness, because the ways that their electronics convert light into video signals are different. That is why we need ISO, which defines the relationship between the amount of light reaching the sensor (or film) and the brightness of the resulting image.

For example, a common ISO to shoot at today is 800. One way of defining ISO 800 is that it’s the setting required to correctly expose a key-light of 12 foot-candles with a lens set to T2.8 and a 180° shutter at 24fps (1/48th of a second).

If we double the ISO we double the effective sensitivity of the camera, or halve the amount of light it requires. So at ISO 1600 we would only need 6 foot-candles of light (all the other settings being the same), and at ISO 3200 we would need just 3 foot-candles. Conversely, at ISO 400 we would need 24 foot-candles, or 48 at ISO 200.
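If you like to see that halving relationship as code, here is a quick Python sketch (the 12-foot-candle ISO 800 baseline is the definition given above; the function name is my own):

```python
def required_footcandles(iso, base_iso=800, base_fc=12):
    # Required light scales inversely with ISO: doubling the ISO
    # halves the key-light level needed (at T2.8, 180 degrees, 24fps).
    return base_fc * base_iso / iso

for iso in (200, 400, 800, 1600, 3200):
    print(iso, required_footcandles(iso))  # 48, 24, 12, 6, 3 foot-candles
```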


A Flawed Analogy

Note that I said “effective” sensitivity. This is an important point. In the photochemical world, ISO indeed denotes the light sensitivity of the film stock. It is tempting to see digital ISO as representing the sensitivity of the sensor, and changing the ISO as analogous to loading a different film stock. But in reality the sensitivity of a digital sensor is fixed, and the ISO only determines the amount of gain applied to the sensor data before it is processed (which may happen in camera if you’re shooting linear or log, or in post if you’re shooting RAW).

So a better analogy is that altering the ISO is like altering how long the lab develops the exposed film negative for. This alters the film’s exposure index (EI), which is why some digital cameras use the term EI in their menus instead of ISO or ASA.

We can take this analogy further. Film manufacturers specify a recommended development time, an arbitrary period designed to produce the optimal image. If you increase (push) or decrease (pull) the development time you will get a lighter or darker image respectively, but the quality of the image will be reduced in various ways. Similarly, digital camera manufacturers specify a native ISO, which is essentially the recommended amount of gain applied to the sensor data to produce what the manufacturer feels is the best image, and if you move away from that native ISO you’ll get a subjectively “lower quality” image.

Compare the graininess/smoothness of the blacks in these images from my 2017 tests. Click to enlarge.

The most obvious side effect of increasing the ISO is more noticeable noise in the image. It’s exactly the same as turning up the volume on an amplifier; you hear more hiss because the noise floor is being boosted along with the signal itself.

I remember the days of Mini-DV cameras, which had gain instead of ISO; my Canon XL1 had gain settings of -3dB, 0dB, +6dB and +12dB. It was the exact same thing, just with a different name. What the XL1 called 0dB of gain is what we call the native ISO today.


ISO and Dynamic Range

At this point we need to bring in the concept of dynamic range. Let’s take the Arri Alexa as an example. This camera has a dynamic range of 14 stops. At its native ISO of 800, those 14 stops of dynamic range are equally distributed above and below “correct” exposure (known as middle grey), so you can overexpose by up to seven stops, and underexpose by up to seven stops, without losing detail.

If you change the Alexa’s ISO, those limits of under- and overexposure still apply, but they’re shifted around middle grey. For example, at ISO 400 you have eight stops of detail below middle grey, but only six above it. This means that, assuming you adjust your iris, shutter or filters to compensate for the change in ISO, you can trade off highlight detail for shadow detail, or vice versa.
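Taking the idealised seven-plus-seven split at face value (real cameras aren’t quite this symmetrical, so treat this as a sketch rather than a spec), the shift can be computed like this:

```python
import math

def dynamic_range_split(iso, native_iso=800, total_stops=14):
    # Each doubling of ISO moves one stop of latitude from the
    # shadows to the highlights; each halving does the reverse.
    shift = math.log2(iso / native_iso)
    stops_above = total_stops / 2 + shift
    stops_below = total_stops / 2 - shift
    return stops_above, stops_below

print(dynamic_range_split(400))   # (6.0, 8.0): more shadow detail
print(dynamic_range_split(1600))  # (8.0, 6.0): more highlight detail
```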

Imagine underexposing a shot by one stop and bringing it back up in post. You gain highlight detail, because only half as much light reaches the sensor, reducing the risk of clipped whites, but you also boost the noise when you raise the levels in post. This is essentially what you’re doing when you increase your ISO, except that if you’re recording in linear or log the restoration of brightness and the increase in gain happen in camera, whereas with RAW they happen in post.

Note the increased detail in the bulb at higher ISOs. Click to enlarge.

We can summarise all this as follows:

Doubling the ISO…

  • increases overall brightness by one stop, and
  • increases picture noise.

Then adjusting the exposure to compensate (e.g. closing the iris one stop)…

  • restores overall brightness to its original value,
  • gives you one more stop of detail in the highlights, and
  • gives you one less stop of detail in the shadows.

Alternatively, halving the ISO…

  • decreases overall brightness by one stop, and
  • decreases picture noise.

Then adjusting the exposure to compensate (e.g. opening the iris one stop)…

  • restores overall brightness to its original value,
  • gives you one less stop of detail in the highlights, and
  • gives you one more stop of detail in the shadows.


Conclusion

This brings me to the end of my exposure series. We’ve seen that choosing the “correct” exposure is a balancing act, taking into account not just the intended brightness of the image but also the desired depth of field, bokeh, lens flares, motion blur, flicker prevention, noise and dynamic range. I hope this series has helped you to make the best creative decisions on your next production.

See also: “6 Ways to Judge Exposure”


Exposure Part 3: Shutter

In the first two parts of this series we saw how exposure can be controlled using the lens aperture – with side effects including changes to the depth of field – and neutral density (ND) filters. Today we will look at another means of exposure control: shutter angle.


The Physical Shutters of Film Cameras

As with aperture, an understanding of what’s going on under the hood is useful, and that begins with celluloid. Let’s imagine we’re shooting on film at 24fps, the most common frame rate. The film can’t move continuously through the gate (the opening behind the lens where the focused light strikes the film) or we would end up recording just a long vertical streak of light. The film must remain stationary long enough to expose an image, before being moved on by a distance of four perforations (the standard height of a 35mm film frame) so that the next frame can be exposed. Crucially, light must not hit the film while it is being moved, or vertical streaking will occur.

Joram van Hartingsveldt, CC BY-SA 3.0

This is where the shutter comes in. The shutter is a portion of a disc that spins in front of the gate. The standard shutter angle is 180°, meaning that the shutter is a semi-circle. We always describe shutter angles by the portion of the disc which is missing, so a 270° shutter (admitting 1.5x the light of a 180° shutter) is a quarter of a circle, and a 90° shutter (admitting half the light of a 180° shutter) is three-quarters.

The shutter spins continuously at the same speed as the frame rate – so at 24fps the shutter makes 24 revolutions per second. So with a 180° shutter, each 24th of a second is divided into two halves, i.e. 48ths of a second:

  • During one 48th of a second, the missing part of the shutter is over the gate, allowing the light to pass through and the stationary film to be exposed.
  • During the other 48th of a second, the shutter blocks the gate to prevent light hitting the film as it is advanced. The shutter has a mirrored surface so that light from the lens is reflected up the viewfinder, allowing the camera operator to see what they’re shooting.


Intervals vs. Angles

If you come from a stills or ENG background, you may be more used to talking about shutter intervals rather than angles. The two things are related as follows:

Frame rate x (360 ÷ shutter angle) = shutter interval denominator

For example, 24 x (360 ÷ 180) = 48 so a film running at 24fps, shot with a 180° shutter, shows us only a 48th of a second’s worth of light on each frame. This has been the standard frame rate and shutter angle in cinema since the introduction of sound in the late 1920s. The amount of motion blur captured in a 48th of a second is the amount that we as an audience have been trained to expect from motion pictures all our lives.
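The conversion is trivial to automate. Here’s a Python version of the formula above (the function name is my own):

```python
def interval_denominator(fps, shutter_angle):
    # Shutter interval = 1 / (fps x (360 / angle)).
    return fps * 360 / shutter_angle

print(interval_denominator(24, 180))    # 48.0 -> 1/48th of a second
print(interval_denominator(25, 180))    # 50.0 -> 1/50th
print(interval_denominator(24, 172.8))  # 50.0 -> 1/50th
```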

A greater (larger shutter angle, longer shutter interval) or lesser (smaller shutter angle, shorter shutter interval) amount of motion blur looks unusual to us and thus can be used to creative effect. Saving Private Ryan features one of the best-known examples of a small shutter angle in its D-day landing sequence, where the lack of motion blur creates a crisp, hyper-real effect that draws you into the horror of the battle. The effect has been endlessly copied since then, to the point that it now feels almost mandatory to shoot action scenes with a small shutter angle.

Large shutter angles are less common, but the extra motion blur can imply a drugged, fatigued or dream-like state.

In today’s digital environment, only the Arri Alexa Studio has a physical shutter. In other cameras, the sensor’s photo-sites are allowed to charge with light over a certain period of time – still referred to as the shutter interval, even though no actual shutter is involved. The same principles apply and the same 180° angle of the virtual shutter is standard. The camera will allow you to select a shutter angle/interval from a number of options, and on some models like the Canon C300 there is a menu setting to switch between displaying the shutter setting as an angle or an interval.


When to Change the Shutter Angle

Sometimes it is necessary to change the shutter angle to avoid flickering. Some luminous devices, such as TV screens and monitors, or HMI lighting not set to flicker-free mode, will appear to strobe, pulse or roll on camera. This is because they turn on and off multiple times per second, in sync with the alternating current of the mains power supply, but not necessarily in sync with the shutter. For example, if you shoot a domestic fluorescent lamp in the UK, where the mains AC cycles at 50Hz, your 1/48th (180° at 24fps) shutter will be out of sync and the lamp will appear to throb or flicker on camera. The solution is to set the shutter to 172.8° (1/50th), which is indeed what most DPs do when shooting features in the UK. Shutter intervals whose denominators are whole multiples of the AC frequency, such as 1/100th, will also work.

You may notice that I have barely mentioned exposure so far in this article. This is because, unlike stills photographers, DPs rarely use the shutter as a means of adjusting exposure. An exception is that we may increase the shutter angle when the daylight is fading, to grab an extra shot. By doubling the shutter angle from 172.8° to 345.6° we double the light admitted, i.e. we gain one stop. As long as there isn’t any fast movement, the extra motion blur is likely to go unnoticed by the audience.

One of the hallmarks of amateur cinematography is that sunny scenes have no motion blur, due to the operator (or the camera’s auto mode) decreasing the shutter interval to avoid over-exposure. It is preferable to use ND filters to cut light on bright days, as covered in part two of this series.

For the best results, the 180° (or thereabouts) shutter angle should be retained when shooting slow motion as well. If your camera displays intervals rather than angles, ideally your interval denominator should be double the frame rate. So if you want to shoot at 50fps, set the shutter interval to 1/100th. For 100fps, set the shutter to 1/200th, and so on.

If you do need to change the shutter angle for creative or technical reasons, you will usually want to compensate with the aperture. If you halve the time the shutter is open for, you must double the area of the aperture to maintain the same exposure, and vice versa. For example, if your iris was set to T4 and you change the shutter from 180° to 90° you will need to stop up to T2.8. (Refer back to my article on aperture if you need to refresh your memory about T-stops.)
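Because T-numbers scale with the square root of the light ratio, the compensation can be sketched like so (a hypothetical helper, assuming ideal lenses):

```python
import math

def compensated_t_stop(t_stop, old_angle, new_angle):
    # A smaller angle admits less light, so the T-number must drop
    # (the aperture opens). Halving the angle = one stop = divide by sqrt(2).
    return t_stop * math.sqrt(new_angle / old_angle)

print(round(compensated_t_stop(4, 180, 90), 1))     # 2.8
print(round(compensated_t_stop(2.8, 180, 360), 1))  # 4.0
```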

In the final part of this series we’ll get to grips with ISO.

Learn more about exposure in my online course, Cinematic Lighting. Until this Thursday (19/11/20) you can get it for the special price of £15.99 by using the voucher code INSTA90.


Exposure Part 2: Neutral Density (ND) Filters

In the first part of this series, I explained the concepts of f-stops and T-stops, and looked at how aperture can be used to control exposure. We saw that changing the aperture causes side effects, most noticeably altering the depth of field.

How can we set the correct exposure without compromising our depth of field? Well, as we’ll see later in this series, we can adjust the shutter angle and/or ISO, but both of those have their own side effects. More commonly a DP will use neutral density (ND) filters to control the amount of light reaching the lens. These filters get their name from the fact that they block all wavelengths of light equally, so they darken the image without affecting the colour.


When to use an ND Filter

Let’s look at an example. Imagine that I want to shoot at T4; this aperture gives a nice depth of field, on the shallow side but not excessively so. My subject is very close to a bright window and my incident light meter is giving me a reading of f/11. (Although I’m aiming for a T-stop rather than an f-stop, I can still use the f-number my meter gives me; in fact if my lens were marked in f-stops then my exposure would be slightly off, because the meter does not know the transmission efficiency of my lens.) Let’s remind ourselves of the f-stop/T-stop series before we go any further:

1      1.4      2      2.8      4      5.6      8      11      16      22     32

By looking at this series, which can be found printed on any lens barrel or permanently displayed on a light meter’s screen, I can see that f/11 (or T11) is three stops down from f/4 (or T4) – because 11 is three numbers to the right of 4 in the series. To achieve correct exposure at T4 I’ll need to cut three stops of light. I can often be seen on set counting the stops like this on my light meter or on my fingers. It is of course possible to work it out mathematically or with an app, but that’s not usually necessary. You quickly memorise the series of stops with practice.
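For the mathematically inclined, the count can be computed rather than memorised: light gathered goes with the square of the f-number, so the difference in stops is twice the base-2 logarithm of the ratio. A quick Python check (rounded, because marked stops like 11 and 22 are nominal roundings of powers of √2):

```python
import math

def stops_between(f_low, f_high):
    # Light is proportional to 1/f², so stops = 2 x log2(f_high / f_low).
    return round(2 * math.log2(f_high / f_low))

print(stops_between(4, 11))    # 3
print(stops_between(5.6, 8))   # 1
print(stops_between(2.8, 22))  # 6
```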


What Strength of Filter to Choose

Some ND filters are marked in stops, so I could simply select a 3-stop ND and slide it into my matte box or screw it onto my lens. Others – the built-in ND filters on the Sony FS7, for example – are defined by the fraction of light they let through. So the FS7’s 1/4 ND cuts two stops; the first stop halves the light – as we saw in part one of this series – and the second stop halves it again, leaving us a quarter of the original amount. The 1/16 setting cuts four stops.
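In code, the stop count of a fraction-labelled filter is simply the base-2 logarithm of the divisor:

```python
import math

def fraction_nd_stops(divisor):
    # A 1/divisor ND cuts log2(divisor) stops: 1/4 -> 2, 1/16 -> 4.
    return math.log2(divisor)

print(fraction_nd_stops(4))   # 2.0
print(fraction_nd_stops(16))  # 4.0
```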

However, most commonly, ND filters are labelled in optical density. A popular range of ND filters amongst professional cinematographers are those made by Tiffen, and a typical set might be labelled as follows:

.3      .6      .9      1.2

That’s the optical density, a property defined as the base-10 logarithm of the ratio of the quantity of light entering the filter to the quantity of light exiting it on the other side. A .3 ND reduces the light by half because 10 raised to the power of -0.3 is about 0.5, and reducing light by half, as we’ve previously established, means dropping one stop.

If that maths is a bit much for you, don’t worry. All you really need to do is multiply the number of stops you want to cut by 0.3 to find the filter you need. So, going back to my example with the bright window, to get from T11 to T4, i.e. to cut three stops, I’ll pick the .9 ND.
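Both directions of the conversion are easy to sketch in Python (the 0.3 factor is really log10(2) ≈ 0.301, which is why the labelled densities are slight roundings):

```python
import math

def stops_to_density(stops):
    # Each stop halves the light; density is the base-10 log of
    # that attenuation, so one stop = log10(2), roughly 0.3.
    return stops * math.log10(2)

def transmission(density):
    # Fraction of light passed by an ND of the given density.
    return 10 ** -density

print(round(stops_to_density(3), 2))  # 0.9 -> pick the .9 ND
print(round(transmission(0.3), 2))    # 0.5 -> half the light
```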

It’s far from intuitive at first, but once you get your head around it, and memorise the f-stops, it’s not too difficult. Trust me!

Here are a couple more examples:

  • Light meter reads f/8 and you want to shoot at T5.6. That’s a one stop difference. (5.6 and 8 are right next to each other in the stop series, as you’ll see if you scroll back to the top.) 1 x 0.3 = 0.3 so you should use the .3 ND.
  • Light meter reads f/22 and you want to shoot at T2.8. That’s a six stop difference (scroll back up and count them), and 6 x 0.3 = 1.8, so you need a 1.8 ND filter. If you don’t have one, you need to stack two NDs in your matte box that add up to 1.8, e.g. a 1.2 and a .6.


Variations on a Theme

Variable ND filters are also available. These consist of two polarising filters which can be rotated against each other to progressively lighten or darken the image. They’re great for shooting guerilla-style with a small crew. You can set your iris where you want it for depth of field, then expose the image by eye simply by turning the filter. On the down side, they’re hard to use with a light meter because there is often little correspondence between the markings on the filter and stops. They can also have a subtle adverse effect on skin tones, draining a person’s apparent vitality, as some of the light which reflects off human skin is polarised.

IR pollution increases with successively stronger ND filters (left to right) used on a Blackmagic Micro Cinema Camera. The blue dyes in this costume evidently reflect a large amount of IR.

Another issue to look out for with ND filters is infra-red (IR). Some filters cut only the visible wavelengths of light, allowing IR to pass through. Some digital sensors will interpret this IR as visible red, resulting in an image with a red colour cast which can be hard to grade out because different materials will be affected to different degrees. Special IR ND filters are available to eliminate this problem.

These caveats aside, ND filters are the best way to adjust exposure (downwards at least) without affecting the image in any other way.

In the next part of this series I’ll look at shutter angles, what they mean, how they affect exposure and what the side effects are.

Learn how to use ND filters practically with my Cinematic Lighting online course. Enter voucher code INSTA90 for an amazing 90% off.
