Last week saw the UK premiere of The Knowledge, an art installation film, at the FLUX Exhibition hosted by Chelsea College of Arts. Conceived by award-winning, multi-disciplinary artist Ian Wolter, The Knowledge comments on the topical issue of artificial intelligence threatening jobs. It takes the form of a fake game show, pitting a team of traditional London cabbies (schooled in the titular Knowledge) against a team of smartphone-guided minicab drivers. Although shot entirely on stage, the film’s central conceit is that the teams are each guiding a driver across London, to see whether technology or human experience will bring its car to the finish line first.
You can see a couple of brief promos on Vimeo here. It’s a unique project, and one that I knew would be an interesting challenge as soon as I heard of it from my friend Amanda Stekly, producer and production designer. This week and next I’ll describe the creative and technical decisions that went into photographing the piece, beginning this week with the camera side of things.
I had never shot a multi-camera studio production like this before, so my first move was to sit down with my regular 1st AC and steadicam operator Rupert Peddle, and his friend Jack D’Souza-Toulson. Jack has extensive experience operating as part of a multi-camera team for live TV and events. This conversation answered such basic questions as, could the operators each pull their own focus? (yes) and allowed me to form the beginnings of a plan for crew and kit.
Ian and Amanda wanted the film to have a dated look, and referenced such eighties quiz shows as 3-2-1 and Blankety Blank. Director Jonnie Howard and I knew that we had to supply the finished film in HD, which ruled out shooting on vintage analogue video cameras. Interlaced recording was rejected for similar reasons, though if memory serves, I did end up shooting at a shutter angle of 360 degrees to produce a more fluid motion suggestive of interlaced material.
I was very keen that the images should NOT look cinematic. Jonnie was able to supply two Canon C100s – which I’ve always thought have a sharp, “video-ish” look – and L-series glass. I set these to 1600 ISO to give us the biggest possible depth of field. For the remaining two cameras, I chose ENG models, a Canon XF-300 (owned by Rupert) and XF-305. In an ideal world, all four cameras would have been ENG models, to ensure huge depth of field and an overall TV look, but some compromise was necessary for budget reasons, and at least they all used Canon sensors. We hired a rack of four matching 9″ monitors so we could ensure a consistent look on set.
One Canon C100, with an L-series zoom, was mounted on a pedestal and outfitted with Rupert’s follow focus system, allowing Jack to pull focus from the panning handle. The other C100 would shoot a locked-off wide, and was the first camera to be set up. A 14mm Samyang lens made the set look huge, and I placed it low down to emphasise the map in the foreground, and to make it easy for the other cameras to shoot over it. Once that frame was set, we taped a large V shape on the floor to indicate the edges of the wide shot. As long as the lights and other cameras stayed out of that area, they would be safe.
Generally Jack’s pedestal-mounted C100 followed the host, Robert Jezek, or captured the interesting moving shots, while Rupert and the third operator, Jimmy Buchanan, cross-shot the two teams on the XF-300 and XF-305. No filtration was used, except for a four-point star filter on one camera when glitter cannons were fired at the end of the game. This cheesiness was inspired by the 3-2-1 clips I watched for research, in which star filters were used for the tacky sequences showing the prizes on offer.
Next week I’ll discuss lighting the show. Meanwhile, find out more about Ian’s work at ianwolter.com.
Last week I looked at the science of colour: what it is, how our eyes see it, and how cameras see and process it. Now I’m going to look at colour theory – that is, schemes of mixing colours to produce aesthetically pleasing results.
The colour wheel
The first colour wheel was drawn by Sir Isaac Newton in 1704, and it’s a precursor of the CIE diagram we met last week. It’s a method of arranging hues so that useful relationships between them – like primaries and secondaries, and the schemes we’ll cover below – can be understood. As we know from last week, colour is in reality a linear spectrum which we humans perceive by deducing it from the amounts of light triggering our red, green and blue cones, but certain quirks of our visual system make a wheel in many ways a more useful arrangement of the colours than a linear spectrum.
One of these quirks is that our long (red) cones, although having peak sensitivity to red light, have a smaller peak in sensitivity at the opposite (violet) end of the spectrum. This may be what causes our perception of colour to “wrap around”.
Another quirk is in the way that colour information is encoded in the retina before being piped along the optic nerve to the brain. Rather than producing red, green and blue signals, the retina compares the levels of red to green, and of blue to yellow (the sum of red and green cones), and sends these colour opponency channels along with a luminance channel to the brain.
You can test these opposites yourself by staring at a solid block of one of the colours for around 30 seconds and then looking at something white. The white will initially take on the opposing colour, so if you stared at red then you will see green.
The 19th-century physiologist Ewald Hering was the first to theorise about this colour opponency, and he designed his own colour wheel to match it, having red/green on the vertical axis and blue/yellow on the horizontal.
Today we are more familiar with the RGB colour wheel, which spaces red, green and blue equally around the circle. But both wheels – the first dealing with colour perception in the eye-brain system, and the second dealing with colour representation on an RGB screen – are relevant to cinematography.
On both wheels, colours directly opposite each other are considered to cancel each other out. (In RGB they make white when combined.) These pairs are known as complementary colours.
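On the RGB wheel, finding a colour’s complement amounts to rotating its hue 180 degrees. A quick sketch in Python, using the standard colorsys module (the HLS round-trip is just one of several ways to do this):

```python
import colorsys

def complement(r, g, b):
    """Return the complementary colour by rotating the hue 180 degrees
    around the RGB colour wheel. Channel values are in the range 0..1."""
    h, l, s = colorsys.rgb_to_hls(r, g, b)
    return colorsys.hls_to_rgb((h + 0.5) % 1.0, l, s)

# Pure red's complement is cyan -- combine them and they cancel to white.
print(complement(1.0, 0.0, 0.0))  # ≈ (0.0, 1.0, 1.0)
```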
A complementary scheme provides maximum colour contrast, each of the two hues making the other more vibrant. Take “The Snail” by modernist French artist Henri Matisse, which you can currently see at the Tate Modern; Matisse placed complementary colours next to each other to make them all pop.
In cinematography, a single pair of complementary colours is often used, for example the yellows and blues of Aliens’ power loader scene:
Or this scene from Life on Mars which I covered on my YouTube show Lighting I Like:
I frequently use a blue/orange colour scheme, because it’s the natural result of mixing tungsten with cool daylight or “moonlight”.
And then of course there’s the orange-and-teal grading so common in Hollywood:
Amélie uses a less common complementary pairing of red and green:
An analogous colour scheme uses hues adjacent to each other on the wheel. It lacks the punch and vibrancy of a complementary scheme, instead having a harmonious, unifying effect. In the examples below it seems to enhance the single-mindedness of the characters. Sometimes filmmakers push analogous colours to the extreme of using literally just one hue, at which point it is technically monochrome.
There are other colour schemes, such as triadic, but complementary and analogous colours are by far the most common in cinematography. In a future post I’ll look at the psychological effects of individual colours and how they can be used to enhance the themes and emotions of a film.
Colour is a powerful thing. It can identify a brand, imply eco-friendliness, gender a toy, raise our blood pressure, calm us down. But what exactly is colour? How and why do we see it? And how do cameras record it? Let’s find out.
The Meaning of “Light”
One of the many weird and wonderful phenomena of our universe is the electromagnetic wave, an electric and magnetic oscillation which travels at 186,000 miles per second. Like all waves, EM radiation has the inversely-proportional properties of wavelength and frequency, and we humans have devised different names for it based on these properties.
EM waves with a low frequency and therefore a long wavelength are known as radio waves or, slightly higher in frequency, microwaves; we use them to broadcast information and heat ready-meals. EM waves with a high frequency and a short wavelength are known as x-rays and gamma rays; we use them to see inside people and treat cancer.
In the middle of the electromagnetic spectrum, sandwiched between infrared and ultraviolet, is a range of frequencies between 430 and 750 terahertz (wavelengths 400-700 nanometres). We call these frequencies “light”, and they are the frequencies which the receptors in our eyes can detect.
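The two figures are linked by the wave equation c = λf, so you can convert one to the other. A quick Python sketch:

```python
# c = wavelength x frequency: the two properties are inversely proportional.
C = 299_792_458  # speed of light in metres per second

def wavelength_to_thz(nm):
    """Convert a wavelength in nanometres to a frequency in terahertz."""
    return C / (nm * 1e-9) / 1e12

# The visible band quoted above:
print(round(wavelength_to_thz(700)))  # red end → 428 (THz)
print(round(wavelength_to_thz(400)))  # violet end → 749 (THz)
```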
If your retinae were instead sensitive to electromagnetic radiation of between 88 and 91 megahertz, you would be able to see BBC Radio 2. I’m not talking about magically seeing into Ken Bruce’s studio, but perceiving the FM radio waves which are encoded with his silky-smooth Scottish brogue. Since radio waves pass through most solid objects, though, perceiving them would not help you to understand your environment much, whereas light waves are absorbed or reflected by most solid objects, and pass through most non-solid objects, making them perfect for building a picture of the world around you.
Within the range of human vision, we have subdivided and named smaller ranges of frequencies. For example, we describe light of about 590-620nm as “orange”, and below about 450nm as “violet”. This is all colour really is: a small range of wavelengths (or frequencies) of electromagnetic radiation, or a combination of them.
In the eye of the beholder
The inside rear surfaces of your eyeballs are coated with light-sensitive cells called rods and cones, named for their shapes.
The human eye has about five or six million cones. They come in three types: short, medium and long, referring to the wavelengths to which they are sensitive. Short cones have peak sensitivity at about 420nm, medium at 530nm and long at 560nm, roughly what we call blue, green and red respectively. The ratios of the three cone types vary from person to person, but short (blue) ones are always in the minority.
Rods are far more numerous – about 90 million per eye – and around a hundred times more sensitive than cones. (You can think of your eyes as having dual native ISOs like a Panasonic Varicam, with your rods having an ISO six or seven stops faster than your cones.) The trade-off is that they are less temporally and spatially accurate than cones, making it harder to see detail and fast movement with rods. However, rods only really come into play in dark conditions. Because there is just one type of rod, we cannot distinguish colours in low light, and because rods are most sensitive to wavelengths of 500nm, cyan shades appear brightest. That’s why cinematographers have been painting night scenes with everything from steel grey to candy blue light since the advent of colour film.
The three types of cone are what allow us – in well-lit conditions – to have colour vision. This trichromatic vision is not universal, however. Many animals have tetrachromatic (four channel) vision, and research has discovered some rare humans with it too. On the other hand, some animals, and “colour-blind” humans, are dichromats, having only two types of cone in their retinae. But in most people, perceptions of colour result from combinations of red, green and blue. A combination of red and blue light, for example, appears as magenta. All three of the primaries together make white.
Compared with the hair cells in the cochlea of your ears, which are capable of sensing a continuous spectrum of audio frequencies, trichromacy is quite a crude system, and it can be fooled. If your red and green cones are triggered equally, for example, you have no way of telling whether you are seeing a combination of red and green light, or pure yellow light, which falls between red and green in the spectrum. Both will appear yellow to you, but only one really is. That’s like being unable to hear the difference between, say, the note D and a combination of the notes C and E. (For more info on these colour metamers and how they can cause problems with certain types of lighting, check out Phil Rhodes’ excellent article on Red Shark News.)
Mimicking your eyes, video sensors also use a trichromatic system. This is convenient because it means that although a camera and TV can’t record or display yellow, for example, they can produce a mix of red and green which, as we’ve just established, is indistinguishable from yellow to the human eye.
Rather than using three different types of receptor, each sensitive to different frequencies of light, electronic sensors all rely on separating different wavelengths of light before they hit the receptors. The most common method is a colour filter array (CFA) placed immediately over the photosites, and the most common type of CFA is the Bayer filter, patented in 1976 by an Eastman Kodak employee named Dr Bryce Bayer.
The Bayer filter is a colour mosaic which allows only green light through to 50% of the photosites, only red light through to 25%, and only blue to the remaining 25%. The logic is that green is the colour your eyes are most sensitive to overall, and that your vision is much more dependent on luminance than chrominance.
The resulting image must be debayered (or more generally, demosaiced) by an algorithm to produce a viewable image. If you’re recording log or linear then this happens in-camera, whereas if you’re shooting RAW it must be done in post.
This system has implications for resolution. Let’s say your sensor is 2880×1620. You might think that’s the number of pixels, but strictly speaking it isn’t. It’s the number of photosites, and due to the Bayer filter no single one of those photosites has more than a third of the colour information needed to form a pixel of the final image. Calculating that final image – by debayering the RAW data – reduces the real resolution of the image by 20-33%. That’s why cameras like the Arri Alexa and the Blackmagic Cinema Camera shoot at 2.8K or 2.5K: once debayered, you’re left with an image of 2K (cinema standard) resolution.
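As a rough illustration of what the Bayer filter throws away, here’s a Python/NumPy sketch that simulates a GRBG-style mosaic – the exact layout varies between sensors, so treat the pattern as illustrative:

```python
import numpy as np

def bayer_mosaic(img):
    """Simulate a GRBG Bayer filter: each photosite keeps only one of the
    three colour channels and discards the other two. img is an (H, W, 3)
    array with H and W even; the result is a single-channel (H, W) array."""
    mosaic = np.zeros(img.shape[:2])
    mosaic[0::2, 0::2] = img[0::2, 0::2, 1]  # green on even rows, even cols
    mosaic[0::2, 1::2] = img[0::2, 1::2, 0]  # red on even rows, odd cols
    mosaic[1::2, 0::2] = img[1::2, 0::2, 2]  # blue on odd rows, even cols
    mosaic[1::2, 1::2] = img[1::2, 1::2, 1]  # green on odd rows, odd cols
    return mosaic
```

Half the photosites see green and a quarter each see red and blue; a demosaicing algorithm then has to interpolate the two missing channels at every site, which is where the loss of real resolution comes from.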
Your optic nerve can only transmit about one percent of the information captured by the retina, so a huge amount of data compression is carried out within the eye. Similarly, video data from an electronic sensor is usually compressed, be it within the camera or afterwards. Luminance information is often prioritised over chrominance during compression.
You have probably come across chroma subsampling expressed as, for example, 444 or 422, as in ProRes 4444 (the final 4 being transparency information, only relevant to files generated in postproduction) and ProRes 422. The three digits describe the ratios of colour and luminance information: a file with 444 chroma subsampling has no colour compression; a 422 file retains colour information only in every second pixel; a 420 file, such as those on a DVD or Blu-ray, contains one sample of blue-difference and one of red-difference information (the green being derived from those two and the luminance) for every four pixels of luma.
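Here’s an illustrative NumPy sketch of the 420 case, averaging each colour-difference channel over 2×2 blocks. Real codecs use more sophisticated filtering; this just shows the ratio of samples:

```python
import numpy as np

def subsample_420(y, cb, cr):
    """Illustrative 4:2:0 subsampling: luma (y) is kept at full resolution,
    while each colour-difference channel is averaged over 2x2 blocks,
    leaving one Cb and one Cr sample for every four luma samples.
    All inputs are (H, W) arrays with H and W even."""
    h, w = cb.shape
    cb_sub = cb.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    cr_sub = cr.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    return y, cb_sub, cr_sub
```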
Whether every pixel, or only a fraction of them, has colour information, the precision of that colour info can vary. This is known as bit depth or colour depth. The more bits allocated to describing the colour of each pixel (or group of pixels), the more precise the colours of the image will be. DSLRs typically record video in 24-bit colour, more commonly described as 8bpc or 8 bits per (colour) channel. Images of this bit depth fall apart pretty quickly when you try to grade them. Professional cinema cameras record 10 or 12 bits per channel, which is much more flexible in postproduction.
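The difference in precision is easy to quantify: each extra bit doubles the number of code values available per channel, which is why 10- and 12-bit material holds up so much better under a heavy grade.

```python
def code_values(bits_per_channel):
    """Number of distinct levels each colour channel can record."""
    return 2 ** bits_per_channel

# 8 bits per channel is "24-bit colour": 256 levels per channel,
# 256^3 = 16,777,216 colours in total.
print(code_values(8))   # → 256
print(code_values(10))  # → 1024
print(code_values(12))  # → 4096
```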
The third attribute of recorded colour is gamut, the breadth of the spectrum of colours. You may have seen a CIE (Commission Internationale de l’Éclairage) diagram, which depicts the range of colours perceptible by human vision. Triangles are often superimposed on this diagram to illustrate the gamut (range of colours) that can be described by various colour spaces. The three colour spaces you are most likely to come across are, in ascending order of gamut size: Rec.709, an old standard that is still used by many monitors; P3, used by digital cinema projectors; and Rec.2020. The latter is the standard for ultra-HD, and Netflix are already requiring that some of their shows be delivered in it, even though monitors capable of displaying Rec.2020 do not yet exist. Most cinema cameras today can record images in Rec.709 (known as “video” mode on Blackmagic cameras) or a proprietary wide gamut (“film” mode on a Blackmagic, or “log” on others) which allows more flexibility in the grading suite. Note that the two modes also alter the recording of luminance and dynamic range.
To summarise as simply as possible: chroma subsampling is the proportion of pixels which have colour information, bit depth is the accuracy of that information and gamut is the limits of that info.
That’s all for today. In future posts I will look at how some of the above science leads to colour theory and how cinematographers can make practical use of it.
Exposing the image correctly is one of the most important parts of a cinematographer’s job. Choosing the T-stop can be a complex technical and creative decision, but fortunately there are many ways we can measure light to inform that decision.
First, let’s remind ourselves of the journey light makes: photons are emitted from a source, they strike a surface which absorbs some and reflects others – creating the impressions of colour and shade; then if the reflected light reaches an eye or camera lens it forms an image. We’ll look at the various ways of measuring light in the order the measurements occur along this light path, which is also roughly the order in which these measurements are typically used by a director of photography.
1. Photometrics data
You can use data supplied by the lamp manufacturer to calculate the exposure it will provide, which is very useful in preproduction when deciding what size of lamps you need to hire. There are apps for this, such as the Arri Photometrics App, which allows you to choose one of their fixtures, specify its spot/flood setting and distance from the subject, and then tells you the resulting light level in lux or foot-candles. An exposure table or exposure calculation app will translate that number into a T-stop at any given ISO and shutter interval.
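The arithmetic behind such an app can be sketched as follows. This assumes the standard incident-meter relation E = C·N²/(t·S), with a calibration constant C of around 250 lux-seconds; real meters and apps vary in the constant they use, so treat the exact numbers as illustrative:

```python
import math

# Incident exposure relation: E = C * N^2 / (t * S), rearranged for N.
# C is the meter calibration constant; ~250 lux-seconds is a common value
# (an assumption here -- real meters vary between roughly 240 and 340).
C = 250

def stop_for(lux, iso, shutter_s):
    """Aperture (f-number) that correctly exposes mid-grey for a given
    illuminance (lux), ISO and shutter interval (seconds)."""
    return math.sqrt(lux * shutter_s * iso / C)

# e.g. a fixture delivering 1600 lux at the subject, ISO 800, 1/50s shutter:
print(round(stop_for(1600, 800, 1 / 50), 1))  # → 10.1
```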
2. Incident meter
Some believe that light meters are unnecessary in today’s digital landscape, but I disagree. Most of the methods listed below require the camera, but the camera may not always be handy – on a location recce, for example. Or during production, it would be inconvenient to interrupt the ACs while they’re rigging the camera onto a crane or Steadicam. This is when having a light meter on your belt becomes very useful.
An incident meter is designed to measure the amount of light reaching the subject. It is recognisable by its white dome, which diffuses and averages the light striking its sensor. Typically it is used to measure the key, fill and backlight levels falling on the talent. Once you have input your ISO and shutter interval, you hold the incident meter next to the actor’s face (or ask them to step aside!) and point it at each source in turn, shading the dome from the other sources with your free hand. You can then decide if you’re happy with the contrast ratios between the sources, and set your lens to the T-stop indicated by the key-light reading, to ensure correct exposure of the subject’s face.
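Since each stop represents a doubling of light, the ratio between two readings converts to stops with a base-2 logarithm. A minimal sketch, with illustrative lux figures:

```python
import math

def ratio_in_stops(key_lux, fill_lux):
    """Key-to-fill contrast expressed in stops
    (each stop is a doubling of the light level)."""
    return math.log2(key_lux / fill_lux)

# A 4:1 key-to-fill ratio is two stops of difference:
print(ratio_in_stops(800, 200))  # → 2.0
```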
3. Spot meter (a.k.a. reflectance meter)
Now we move along the light path and consider light after it has been reflected off the subject. This is what a spot meter measures. It has a viewfinder with which you target the area you want to read, and it is capable of metering things that would be impractical or impossible to measure with an incident meter. If you had a bright hillside in the background of your shot, you would need to drive over to that hill and climb it to measure the incident light; with a spot meter you would simply stand at the camera position and point it in the right direction. A spot meter can also be used to measure light sources themselves: the sky, a practical lamp, a flame and so on.
But there are disadvantages too. If you spot meter a Caucasian face, you will get a stop that results in underexposure, because a Caucasian face reflects quite a lot of light. Conversely, if you spot meter an African face, you will get a stop that results in overexposure, because an African face reflects relatively little light. For this reason a spot meter is most commonly used to check whether areas of the frame other than the subject – a patch of sunlight in the background, for example – will blow out.
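The compensation is again logarithmic: a spot meter assumes its target reflects like mid-grey (about 18%), so the offset from the reading is log2 of the actual reflectance over 0.18. A sketch with illustrative reflectance figures (actual skin reflectances vary widely from person to person):

```python
import math

MID_GREY = 0.18  # the reflectance a spot meter assumes its target has

def exposure_offset(reflectance):
    """Stops to open up (+) or close down (-) relative to a spot reading,
    given the subject's actual reflectance (0..1)."""
    return math.log2(reflectance / MID_GREY)

# A face reflecting ~36% meters one stop over mid-grey, so following
# the raw spot reading would underexpose it by about a stop:
print(round(exposure_offset(0.36), 2))  # → 1.0
```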
Your smartphone can be turned into a spot meter with a suitable app, such as Cine Meter II, though you will need to configure it using a traditional meter and a grey card. With the addition of a Luxiball attachment for your phone’s camera, it can also become an incident meter.
The remaining three methods of judging exposure which I will cover all use the camera’s sensor itself to measure the light. Therefore they take into account any filters you’re using, as well as transmission loss within the lens (which can be an issue when shooting on stills glass, where the marked f-stops don’t factor in transmission loss).
4. Monitors and viewfinders
In the world of digital image capture, it can be argued that the simplest and best way to judge exposure is to just observe the picture on the monitor. The problem is, not all screens are equal. Cheap monitors can misrepresent the image in all kinds of ways, and even a high-end OLED can deceive you, displaying shadows blacker than any cinema or home entertainment system will ever match. There are only really two scenarios in which you can reliably judge exposure from the image itself: if you’ve owned a camera for a while and you’ve become very familiar with how the images in the viewfinder relate to the finished product; or if the monitor has been properly calibrated by a DIT (Digital Imaging Technician) and the screen is shielded from light.
Most cameras and monitors have built-in tools which graphically represent the luminance of the image in a much more accurate way, and we’ll look at those next. Beware that if you’re monitoring a log or RAW image in Rec.709, these tools will usually take their data from the Rec.709 image.
5. Waveforms and histograms
These are graphs which show the prevalence of different tones within the frame. Histograms are the simplest and most common. In a histogram, the horizontal axis represents luminance and the vertical axis shows the number of pixels which have that luminance. It makes it easy to see at a glance whether you’re capturing the greatest possible amount of detail, making best use of the dynamic range. A “properly” exposed image, with a full range of tones, should show an even distribution across the width of the graph, with nothing hitting the two sides, which would indicate clipped shadows and highlights. A night exterior would have a histogram crowded towards the left (darker) side, whereas a bright, low contrast scene would be crowded on the right.
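In code, a histogram is little more than counting pixels at each luminance level. A NumPy sketch over an 8-bit greyscale frame:

```python
import numpy as np

def luma_histogram(image, bins=256):
    """Histogram of an 8-bit greyscale image: for each luminance level,
    how many pixels sit at that level."""
    counts, _ = np.histogram(image, bins=bins, range=(0, 256))
    return counts

# A night exterior crowds the left (dark) side of the graph:
night = np.random.randint(0, 64, size=(1080, 1920))  # values 0..63 only
counts = luma_histogram(night)
print(counts[:64].sum() == night.size)  # every pixel in the darkest quarter
```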
A waveform plots luminance on the vertical axis, with the horizontal axis matching the horizontal position of those luminance values within the frame. The density of the plotting reveals the prevalence of the values. A waveform that was dense in the bottom left, for example, would indicate a lot of dark tones on the lefthand side of frame. Since the vertical (luminance) axis represents IRE (Institute of Radio Engineers) values, waveforms are ideal when you need to expose to a given IRE, for example when calibrating a system by shooting a grey card. Another common example would be a visual effects supervisor requesting that a green screen be lit to 50 IRE.
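A waveform is essentially a histogram computed per column of the frame. An illustrative NumPy sketch (real scopes render the counts as density, but the underlying data is the same):

```python
import numpy as np

def waveform(image, levels=101):
    """Waveform monitor data: for each column of the frame, count how many
    pixels sit at each IRE level (0-100). image is an (H, W) array of
    luminance values in the range 0..1."""
    h, w = image.shape
    ire = np.clip((image * 100).astype(int), 0, 100)
    wf = np.zeros((levels, w), dtype=int)
    for x in range(w):
        wf[:, x] = np.bincount(ire[:, x], minlength=levels)
    return wf
```

A green screen evenly lit to 50 IRE, for instance, would show every column’s counts concentrated in the single row at level 50.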
6. Zebras and false colours
Almost all cameras have zebras, a setting which superimposes diagonal stripes on parts of the image which are over a certain IRE, or within a certain range of IREs. By digging into the menus you can find and adjust what those IRE levels are. Typically zebras are used to flag up highlights which are clipping (theoretically 100 IRE), or close to clipping.
Exposing an image correctly is not just about controlling highlight clipping however, it’s about balancing the whole range of tones – which brings us to false colours. A false colour overlay looks a little like a weather forecaster’s temperature map, with a code of colours assigned to various luminance values. Clipped highlights are typically red, while bright areas still retaining detail (known as the “knee” or “shoulder”) are yellow. Middle grey is often represented by green, while pink indicates the ideal level for Caucasian skin tones (usually around 55 IRE). At the bottom end of the scale, blue represents the “toe” – the darkest area that still has detail – while purple is underexposed. The advantage of zebras and false colours over waveforms and histograms is that the former two show you exactly where the problem areas are in the frame.
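Under the hood, a false-colour overlay is just a lookup from IRE level to colour band. The thresholds below are illustrative, not any real camera’s scheme – manufacturers all draw the bands slightly differently:

```python
def false_colour(ire):
    """Map an IRE value to a false-colour band, loosely following the
    scheme described above. The exact thresholds are illustrative."""
    if ire >= 100: return "red"                 # clipped highlights
    if ire >= 90:  return "yellow"              # near clipping ("knee")
    if 52 <= ire <= 58: return "pink"           # Caucasian skin tones
    if 38 <= ire <= 48: return "green"          # middle grey
    if ire <= 2:   return "purple"              # underexposed
    if ire <= 10:  return "blue"                # darkest detail ("toe")
    return "grey"                               # everything else: no warning

print(false_colour(55))   # → pink
print(false_colour(101))  # → red
```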
I hope this article has given you a useful overview of the tools available for judging exposure. Some DPs have a single tool they rely on at all times, but many will use all of these methods at one time or another to produce an image that balances maximising detail with creative intent. I’ll leave you with a quote from the late, great Douglas Slocombe, BSC who ultimately used none of the above six methods!
I used to use a light meter – I used one for years. Through the years I found that, as schedules got tighter and tighter, I had less and less time to light a set. I found myself not checking the meter until I had finished the set and decided on the proper stop. It would usually say exactly what I thought it should. If it didn’t, I wouldn’t believe it, or I would hold it in such a way as to make it say my stop. After a time I decided this was ridiculous and stopped using it entirely. The “Raiders” pictures were all shot without a meter. I just got used to using my eyes.
This is a book that caught my eye following my recent photography project, Stasis. In that project I made some limited explorations of the relationship between time, space and light, so Motion Studies: Time, Space and Eadweard Muybridge, to give it its full title, seemed like it would be on my current wavelength.
Like me a few weeks ago, you might be vaguely aware of Muybridge as the man who first photographed a trotting horse sharply enough to prove that all four of its legs left the ground simultaneously. You may have heard him called “The Father of Cinema”, because he was the first person to shoot a rapid sequence of images of a moving body, and the first person to reanimate those images on a screen.
Born in Kingston upon Thames in 1830, Muybridge emigrated to San Francisco in the 1850s where, following a stint as a book seller and a near-fatal accident in a runaway carriage, he took up landscape photography. He shot spectacular views of Yosemite National Park and huge panoramas of his adopted city. In 1872 he was commissioned by the railroad tycoon Leland Stanford to photograph his racehorse Occident in motion. This developed into a vast project for Muybridge over the next decade or so, ultimately encompassing over 100,000 photos of humans and other animals in motion.
Much of his early work was accomplished on mammoth wet plates, 2ft wide, that had to be coated with emulsion just before exposure and developed quickly afterwards, necessitating a travelling darkroom tent. To achieve the quick exposures he needed to show the limbs of a trotting horse without motion blur, he had to develop new chemistry and – with John Isaacs – a new electromagnetic shutter. The results were so different from anything that had been photographed before that they were initially met with disbelief in some quarters, particularly amongst painters, who were eventually forced to recognise that they had been incorrectly portraying horses’ legs. Artists still use Muybridge’s motion studies today as references for dynamic anatomy.
To “track” with the animals in motion, Muybridge used a battery of regularly-spaced cameras, each triggered by the feet of the subject pulling on a wire or thread as they passed. Sometimes he would surround a subject with cameras and trigger them all simultaneously, to get multiple angles on the same moment in time. Does that sound familiar? Yes, Muybridge invented Bullet Time over a century before The Matrix.
Muybridge was not the first person to project images in rapid succession to create the illusion of movement, but he was the first person to display photographed (rather than drawn) images in such a way, to deconstruct motion and reassemble it elsewhere like a Star Trek transporter. In 1888 Muybridge met with Thomas Edison and discussed collaborating on a system to combine motion pictures with wax cylinder audio recordings, but nothing came of this idea which was decades ahead of its time. The same year, French inventor Louis Le Prince shot Roundhay Garden Scene, the oldest known film. A few years later, Edison patented his movie camera, and the Lumière brothers screened their world-changing Workers Leaving the Lumière Factory. The age of cinema had begun.
Although Muybridge is the centre of Solnit’s book, there is a huge amount of context. The author’s thesis is that Muybridge represents a turning point, a divider between the world he was born into – a world in which people and information could only travel as fast as they or a horse could walk or run, a world where every town kept its own time, where communities were close-knit and relatively isolated – and the world which innovations like his helped to create – the world of speed, of illusions, of instantaneous global communication, where physical distance is no barrier. Solnit draws a direct line from Muybridge’s dissection of time and Stanford’s dissection of space to the global multimedia village we live in today. Because of all this context, the book feels a little slow to get going, but as the story continues and the threads draw together, the value of it becomes clear, elucidating the meaning and significance of Muybridge’s work.
I can’t claim to have ever been especially interested in history, but I found the book a fascinating lesson on the American West of the late nineteenth century, as well as a thoughtful analysis of the impact photography and cinematography have had on human culture and society. As usual, I’m reviewing this book a little late (it was first published in 2003!), but I heartily recommend checking it out if you’re at all interested in experimental photography or the origins of cinema.
Having lately shot my first roll of black-and-white film in a decade, I thought now would be a good time to delve into the story of monochrome image-making and the various reasons artists have eschewed colour.
I found the recent National Gallery exhibition, Monochrome: Painting in Black and White, a great primer on the history of the unhued image. Beginning with examples from medieval religious art, the exhibition took in grisaille works of the Renaissance before demonstrating the battle between painting and early photography, and finishing with monochrome modern art.
Several of the pictures on display were studies or sketches which were generated in preparation for colour paintings. Ignoring hue allowed the artists to focus on form and composition, and this is still one of black-and-white’s great strengths today: stripping away chroma to heighten other pictorial effects.
What fascinated me most in the exhibition were the medieval religious paintings in the first room. Here, Old Testament scenes in black-and-white were painted around a larger, colour scene from the New Testament; as in the modern TV trope, the flashbacks were in black-and-white. In other pictures, a colour scene was framed by a monochrome rendering of stonework – often incredibly realistic – designed to fool the viewer into thinking they were seeing a painting in an architectural nook.
During cinema’s long transition from black-and-white to colour, filmmakers also used the two modes to define different layers of reality. When colour processes were still in their infancy and very expensive, filmmakers selected particular scenes to pick out in rainbow hues, while the surrounding material remained in black-and-white like the borders of the medieval paintings. By 1939 the borders were shrinking, as The Wizard of Oz portrayed Kansas, the ordinary world, in black-and-white, while rendering Oz – the bulk of the running time – in colour.
Michael Powell, Emeric Pressburger and legendary Technicolor cinematographer Jack Cardiff, OBE, BSC subverted expectations with their 1946 fantasy-romance A Matter of Life and Death, set partly on Earth and partly in heaven. Says Cardiff in his autobiography:
Quite early on I had said casually to Michael Powell, “Of course heaven will be in colour, won’t it?” And Michael replied, “No. Heaven will be in black and white.” He could see I was startled, and grinned: “Because everyone will expect heaven to be in colour, I’m doing it in black-and-white.”
Ironically Cardiff had never shot in black-and-white before, and he ultimately captured the heavenly scenes on three-strip Technicolor, but didn’t have the colour fully developed, resulting in a pearlescent monochrome.
Meanwhile, DPs like John Alton, ASC were pushing greyscale cinematography to its apogee with a genre that would come to be known as film noir. Persecuted Jews like Alton fled the rise of Nazism in Europe for the US, bringing German Expressionism with them. The result was a trend of hardboiled thrillers lit with oppressive contrast, harsh shadows, concealing silhouettes and dramatic angles, all of which were heightened by the lack of distracting colour.
Alton himself had a paradoxical relationship with chroma, famously stating that “black and white are colours”. While he is best known today for his noir, his only Oscar win was for his work on the Technicolor musical An American in Paris, the designers of which hated Alton for the brightly-coloured light he tried to splash over their sets and costumes.
It wasn’t just Alton who was moving to colour. Soon the economics were clear: chromatic cinema was more marketable and no longer prohibitively expensive. The writing was on the wall for black-and-white movies, and by the end of the sixties they were all but gone.
I was brought up in a world of default colour, and the first time I can remember becoming aware of black-and-white was when Schindler’s List was released in 1993. I can clearly recall a friend’s mother refusing to see the film because she felt she wouldn’t be getting her money’s worth if there was no colour. She’s not alone in this view, and that’s why producers are never keen to green-light monochrome movies. Spielberg only got away with it because his name was proven box office gold.
A few years later, Jonathan Frakes and his DP Matthew F. Leonetti, ASC wanted to shoot the holodeck sequence of Star Trek: First Contact in black-and-white, but the studio deemed test footage “too experimental”. For the most part, the same attitude prevails today. Despite being marketed as a “visionary” director ever since Pan’s Labyrinth, Guillermo del Toro’s vision of The Shape of Water as a black-and-white film was rejected by financiers. He only got the multi-Oscar-winning fairytale off the ground by reluctantly agreeing to shoot in colour.
Yet there is reason to be hopeful about black-and-white remaining an option for filmmakers. In 2007 MGM denied Frank Darabont the chance to make The Mist in black-and-white, but they permitted a desaturated version on the DVD. Darabont had this to say:
No, it doesn’t look real. Film itself [is a] heightened recreation of reality. To me, black-and-white takes that one step further. It gives you a view of the world that doesn’t really exist in reality and the only place you can see that representation of the world is in a black-and-white movie.
In 2016, a “black and chrome” version of Mad Max: Fury Road was released on DVD and Blu-Ray, with director George Miller saying:
The best version of “Road Warrior” [“Mad Max 2”] was what we called a “slash dupe,” a cheap, black-and-white version of the movie for the composer. Something about it seemed more authentic and elemental. So I asked Eric Whipp, the [“Fury Road”] colourist, “Can I see some scenes in black-and-white with quite a bit of contrast?” They looked great. So I said to the guys at Warners, “Can we put a black-and-white version on the DVD?”
The following year, Logan director James Mangold’s black-and-white on-set photos proved so popular with the public that he decided to create a monochrome version of the movie. “The western and noir vibes of the film seemed to shine in the form, and there was not a trace of the modern comic hero movie sheen,” he said. Most significantly, the studio approved a limited theatrical release for Logan Noir, presumably seeing the extra dollar-signs of a second release, rather than the reduced dollar-signs of a greyscale picture.
Perhaps the medium of black-and-white imaging has come full circle. During the Renaissance, greyscale images were preparatory sketches, stepping stones to finished products in colour. Today, the work-in-progress slash dupe of Road Warrior and James Mangold’s photographic studies of Logan were also stepping stones to colour products, while at the same time closing the loop by inspiring black-and-white products too.
With the era of budget- and technology-mandated monochrome outside the living memory of many viewers today, I think there is a new willingness to accept black-and-white as an artistic choice. The acclaimed sci-fi anthology series Black Mirror released an episode in greyscale this year, and where Netflix goes, others are bound to follow.
After fourteen nominations, celebrated cinematographer Roger Deakins, CBE, BSC, ASC finally won an Oscar last night, for his work on Denis Villeneuve’s Blade Runner 2049. Villeneuve’s sequel to Ridley Scott’s 1982 sci-fi noir is not a perfect film; its measured, thoughtful pace is not to everyone’s taste, and it has serious issues with women – all of the female characters being highly sexualised, callously slaughtered, or both – but the Best Cinematography Oscar was undoubtedly well deserved. Let’s take a look at the photographic style Deakins employed, and how it plays into the movie’s themes.
Blade Runner 2049 returns to the dystopian metropolis of Ridley Scott’s classic three decades later, introducing us to Ryan Gosling’s K. Like Harrison Ford’s Deckard before him, K is a titular Blade Runner, tasked with locating and “retiring” rogue replicants – artificial, bio-engineered people. He soon makes a discovery which could have huge implications both for himself and the already-strained relationship between humans and replicants. In his quest to uncover the truth, K must track down Deckard for some answers.
Villeneuve’s film meditates on deep questions of identity, creating a world in which you can never be sure who is or isn’t real – or even what truly constitutes being “real”. Deakins reinforces this existential uncertainty by reducing characters and locations to mere forms. Many scenes are shrouded in smog, mist, rain or snow, rendering humans and replicants alike as silhouettes.
K spends his first major scene seated in front of a window, the side-light bouncing off a nearby cabinet the only illumination on his face. Deakins’ greatest strength is his ability to adapt to whatever style each film requires, but if he has a recognisable signature it’s this courage to rely on a single source and let the rest of the frame go black.
Whereas Scott and his DP Jordan Cronenweth portrayed LA mainly at night, ablaze with pinpoints of light, Villeneuve and Deakins introduce it in daylight, but a daylight so dim and smog-ridden that it reveals even less than those night scenes from 1982.
All this is not to say that the film is frustratingly dark, or that audiences will struggle to make out what is going on. Shooting crisply on Arri Alexas with Arri/Zeiss Master Primes, Deakins is a master of ensuring that you see what you need to see.
A number of the film’s sequences are colour-coded, delineating them as separate worlds. The city is mainly fluorescent blues and greens, visually reinforcing the sickly state of society, with the police department – an attempt at justice in an insane world – a neutral white.
The Brutalist headquarters of Jared Leto’s blind entrepreneur Wallace are rendered in gold, as though the corporation attempted a friendly yellow but was corrupted by greed. These scenes also employ rippling reflections from pools of water. Whereas the watery light in the Tyrell HQ of Scott’s Blade Runner was a random last-minute idea by the director, concerned that his scene lacked enough interest and production value, here the light is clearly motivated by architectural water features. Yet it is used symbolically too, and very effectively so, as it underscores one of Blade Runner 2049’s most powerful scenes. At a point in the story where more than one character is calling their memories into question, the ripples playing across the walls are as intangible and illusory as those recollections. “I know what’s real,” Deckard asserts to Wallace, but both the photography and Ford’s performance belie his words.
The most striking use of colour is the sequence in which K first tracks Deckard down, hiding out in a Las Vegas that’s been abandoned since the detonation of a dirty bomb. Inspired by photos of the Australian dust storm of 2009, Deakins bathed this lengthy sequence in soft, orangey-red – almost Martian – light. This permeating warmth, contrasting with the cold artificial light of LA, underlines the personal nature of K’s journey and the theme of birth which is threaded throughout the film.
Deakins has stated in interviews that he made no attempt to emulate Cronenweth’s style of lighting, but nonetheless this sequel feels well-matched to the original in many respects. This has a lot to do with the traditional camerawork, with most scenes covered in beautifully composed static shots, and movement accomplished where necessary with track and dolly.
The visual effects, which bagged the film’s second Oscar, also drew on techniques of the past; the above featurette shows a Canon 1DC tracking through a miniature landscape at 2:29. “Denis and I wanted to do as much as possible in-camera,” Deakins told Variety, “and we insisted when we had the actors, at least, all the foreground and mid-ground would be in-camera.” Giant LED screens were used to get authentic interactive lighting from the advertising holograms on the city streets.
One way in which the lighting of the two Blade Runner movies is undeniably similar is the use of moving light sources to suggest an exciting world continuing off camera. (The infamous lens flares of J.J. Abrams’ Star Trek served the same purpose, illustrating Blade Runner’s powerful influence on the science fiction genre.) But whereas, in the original film, the roving searchlights pierce the locations sporadically and intrusively, the dynamic lights of Blade Runner 2049 continually remodel the actors’ faces. One moment a character is in mysterious backlight, the next in sinister side-light, and the next in revealing front-light – inviting the audience to reassess who these characters are at every turn.
This obfuscation and transience of identity and motivation permeates the whole film, and is its core visual theme. The 1982 Blade Runner was a deliberate melding of sci-fi and film noir, but to me the sequel does not feel like noir at all. Here there is little hard illumination, no binary division of light and dark. Instead there is insidious soft light, caressing the edge of a face here, throwing a silhouette there, painting everyone on a continuous (and continuously shifting) spectrum between reality and artificiality.
Blade Runner 2049 is a much deeper and more subtle film than its predecessor, and Deakins’ cinematography beautifully reflects this.
Stasis is a personal photography project about time and light. You can view all the images here, and in this post I’ll take you through the technical and creative process of making them.
I got into cinematography directly through a love of movies and filmmaking, rather than from a fine art background. To plug this gap, over the past few years I’ve been trying to give myself an education in art by going to galleries, and reading art and photography books. I’ve previously written about how JMW Turner’s work captured my imagination, but another artist whose work stood out to me was Gerrit (a.k.a. Gerard) Dou. Whereas most of the Dutch 17th century masters painted daylight scenes, Dou often portrayed people lit by only a single candle.
At around the same time as I discovered Dou, I researched and wrote a blog post about Barry Lyndon‘s groundbreaking candlelit scenes. This got me fascinated by the idea that you can correctly expose an image without once looking at a light meter or digital monitor, because tables exist giving the appropriate stop, shutter and ISO for any given light level… as measured in foot-candles. (One foot-candle is the amount of light received from a standard candle that is one foot away.)
So when I bought a 35mm SLR (a Pentax P30T) last autumn, my first thought was to recreate some of Dou’s scenes. It would be primarily an exercise in exposure discipline, training me to judge light levels and fall-off without recourse to false colours, histograms or any of the other tools available to a modern DP.
I conducted tests with Kate Madison, who had also agreed to furnish period props and costumes from the large collection which she had built up while making Born of Hope and Ren: The Girl with the Mark. Both the tests and the final images were captured on Fujifilm Superia X-tra 400. Ideally I would have tested multiple stocks, but I must confess that the costs of buying and processing several rolls were off-putting. I’d previously shot some basic latitude tests with Superia, so I had some confidence about what it could and couldn’t do. (It can be over-exposed at least five stops and still look good, but more than a stop under and it falls apart.) I therefore confined myself to experimenting with candle-to-subject distances, exposure times and filtration.
The tests showed that the concept was going to work, and also confirmed that I would need to use an 80B filter to cool the “white balance” of the film from its native daylight to tungsten (3400K). (As far as I can tell, tungsten-balanced stills film is no longer on the market.) Candlelight has a colour temperature of about 1800K, so it still reads as orange through an 80B, but without the filter it’s an ugly red.
Meanwhile, the concept had developed beyond simply recreating Gerrit Dou’s scenes. I decided to add a second character, contrasting the historical man lit only by his candle with a modern girl lit only by her phone. Flames have a hypnotic power, tapping into our ancient attraction to light, and today’s smartphones have a similarly powerful draw.
The candlelight was 1600K warmer than the filtered film, so I used an app called Colour Temp to set my iPhone to 5000K, making it 1600K cooler than the film; the phone would therefore look as blue as the candle looked orange. (Unfortunately my phone died quickly and I had trouble recharging it, so some of the last shots were done with Izzi’s non-white-balanced phone.) To match the respective colours of light, we dressed Ivan in earthy browns and Izzi in blues and greys.
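The symmetry described above is simple arithmetic; as a quick sanity check (the Kelvin figures are the approximations given in the text):

```python
FILM_WB = 3400   # effective white balance of the 80B-filtered film (K)
CANDLE = 1800    # approximate colour temperature of candlelight (K)

offset = FILM_WB - CANDLE      # candle sits 1600 K warmer than the film
phone_wb = FILM_WB + offset    # phone set equally far to the blue side

print(offset, phone_wb)        # 1600 5000
```

(Strictly speaking, perceptually even colour shifts are measured in mireds rather than Kelvin, but equal Kelvin offsets either side of the film’s balance was the logic used here.)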
We shot in St. John’s Church in Duxford, Cambridgeshire, which hasn’t been used as a place of worship since the mid-1800s. Unique markings, paintings and graffiti from the Middle Ages up to the present give it simultaneously a history and a timelessness, making it a perfect match for the clash of eras represented by my two characters. It resonated with the feelings I’d had when I started learning about art and realised the continuity of techniques and aims, stretching from my own cinematography back through all the great artists of the past to the earliest cave paintings.
I knew from the tests that long exposures would be needed. Extrapolating from the exposure table, one foot-candle would require a 1/8th of a second shutter with my f1.4 lens wide open and the Fujifilm’s ISO of 400. The 80B has a filter factor of three, meaning you need three times more light, or, to put it another way, it cuts 1 and 2/3rds of a stop. Accounting for this, and the fact that the candle would often be more than a foot away, or that I’d want to see further into the shadows, the exposures were all at least a second long.
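To make the arithmetic concrete, here is a minimal sketch in Python. The baseline (one foot-candle needing a 1/8 s shutter at f/1.4 and ISO 400) and the filter factor of 3 are the figures quoted above; the inverse square law accounts for candle-to-subject distance. It’s a back-of-envelope model under those assumptions, not a substitute for a light meter.

```python
import math

# Baseline from the exposure table: 1 foot-candle needs a 1/8 s
# shutter at f/1.4 with ISO 400 film.
BASE_TIME = 1 / 8   # seconds
BASE_FC = 1.0       # foot-candles

def candle_exposure(distance_ft, filter_factor=3.0):
    """Estimated shutter time (s) for a standard candle at distance_ft feet."""
    footcandles = BASE_FC / distance_ft ** 2    # inverse square fall-off
    return BASE_TIME * filter_factor / footcandles

# One candle, two feet away, through the 80B (factor 3):
print(candle_exposure(2.0))    # 1.5 seconds
print(round(math.log2(3), 2))  # 1.58 stops cut by the filter, i.e. ~1 2/3
```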
As time had become very much the theme of the project, I decided to make the most of these long exposures by playing with motion blur. Not only does this allow a static image – paradoxically – to show a passage of time, but it recalls 19th century photography, when faces would often blur during the long exposures required by early emulsions. Thus the history of photography itself now played a part in this time-fluid project.
I decided to shoot everything in portrait, to make it as different as possible from my cinematography work. Heavily inspired by all the classical art I’d been discovering, I used eye-level framing, often flat-on and framed architecturally with generous headroom, and a normal lens (an Asahi SMC Pentax-M 50mm/f1.4) to provide a natural field of view.
I ended up using my light meter quite a lot, though not necessarily exposing as it indicated. It was all educated guesswork, based on what the meter said and the tests I’d conducted.
I was tempted more than once to tell a definite story with the images, and had to remind myself that I was not making a movie. In the end I opted for a very vague story which can be interpreted many ways. Which of the two characters is the ghost? Or is it both of them? Are we all just ghosts, as transient as motion blur? Do we unwittingly leave an intangible imprint on the universe, like the trails of light my characters produce, or must we consciously carve our mark upon the world, as Ivan does on the wall?
Models: Izzi Godley & Ivan Moy. Stylist: Kate Madison. Assistant: Ash Maharaj. Location courtesy of the Churches Conservation Trust. Film processing and scanning by Aperture, London.
I used to do a lot of editing work alongside DPing, and although those days are now behind me, their influence lives on. Every day that I work as a cinematographer, I use some of the knowledge I gained while slaving over a multi-coloured keyboard. Here are some of the most important things I learnt from editing.
1. Performance always wins.
The editor will always use the take with the best performance. What this means for the DP is that there is really no point requesting another take because of a missed focus pull, bumpy dolly move or dodgy pan, because inevitably the performance will not be as spontaneous and engaging as it was when you cocked up the camerawork, so the editor will use the first take.
Of course you need to make the director aware of any significant technical issues, and if they want to do another take, that’s absolutely their prerogative. But the editor will still use the first take. So get it right on the first take, even if that means pushing for another rehearsal.
2. Your darlings will die.
You know all your favourite shots? All the ones you’ve been mentally ear-marking for your showreel? The beautifully-lit wides, the fancy camera moves, that cool scene with the really interesting set? Yeah, half of those won’t make the final cut.
That wide shot is used for a single second before they cut into the meaty mid-shots. The camera move slowed the scene down too much so they chopped it up. That scene with the cool set looked great but didn’t advance the plot.
Two things to learn from this: 1. Do a great job, but don’t be a perfectionist, because you might be wasting everyone’s time on something that is destined for the cutting room floor. 2. If you want that shot for your showreel, grab it from the DIT, otherwise you might never see it again.
3. Bring ’em in, let ’em leave.
I can’t count the number of times, when shooting a close-up, I’ve advised the director to run the whole scene. They just want to pick up a few lines, but I convince them to let the talent walk in at the start and walk out at the end. That way the editor has much more flexibility on when to cut, a flexibility I know I appreciated when I was the one wrangling the timeline.
Any angle you shoot, push to cover the entire scene from it. In most cases it takes only slightly more time, and it’s easier for the actors because they get to do the whole emotional arc. And the editor will have many more options.
4. Spot the Missing Shot.
The ability to edit in your head is incredibly useful on set. If you can mentally assemble the coverage you’ve just shot, you can quickly identify anything that’s missing. Years of editing trained me to do this, and it’s saved annoying pick-ups several times. Officially this is the script supervisor’s job, but smaller productions may not always have someone in this capacity, and even when they do, another person keeping track can’t hurt.
5. Respect the slate.
On smaller productions, the clapperboard is often treated as an inconvenience. People sometimes talk over it – directors giving last-minute instructions, or actors finishing their showbiz anecdotes – rendering the audio announcement unintelligible. On no- or micro-budget productions there might not be a 2nd AC, so the board gets passed to whoever’s handy at the time, who has no idea what the current slate and take numbers are, and the whole thing becomes a meaningless farce.
Which is fine for everyone except the poor bastard in the edit suite who’s got to figure out which audio clip goes with which video clip. It can add hours of extra work for them. I’ve been there, and it ain’t pretty. So, for the sanity of the (assistant) editor, please respect the slate.
Recently, having put it off for as long as possible, I upgraded to macOS High Sierra, the first version of the operating system not to support Final Cut Pro 7. It was a watershed moment for me. Editing used to comprise at least half of my work, and Final Cut had been there throughout my entire editing career.
I first heard of Final Cut in early 2000, when it was still on version one. The Rural Media Company in Hereford, which was my main client at the start of my freelance career, had purchased a copy to go with their shiny Mac G3. The problem was, no-one at the company knew how to use it.
Meanwhile, I was lobbying to get some time in the Avid edit suite (a much hallowed and expensive room) to cut behind-the-scenes footage from Integr8, a film course I’d taken part in the previous summer. The course and its funding were long finished, but since so much BTS footage had been shot, I felt it was a shame not to do something with it.
Being 19 and commensurately inexperienced, I was denied time on the Avid. Instead, the head of production suggested I used the G3 which was sitting idle and misunderstood in one of the offices. Disappointed but rising to the challenge, I borrowed the manual for Final Cut Pro, took it home and read it cover to cover. Then I came back in and set to work cutting the Integr8 footage.
Editing in 2000 was undergoing a huge (excuse the pun) transition. In the back of the equipment storeroom, Rural Media still had a tape-to-tape editing system, but it had already fallen almost completely out of use. Editing had gone non-linear.
In a room next to the kitchen was the Optima suite. This was a computer (I forget what type) fitted with a low resolution analogue video capture card and an off-line editing app called Optima. In this suite you would craft your programme from the low-rez clips, exporting an EDL (Edit Decision List) onto a floppy disc when you were done. This you took into the Avid suite to be on-lined – recapturing just the clips that were needed in full, glorious, standard definition. You could make a few fine adjustments and do a bit of grading before outputting the finished product back to tape.
It wasn’t practical to do the whole edit on the Avid because (a) hard drives big enough to store all the media for a film at full rez weren’t really available at that time, and (b) the Avid system was hellishly expensive and therefore time on it was charged at a premium rate.
As I edited the Integr8 BTS on Final Cut Pro, I believed I was using an off-line system similar to the Optima. The images displayed in the Viewer and Canvas were certainly blocky and posterised. But when I recorded the finished edit back to tape, I couldn’t quite believe what I was seeing. Peering through the viewfinder of the Mini-DV camera which I was using as a recording deck, I was astonished to see the programme playing at the exact same quality it had been shot at. This little G3 and the relatively affordable app on it were a complete, professional quality editing system.
I looked across the office to the sign on the Avid suite’s door. It might as well have read: “DINOSAUR”.
Within a few months I had invested in my own Mac – a G4, no less – and was using FCP regularly. The next year I used it to cut my first feature, The Beacon, and three more feature-length projects followed in the years after that, along with countless shorts and corporates. Using FCP became second nature to me, with the keyboard shortcuts hard-wired into my reflexes.
And it wasn’t just me. Final Cut became ubiquitous in the no-/low-budget sector. Did it have its flaws? Definitely. It crashed more often than Richard Hammond. I can think of no other piece of software I’ve screamed so much at (with the exception of a horrific early desktop publishing app which I masochistically used to create some Media Studies GCSE coursework).
And of course Apple shat all over themselves in 2011 when they released the much-reviled Final Cut Pro X, causing many loyal users to jump ship. I stayed well away from the abomination, sticking with the old FCP 7 until I officially quit editing in 2014, and continuing to use it for personal projects long after that.
So it was quite a big deal for me to finally let it go. I’ve got DaVinci Resolve installed now, for the odd occasion when I need to recut my showreel. It’s not the same though.
Timelines aren’t my world any more, light is, but whenever I look back on my years as an editor, Final Cut Pro’s brushed-aluminium interface will always materialise in my mind’s eye.