“A Cliché for the End of the World”

Photo: Catherine Ashenden

In August 2019 Jonnie Howard, director of The Knowledge, approached me about shooting an unusual short film with him. A Cliché for the End of the World is only two minutes long, but Jonnie wanted to shoot it as two unbroken takes which would be presented side by side. Each take would follow one character, starting and ending with them next to each other, but separating in the middle.

My first thought was that the two takes would have to be shot concurrently, but to squeeze two cameras into the small location and keep each out of the other’s frame would have been impossible. Instead, we settled on shooting with a single camera. After capturing 18 takes of the first side, Jonnie reviewed the footage with his editor Kat and selected one to use. We then shot the other side, with Kat calling out cues that would keep the actors in sync with the selected “master” take. (It took 18 takes to get this side in the can as well, partly because of getting the cues right and partly because of the difficulties Steadicam op Luke Oliver had in manoeuvring up the narrow staircase.)

The film had to be lit in a way that worked for both sides, with the camera starting in the living room looking towards the kitchen, moving up the stairs, through the landing and into the bedroom.

The HMI skips off the floor (left); Jeremy creates the dynamic look of TV light (right)

Working as usual to the general principle of lighting from the back, I set up a 2.5K HMI outside the kitchen window to punch a shaft of sunlight into the room. I angled this steeply so that it would not reach the actors directly, but instead bounce off the floor and light them indirectly. (See my article on lighting through windows.)

Gaffer Jeremy Dawson blacked out the living room windows to keep the foreground dark. He used an LED panel set to 6,600K (versus our camera’s white balance of 5,600K) to simulate an off-screen TV, waving a piece of black wrap in front of it to create dynamics.

The HMI outside (left); the diffused Dedo in the loft (right)

Next we needed to bring up the light levels for the actors' journey up the stairs, which were naturally darker. Jeremy and spark Gareth Neal opened the loft hatch on the landing and rigged an LED Dedo inside, aimed at the darkest part of the staircase. They diffused it with what I believe was a net curtain.

To brighten the landing we set up a diffused 2×4 Kino Flo in the spare room and partially closed the door to give the light some shape. Both this and the loft Dedo were a couple of stops under key so as not to look too artificial.

Luke Oliver balances Jonnie’s C200 on his Steadicam rig.

All that remained was the bedroom. The characters were to end up sitting on the bed facing the window. Originally the camera in both takes was to finish facing them, with the window behind it, but this would have meant shadowing the actors, not to mention that the space between the bed and the window was very limited. After some discussion between me, Jonnie, Luke, the cast, and production designer Amanda Stekly, we ended up moving the bed so that the camera could shoot the actors from behind, looking towards the window. This of course made for much more interesting and dimensional lighting.

The window looked out onto the street, and with a narrow pavement and no permission from the council, rigging a light outside was out of the question. Furthermore, we knew that the sun was going to shine right into that window later in the day, seriously messing with our continuity. Unfortunately all we could do was ask Amanda to dress the window with a net curtain. This took the worst of the harshness out of any direct sun and hopefully disguised the natural changes in light throughout the day at least a little.

When the sun did blast in through the window at about 6pm, we added a layer of unbleached muslin behind the net curtain to soften it further. We doubled this as the sun's angle became more direct, then removed it entirely when the sun vanished behind the rooftops opposite at 7pm. About 20 minutes later we rigged a daylight LED panel in the room, bouncing off the ceiling, as a fill to counteract the diminishing natural light. We wrapped just as it was becoming impossible to match to earlier takes.

We were shooting in RAW on a Canon C200, which should give some grading latitude to help match takes from different times of day. The split-screen nature of the film means that the match needs to be very close though!

As I write this, the film is still in postproduction, and I very much look forward to seeing how it comes out. I’ll leave you with the start and end frames from slate 2, take 17, with a very quick and dirty grade.


How Colour Works

Colour is a powerful thing. It can identify a brand, imply eco-friendliness, gender a toy, raise our blood pressure, calm us down. But what exactly is colour? How and why do we see it? And how do cameras record it? Let’s find out.

The Meaning of “Light”

One of the many weird and wonderful phenomena of our universe is the electromagnetic wave, an electric and magnetic oscillation which travels at 186,000 miles per second. Like all waves, EM radiation has the inversely proportional properties of wavelength and frequency, and we humans have devised different names for it based on these properties.

The electromagnetic spectrum

EM waves with a low frequency and therefore a long wavelength are known as radio waves or, slightly higher in frequency, microwaves; we use them to broadcast information and heat ready-meals. EM waves with a high frequency and a short wavelength are known as X-rays and gamma rays; we use them to see inside people and treat cancer.

In the middle of the electromagnetic spectrum, sandwiched between infrared and ultraviolet, is a range of frequencies between 430 and 750 terahertz (wavelengths 400-700 nanometres). We call these frequencies “light”, and they are the frequencies which the receptors in our eyes can detect.
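
If you like to check the numbers, wavelength and frequency are tied together by the speed of light (frequency = speed ÷ wavelength). Here's a quick Python sketch of that arithmetic; the band edges quoted above are approximate, so the results are too.

```python
# Sanity-checking the visible-light figures above, assuming frequency = c / wavelength.
C = 299_792_458  # speed of light in m/s (about 186,000 miles per second)

for wavelength_nm in (700, 400):        # red end and violet end of visible light
    wavelength_m = wavelength_nm * 1e-9
    frequency_thz = C / wavelength_m / 1e12
    print(f"{wavelength_nm} nm  ->  {frequency_thz:.0f} THz")

# 700 nm -> ~428 THz and 400 nm -> ~749 THz, close to the 430-750 THz range quoted above.
```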

If your retinae were instead sensitive to electromagnetic radiation of between 88 and 91 megahertz, you would be able to see BBC Radio 2. I’m not talking about magically seeing into Ken Bruce’s studio, but perceiving the FM radio waves which are encoded with his silky-smooth Scottish brogue. Since radio waves can pass through solid objects though, perceiving them would not help you to understand your environment much, whereas light waves are absorbed or reflected by most solid objects, and pass through most non-solid objects, making them perfect for building a picture of the world around you.

Within the range of human vision, we have subdivided and named smaller ranges of frequencies. For example, we describe light of about 590-620nm as “orange”, and below about 450nm as “violet”. This is all colour really is: a small range of wavelengths (or frequencies) of electromagnetic radiation, or a combination of them.

In the eye of the beholder

Scanning electron micrograph of a retina

The inside rear surfaces of your eyeballs are coated with light-sensitive cells called rods and cones, named for their shapes.

The human eye has about five or six million cones. They come in three types: short, medium and long, referring to the wavelengths to which they are sensitive. Short cones have peak sensitivity at about 420nm, medium at 530nm and long at 560nm, roughly what we call blue, green and red respectively. The ratios of the three cone types vary from person to person, but short (blue) ones are always in the minority.

Rods are far more numerous – about 90 million per eye – and around a hundred times more sensitive than cones. (You can think of your eyes as having dual native ISOs like a Panasonic Varicam, with your rods having an ISO six or seven stops faster than your cones.) The trade-off is that they are less temporally and spatially accurate than cones, making it harder to see detail and fast movement with rods. However, rods only really come into play in dark conditions. Because there is just one type of rod, we cannot distinguish colours in low light, and because rods are most sensitive to wavelengths of 500nm, cyan shades appear brightest. That’s why cinematographers have been painting night scenes with everything from steel grey to candy blue light since the advent of colour film.
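
For the arithmetic-minded, that "six or seven stops" figure is just the hundred-fold sensitivity difference expressed in doublings; a one-line check in Python:

```python
import math

# Each stop is a doubling of light, so a 100x sensitivity difference is:
print(math.log2(100))   # ~6.64 stops
```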

The spectral sensitivity of short (blue), medium (green) and long (red) cones

The three types of cone are what allow us – in well-lit conditions – to have colour vision. This trichromatic vision is not universal, however. Many animals have tetrachromatic (four channel) vision, and research has discovered some rare humans with it too. On the other hand, some animals, and “colour-blind” humans, are dichromats, having only two types of cone in their retinae. But in most people, perceptions of colour result from combinations of red, green and blue. A combination of red and blue light, for example, appears as magenta. All three of the primaries together make white.

Compared with the hair cells in the cochlea of your ears, which are capable of sensing a continuous spectrum of audio frequencies, trichromacy is quite a crude system, and it can be fooled. If your red and green cones are triggered equally, for example, you have no way of telling whether you are seeing a combination of red and green light, or pure yellow light, which falls between red and green in the spectrum. Both will appear yellow to you, but only one really is. That’s like being unable to hear the difference between, say, the note D and a combination of the notes C and E. (For more info on these colour metamers and how they can cause problems with certain types of lighting, check out Phil Rhodes’ excellent article on Red Shark News.)
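
To make the metamer idea a little more concrete, here is a toy Python sketch. It is not real colorimetry – the long ("red") and medium ("green") cone sensitivities are modelled as simple bell curves around the peak wavelengths quoted earlier, which is a big simplification – but it shows how a mix of red and green light can be chosen to trigger those cones exactly as pure yellow light does.

```python
import numpy as np

# Toy model: cone sensitivity as a Gaussian around each peak wavelength quoted
# above (560 nm long/"red", 530 nm medium/"green"). Real cone fundamentals are
# broader and asymmetric; this only illustrates the principle of metamers.
def cone(peak_nm, width_nm=40):
    return lambda wl: np.exp(-((wl - peak_nm) / width_nm) ** 2)

L, M = cone(560), cone(530)

# Stimulus A: pure "yellow" light at 580 nm.
target = np.array([L(580), M(580)])

# Stimulus B: a mix of "red" (620 nm) and "green" (540 nm) light.
# Solve for the intensities that trigger the L and M cones identically.
A = np.array([[L(620), L(540)],
              [M(620), M(540)]])
red_amt, green_amt = np.linalg.solve(A, target)

mix = red_amt * np.array([L(620), M(620)]) + green_amt * np.array([L(540), M(540)])
print("pure 580 nm   L,M responses:", target)
print("red + green   L,M responses:", mix)   # identical, so both read as "yellow"
```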

Artificial eye

A Bayer filter

Mimicking your eyes, video sensors also use a trichromatic system. This is convenient because it means that although a camera and TV can’t record or display yellow, for example, they can produce a mix of red and green which, as we’ve just established, is indistinguishable from yellow to the human eye.

Rather than using three different types of receptor, each sensitive to different frequencies of light, electronic sensors all rely on separating different wavelengths of light before they hit the receptors. The most common method is a colour filter array (CFA) placed immediately over the photosites, and the most common type of CFA is the Bayer filter, patented in 1976 by an Eastman Kodak employee named Dr Bryce Bayer.

The Bayer filter is a colour mosaic which allows only green light through to 50% of the photosites, only red light through to 25%, and only blue to the remaining 25%. The logic is that green is the colour your eyes are most sensitive to overall, and that your vision is much more dependent on luminance than chrominance.
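
Here's a minimal Python sketch of that mosaic, assuming one common 2×2 tile layout (manufacturers vary the exact arrangement, e.g. RGGB versus GRBG), just to show where the 50/25/25 split comes from.

```python
import numpy as np

# A minimal sketch of a Bayer colour filter array: each photosite sees only one
# of the three primaries, in a repeating 2x2 tile.
def bayer_mask(height, width):
    mask = np.empty((height, width), dtype="U1")
    mask[0::2, 0::2] = "G"
    mask[0::2, 1::2] = "R"
    mask[1::2, 0::2] = "B"
    mask[1::2, 1::2] = "G"
    return mask

mask = bayer_mask(4, 6)
print(mask)
total = mask.size
for colour in "RGB":
    print(colour, f"{(mask == colour).sum() / total:.0%}")   # G 50%, R 25%, B 25%
```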

A RAW, non-debayered image

The resulting image must be debayered (or more generally, demosaiced) by an algorithm to produce a viewable image. If you’re recording log or linear then this happens in-camera, whereas if you’re shooting RAW it must be done in post.

This system has implications for resolution. Let’s say your sensor is 2880×1620. You might think that’s the number of pixels, but strictly speaking it isn’t. It’s the number of photosites, and due to the Bayer filter no single one of those photosites has more than a third of the colour information needed to form a pixel of the final image. Calculating that final image – by debayering the RAW data – reduces the real resolution by roughly 20-33%. That’s why cameras like the Arri Alexa (2.8K) and the Blackmagic Cinema Camera (2.5K) record at higher photosite counts than their target output: once the footage is debayered, you’re left with an image of about 2K (cinema standard) resolution.
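
The arithmetic is rough, but taking 20-33% as a rule of thumb for the loss in each dimension:

```python
# Rough arithmetic for the resolution claim above. The exact loss depends on the
# debayering algorithm and any optical low-pass filtering; 20-33% is a rule of thumb.
photosites = (2880, 1620)            # e.g. the Alexa's 16:9 sensor area
for loss in (0.20, 0.33):
    w = int(photosites[0] * (1 - loss))
    h = int(photosites[1] * (1 - loss))
    print(f"{loss:.0%} loss -> roughly {w} x {h}")
# Both results land in the neighbourhood of DCI 2K (2048 x 1080).
```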

Colour Compression

Your optic nerve can only transmit about one percent of the information captured by the retina, so a huge amount of data compression is carried out within the eye. Similarly, video data from an electronic sensor is usually compressed, be it within the camera or afterwards. Luminance information is often prioritised over chrominance during compression.

Examples of chroma subsampling ratios

You have probably come across chroma subsampling expressed as, for example, 444 or 422, as in ProRes 4444 (the final 4 being transparency information, only relevant to files generated in postproduction) and ProRes 422. The digits describe the ratio of luminance to colour information: a file with 444 chroma subsampling has no colour compression; a 422 file retains colour information in only every second pixel (horizontally); a 420 file, such as those on a DVD or Blu-ray, contains just one sample of blue-difference info and one of red-difference info (the green being derived from those two and the luminance) for every four pixels of luma.
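
Here's a toy Python sketch of the 420 case: luma stays at full resolution while the two colour-difference channels are averaged over 2×2 blocks. Real codecs filter and position the chroma samples more carefully than this, but it shows where the saving comes from – only half the samples of a 444 image need to be stored.

```python
import numpy as np

# A minimal sketch of 4:2:0 chroma subsampling: keep luma (Y) at full resolution
# and average the two colour-difference channels (Cb, Cr) over 2x2 blocks.
def subsample_420(y, cb, cr):
    def pool(c):
        h, w = c.shape
        return c.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    return y, pool(cb), pool(cr)

h, w = 1080, 1920
y, cb, cr = (np.random.rand(h, w) for _ in range(3))

y2, cb2, cr2 = subsample_420(y, cb, cr)
full = 3 * h * w
kept = y2.size + cb2.size + cr2.size
print(f"stored samples: {kept / full:.0%} of 4:4:4")   # 50%
```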

Whether every pixel, or only a fraction of them, has colour information, the precision of that colour info can vary. This is known as bit depth or colour depth. The more bits allocated to describing the colour of each pixel (or group of pixels), the more precise the colours of the image will be. DSLRs typically record video in 24-bit colour, more commonly described as 8bpc or 8 bits per (colour) channel. Images of this bit depth fall apart pretty quickly when you try to grade them. Professional cinema cameras record 10 or 12 bits per channel, which is much more flexible in postproduction.
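
The difference those extra bits make is easy to put numbers on:

```python
# Code values per channel at common bit depths, and the total number of
# distinguishable RGB triples they allow.
for bits in (8, 10, 12):
    levels = 2 ** bits
    print(f"{bits}-bit: {levels} levels per channel, "
          f"{levels ** 3:,} possible colours")
# 8-bit:  256 levels,      16,777,216 colours
# 10-bit: 1024 levels,  1,073,741,824 colours
# 12-bit: 4096 levels, 68,719,476,736 colours
```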

CIE diagram showing the gamuts of three video standards. D65 is the standard for white.

The third attribute of recorded colour is gamut, the breadth of the spectrum of colours. You may have seen a CIE (Commission Internationale de l’Éclairage) diagram, which depicts the range of colours perceptible by human vision. Triangles are often superimposed on this diagram to illustrate the gamut (range of colours) that can be described by various colour spaces. The three colour spaces you are most likely to come across are, in ascending order of gamut size: Rec.709, the HD standard still used by most monitors; P3, used by digital cinema projectors; and Rec.2020, the standard for ultra-HD. Netflix are already requiring that some of their shows are delivered in Rec.2020, even though monitors capable of displaying its full gamut do not yet exist. Most cinema cameras today can record images in Rec.709 (known as “video” mode on Blackmagic cameras) or a proprietary wide gamut (“film” mode on a Blackmagic, or “log” on others) which allows more flexibility in the grading suite. Note that the two modes also alter the recording of luminance and dynamic range.
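
To put rough numbers on “ascending order of gamut size”, you can compare the areas of the triangles that each set of primaries makes on the CIE xy diagram. The coordinates below are the primaries published for each standard (worth double-checking against the specs before relying on them); the shoelace formula does the rest.

```python
# Comparing gamut sizes by the area of each triangle of primaries on the CIE
# xy diagram. Coordinates are the published primaries for each standard;
# treat them as quoted values rather than gospel.
def triangle_area(points):
    (x1, y1), (x2, y2), (x3, y3) = points
    return abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)) / 2

gamuts = {
    "Rec.709":  [(0.640, 0.330), (0.300, 0.600), (0.150, 0.060)],
    "DCI-P3":   [(0.680, 0.320), (0.265, 0.690), (0.150, 0.060)],
    "Rec.2020": [(0.708, 0.292), (0.170, 0.797), (0.131, 0.046)],
}
for name, primaries in gamuts.items():
    print(f"{name}: area {triangle_area(primaries):.3f}")
# The areas come out in ascending order, matching the list above.
```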

To summarise as simply as possible: chroma subsampling is the proportion of pixels which carry colour information, bit depth is the precision of that information, and gamut is the range of colours it can describe.

That’s all for today. In future posts I will look at how some of the above science leads to colour theory and how cinematographers can make practical use of it.



Black Magic Cinema Camera Review

Throughout September I got a crash-course introduction to the Blackmagic Cinema Camera as I used it to shoot Harriet Sams’ period action adventure web series The First Musketeer. The camera was kindly lent to us by our gaffer, Richard Roberts. Part-way through the shoot I recorded my initial thoughts on the camera in this video blog:

Here’s a summary of the key differences between the Blackmagic and a Canon DSLR.

| Canon DSLR | Blackmagic Cinema Camera |
| --- | --- |
| Rolling shutter (causes picture distortion during fast movement) | Rolling shutter (though not as bad as DSLRs) |
| Pixels thrown away to achieve downscaling to 1080P video resolution, resulting in distracting moiré patterns on fabrics, brick walls and other grid-like patterns | Pixels smoothly downscaled from 2.5K to 1080P to eliminate moiré; raw 2.5K recording also available |
| On-board screen shuts off when an external monitor is connected | On-board screen remains on when an external monitor is connected |
| Some models have flip-out screens which can be adjusted to any viewing angle and easily converted into viewfinders with a cheap loupe attachment | On-board screen is fixed and highly reflective, so hard to see in all but the darkest of environments |
| Maximum frame rate: 60fps at 720P | Maximum frame rate: 30fps at 1080P |
| 50mm lens is equivalent to a 50mm (5D) or 80mm (other models) full-frame lens | 50mm lens is equivalent to a 115mm full-frame lens |
| 10-11 stops of dynamic range | 13 stops of dynamic range |
| Recording format: highly compressed H.264, although Magic Lantern now allows limited raw recording | Recording formats: uncompressed raw, ProRes or DNxHD |
| Battery life: about 2 hours from the 600D’s bundled battery in movie mode | Battery life: about 1 hour from the non-removable internal battery |
| Weight: 570g (600D) | Weight: 1,700g |
| Audio: stereo minijack input, no headphone socket | Audio: dual quarter-inch jack inputs, plus a headphone socket |

Having now come to the end of the project, I stand by the key message of my video blog above: if you already own a DSLR, it’s not worth upgrading to a Blackmagic. You’d just be swapping one set of problems (rolling shutter, external monitoring difficulties, aliasing) for another (hard-to-see on-board screen, weight, large depth of field).

The BMCC rigged with a lock-it box for timecode sync with the audio recorder, on a Cinecity Pro-Aim shoulder mount

The depth of field was really the killer for me. Having shot on the 600D for three years, I’m used to its lovely shallow depth of field. With the Blackmagic’s smaller, roughly 16mm-sized sensor it was much harder to throw backgrounds out of focus, particularly on wide shots. At times I felt that some of the material I was shooting looked a bit “TV” as a result.

The small sensor also creates new demands on your set of lenses; they all become more telephoto than they used to be. A 50mm lens used on a crop-sensor DSLR like the 600D is equivalent to about an 80mm lens on a full-frame camera like the 5D Mark III or a traditional 35mm SLR. That same 50mm lens used on the Blackmagic is equivalent to roughly 115mm! It was lucky that data wrangler Rob McKenzie was able to lend us his Tokina 11-16mm f2.8, otherwise we would not have been able to get useful wide shots in some of the more cramped locations.
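
If you want to work out the equivalence for your own lenses, divide the full-frame sensor width (36mm) by your camera's sensor width to get a crop factor, then multiply the focal length by it. The sensor widths below are the commonly quoted figures and worth checking against the spec sheets; rounding explains the small differences from the numbers above.

```python
# Rough focal-length equivalence from sensor width. 36mm is the full-frame
# reference; the other widths are commonly quoted figures, so treat them as approximate.
SENSOR_WIDTH_MM = {
    "Canon 5D Mark III (full frame)": 36.0,
    "Canon 600D (APS-C)": 22.3,
    "Blackmagic Cinema Camera": 15.8,
}

FOCAL_LENGTH = 50  # mm

for camera, width in SENSOR_WIDTH_MM.items():
    crop = 36.0 / width
    print(f"{camera}: crop factor {crop:.2f}, "
          f"{FOCAL_LENGTH}mm behaves like {FOCAL_LENGTH * crop:.0f}mm")
# The Blackmagic's factor is commonly rounded to 2.3x, hence the ~115mm figure above.
```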

As for the Blackmagic’s ability to shoot raw, it sounds great, but will you actually use it? I’d suggest the images you get in ProRes mode are good enough for anything bar a theatrical release, and are of a far more manageable data size. You still get the high dynamic range in ProRes mode (although it’s optional), and that takes a little getting used to for everyone. More than once the director asked me to make things moodier, more shadowy; the answer was that it already was shadowy, you just couldn’t see it properly until it was graded.

The colour saturation is also very low, again to give maximum flexibility in the grade, but this makes it very hard for the crew huddled around the monitor to get a sense of what the finished film will look like. As a cinematographer I pride myself on delivering images that look graded before they actually are, but I couldn’t do that with the Blackmagic. Then again, maybe that’s just a different workflow I’d need to adapt to.

The biggest plus to the BMCC is the lovely organic images it produces, as a result of both the down-sampling from 2.5K and the high dynamic range. This was well suited to The First Musketeer’s period setting. However, I think next season I’ll be pushing for a Canon C300 to get back the depth of field.

I’ll leave you with a few frame grabs from The First Musketeer.

Note: I have amended this post as I originally stated, incorrectly, that the BMCC has a global shutter. The new 4K Blackmagic Production Camera does have a global shutter though.
