How Colour Works

Colour is a powerful thing. It can identify a brand, imply eco-friendliness, gender a toy, raise our blood pressure, calm us down. But what exactly is colour? How and why do we see it? And how do cameras record it? Let’s find out.

 

The Meaning of “Light”

One of the many weird and wonderful phenomena of our universe is the electromagnetic wave, an electric and magnetic oscillation which travels at 186,000 miles per second. Like all waves, EM radiation has the inversely proportional properties of wavelength and frequency, and we humans have devised different names for it based on these properties.

The electromagnetic spectrum

EM waves with a low frequency and therefore a long wavelength are known as radio waves or, slightly higher in frequency, microwaves; we use them to broadcast information and heat ready-meals. EM waves with a high frequency and a short wavelength are known as x-rays and gamma rays; we use them to see inside people and treat cancer.

In the middle of the electromagnetic spectrum, sandwiched between infrared and ultraviolet, is a range of frequencies between 430 and 750 terahertz (corresponding to wavelengths from 700 down to 400 nanometres). We call these frequencies “light”, and they are the frequencies which the receptors in our eyes can detect.

If your retinae were instead sensitive to electromagnetic radiation of between 88 and 91 megahertz, you would be able to see BBC Radio 2. I’m not talking about magically seeing into Ken Bruce’s studio, but perceiving the FM radio waves which are encoded with his silky-smooth Scottish brogue. Since radio waves pass through most solid objects, though, perceiving them would not help you understand your environment much, whereas light waves are absorbed or reflected by most solid objects, and pass through most non-solid objects, making them perfect for building a picture of the world around you.

Within the range of human vision, we have subdivided and named smaller ranges of frequencies. For example, we describe light of about 590-620nm as “orange”, and below about 450nm as “violet”. This is all colour really is: a small range of wavelengths (or frequencies) of electromagnetic radiation, or a combination of them.
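If it helps to see those conventions pinned down, here is a little Python sketch of the named bands. The boundary values are rough conventions which vary from source to source, not physical constants:

```python
def colour_name(wavelength_nm: float) -> str:
    """Return an approximate colour name for a wavelength of light."""
    bands = [
        (380, 450, "violet"),
        (450, 495, "blue"),
        (495, 570, "green"),
        (570, 590, "yellow"),
        (590, 620, "orange"),
        (620, 750, "red"),
    ]
    for low, high, name in bands:
        if low <= wavelength_nm < high:
            return name
    return "outside the visible spectrum"

print(colour_name(600))   # orange
print(colour_name(430))   # violet
print(colour_name(1000))  # outside the visible spectrum (infrared)
```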

 

In the eye of the beholder

Scanning electron micrograph of a retina

The inside rear surfaces of your eyeballs are coated with light-sensitive cells called rods and cones, named for their shapes.

The human eye has about five or six million cones. They come in three types: short, medium and long, referring to the wavelengths to which they are sensitive. Short cones have peak sensitivity at about 420nm, medium at 530nm and long at 560nm, roughly what we call blue, green and red respectively. The ratios of the three cone types vary from person to person, but short (blue) ones are always in the minority.

Rods are far more numerous – about 90 million per eye – and around a hundred times more sensitive than cones. (You can think of your eyes as having dual native ISOs like a Panasonic Varicam, with your rods having an ISO six or seven stops faster than your cones.) The trade-off is that they are less temporally and spatially accurate than cones, making it harder to see detail and fast movement with rods. However, rods only really come into play in dark conditions. Because there is just one type of rod, we cannot distinguish colours in low light, and because rods are most sensitive to wavelengths of 500nm, cyan shades appear brightest. That’s why cinematographers have been painting night scenes with everything from steel grey to candy blue light since the advent of colour film.

The spectral sensitivity of short (blue), medium (green) and long (red) cones

The three types of cone are what allow us – in well-lit conditions – to have colour vision. This trichromatic vision is not universal, however. Many animals have tetrachromatic (four channel) vision, and research has discovered some rare humans with it too. On the other hand, some animals, and “colour-blind” humans, are dichromats, having only two types of cone in their retinae. But in most people, perceptions of colour result from combinations of red, green and blue. A combination of red and blue light, for example, appears as magenta. All three of the primaries together make white.

Compared with the hair cells in the cochlea of your ears, which are capable of sensing a continuous spectrum of audio frequencies, trichromacy is quite a crude system, and it can be fooled. If your red and green cones are triggered equally, for example, you have no way of telling whether you are seeing a combination of red and green light, or pure yellow light, which falls between red and green in the spectrum. Both will appear yellow to you, but only one really is. That’s like being unable to hear the difference between, say, the note D and a combination of the notes C and E. (For more info on these colour metamers and how they can cause problems with certain types of lighting, check out Phil Rhodes’ excellent article on Red Shark News.)
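To put some (toy) numbers on that, here is a sketch which models the three cone sensitivities as Gaussian curves. The peak wavelengths come from the figures above; the curve shape, the 50nm width and the mix intensities are my own invented assumptions, so treat it purely as an illustration of metamerism:

```python
import math

# Toy cone model: peak sensitivities from the text; the Gaussian shape
# and width are invented for illustration -- real cone response curves
# are broader and asymmetric.
CONE_PEAKS = {"S": 420.0, "M": 530.0, "L": 560.0}
WIDTH = 50.0  # nm -- an assumption

def cone_response(wavelength: float, peak: float) -> float:
    return math.exp(-((wavelength - peak) ** 2) / (2 * WIDTH ** 2))

def responses(spectrum):
    """spectrum: a list of (wavelength_nm, intensity) pairs."""
    return {cone: sum(i * cone_response(w, peak) for w, i in spectrum)
            for cone, peak in CONE_PEAKS.items()}

pure_yellow = [(580.0, 1.0)]                     # one monochromatic source
red_plus_green = [(620.0, 1.10), (545.0, 0.41)]  # no 580nm light at all

print(responses(pure_yellow))     # roughly S 0.006, M 0.61, L 0.92
print(responses(red_plus_green))  # roughly S 0.018, M 0.61, L 0.93
# The M and L responses are all but identical, so in this toy model --
# as in the eye -- the two spectra are indistinguishable: a metamer.
```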

 

Artificial eye

A Bayer filter

Mimicking your eyes, video sensors also use a trichromatic system. This is convenient because it means that although a camera can’t record, nor a TV display, pure spectral yellow, they can produce a mix of red and green which, as we’ve just established, is indistinguishable from yellow to the human eye.

Rather than using three different types of receptor, each sensitive to different frequencies of light, electronic sensors all rely on separating different wavelengths of light before they hit the receptors. The most common method is a colour filter array (CFA) placed immediately over the photosites, and the most common type of CFA is the Bayer filter, patented in 1976 by an Eastman Kodak employee named Dr Bryce Bayer.

The Bayer filter is a colour mosaic which allows only green light through to 50% of the photosites, only red light through to 25%, and only blue to the remaining 25%. The logic is that green is the colour your eyes are most sensitive to overall, and that your vision is much more dependent on luminance than chrominance.
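For the programmatically inclined, here is a minimal sketch of what the sensor is left with after the filter, assuming the common RGGB photosite layout (real cameras differ in layout and in everything downstream):

```python
import numpy as np

def bayer_mosaic(image: np.ndarray) -> np.ndarray:
    """Sample an RGB image (H, W, 3) through an RGGB Bayer pattern.

    Returns a single-channel (H, W) array: each photosite keeps just
    one colour value -- 50% green, 25% red, 25% blue.
    """
    h, w, _ = image.shape
    mosaic = np.zeros((h, w), dtype=image.dtype)
    mosaic[0::2, 0::2] = image[0::2, 0::2, 0]  # red photosites
    mosaic[0::2, 1::2] = image[0::2, 1::2, 1]  # green photosites
    mosaic[1::2, 0::2] = image[1::2, 0::2, 1]  # green photosites
    mosaic[1::2, 1::2] = image[1::2, 1::2, 2]  # blue photosites
    return mosaic

# A 4x4 "sensor" records 16 brightness values, but only 4 red,
# 8 green and 4 blue samples survive the filter:
raw = bayer_mosaic(np.random.rand(4, 4, 3))
```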

A RAW, non-debayered image

The resulting image must be debayered (or more generally, demosaiced) by an algorithm to produce a viewable image. If you’re recording log or linear then this happens in-camera, whereas if you’re shooting RAW it must be done in post.
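Production demosaicing algorithms are sophisticated, but the crudest version, bilinear interpolation, shows the principle: every missing colour value is estimated from nearby photosites of the same colour. A sketch, continuing the RGGB layout assumed above, and nothing like what any camera actually ships:

```python
import numpy as np

def window_sum(a: np.ndarray) -> np.ndarray:
    """Sum over each 3x3 neighbourhood, zero-padded at the edges."""
    p = np.pad(a, 1)
    h, w = a.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3))

def demosaic_bilinear(mosaic: np.ndarray) -> np.ndarray:
    """Crude demosaic of an RGGB mosaic (H, W) into RGB (H, W, 3).

    Each channel keeps its own sparse samples and fills the gaps by
    averaging whichever same-colour neighbours exist -- enough to see
    why per-pixel detail is lost along the way.
    """
    h, w = mosaic.shape
    masks = np.zeros((h, w, 3), dtype=bool)
    masks[0::2, 0::2, 0] = True  # red photosites
    masks[0::2, 1::2, 1] = True  # green photosites
    masks[1::2, 0::2, 1] = True
    masks[1::2, 1::2, 2] = True  # blue photosites

    rgb = np.zeros((h, w, 3))
    for c in range(3):
        samples = np.where(masks[..., c], mosaic, 0.0)
        counts = window_sum(masks[..., c].astype(float))
        rgb[..., c] = window_sum(samples) / np.maximum(counts, 1.0)
    return rgb
```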

This system has implications for resolution. Let’s say your sensor is 2880×1620. You might think that’s the number of pixels, but strictly speaking it isn’t. It’s the number of photosites, and due to the Bayer filter no single one of those photosites has more than a third of the necessary colour information to form a pixel of the final image. Calculating that final image – by debayering the RAW data – reduces the real resolution of the image by 20-33%. That’s why cameras like the Arri Alexa or the Blackmagic Cinema Camera shoot at 2.8K or 2.5K, because once it’s debayered you’re left with an image of 2K (cinema standard) resolution.

 

Colour compression

Your optic nerve can only transmit about one percent of the information captured by the retina, so a huge amount of data compression is carried out within the eye. Similarly, video data from an electronic sensor is usually compressed, be it within the camera or afterwards. Luminance information is often prioritised over chrominance during compression.

Examples of chroma subsampling ratios

You have probably come across chroma subsampling expressed as, for example, 444 or 422, as in ProRes 4444 (the final 4 being transparency information, only relevant to files generated in postproduction) and ProRes 422. The three digits describe the ratios of colour and luminance information: a file with 444 chroma subsampling has no colour compression; a 422 file retains colour information only in every second pixel; a 420 file, such as those on a DVD or Blu-ray, contains one sample of blue-difference info and one of red-difference info (the green being derived from those two and the luminance) for every four pixels of luma.
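Here is a rough sketch of the 4:2:0 idea, assuming the image has already been converted from RGB into luma (Y′) plus two colour-difference planes (Cb and Cr). Real encoders filter the chroma before decimating it; this version simply throws samples away:

```python
import numpy as np

def subsample_420(y: np.ndarray, cb: np.ndarray, cr: np.ndarray):
    """Keep full-resolution luma, one chroma sample per 2x2 block."""
    return y, cb[0::2, 0::2], cr[0::2, 0::2]

# For an HxW frame, 4:4:4 stores 3*H*W samples; this 4:2:0 version
# stores H*W + 2*(H/2)*(W/2) = 1.5*H*W -- half the data -- and the eye
# barely notices, because the luminance detail is untouched.
```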

Whether every pixel, or only a fraction of them, has colour information, the precision of that colour info can vary. This is known as bit depth or colour depth. The more bits allocated to describing the colour of each pixel (or group of pixels), the more precise the colours of the image will be. DSLRs typically record video in 24-bit colour, more commonly described as 8bpc or 8 bits per (colour) channel. Images of this bit depth fall apart pretty quickly when you try to grade them. Professional cinema cameras record 10 or 12 bits per channel, which is much more flexible in postproduction.
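To see why those extra bits matter so much in the grade, a quick back-of-the-envelope calculation:

```python
for bits in (8, 10, 12):
    levels = 2 ** bits  # tonal steps per channel
    print(f"{bits}bpc: {levels} levels/channel, {levels ** 3:,} colours")

# 8bpc:  256 levels/channel, 16,777,216 colours
# 10bpc: 1024 levels/channel, 1,073,741,824 colours
# 12bpc: 4096 levels/channel, 68,719,476,736 colours
# Each extra two bits means four times finer steps -- the headroom that
# stops gradients banding when a grade stretches them.
```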

CIE diagram showing the gamuts of three video standards. D65 is the standard for white.

The third attribute of recorded colour is gamut, the breadth of the spectrum of colours. You may have seen a CIE (Commission Internationale de l’Eclairage) diagram, which depicts the range of colours perceptible by human vision. Triangles are often superimposed on this diagram to illustrate the gamut (range of colours) that can be described by various colour spaces. The three colour spaces you are most likely to come across are, in ascending order of gamut size: Rec.709, an old standard that is still used by many monitors; P3, used by digital cinema projectors; and Rec.2020. The latter is the standard for ultra-HD, and Netflix are already requiring that some of their shows are delivered in it, even though monitors capable of fully displaying Rec.2020 do not yet exist. Most cinema cameras today can record images in Rec.709 (known as “video” mode on Blackmagic cameras) or a proprietary wide gamut (“film” mode on a Blackmagic, or “log” on others) which allows more flexibility in the grading suite. Note that the two modes also alter the recording of luminance and dynamic range.
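Since each gamut is a simple triangle on the CIE xy plane, testing whether a chromaticity falls inside one is basic geometry. A sketch using the primaries published in the Rec.709 and Rec.2020 standards (coordinates quoted from memory, so verify them before relying on this):

```python
def _side(p, a, b):
    """Signed area test: which side of line a-b does point p lie on?"""
    return (p[0] - b[0]) * (a[1] - b[1]) - (a[0] - b[0]) * (p[1] - b[1])

def in_gamut(xy, primaries):
    """True if chromaticity xy lies inside the triangle of primaries."""
    r, g, b = primaries
    d = (_side(xy, r, g), _side(xy, g, b), _side(xy, b, r))
    return not (min(d) < 0 < max(d))  # all the same sign means inside

REC709 = [(0.64, 0.33), (0.30, 0.60), (0.15, 0.06)]
REC2020 = [(0.708, 0.292), (0.170, 0.797), (0.131, 0.046)]

d65 = (0.3127, 0.3290)     # the white point, inside every video gamut
green_2020 = (0.17, 0.80)  # near Rec.2020's green, far outside Rec.709
print(in_gamut(d65, REC709), in_gamut(d65, REC2020))  # True True
print(in_gamut(green_2020, REC709))                   # False
```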

To summarise as simply as possible: chroma subsampling is the proportion of pixels which have colour information, bit depth is the accuracy of that information and gamut is the limits of that info.

That’s all for today. In future posts I will look at how some of the above science leads to colour theory and how cinematographers can make practical use of it.


A History of Black and White

The contact sheet from my first roll of Ilford Delta 3200

Having lately shot my first roll of black-and-white film in a decade, I thought now would be a good time to delve into the story of monochrome image-making and the various reasons artists have eschewed colour.

I found the recent National Gallery exhibition, Monochrome: Painting in Black and White, a great primer on the history of the unhued image. Beginning with examples from medieval religious art, the exhibition took in grisaille works of the Renaissance before demonstrating the battle between painting and early photography, and finishing with monochrome modern art.

Several of the pictures on display were studies or sketches which were generated in preparation for colour paintings. Ignoring hue allowed the artists to focus on form and composition, and this is still one of black-and-white’s great strengths today: stripping away chroma to heighten other pictorial effects.

“Nativity” by Petrus Christus, c. 1455

What fascinated me most in the exhibition were the medieval religious paintings in the first room. Here, Old Testament scenes in black-and-white were painted around a larger, colour scene from the New Testament; as in the modern TV trope, the flashbacks were in black-and-white. In other pictures, a colour scene was framed by a monochrome rendering of stonework – often incredibly realistic – designed to fool the viewer into thinking they were seeing a painting in an architectural nook.

During cinema’s long transition from black-and-white to colour, filmmakers also used the two modes to define different layers of reality. When colour processes were still in their infancy and very expensive, filmmakers selected particular scenes to pick out in rainbow hues, while the surrounding material remained in black-and-white like the borders of the medieval paintings. By 1939 the borders were shrinking, as The Wizard of Oz portrayed Kansas, the ordinary world, in black-and-white, while rendering Oz – the bulk of the running time – in colour.

Michael Powell, Emeric Pressburger and legendary Technicolor cinematographer Jack Cardiff, OBE, BSC subverted expectations with their 1946 fantasy-romance A Matter of Life and Death, set partly on Earth and partly in heaven. Says Cardiff in his autobiography:

Quite early on I had said casually to Michael Powell, “Of course heaven will be in colour, won’t it?” And Michael replied, “No. Heaven will be in black and white.” He could see I was startled, and grinned: “Because everyone will expect heaven to be in colour, I’m doing it in black-and-white.”

Ironically Cardiff had never shot in black-and-white before, and he ultimately captured the heavenly scenes on three-strip Technicolor, but didn’t have the colour fully developed, resulting in a pearlescent monochrome.

Meanwhile, DPs like John Alton, ASC were pushing greyscale cinematography to its apogee with a genre that would come to be known as film noir. Oppressed Jews like Alton fled the rising Nazism of Europe for the US, bringing German Expressionism with them. The result was a trend of hardboiled thrillers lit with oppressive contrast, harsh shadows, concealing silhouettes and dramatic angles, all of which were heightened by the lack of distracting colour.

A classic bit of Alton's noir lighting from The Big Combo
“The Big Combo” DP: John Alton, ASC

Alton himself had a paradoxical relationship with chroma, famously stating that “black and white are colours”. While he is best known today for his noir, his only Oscar win was for his work on the Technicolor musical An American in Paris, the designers of which hated Alton for the brightly-coloured light he tried to splash over their sets and costumes.

It wasn’t just Alton who was moving to colour. Soon the economics were clear: chromatic cinema was more marketable and no longer prohibitively expensive. The writing was on the wall for black-and-white movies, and by the end of the sixties they were all but gone.

I was brought up in a world of default colour, and the first time I can remember becoming aware of black-and-white was when Schindler’s List was released in 1993. I can clearly recall a friend’s mother refusing to see the film because she felt she wouldn’t be getting her money’s worth if there was no colour. She’s not alone in this view, and that’s why producers are never keen to green-light monochrome movies. Spielberg only got away with it because his name was proven box office gold.

“Schindler’s List” DP: Janusz Kamiński, ASC

A few years later, Jonathan Frakes and his DP Matthew F. Leonetti, ASC wanted to shoot the holodeck sequence of Star Trek: First Contact in black-and-white, but the studio deemed test footage “too experimental”. For the most part, the same attitude prevails today. Despite being marketed as a “visionary” director ever since Pan’s Labyrinth, Guillermo del Toro found his vision of The Shape of Water as a black-and-white film rejected by financiers. He only got the multi-Oscar-winning fairytale off the ground by reluctantly agreeing to shoot in colour.

Yet there is reason to be hopeful about black-and-white remaining an option for filmmakers. In 2007 MGM denied Frank Darabont the chance to make The Mist in black-and-white, but they permitted a desaturated version on the DVD. Darabont had this to say:

No, it doesn’t look real. Film itself [is a] heightened recreation of reality. To me, black-and-white takes that one step further. It gives you a view of the world that doesn’t really exist in reality and the only place you can see that representation of the world is in a black-and-white movie.

“The Mist” DP: Rohn Schmidt

In 2016, a “black and chrome” version of Mad Max: Fury Road was released on DVD and Blu-Ray, with director George Miller saying:

The best version of “Road Warrior” [“Mad Max 2”] was what we called a “slash dupe,” a cheap, black-and-white version of the movie for the composer. Something about it seemed more authentic and elemental. So I asked Eric Whipp, the [“Fury Road”] colourist, “Can I see some scenes in black-and-white with quite a bit of contrast?” They looked great. So I said to the guys at Warners, “Can we put a black-and-white version on the DVD?”

One of the James Mangold photos which inspired “Logan Noir”

The following year, Logan director James Mangold’s black-and-white on-set photos proved so popular with the public that he decided to create a monochrome version of the movie. “The western and noir vibes of the film seemed to shine in the form, and there was not a trace of the modern comic hero movie sheen,” he said. Most significantly, the studio approved a limited theatrical release for Logan Noir, presumably seeing the extra dollar-signs of a second release, rather than the reduced dollar-signs of a greyscale picture.

Perhaps the medium of black-and-white imaging has come full circle. During the Renaissance, greyscale images were preparatory sketches, stepping stones to finished products in colour. Today, the work-in-progress slash dupe of Road Warrior and James Mangold’s photographic studies of Logan were also stepping stones to colour products, while at the same time closing the loop by inspiring black-and-white products too.

With the era of budget- and technology-mandated monochrome outside the living memory of many viewers today, I think there is a new willingness to accept black-and-white as an artistic choice. The acclaimed sci-fi anthology series Black Mirror released an episode in greyscale this year, and where Netflix goes, others are bound to follow.


Roger Deakins’ Oscar-winning Cinematography of “Blade Runner 2049”

After fourteen nominations, celebrated cinematographer Roger Deakins, CBE, BSC, ASC finally won an Oscar last night, for his work on Denis Villeneuve’s Blade Runner 2049. Villeneuve’s sequel to Ridley Scott’s 1982 sci-fi noir is not a perfect film; its measured, thoughtful pace is not to everyone’s taste, and it has serious issues with women – all of the female characters being highly sexualised, callously slaughtered, or both – but the Best Cinematography Oscar was undoubtedly well deserved. Let’s take a look at the photographic style Deakins employed, and how it plays into the movie’s themes.

Blade Runner 2049 returns to the dystopian metropolis of Ridley Scott’s classic three decades later, introducing us to Ryan Gosling’s K. Like Harrison Ford’s Deckard before him, K is a titular Blade Runner, tasked with locating and “retiring” rogue replicants – artificial, bio-engineered people. He soon makes a discovery which could have huge implications both for himself and the already-strained relationship between humans and replicants. In his quest to uncover the truth, K must track down Deckard for some answers.

Villeneuve’s film meditates on deep questions of identity, creating a world in which you can never be sure who is or isn’t real – or even what truly constitutes being “real”. Deakins reinforces this existential uncertainty by reducing characters and locations to mere forms. Many scenes are shrouded in smog, mist, rain or snow, rendering humans and replicants alike as silhouettes.

K spends his first major scene seated in front of a window, the side-light bouncing off a nearby cabinet the only illumination on his face. Deakins’ greatest strength is his ability to adapt to whatever style each film requires, but if he has a recognisable signature it’s this courage to rely on a single source and let the rest of the frame go black.

Whereas Scott and his DP Jordan Cronenweth portrayed LA mainly at night, ablaze with pinpoints of light, Villeneuve and Deakins introduce it in daylight, but a daylight so dim and smog-ridden that it reveals even less than those night scenes from 1982.

All this is not to say that the film is frustratingly dark, or that audiences will struggle to make out what is going on. Shooting crisply on Arri Alexas with Arri/Zeiss Master Primes, Deakins is a master of ensuring that you see what you need to see.

A number of the film’s sequences are colour-coded, delineating them as separate worlds. The city is mainly fluorescent blues and greens, visually reinforcing the sickly state of society, with the police department – an attempt at justice in an insane world – a neutral white.

The Brutalist headquarters of Jared Leto’s blind entrepreneur Wallace are rendered in gold, as though the corporation attempted a friendly yellow but was corrupted by greed. These scenes also employ rippling reflections from pools of water. Whereas the watery light in the Tyrell HQ of Scott’s Blade Runner was a random last-minute idea by the director, concerned that his scene lacked enough interest and production value, here the light is clearly motivated by architectural water features. Yet it is used symbolically too, and very effectively so, as it underscores one of Blade Runner 2049’s most powerful scenes. At a point in the story where more than one character is calling their memories into question, the ripples playing across the walls are as intangible and illusory as those recollections. “I know what’s real,” Deckard asserts to Wallace, but both the photography and Ford’s performance belie his words.

The most striking use of colour is the sequence in which K first tracks Deckard down, hiding out in a Las Vegas that’s been abandoned since the detonation of a dirty bomb. Inspired by photos of the Australian dust storm of 2009, Deakins bathed this lengthy sequence in soft, orangey-red – almost Martian – light. This permeating warmth, contrasting with the cold artificial light of LA, underlines the personal nature of K’s journey and the theme of birth which is threaded throughout the film.

Deakins has stated in interviews that he made no attempt to emulate Cronenweth’s style of lighting, but nonetheless this sequel feels well-matched to the original in many respects. This has a lot to do with the traditional camerawork, with most scenes covered in beautifully composed static shots, and movement accomplished where necessary with track and dolly.

The visual effects, which bagged the film’s second Oscar, also drew on techniques of the past; the above featurette shows a Canon 1DC tracking through a miniature landscape at 2:29. “Denis and I wanted to do as much as possible in-camera,” Deakins told Variety, “and we insisted when we had the actors, at least, all the foreground and mid-ground would be in-camera.” Giant LED screens were used to get authentic interactive lighting from the advertising holograms on the city streets.

One way in which the lighting of the two Blade Runner movies is undeniably similar is the use of moving light sources to suggest an exciting world continuing off camera. (The infamous lens flares of J.J. Abrams’ Star Trek served the same purpose, illustrating Blade Runner’s powerful influence on the science fiction genre.) But whereas, in the original film, the roving searchlights pierce the locations sporadically and intrusively, the dynamic lights of Blade Runner 2049 continually remodel the actors’ faces. One moment a character is in mysterious backlight, the next in sinister side-light, and the next in revealing front-light – inviting the audience to reassess who these characters are at every turn.

This obfuscation and transience of identity and motivation permeates the whole film, and is its core visual theme. The 1982 Blade Runner was a deliberate melding of sci-fi and film noir, but to me the sequel does not feel like noir at all. Here there is little hard illumination, no binary division of light and dark. Instead there is insidious soft light, caressing the edge of a face here, throwing a silhouette there, painting everyone on a continuous (and continuously shifting) spectrum between reality and artificiality.

Blade Runner 2049 is a much deeper and more subtle film than its predecessor, and Deakins’ cinematography beautifully reflects this.


Grading “Above the Clouds”

Recently work began on colour grading Above the Clouds, a comedy road movie I shot for director Leon Chambers. I’ve covered every day of shooting here on my blog, but the story wouldn’t be complete without an account of this crucial stage of postproduction.

I must confess I didn’t give much thought to the grade during the shoot, monitoring in Rec.709 and not envisaging any particular “look”. So when Leon asked if I had any thoughts or references to pass on to colourist Duncan Russell, I had to put my thinking cap on. I came up with a few different ideas and met with Leon to discuss them. The one that clicked with his own thoughts was a super-saturated vintage postcard (above). He also liked how, in a frame grab I’d been playing about with, I had warmed up the yellow of the car – an important character in the movie!

Leon was keen to position Above the Clouds’ visual tone somewhere between the grim reality of a typical British drama and the high-key gloss of Hollywood comedies. Finding exactly the right spot on that wide spectrum was the challenge!

“Real but beautiful” was Duncan’s mantra when Leon and I sat down with him last week for a session in Freefolk’s Baselight One suite. He pointed to the John Lewis “Tiny Dancer” ad as a good touchstone for this approach.

We spent the day looking at the film’s key sequences. There was a shot of Charlie, Oz and the Yellow Peril (the car) outside the garage from week one which Duncan used to establish a look for the three characters. It’s commonplace nowadays to track faces and apply individual grades to them, making it possible to fine-tune skin-tones with digital precision. I’m pleased that Duncan embraced the existing contrast between Charlie’s pale, freckled innocence and Oz’s dirty, craggy world-weariness.

Above the Clouds was mainly shot on an Alexa Mini, in Log C ProRes 4444, so there was plenty of detail captured beyond the Rec.709 image that I was (mostly) monitoring. A simple example of this coming in useful is the torchlight charity shop scene, shot at the end of week two. At one point Leo reaches for something on a shelf and his arm moves right in front of his torch. Power-windowing Leo’s arm, Duncan was able to bring back the highlight detail, because it had all been captured in the Log C.
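For the curious, this is what “captured in the Log C” means in practice: the camera squeezes a wide range of scene brightness into code values with a logarithmic curve, and the grade can unsqueeze it. Below is a sketch of the decode back to linear light using the constants from Arri’s published Log C (EI 800) white paper, quoted from memory, so check them against the white paper before using this for anything real:

```python
# Arri Log C (v3, EI 800) constants from Arri's white paper -- treat
# these as assumptions and verify before any real use.
CUT, A, B, C, D, E, F = (0.010591, 5.555556, 0.052272,
                         0.247190, 0.385537, 5.367655, 0.092809)

def logc_to_linear(t: float) -> float:
    """Map a Log C code value (0-1) back to linear scene exposure."""
    if t > E * CUT + F:
        return (10 ** ((t - D) / C) - B) / A  # logarithmic segment
    return (t - F) / E                        # linear toe

# Middle grey (0.18 linear) encodes to a code value of roughly 0.39,
# leaving lots of room above it for highlights -- the detail Duncan
# could pull back down in the torch shot.
print(logc_to_linear(0.391))  # ~0.18
```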

But just because all the detail is there, it doesn’t mean you can always use it. Take the gallery scenes, also shot in week two, at the Turner Contemporary in Margate. The location has large sea-view windows and white walls. Many of the key shots featured Oz and Charlie with their backs towards the windows. This is a classic contrasty situation, but I knew from checking the false colours in log mode that all the detail was being captured.

Duncan initially tried to retain all the exterior detail in the grade, by separating the highlights from the mid-tones and treating them differently. He succeeded, but it didn’t look real. It looked like Oz and Charlie were green-screened over a separate background. Our subconscious minds know that a daylight exterior cannot be only slightly brighter than an interior, so it appeared artificial. It was necessary to back off on the sky detail to keep it feeling real. (Had we been grading in HDR [High Dynamic Range], which may one day be the norm, we could theoretically have retained all the detail while still keeping it realistic. However, if what I’ve heard of HDR is correct, it may have been unpleasant for audiences to look at Charlie and Oz against the bright light of the window beyond.)

There were other technical challenges to deal with in the film as well. One was the infra-red problem we encountered with our ND filters during last autumn’s pick-ups, which meant that Duncan had to key out Oz’s apparently pink jacket and restore it to blue. Another was the mix of formats employed for the various pick-ups: in addition to the Alexa Mini, there was footage from an Arri Amira, a Blackmagic Micro Cinema Camera (BMMCC) and even a Canon 5D Mk III. Although the latter had an intentionally different look, the other three had to match as closely as possible.

A twilight scene set in a rural village contains perhaps the most disparate elements. Many shots were done day-for-dusk on the Alexa Mini in Scotland, at the end of week four. Additional angles were captured on the BMMCC in Kent a few months later, both day-for-dusk and dusk-for-dusk. This outdoor material continues directly into indoor scenes, shot on a set this February on the Amira. Having said all that, they didn’t match too badly at all, but some juggling was required to find a level of darkness that worked for the whole sequence while retaining consistency.

In other sequences, like the ones in Margate near the start of the film, a big continuity issue is the clouds. Given the film’s title, I always tried to frame in plenty of sky and retain detail in it, using graduated ND filters where necessary. Duncan was able to bring out, suppress or manipulate detail as needed, to maintain continuity with adjacent shots.

Consistency is important in a big-picture sense too. One of the last scenes we looked at was the interior of Leo’s house, from weeks two and three, for which Duncan hit upon a nice, painterly grade with a bit of mystery to it. The question is, does that jar with the rest of the movie, which is fairly light overall, and does it give the audience the right clues about the tone of the scene which will unfold? We may not know the answers until we watch the whole film through.

Duncan has plenty more work to do on Above the Clouds, but I’m confident it’s in very good hands. I will probably attend another session when it’s close to completion, so watch this space for that.

See all my Above the Clouds posts here, or visit the official website.


Lighting I Like: “Preacher”

Preacher is the subject of this week’s episode of Lighting I Like. I discuss two scenes, from the second episode of the second season, “Mumbai Sky Tower”, which demonstrate the over-the-top, comic-book style of the show.

Both seasons of Preacher can be seen on Amazon Video in the UK.

New episodes of Lighting I Like are released at 8pm BST every Wednesday. Next week I’ll look at two scenes from Broadchurch. Click here to see the playlist of all Lighting I Like episodes.


12 Tips for Better Instagram Photos

I joined Instagram last summer, after hearing DP Ed Moore say in an interview that his Instagram feed helps him get work. I can’t say that’s happened for me yet, but an attractive Instagram feed can’t do any creative freelancer any harm. And for photographers and cinematographers, it’s a great way to practise our skills.

The tips below are primarily aimed at people who are using a phone camera to take their pictures, but many of them will apply to all types of photography.

The particular challenge with Instagram images is that they’re usually viewed on a phone screen; they’re small, so they have to be easy for the brain to decipher. That means reducing clutter, keeping things bold and simple.

Here are twelve tips for putting this philosophy into practice. The examples are all taken from my own feed, and were taken with an iPhone 5, almost always using the HDR (High Dynamic Range) mode to get the best tonal range.

 

1. Choose your background carefully

The biggest challenge I find in taking snaps with my phone is the huge depth of field. This makes it critical to have a suitable, non-distracting background, because it can’t be thrown out of focus. In the pub photo below, I chose to shoot against the blank pillar rather than against the racks of drinks behind the bar, so that the beer and lens mug would stand out clearly. For the Lego photo, I moved the model away from a messy table covered in multi-coloured blocks to use a red-only tray as a background instead.

 

2. Find frames within frames

The Instagram filters all have a frame option which can be activated to give your image a white border, or a fake 35mm negative surround, and so on. An improvement on this is to compose your image so that it has a built-in frame. (I discussed frames within frames in a number of my recent posts on composition.)

 

3. Try symmetrical composition

To my eye, the square aspect ratio of Instagram is not wide enough for The Rule of Thirds to be useful in most cases. Instead, I find the most arresting compositions are central, symmetrical ones.

 

4. Consider shooting flat on

In cinematography, an impression of depth is usually desirable, but in a little Instagram image I find that two-dimensionality can sometimes work better. Such photos take on a graphical quality, like icons, which I find really interesting. The key thing is that 2D pictures are easier for your brain to interpret when they’re small, or when they’re flashing past as you scroll.

 

5. Look for shapes

Finding common shapes in a structure or natural environment can be a good way to make your photo catch the eye. In these examples I spotted an ‘S’ shape in the clouds and footpath, and an ‘A’ shape in the architecture.

 

6. Look for textures

Textures can add interest to your image. Remember the golden rule of avoiding clutter though. Often textures will look best if they’re very bold, like the branches of the tree against the misty sky here, or if they’re very close-up, like this cathedral door.

 

7. Shoot into the light

Most of you will not be lighting your Instagram pics artificially, so you need to be aware of the existing light falling on your subject. Often the strongest look is achieved by shooting towards the light. In certain situations this can create interesting silhouettes, but often there are enough reflective surfaces around to fill in the shadows so you can get the beauty of the backlight and still see the detail in your subject. You definitely need to be in HDR mode for this.

 

8. Look for interesting light

It’s also worth looking out for interesting light which may make a dull subject into something worth capturing. Nature provides interesting light every day at sunrise and sunset, so these are good times to keep an eye out for photo ops.

 

9. Use lens flare for interest

Photographers have been using lens flare to add an extra something to their pictures for decades, and certain science fiction movies have also been known to use (ahem) one or two. To avoid a flare being too overpowering, position your camera so as to hide part of the sun behind a foreground object. To get that anamorphic cinema look, wipe your finger vertically across your camera lens. The natural oils on your skin will cause a flare at 90° to the direction you wiped in. (Best not try this with that rented set of Master Primes though.)

 

10. Control your palette

Nothing gives an image a sense of unity and professionalism as quickly as a controlled colour palette. You can do this in-camera, like I did below by choosing the purple cushion to photograph the book on, or by adjusting the saturation and colour cast in the Photos app, as I did with the Canary Wharf image. For another example, see the Lego shot under point 3.

 

11. Wait for the right moment

Any good photographer knows that patience is a virtue. Waiting for pedestrians or vehicles to reach just the right spot in your composition before tapping the shutter can make the difference between a bold, eye-catching photo and a cluttered mess. In the below examples, I waited until the pedestrians (left) and the rowing boat and swans (right) were best placed against the background for contrast and composition before taking the shot.

 

12. Quality control

One final thing to consider: is the photo you’ve just taken worthy of your Instagram profile, or is it going to drag down the quality of your feed? If it’s not good, maybe you should keep it to yourself.

Check out my Instagram feed to see if you think I’ve broken this rule!


9 Fun Photic Facts from a 70-year-old Book

Shortly before Christmas, while browsing the secondhand books in the corner of an obscure Herefordshire garden centre, I came across a small blue hardback called The Tricks of Light and Colour by Herbert McKay. Published in 1947, the book covered almost every aspect of light you could think of, from the inverse square law to camouflage and optical illusions. What self-respecting bibliophile cinematographer could pass that up?

Here are some quite-interesting things about light which the book describes…

  

1. Spheres are the key to understanding the inverse square law.

Any cinematographer worth their salt will know that doubling a subject’s distance from a lamp will quarter their brightness; tripling their distance will cut their brightness to a ninth; and so on. This, of course, is the inverse square law. If you struggle to visualise this law and why it works the way it does, The Tricks of Light and Colour offers a good explanation.

[Think] of light being radiated from… a mere point. Light and heat are radiated in straight lines and in all directions [from this point]. At a distance of one foot from the glowing centre the whole quantity of light and heat is spread out over the surface of a sphere with a radius of one foot. At a distance of two feet from the centre it is spread over the surface of a sphere of radius two feet. Now to find an area we multiply two lengths; in the case of a sphere both lengths are the radius of the sphere. As both lengths are doubled the area is four times as great… We have the same amounts of light and heat spread over a sphere four times as great, and so the illumination and heating effect are reduced to a quarter as great.
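In lighting terms it is often handier to express the same law in stops. A quick sketch:

```python
import math

def relative_illuminance(d1: float, d2: float) -> float:
    """Factor by which illuminance changes moving from distance d1 to d2."""
    return (d1 / d2) ** 2

def stops_lost(d1: float, d2: float) -> float:
    """The same change expressed in stops (each stop halves the light)."""
    return 2 * math.log2(d2 / d1)

print(relative_illuminance(1, 2))  # 0.25: double the distance, quarter the light
print(stops_lost(1, 2))            # 2.0 stops
print(stops_lost(2, 3))            # ~1.17 stops: pushing a distant lamp
                                   # back costs far less than a near one
```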

 

2. Mirages are due to total internal reflection.

This is one of the things I dimly remember being taught in school, which this book has considerably refreshed me on. When light travels from one transparent substance to another, less dense, transparent substance, it bends towards the surface. This is called refraction, and it’s the reason that, for example, streams look shallower than they really are when viewed from the bank. If the first substance is very dense, or the light ray is approaching the surface at a glancing angle, the ray might not escape at all, instead bouncing back down. This is called total internal reflection, and it’s the science behind mirages.

The heated sand heats the air above it, and so we get an inversion of the density gradient: low density along the heated surface, higher density in the cooler air above. Light rays are turned down, and then up, so that the scorched and weary traveller sees an image of the sky, and the image looks like a pool of cool water on the face of the desert.
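The “glancing angle” condition has a precise threshold: Snell’s law gives a critical angle beyond which the ray cannot escape at all. A small sketch with textbook refractive indices:

```python
import math

def critical_angle(n_dense: float, n_rare: float) -> float:
    """Angle from the normal (degrees) beyond which a ray hitting a
    dense-to-rare boundary is totally internally reflected."""
    return math.degrees(math.asin(n_rare / n_dense))

print(critical_angle(1.33, 1.0))  # water -> air: ~48.8 degrees
print(critical_angle(1.52, 1.0))  # glass -> air: ~41.1 degrees
# In a mirage the density difference between hot and cool air is tiny,
# so only rays arriving at an extremely glancing angle are turned back,
# which is why the "pool" always appears far away, never at your feet.
```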

 

3. Pinhole images pop up in unexpected places.

Most of us have made a pinhole camera at some point in our childhood, creating an upside-down image on a tissue paper screen by admitting light rays through a tiny opening. Make the opening bigger and the image becomes a blur, unless you have a lens to focus the light, as in a “proper” camera or indeed our eyes. But the pinhole imaging effect can occur naturally too. I’ve sometimes lain in bed in the morning, watching images of passing traffic or flapping laundry on a line projected onto my bedroom ceiling through the little gap where the curtains meet at the top. McKay describes another example:

One of the prettiest examples of the effect may be seen under trees when the sun shines brightly. The ground beneath a tree may be dappled with circles of light, some of them quite bright… When we look up through the leaves towards the sun we may see the origin of the circles of light. We can see points of light where the sun shines through small gaps between the leaves. Each of these gaps acts in the same way as a pinhole: it lets through rays from the sun which produce an image of the sun on the ground below.
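You can even estimate the size of those circles, because each one is an image of the sun, which subtends roughly half a degree of sky. A rough sketch:

```python
import math

SUN_DIAMETER_DEG = 0.53  # the sun's average apparent width in the sky

def sun_image_diameter(gap_to_ground_m: float) -> float:
    """Approximate diameter (m) of a pinhole image of the sun cast by a
    leaf gap at the given height above the ground."""
    return gap_to_ground_m * math.radians(SUN_DIAMETER_DEG)

print(sun_image_diameter(5.0))   # ~0.046: gaps 5m up give ~5cm circles
print(sun_image_diameter(10.0))  # ~0.093: higher leaves, bigger circles
```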

 

4. The sun isn’t a point source.

“Shadows are exciting,” McKay enthuses as he opens chapter VI. They certainly are to a cinematographer. And this cinematographer was excited to learn something about the sun and its shadow which is really quite obvious, but which I had never considered before.

Look at the shadow of a wall. Near the base, where the shadow begins, the edge of the shadow is straight and sharp… Farther out, the edge of the shadow gets more and more fuzzy… The reason lies of course in the great sun itself. The sun is not a mere point of light, but a globe of considerable angular width.

The accompanying illustration shows how you would see all, part or none of the sun if you stood in a slightly different position relative to the hypothetical wall. The area where none of the sun is visible is of course in full shadow (umbra), and the area where the sun is partially visible is the fuzzy penumbra (the “almost shadow”).

  

5. Gravity bends light.

Einstein hypothesised that gravity could bend light rays, and observations during solar eclipses proved him right. Stars near to the eclipsed sun were seen to be slightly out of place, due to the huge gravitational attraction of the sun.

The effect is very small; it is too small to be observed when the rays pass a comparatively small body like the moon. We need a body like the sun, at whose surface gravity is 160 or 170 times as great as at the surface of the moon, to give an observable deviation…. The amount of shift depends on the apparent nearness of a star to the sun, that is, the closeness with which the rays of light from the star graze the sun. The effect of gravity fades out rapidly, according to the inverse square law, so that it is only near the sun that the effects can be observed.

 

6. Light helped us discover helium.

Sodium street-lamps are not the most pleasant of sources, because hot sodium vapour emits light in only two wave-lengths, rather than a continuous spectrum. Interestingly, cooler sodium vapour absorbs the same two wave-lengths. The same is true of other elements: they emit certain wave-lengths when very hot, and absorb the same wave-lengths when less hot. This little bit of science led to a major discovery.

The sun is an extremely hot body surrounded by an atmosphere of less highly heated vapours. White light from the sun’s surfaces passes through these heated vapours before it reaches us; many wave-lengths are absorbed by the sun’s atmosphere, and there is a dark line in the spectrum for each wave-length that has been absorbed. The thrilling thing is that these dark lines tell us which elements are present in the sun’s atmosphere. It turned out that the lines in the sun’s spectrum represented elements already known on the earth, except for one small group of lines which were ascribed to a hitherto undetected element. This element was called helium (from helios, the sun).

 

7. Moonlight is slightly too dim for colours.

Our retinas are populated by two different types of photoreceptors: rods and cones. Rods are much more sensitive than cones, and enable us to see in very dim light once they’ve had some time to adjust. But rods cannot see colours. This is why our vision is almost monochrome in dark conditions, even under the light of a full moon… though only just…

The light of the full moon is just about the threshold, as we say, of colour vision; a little lighter and we should see colours.

 

8. Magic hour can be longer than an hour.

We cinematographers often think of magic “hour” as being much shorter than an hour. When prepping for a dusk-for-night scene on The Little Mermaid, I used my light meter to measure the length of shootable twilight. The result was 20 minutes; after that, the light was too dim for our Alexas at 800 ISO and our Cooke S4 glass at T2. But how long after sunset is it until there is literally no light left from the sun, regardless of how sensitive your camera is? McKay has this to say…

Twilight is partly explained as an effect of diffusion. When the sun is below the horizon it still illuminates particles of dust and moisture in the air. Some of the scattered light is thrown down to the earth’s surface… Twilight ends when the sun is 17° or 18° below the horizon. At the equator [for example] the sun sinks vertically at the equinoxes, 15° per hour; so it sinks 17° in 1 hour 8 minutes.

 

9. Why isn’t green a primary colour in paint?

And finally, the answer to something that bugged me during my childhood. When I was a small child, daubing crude paintings of stick figures under cheerful suns, I was taught that the primary colours are red, blue and yellow. Later I learnt that the true primary colours, the additive colours of light, are red, blue and green. So why is it that green, a colour that cannot be created by mixing two other colours of light, can be created by mixing blue and yellow paints?

When white light falls on a blue pigment, the pigment absorbs reds and yellows; it reflects blue and also some green. A yellow pigment absorbs blue and violet; it reflects yellow, and also some red and green which are the colours nearest to it in the spectrum. When the two pigments are mixed it may be seen that all the colours are absorbed by one or other of the components except green.
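McKay’s explanation can be modelled crudely in code: each pigment reflects some fraction of each band of the spectrum, and a mixture reflects only what both pigments reflect, so the reflectances multiply. The numbers below are invented for illustration, shaped to match his description:

```python
# Toy reflectance fractions for five spectral bands (made-up values).
BANDS = ["violet", "blue", "green", "yellow", "red"]
blue_paint = [0.6, 0.9, 0.4, 0.1, 0.1]    # reflects blue, some green
yellow_paint = [0.1, 0.1, 0.4, 0.9, 0.6]  # reflects yellow, some green/red

# Subtractive mixing: a band survives only if BOTH pigments reflect it.
mixture = [b * y for b, y in zip(blue_paint, yellow_paint)]
for band, r in zip(BANDS, mixture):
    print(f"{band:7} {r:.2f}")
# Green comes out on top (0.16) while every other band is nearly
# extinguished -- which is why blue and yellow paints mix to green.
```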

 

If you’re interested in picking up a copy of The Tricks of Light and Colour yourself, there is one on Amazon at the time of writing, but it will set you back £35. Note that Herbert McKay is not to be confused with Herbert C. McKay, an American author who was writing books about stereoscopic photography at around the same time.


Lighting I Like: “Life on Mars”

This week’s edition of Lighting I Like focuses on a scene from Life on Mars, my all-time favourite TV show. Broadcast on the BBC in 2006 and 2007, this was a police procedural with a twist: John Simm’s protagonist D.I. Sam Tyler had somehow travelled back in time to the 1970s… or was he just in a coma imagining it all? Each week his politically correct noughties policing style would clash with the seventies “bang ’em up first, ask questions later” approach of Philip Glenister’s iconic Gene Hunt.

I must get around to doing a proper post on colour theory one of these days, but in the meantime, there’s a bit about colour contrast in this post. And you can read more about using practicals in this post.

I hope you enjoyed the show. The sixth and final episode goes out at the same time next week: 8pm GMT on Wednesday, and will feature perhaps the most stunning scene yet, from the Starz series Outlander. Subscribe to my YouTube channel to make sure you never miss an episode of Lighting I Like.


Lighting I Like: “Harry Potter and the Philosopher’s Stone”

The third episode of my YouTube cinematography series Lighting I Like is out now. This time I discuss a scene from the first instalment in the Harry Potter franchise, directed by Chris Columbus and photographed by John Seale, ACS, ASC.

 

You can find out more about the forest scene from Wolfman which I mentioned, either in the February 2010 issue of American Cinematographer if you have a subscription, or towards the bottom of this page on Cine Gleaner.

If you’re a fan of John Seale’s work, you may want to read my post “20 Facts About the Cinematography of Mad Max: Fury Road”.

To read about how I’ve tackled nighttime forest scenes myself, check out “Poor Man’s Process II” (Ren: The Girl with the Mark) and “Above the Clouds: Week Two”.

I hope you enjoyed the show. Episode four goes out at the same time next week: 8pm GMT on Wednesday, and will cover a scene from episode two of the lavish Netflix series The Crown. Subscribe to my YouTube channel to make sure you never miss an episode.
