How Analogue Photography Can Make You a Better Cinematographer

With many of us looking for new hobbies to see us through the zombie apocalypse that is the Covid-19 lockdown, analogue photography may be the perfect one for an out-of-work DP. While few of us may get to experience the magic and discipline of shooting motion picture film, stills film is accessible to all. With a range of stocks on the market, bargain second-hand cameras on eBay, seemingly no end of vintage glass, and even home starter kits for processing your own images, there’s nothing to stop you giving it a go.

Since taking up 35mm and 120 photography again in 2018, I’ve found that they have had a positive impact on my digital cinematography. Here are five ways in which I think celluloid photography can help you sharpen your filmmaking skills too.

 

1. Thinking before you click

When you only have 36 shots on your roll and that roll cost you money, you suddenly have a different attitude to clicking the shutter. Is this image worthy of a place amongst those 36? If you’re shooting medium or large-format then the effect is multiplied. In fact, given that we all carry phone cameras with us everywhere we go, there has to be a pretty compelling reason to lug an SLR or view camera around. That’s bound to raise your game, making you think longer and harder about composition and content, to make every frame of celluloid a minor work of art.

 

2. Judging exposure

I know a gaffer who can step outside and tell you what f-stop the light is, using only his naked eye. This is largely because he is a keen analogue photographer. You can expose film by relying on your camera’s built-in TTL (through the lens) meter, but since you can’t see the results until the film is processed, analogue photographers tend to use other methods as well, or instead, to ensure a well-exposed negative. Rules like “Sunny Sixteen” (on a sunny day, set the aperture to f/16 and the shutter speed reciprocal to match the ISO, e.g. 1/200th of a second at ISO 200) and the use of handheld incident meters make you more aware of the light levels around you. A DP with this experience can get their lighting right more quickly.
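The arithmetic behind the rule is simple enough to sketch in a few lines. This is an illustration only: `sunny_16_shutter` is a hypothetical helper name, it assumes the textbook version of the rule (full sun, subject front-lit), and real metering should of course account for the actual conditions.

```python
import math

def sunny_16_shutter(iso, aperture=16.0):
    """Sunny 16 rule: in full sun at f/16, shutter speed is roughly 1/ISO
    seconds. Each stop wider than f/16 admits twice the light, so the
    exposure time must be halved to compensate."""
    stops_wider = 2 * math.log2(16.0 / aperture)  # stops relative to f/16
    return (1.0 / iso) / (2 ** stops_wider)

# ISO 200 at f/16 gives 1/200s; opening up two stops to f/8 gives 1/800s
```

Running the mental version of this calculation every time you raise a camera is exactly the habit that lets that gaffer read an f-stop with his naked eye.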

 

3. Pre-visualising results

We digital DPs can fall into the habit of not looking at things with our eyes, always going straight to the viewfinder or the monitor to judge how things look. Since the optical viewfinder of an analogue camera tells you little more than the framing, you tend to spend less time looking through the camera and more using your eye and your mind to visualise how the image will look. This is especially true when it comes to white balance, exposure and the distribution of tones across a finished print, none of which are revealed by an analogue viewfinder. Exercising your mind like this gives you better intuition and increases your ability to plan a shoot, through storyboarding, for example.

 

4. Grading

If you take your analogue ethic through to post production by processing and printing your own photographs, there is even more to learn. Although detailed manipulation of motion pictures in post is relatively new, people have been doctoring still photos pretty much since the birth of the medium in the mid-19th century. Discovering the low-tech origins of Photoshop’s dodge and burn tools to adjust highlights and shadows is a pure joy, like waving a magic wand over your prints. More importantly, although the printing process is quick, it’s not instantaneous like Resolve or Baselight, so you do need to look carefully at your print, visualise the changes you’d like to make, and then execute them. As a DP, this makes you more critical of your own work and as a colourist, it enables you to work more efficiently by quickly identifying how a shot can be improved.

 

5. Understanding

Finally, working with the medium which digital was designed to imitate gives you a better understanding of that imitation. It was only when I learnt about push- and pull-processing – varying the development time of a film to alter the brightness of the final image – that my understanding of digital ISO really clicked. Indeed, some argue that electronic cameras don’t really have ISO, that it’s just a simulation to help users from an analogue background to understand what’s going on. If all you’ve ever used is the simulation (digital), then you’re unlikely to grasp the concepts in the same way that you would if you’ve tried the original (analogue).
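To make the analogy concrete, here is a deliberately simplified toy model of digital ISO as a linear gain on the sensor signal, much as push-processing extends development to brighten the whole negative. The function name and the assumption of 0–1 normalised values are mine; real cameras combine analogue and digital gain in model-specific ways, and push-processing also affects contrast and grain, not just brightness.

```python
import numpy as np

def apply_iso_gain(raw, native_iso, target_iso):
    """Toy model: treat an ISO change as a simple linear gain on the
    normalised (0-1) sensor signal. Doubling the ISO is one stop, i.e.
    doubling the signal; anything pushed past 1.0 clips, just as an
    over-pushed highlight blocks up on film."""
    gain = target_iso / native_iso
    return np.clip(raw * gain, 0.0, 1.0)
```

Note how the 0.6 value below clips when “pushed” a stop — the digital cousin of a blown highlight on an over-developed negative.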


Secondary Grades are Nothing New

Last week I posted an article I wrote a while back (originally for RedShark News), entitled “Why You Can’t Relight Footage in Post”. You may detect that this article comes from a slightly anti-colourist place. I have been, for most of my career, afraid of grading – afraid of colourists ruining my images, indignant that my amazing material should even need grading. Arrogance? Ego? Delusion? Perhaps, but I suspect all DPs have felt this way from time to time.

I think I have finally started to let go of this fear and to understand the symbiotic relationship between DP and colourist. As I mentioned a couple of weeks ago, one of the things I’ve been doing to keep myself occupied during the Covid-19 lockdown is learning to grade. This is so that I can grade the dramatic scenes in my upcoming lighting course, but also an attempt to understand a colourist’s job better. The course I’m taking is this one by Matthew Falconer on Udemy. At 31 hours, it takes some serious commitment to complete, commitment I fear I lack. But I’ve got through enough to have learnt the ins and outs of DaVinci Resolve, where to start when correcting an image, the techniques of primary and secondary grades, and how to use the scopes and waveforms. I would certainly recommend the course if you want to learn the craft.

As I worked my way through grading the supplied demo footage, I was struck by two similarities. Firstly, as I tracked an actor’s face and brightened it up, I felt like I was in the darkroom dodging a print. (Dodging involves blocking some of the light reaching a certain part of the image when making an enlargement from a film negative, resulting in a brighter patch.) Subtly lifting the brightness and contrast of your subject’s face can really help draw the viewer’s eye to the right part of the image, but digital colourists were hardly the first people to recognise this. Photographers have been dodging – and the opposite, burning – prints pretty much since the invention of the negative process almost 200 years ago.
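The digital version of dodging described above boils down to applying a gain only where a matte says to. The sketch below assumes a normalised 0–1 image and a 0–1 mask (such as a tracked, feathered face matte); the function name `dodge` and the parameter names are my own invention, not any grading package’s API.

```python
import numpy as np

def dodge(image, mask, lift=1.2):
    """Digital equivalent of darkroom dodging: brighten only the masked
    region. `mask` runs 0-1 (a feathered matte blends smoothly from no
    adjustment to full adjustment) and `lift` > 1 brightens."""
    gain = 1.0 + (lift - 1.0) * mask  # 1.0 outside the mask, `lift` inside
    return np.clip(image * gain, 0.0, 1.0)
```

A half-weighted mask pixel gets half the lift, which is exactly the soft fall-off you get by keeping the dodging tool moving under the enlarger.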

The second similarity struck me when I was drawing a power curve around an actor’s shirt in order to adjust its colour separately from the rest of the image. I was reminded of this image from Painting with Light, John Alton’s seminal 1949 work on cinematography…

 

The chin scrim is a U-shaped scrim… used to cut the light off hot white collars worn with black dinner jackets.

It’s hard for a modern cinematographer to imagine blocking static enough for such a scrim to be useful, or indeed a schedule generous enough to permit the setting-up of such an esoteric tool. But this was how you did a power window in 1949: in camera.

Sometimes I’ve thought that modern grading, particularly secondaries (which target only specific areas of the image), is unnecessary; after all, we got through a century of cinema just fine without them. But in a world where DPs don’t have the time to set up chin scrims, and can’t possibly expect a spark to follow an actor around with one, adding one in post is a great solution. Our cameras might have more dynamic range than 1940s film stock, meaning that that white collar probably won’t blow out, but we certainly don’t want it distracting the eye in the final grade.

Like I said in my previous post, what digital grading does so well are adjustments of emphasis. This is not to belittle the process at all. Those adjustments of emphasis make a huge difference. And while the laws of physics mean that a scene can’t feasibly be relit in post, they also mean that a chin scrim can’t feasibly follow an actor around a set, and you can’t realistically brighten an actor’s face with a follow spot.

What I’m trying to say is, do what’s possible on set, and do what’s impossible in post. This is how lighting and grading work in harmony.


Why You Can’t Re-light Footage in Post

The concept of “re-lighting in post” is one that has enjoyed popularity amongst some no-budget filmmakers, and which sometimes gets bandied around on much bigger sets as well. If there isn’t the time, the money or perhaps simply the will to light a scene well on the day, the flexibility of RAW recording and the power of modern grading software mean that the lighting can be completely changed in postproduction, so the idea goes.

I can understand why it’s attractive. Lighting equipment can be expensive, and setting it up and finessing it is one of the biggest consumers of time on any set. The time of a single wizard colourist can seem appealingly cost-effective – especially on an unpaid, no-budget production! – compared with the money pit that is a crew, cast, location, catering, etc, etc. Delaying the pain until a little further down the line can seem like a no-brainer.

There’s just one problem: re-lighting footage is fundamentally impossible. To even talk about “re-lighting” footage demonstrates a complete misunderstanding of what photographing a film actually is.

This video, captured at a trillion frames per second, shows the transmission and reflection of light.

The word “photography” comes from Greek, meaning “drawing with light”. This is not just an excuse for pompous DPs to compare themselves with the great artists of the past as they “paint with light”; it is a concise explanation of what a camera does.

A camera can’t record a face. It can’t record a room, or a landscape, or an animal, or objects of any kind. The only thing a camera can record is light. All photographs and videos are patterns of light which the viewer’s brain reverse-engineers into a three-dimensional scene, just as our brains reverse-engineer the patterns of light on the retinae every moment of every day, to make sense of our surroundings.

The light from this object gets gradually brighter then gradually darker again – therefore it is a curved surface. There is light on the top of that nose but not on the underneath, so it must be sticking out. These oval surfaces are absorbing all the red and blue light and reflecting only green, so it must be plant life. Such are the deductions made continuously by the brain’s visual centre.

A compound lens for a prototype light-field camera by Adobe

To suggest that footage can be re-lit is to suggest that recorded light can somehow be separated from the underlying physical objects off which that light reflected. Now of course that is within the realms of today’s technology; you could analyse a filmed scene and build a virtual 3D model of it to match the footage. Then you could “re-light” this recreated scene, but it would be a hell of a lot of work and would, at best, occupy the Uncanny Valley.

Some day, perhaps some day quite soon, artificial intelligence will be clever enough to do this for us. Feed in a 2D video and the computer will analyse the parallax and light shading to build a moving 3D model to match it, allowing a complete change of lighting and indeed composition.

Volumetric capture is already a functioning technology, currently using a mix of infrared and visible-light cameras in an environment lit as flatly as possible for maximum information – like log footage pushed to its inevitable conclusion. By surrounding the subject with cameras, a moving 3D image results.

Sir David Attenborough getting his volume captured by Microsoft

Such rigs are a type of light-field imaging, a technology that reared its head a few years ago in the form of Lytro, with viral videos showing how depth of field and even camera angle (to a limited extent) could be altered with this seemingly magical system. But even Lytro was capturing light, albeit in a way that allowed for much more digital manipulation.

Perhaps movies will eventually be captured with some kind of Radar-type technology, bouncing electromagnetic waves outside the visible spectrum off the sets and actors to build a moving 3D model. At that point the need for light will have been completely eliminated from the production process, and the job of the director of photography will be purely a postproduction one.

While I suspect most DPs would prefer to be on a physical set than hunched over a computer, we would certainly make the transition if that was the only way to retain meaningful authorship of the image. After all, most of us are already keen to attend grading sessions to ensure our vision survives postproduction.

The Lytro Illum 2015 CP+ by Morio – own work, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=38422894

But for the moment at least, lighting must be done on set; re-lighting after the fact is just not possible in any practical way. This is not to take away from the amazing things that a skilled colourist can do, but the vignettes, the split-toning, the power windows, the masking and the tracking – these are adjustments of emphasis.

A soft shadow can be added, but without 3D modelling it can never fall and move as a real shadow would. A face can be brightened, but the quality of light falling on it can’t be changed from soft to hard. The angle of that light can’t be altered. Cinematographers refer to a key-light as the “modelling” light for a reason: because it defines the 3D model which your brain reverse-engineers when it sees the image.

So if you’re ever tempted to leave the job of lighting to postproduction, remember that your footage is literally made of light. If you don’t take the time to get your lighting right, you might as well not have any footage at all.


Camerimage 2017: Wednesday

This is the third and final part of my report from my time at Camerimage, the Polish film festival focused on cinematography. Read part one here and part two here.

 

Up.Grade: Human Vision & Colour Pipelines

I thought I would be one of the few people who would be bothered to get up and into town for this technical 10:15am seminar. But to the surprise of both myself and the organisers, the auditorium of the MCK Orzeł was once again packed – though I’d learnt to arrive in plenty of time to grab a ticket.

Up.grade is an international colour grading training programme. Their seminar was divided into two distinct halves: the first was a fascinating explanation of how human beings perceive colour, by Professor Andrew Stockman; the second was a basic overview of colour pipelines.

Prof. Stockman’s presentation – similar to his TED video above – had a lot of interesting nuggets about the way we see. Here are a few:

  • Our eyes record very little colour information compared with luminance info. You can blur the chrominance channel of an image considerably without seeing much difference; not so with the luminance channel.
  • Light hitting a rod or cone (the sensor cells in our retinae) straightens the twist around a carbon double bond in a retinal molecule. It’s a binary (on/off) response, and it’s the same response for any frequency of light. It’s just that red, green and blue cones have different probabilities of absorbing different frequencies.
  • There are no blue cones in the centre of the fovea (the part of the retina responsible for detailed vision) because blue wavelengths would be out of focus due to the terrible chromatic aberration of our eyes’ lenses.
  • Data from the rods and cones is compressed in the retina to fit the bandwidth which the optic nerve can handle.
  • Metamers are colours that look the same but are created differently. For example, light with a wavelength of 575nm is perceived as yellow, but a mixture of 670nm (red) and 540nm (green) is also perceived as yellow, because the red and green cones are triggered in the same way in both scenarios. (Isn’t that weird? It’s like being unable to hear the difference between the note D and a combination of the notes C and E. It just goes to show how unreliable our senses really are.)
  • Our perception of colour changes according to its surroundings and the apparent colour of the lighting – a phenomenon perfectly demonstrated by the infamous white-gold/blue-black dress.
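The metamer point in the list above is really a statement about linear algebra: two different spectra can produce identical cone excitations. Here is a toy demonstration with invented sensitivity numbers, chosen purely so the sums come out equal — real values come from measured human cone fundamentals, and the dictionaries and function names here are hypothetical.

```python
# Invented cone sensitivities, for illustration only.
# sens[cone][wavelength_nm] = relative probability a photon is absorbed.
sens = {
    "L": {575: 0.6, 670: 0.4, 540: 0.2},  # "red" cones
    "M": {575: 0.5, 670: 0.0, 540: 0.5},  # "green" cones
}

def cone_response(cone, spectrum):
    """Total excitation = sum over wavelengths of power x absorption probability."""
    return sum(power * sens[cone][wl] for wl, power in spectrum.items())

pure_yellow = {575: 1.0}               # a single 575nm source
red_green_mix = {670: 1.0, 540: 1.0}   # a 670nm + 540nm mixture
```

With these (fabricated) sensitivities, both stimuli excite the L and M cones identically, so the brain has no way to tell them apart — the two spectra are metamers.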

All in all, very interesting and well worth getting out of bed for!

At the end of the seminar I caught up with fellow DP Laura Howie, and her friend Ben, over coffee and cake. Then I sauntered leisurely to the Opera Nova and navigated the labyrinthine route to the first-floor lecture theatre, where I registered for the imminent Arri seminar.

 

Arri Seminar: International Support Programme

After picking up my complimentary Arri torch, which was inexplicably disguised as a pen, I bumped into Chris Bouchard. Neither of us held high hopes that the Support Programme would be relevant to us, but we thought it was worth getting the lowdown just in case.

Shooting “Kolkata”

The Arri International Support Programme (ISP) is a worldwide scheme to provide emerging filmmakers with sponsored camera/lighting/grip equipment, postproduction services, and in some cases co-production or sales deals as well. Mandy Rahn, the programme’s leader, explained that it supports young people (though there is no strict age limit) making their first, second or third feature in the $500,000-$5,000,000 budget range. They support both drama and documentary, but not short-form projects, which ruled out any hopes I might have had that it could be useful for Ren: The Girl with the Mark.

Having noted these key details, Chris and I decided to duck out and head elsewhere. While Chris checked out some cameras on the Canon stand, I had a little chat with the reps from American Cinematographer about some possible coverage of The Little Mermaid. We then popped over to the MCK and caught part of a Canon seminar, including a screening of the short documentary Kolkata. Shortly we were treading the familiar path back to the Opera Nova and the first-floor lecture theatre for a Kodak-sponsored session with Ed Lachman, ASC, only to find it had been cancelled for reasons unknown.

 

Red Seminar: High Resolution Image Processing Pipeline

Next on our radar was a Red panel. I wasn’t entirely sure if I could handle another high resolution seminar, but I suggested we return once more to the MCK anyway and relax in the bar with one eye on the live video feed. Unfortunately we got there to find that the monitors had disappeared, so we had to go into the auditorium, where it was standing room only.

“GLOW” – DP: Christian Sprenger

Light Iron colourist Ian Vertovec was talking about his experience grading the Netflix series GLOW, a highly enjoyable comedy-drama set behind the scenes of an eighties female wrestling show. Netflix wanted the series delivered in high dynamic range (HDR) and wide colour gamut (WCG), of a spec so high that no screens are yet capable of displaying it. In fact Vertovec graded in P3 (the colour space used for cinema projection) which was then mapped to Netflix’s higher specs for delivery. The Rec.709 (standard gamut) version was automatically created from the P3 grade by Dolby Vision software which analysed the episodes frame by frame. Netflix streams a 4,000 NIT signal to all viewers, which is then down-converted live (using XML data also generated by the Dolby Vision software) to 100, 650 or 1,000 NITs depending on their display. In theory this should provide a consistent image across all screens.

Vertovec demonstrated his image pipeline for GLOW: multi-layer base grade, halation pass, custom film LUT, blur/sharp pass, grain pass. The aim was to get the look of telecined film. The halation pass involved making a copy of the image, keying out all but the highlights, blurring those highlights and layering them back on top of the original footage. I used to do a similar thing to soften Mini-DV footage back in the day!
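That halation trick — copy, key out all but the highlights, blur, layer back on top — is easy to sketch. The version below is my own rough approximation, not Light Iron’s pipeline: it assumes a normalised 0–1 image, and it substitutes a crude neighbour-averaging blur for whatever filter a real grade would use.

```python
import numpy as np

def box_blur(img, passes=3):
    """Crude blur: repeatedly average each pixel with its four neighbours."""
    for _ in range(passes):
        img = (img + np.roll(img, 1, 0) + np.roll(img, -1, 0)
                   + np.roll(img, 1, 1) + np.roll(img, -1, 1)) / 5.0
    return img

def halation_pass(image, threshold=0.8, strength=0.5):
    """Copy the image, key out all but the highlights, blur that copy,
    and layer it back over the original as a soft glow."""
    highlights = np.where(image > threshold, image, 0.0)  # key the highlights
    glow = box_blur(highlights)                           # soften the copy
    return np.clip(image + strength * glow, 0.0, 1.0)     # layer it back on
```

Bright areas bleed softly into their surroundings, which is essentially what halation does inside a film emulsion — and much the same trick I used to soften Mini-DV footage.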

An interesting point was made about practicals in HDR. If you have an actor in front of or close to a practical lamp in frame, it’s a delicate balancing act to get them bright enough to look real, yet not so bright that it hurts your eyes to look at the actor with a dazzling lamp next to them. When practicals are further away from your cast they can be brighter because your eye will naturally track around them as in real life.

Next up was Dan Duran from Red, who explained a new LUT that is being rolled out across their cameras. Most of this went in one ear and out the other!

 

“Breaking Bad”

Afterwards, Chris and I returned to Kung Fusion for another delicious dinner. The final event of the day which I wanted to catch was Breaking Bad‘s pilot episode, screening at Bydgoszcz’s Vue multiplex as part of the festival’s John Toll retrospective. Having binged the entire series relatively recently, I loved seeing the very first episode again – especially on the big screen – with the fore-knowledge of where the characters would end up.

Later Chris introduced me to DP Sebastian Cort, and the three of us decided to try our luck at getting into the Panavision party. We snuck around the back of the venue and into one of the peripheral buildings, only to be immediately collared by a bouncer and sent packing!

This ignoble failure marked the end of my Camerimage experience, more or less. After another drink or two at Cheat we called it a night, and I was on an early flight back to Stansted the next morning. I met some interesting people and learnt a lot from the seminars. There were some complaints that the festival was over-subscribed, and indeed – as I have described – you had to be quick off the mark to get into certain events, but that was pretty much what I had been expecting. I certainly won’t be put off attending again in the future.

To learn more about two of the key issues raised at this year’s Camerimage, check out my Red Shark articles:


Grading “Above the Clouds”

Recently work began on colour grading Above the Clouds, a comedy road movie I shot for director Leon Chambers. I’ve covered every day of shooting here on my blog, but the story wouldn’t be complete without an account of this crucial stage of postproduction.

I must confess I didn’t give much thought to the grade during the shoot, monitoring in Rec.709 and not envisaging any particular “look”. So when Leon asked if I had any thoughts or references to pass on to colourist Duncan Russell, I had to put my thinking cap on. I came up with a few different ideas and met with Leon to discuss them. The one that clicked with his own thoughts was a super-saturated vintage postcard (above). He also liked how, in a frame grab I’d been playing about with, I had warmed up the yellow of the car – an important character in the movie!

Leon was keen to position Above the Clouds‘ visual tone somewhere between the grim reality of a typical British drama and the high-key gloss of Hollywood comedies. Finding exactly the right spot on that wide spectrum was the challenge!

“Real but beautiful” was Duncan’s mantra when Leon and I sat down with him last week for a session in Freefolk’s Baselight One suite. He pointed to the John Lewis “Tiny Dancer” ad as a good touchstone for this approach.

We spent the day looking at the film’s key sequences. There was a shot of Charlie, Oz and the Yellow Peril (the car) outside the garage from week one which Duncan used to establish a look for the three characters. It’s commonplace nowadays to track faces and apply individual grades to them, making it possible to fine-tune skin-tones with digital precision. I’m pleased that Duncan embraced the existing contrast between Charlie’s pale, freckled innocence and Oz’s dirty, craggy world-weariness.

Above the Clouds was mainly shot on an Alexa Mini, in Log C ProRes 4444, so there was plenty of detail captured beyond the Rec.709 image that I was (mostly) monitoring. A simple example of this coming in useful is the torchlight charity shop scene, shot at the end of week two. At one point Leo reaches for something on a shelf and his arm moves right in front of his torch. Power-windowing Leo’s arm, Duncan was able to bring back the highlight detail, because it had all been captured in the Log C.

But just because all the detail is there, it doesn’t mean you can always use it. Take the gallery scenes, also shot in week two, at the Turner Contemporary in Margate. The location has large sea-view windows and white walls. Many of the key shots featured Oz and Charlie with their backs towards the windows. This is a classic contrasty situation, but I knew from checking the false colours in log mode that all the detail was being captured.

Duncan initially tried to retain all the exterior detail in the grade, by separating the highlights from the mid-tones and treating them differently. He succeeded, but it didn’t look real. It looked like Oz and Charlie were green-screened over a separate background. Our subconscious minds know that a daylight exterior cannot be only slightly brighter than an interior, so it appeared artificial. It was necessary to back off on the sky detail to keep it feeling real. (Had we been grading in HDR [High Dynamic Range], which may one day be the norm, we could theoretically have retained all the detail while still keeping it realistic. However, if what I’ve heard of HDR is correct, it may have been unpleasant for audiences to look at Charlie and Oz against the bright light of the window beyond.)

There were other technical challenges to deal with in the film as well. One was the infra-red problem we encountered with our ND filters during last autumn’s pick-ups, which meant that Duncan had to key out Oz’s apparently pink jacket and restore it to blue. Another was the mix of formats employed for the various pick-ups: in addition to the Alexa Mini, there was footage from an Arri Amira, a Blackmagic Micro Cinema Camera (BMMCC) and even a Canon 5D Mk III. Although the latter had an intentionally different look, the other three had to match as closely as possible.

A twilight scene set in a rural village contains perhaps the most disparate elements. Many shots were done day-for-dusk on the Alexa Mini in Scotland, at the end of week four. Additional angles were captured on the BMMCC in Kent a few months later, both day-for-dusk and dusk-for-dusk. This outdoor material continues directly into indoor scenes, shot on a set this February on the Amira. Having said all that, they didn’t match too badly at all, but some juggling was required to find a level of darkness that worked for the whole sequence while retaining consistency.

In other sequences, like the ones in Margate near the start of the film, a big continuity issue is the clouds. Given the film’s title, I always tried to frame in plenty of sky and retain detail in it, using graduated ND filters where necessary. Duncan was able to bring out, suppress or manipulate detail as needed, to maintain continuity with adjacent shots.

Consistency is important in a big-picture sense too. One of the last scenes we looked at was the interior of Leo’s house, from weeks two and three, for which Duncan hit upon a nice, painterly grade with a bit of mystery to it. The question is, does that jar with the rest of the movie, which is fairly light overall, and does it give the audience the right clues about the tone of the scene which will unfold? We may not know the answers until we watch the whole film through.

Duncan has plenty more work to do on Above the Clouds, but I’m confident it’s in very good hands. I will probably attend another session when it’s close to completion, so watch this space for that.

See all my Above the Clouds posts here, or visit the official website.


12 Tips for Better Instagram Photos

I joined this social media platform last summer, after hearing DP Ed Moore say in an interview that his Instagram feed helps him get work. I can’t say that’s happened for me yet, but an attractive Instagram feed can’t do any creative freelancer any harm. And for photographers and cinematographers, it’s a great way to practise our skills.

The tips below are primarily aimed at people who are using a phone camera to take their pictures, but many of them will apply to all types of photography.

The particular challenge with Instagram images is that they’re usually viewed on a phone screen; they’re small, so they have to be easy for the brain to decipher. That means reducing clutter, keeping things bold and simple.

Here are twelve tips for putting this philosophy into practice. The examples are all taken from my own feed, and were taken with an iPhone 5, almost always using the HDR (High Dynamic Range) mode to get the best tonal range.

 

1. Choose your background carefully

The biggest challenge I find in taking snaps with my phone is the huge depth of field. This makes it critical to have a suitable, non-distracting background, because it can’t be thrown out of focus. In the pub photo below, I chose to shoot against the blank pillar rather than against the racks of drinks behind the bar, so that the beer and lens mug would stand out clearly. For the Lego photo, I moved the model away from a messy table covered in multi-coloured blocks to use a red-only tray as a background instead.

 

2. Find frames within frames

The Instagram filters all have a frame option which can be activated to give your image a white border, or a fake 35mm negative surround, and so on. An improvement on this is to compose your image so that it has a built-in frame. (I discussed frames within frames in a number of my recent posts on composition.)

 

3. Try symmetrical composition

To my eye, the square aspect ratio of Instagram is not wide enough for The Rule of Thirds to be useful in most cases. Instead, I find the most arresting compositions are central, symmetrical ones.

 

4. Consider shooting flat on

In cinematography, an impression of depth is usually desirable, but in a little Instagram image I find that two-dimensionality can sometimes work better. Such photos take on a graphical quality, like icons, which I find really interesting. The key thing is that 2D pictures are easier for your brain to interpret when they’re small, or when they’re flashing past as you scroll.

 

5. Look for shapes

Finding common shapes in a structure or natural environment can be a good way to make your photo catch the eye. In these examples I spotted an ‘S’ shape in the clouds and footpath, and an ‘A’ shape in the architecture.

 

6. Look for textures

Textures can add interest to your image. Remember the golden rule of avoiding clutter though. Often textures will look best if they’re very bold, like the branches of the tree against the misty sky here, or if they’re very close-up, like this cathedral door.

 

7. Shoot into the light

Most of you will not be lighting your Instagram pics artificially, so you need to be aware of the existing light falling on your subject. Often the strongest look is achieved by shooting towards the light. In certain situations this can create interesting silhouettes, but often there are enough reflective surfaces around to fill in the shadows so you can get the beauty of the backlight and still see the detail in your subject. You definitely need to be in HDR mode for this.

 

8. Look for interesting light

It’s also worth looking out for interesting light which may make a dull subject into something worth capturing. Nature provides interesting light every day at sunrise and sunset, so these are good times to keep an eye out for photo ops.

 

9. Use lens flare for interest

Photographers have been using lens flare to add an extra something to their pictures for decades, and certain science fiction movies have also been known to use (ahem) one or two. To avoid a flare being too overpowering, position your camera so as to hide part of the sun behind a foreground object. To get that anamorphic cinema look, wipe your finger vertically across your camera lens. The natural oils on your skin will cause a flare at 90° to the direction you wiped in. (Best not try this with that rented set of Master Primes though.)

 

10. Control your palette

Nothing gives an image a sense of unity and professionalism as quickly as a controlled colour palette. You can do this in-camera, like I did below by choosing the purple cushion to photograph the book on, or by adjusting the saturation and colour cast in the Photos app, as I did with the Canary Wharf image. For another example, see the Lego shot under point 3.

 

11. Wait for the right moment

Any good photographer knows that patience is a virtue. Waiting for pedestrians or vehicles to reach just the right spot in your composition before tapping the shutter can make the difference between a bold, eye-catching photo and a cluttered mess. In the examples below, I waited until the pedestrians (left) and the rowing boat and swans (right) were best placed against the background for contrast and composition before taking the shot.

 

12. Quality control

One final thing to consider: is the photo you’ve just taken worthy of your Instagram profile, or is it going to drag down the quality of your feed? If it’s not good, maybe you should keep it to yourself.

Check out my Instagram feed to see if you think I’ve broken this rule!


Lighting I Like: “Harry Potter and the Philosopher’s Stone”

The third episode of my YouTube cinematography series Lighting I Like is out now. This time I discuss a scene from the first instalment in the Harry Potter franchise, directed by Chris Columbus and photographed by John Seale, ACS, ASC.

 

You can find out more about the forest scene from The Wolfman which I mentioned, either in the February 2010 issue of American Cinematographer if you have a subscription, or towards the bottom of this page on Cine Gleaner.

If you’re a fan of John Seale’s work, you may want to read my post “20 Facts About the Cinematography of Mad Max: Fury Road”.

To read about how I’ve tackled nighttime forest scenes myself, check out “Poor Man’s Process II” (Ren: The Girl with the Mark) and “Above the Clouds: Week Two”.

I hope you enjoyed the show. Episode four goes out at the same time next week (8pm GMT on Wednesday) and will cover a scene from episode two of the lavish Netflix series The Crown. Subscribe to my YouTube channel to make sure you never miss an episode.


Giving Yourself Somewhere to Go

On the recce for The Second Shepherds’ Play. Photo: Douglas Morse

As a cinematographer, it can often be tempting to make your shots look as slick and beautiful as possible. But that’s not always right for the story. And sometimes it can leave you nowhere to go.

Currently I’m shooting The Second Shepherds’ Play, a medieval comedy adaptation, for director Douglas Morse. The story starts in the mud and drizzle of three shepherds’ daily drudge, and in a Python-esque twist ends up in the nativity. The titular trio develop from a base, selfish, almost animalistic state to something much more divine.

So, much as my instincts filming the opening scenes yesterday were to use a shallow depth of field and bounce boards everywhere to put a sparkle in the shepherds’ eyes, that wouldn’t have been right for this stage of the film. We had to have somewhere to go, so I shot at around f/9 all day with unmodified natural, overcast light. As we get towards the end of the story – we’re shooting roughly in story order – I’ll start to use eyelight and more sculpted illumination and reduce the depth of field, as well as switching from handheld to sticks.

Grading episode one of Ren

Similarly, grading episode one of Ren the other day, it was important to keep things bright and cheerful, so that later episodes could be colder and darker by comparison when things go wrong for our heroes. And playing the long game, I lit Ren herself with soft, shadowless light for most of the first season, so that as she develops from innocence to more of an action heroine in later seasons, her lighting can get harder and moodier.

Like all heads of department on a production, DPs are storytellers, and it all comes down to doing what’s right for the story, and what’s right for that moment in the story.


Grading Stop/Eject

Grading Stop/Eject at Whitecross Post Production

On Monday, Stop/Eject entered the final phase of post-production: grading. In the optical days, this process meant adjusting the cocktail of chemicals, and the length of time the film was bathed in them, to make basic adjustments to the brightness and the amounts of red, green and blue in each shot.

The reason for this can be most readily appreciated if you imagine a scene shot outdoors, in which one camera angle may have been recorded under warm, direct sunlight, whereas another, cut immediately after it, may have been recorded when the sun was behind a cloud and the light was cooler. But even artificially lit scenes will need a little work to match each shot to the next; the human eye is quite sensitive to changes in colour and brightness.
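As a toy illustration of that matching process (and emphatically not how Stop/Eject was actually graded), the crudest fix is a per-channel gain that pulls one shot’s average colour towards the other’s. The averages below are invented:

```python
def match_gains(src_avg, ref_avg):
    """Per-channel (R, G, B) gains that bring the source shot's
    average colour in line with the reference shot's."""
    return tuple(ref / src for src, ref in zip(src_avg, ref_avg))

# A warm sunlit angle vs a cooler clouded-over one (made-up averages).
# Multiplying every pixel of the source by these gains nudges its
# overall colour balance towards the reference.
gains = match_gains(src_avg=(0.50, 0.45, 0.40), ref_avg=(0.50, 0.45, 0.45))
```

Real grading tools work in far more sophisticated colour spaces, but the principle of balancing channels between adjacent shots is the same.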

With the digital revolution came an exponentially expanded toolset for grading. Individual colours can be isolated and changed, shadows and highlights can be adjusted independently, and feathered masks can be applied to highlight or shade just one part of the frame – with the software even able to track elements as they move so that the mask always stays locked to them. (Watch the Digital Intermediate featurette on the Fellowship of the Ring DVD to get a glimpse of what’s possible.)
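The feathered-mask idea can be sketched in a few lines of plain Python. This is an invented, single-value illustration of the principle – a brightness gain that fades out smoothly at the mask’s edge – not the grading software’s actual implementation:

```python
def feathered_gain(pixel, dist, radius, feather, gain):
    """Apply a brightness gain that fades out smoothly ('feathers')
    between `radius` and `radius + feather` from the mask centre,
    so the adjustment blends invisibly into the rest of the frame."""
    if dist <= radius:
        weight = 1.0  # fully inside the mask: full gain
    elif dist >= radius + feather:
        weight = 0.0  # fully outside the mask: untouched
    else:
        weight = 1.0 - (dist - radius) / feather  # linear falloff
    # Blend between the graded and ungraded pixel value
    return pixel * (1.0 + (gain - 1.0) * weight)

# Inside the mask the pixel is brightened; well outside it is unchanged
inside = feathered_gain(0.5, dist=0, radius=10, feather=5, gain=1.2)
outside = feathered_gain(0.5, dist=20, radius=10, feather=5, gain=1.2)
```

Run per pixel (with `dist` measured from the mask centre), this is all a circular “power window” with a soft edge amounts to.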

Colourist Michael Stirling

Stop/Eject was graded by Michael Stirling at the company he runs in East London, Whitecross Post Production. He very kindly put in a lot of hours, including two evenings, to make sure we got the best out of the film’s images. Even at this late stage in the game we were still telling the story: drawing the viewer’s eye to critical elements in the frame, enhancing lighting transitions when time stops and starts, making the happy past sequences warm and inviting, and contrasting those with cooler, darker scenes in the present.

Given that we shot in 16:9 (standard widescreen) but were cropping to 2.35:1 (cinemascope), we had the opportunity to move the images up or down behind the widescreen mask to reframe shots slightly; this is known as re-racking. We also added some subtle 35mm grain to the whole film.
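For anyone curious about the numbers, the re-racking headroom is easy to work out. Assuming a 1920×1080 delivery frame (the post doesn’t specify the resolution), a quick sketch:

```python
def reracking_range(width, height, target_ratio):
    """Return (crop_height, travel) when masking a full frame down
    to a wider aspect ratio: the height of the visible band, and
    how many pixels of vertical reframing headroom remain."""
    crop_height = round(width / target_ratio)  # height of the visible band
    travel = height - crop_height              # pixels of up/down re-racking
    return crop_height, travel

# e.g. a 1920x1080 (16:9) frame masked to 2.35:1
print(reracking_range(1920, 1080, 2.35))  # (817, 263)
```

In other words, roughly a quarter of the frame’s height is hidden behind the mask, and every pixel of it is available for reframing in the grade.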

The grade was the first time – and perhaps the last – that I was able to see the film on a really high-quality, properly calibrated projector. It really exposed the quality of the camera and glass that were used. And while the Canon 600D and relatively cheap lenses look great on every other screen I’ve watched the film on, on Whitecross’s projector I could really see the value of investing in high-end lenses. Even the difference between the Canon 50mm f/1.8 (£60) and the more expensive Sigma 20mm and 105mm lenses was apparent.

But I digress. The important things are: the grade looks great, and Stop/Eject is now finished. Hooray!

Sophie and I are off to Cannes this weekend – subscribe to my YouTube channel to get our daily video blogs. And when we get back it’s film festival submissions, DVD/Blu-ray authoring, and premiere arranging all the way.


The Dark Side Guide to Digital Intermediate

Here is the last of the Dark Side Guides: The Dark Side Guide to Digital Intermediate. I really had to muddle my way through post-production on the pilot, wishing there was somewhere I could get all the information I needed, but there wasn’t – until now!

This step-by-step guide takes you through the complex post-production route known as DI, whereby footage shot on film is transferred to the digital domain for editing, FX and colour grading, before being recorded back to film for distribution and exhibition. Invaluable tips on everything from telecine of your rushes to Dolby authorisation for your soundtrack are complemented by a sample budget laying out all the costs.

As always, if you have any questions that the guide doesn’t answer, please feel free to ask me.
