Secondary Grades are Nothing New

Last week I posted an article I wrote a while back (originally for RedShark News), entitled “Why You Can’t Re-light Footage in Post”. You may detect that this article comes from a slightly anti-colourist place. I have been, for most of my career, afraid of grading – afraid of colourists ruining my images, indignant that my amazing material should even need grading. Arrogance? Ego? Delusion? Perhaps, but I suspect all DPs have felt this way from time to time.

I think I have finally started to let go of this fear and to understand the symbiotic relationship betwixt DP and colourist. As I mentioned a couple of weeks ago, one of the things I’ve been doing to keep myself occupied during the Covid-19 lockdown is learning to grade. This is so that I can grade the dramatic scenes in my upcoming lighting course, but also an attempt to understand a colourist’s job better. The course I’m taking is this one by Matthew Falconer on Udemy. At 31 hours, it takes some serious commitment to complete, commitment I fear I lack. But I’ve got through enough to have learnt the ins and outs of DaVinci Resolve, where to start when correcting an image, the techniques of primary and secondary grades, and how to use the scopes and waveforms. I would certainly recommend the course if you want to learn the craft.

As I worked my way through grading the supplied demo footage, I was struck by two similarities. Firstly, as I tracked an actor’s face and brightened it up, I felt like I was in the darkroom dodging a print. (Dodging involves blocking some of the light reaching a certain part of the image when making an enlargement from a film negative, resulting in a brighter patch.) Subtly lifting the brightness and contrast of your subject’s face can really help draw the viewer’s eye to the right part of the image, but digital colourists were hardly the first people to recognise this. Photographers have been dodging – and the opposite, burning – prints pretty much since the invention of the negative process almost 200 years ago.
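
Just to make the parallel concrete: a digital dodge is nothing more than a feathered mask and a gain. Here’s a toy sketch in Python (assuming OpenCV and NumPy are available; the file name and numbers are invented) – not how Resolve does it under the hood, just the principle:

```python
import cv2
import numpy as np

def dodge(image, centre, radius, lift=1.15):
    """Brighten a soft circular region - the digital equivalent of dodging a print."""
    h, w = image.shape[:2]
    mask = np.zeros((h, w), dtype=np.float32)
    cv2.circle(mask, centre, radius, 1.0, -1)          # hard-edged disc
    mask = cv2.GaussianBlur(mask, (0, 0), radius / 2)  # feather the edge
    out = image.astype(np.float32) * (1 + (lift - 1) * mask[..., None])
    return np.clip(out, 0, 255).astype(np.uint8)

frame = cv2.imread("frame.png")                       # hypothetical still
graded = dodge(frame, centre=(640, 360), radius=120)  # brighten the face region
cv2.imwrite("frame_dodged.png", graded)
```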

The second similarity struck me when I was drawing a power curve around an actor’s shirt in order to adjust its colour separately from the rest of the image. I was reminded of this image from Painting with Light, John Alton’s seminal 1949 work on cinematography…

The chin scrim is a U-shaped scrim… used to cut the light off hot white collars worn with black dinner jackets.

It’s hard for a modern cinematographer to imagine blocking static enough for such a scrim to be useful, or indeed a schedule generous enough to permit the setting-up of such an esoteric tool. But this was how you did a power window in 1949: in camera.

Sometimes I’ve thought that modern grading, particularly secondaries (which target only specific areas of the image), is unnecessary; after all, we got through a century of cinema just fine without them. But in a world where DPs don’t have the time to set up chin scrims, and can’t possibly expect a spark to follow an actor around with one, adding one in post is a great solution. Our cameras might have more dynamic range than 1940s film stock, meaning that that white collar probably won’t blow out, but we certainly don’t want it distracting the eye in the final grade.

Like I said in my previous post, what digital grading does so well are adjustments of emphasis. This is not to belittle the process at all. Those adjustments of emphasis make a huge difference. And while the laws of physics mean that a scene can’t feasibly be relit in post, they also mean that a chin scrim can’t feasibly follow an actor around a set, and you can’t realistically brighten an actor’s face with a follow spot.

What I’m trying to say is, do what’s possible on set, and do what’s impossible in post. This is how lighting and grading work in harmony.

Why You Can’t Re-light Footage in Post

The concept of “re-lighting in post” is one that has enjoyed a certain popularity amongst no-budget filmmakers, and which sometimes gets bandied around on much bigger sets as well. If there isn’t the time, the money or perhaps simply the will to light a scene well on the day, the flexibility of RAW recording and the power of modern grading software mean that the lighting can be completely changed in postproduction – or so the idea goes.

I can understand why it’s attractive. Lighting equipment can be expensive, and setting it up and finessing it is one of the biggest consumers of time on any set. The time of a single wizard colourist can seem appealingly cost-effective – especially on an unpaid, no-budget production! – compared with the money pit that is a crew, cast, location, catering, etc, etc. Delaying the pain until a little further down the line can seem like a no-brainer.

There’s just one problem: re-lighting footage is fundamentally impossible. To even talk about “re-lighting” footage demonstrates a complete misunderstanding of what photographing a film actually is.

This video, captured at a trillion frames per second, shows the transmission and reflection of light.

The word “photography” comes from Greek, meaning “drawing with light”. This is not just an excuse for pompous DPs to compare themselves with the great artists of the past as they “paint with light”; it is a concise explanation of what a camera does.

A camera can’t record a face. It can’t record a room, or a landscape, or an animal, or objects of any kind. The only thing a camera can record is light. All photographs and videos are patterns of light which the viewer’s brain reverse-engineers into a three-dimensional scene, just as our brains reverse-engineer the patterns of light on the retinae every moment of every day, to make sense of our surroundings.

The light from this object gets gradually brighter then gradually darker again – therefore it is a curved surface. There is light on the top of that nose but not on the underneath, so it must be sticking out. These oval surfaces are absorbing all the red and blue light and reflecting only green, so it must be plant life. Such are the deductions made continuously by the brain’s visual centre.
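
There is simple physics behind the first of those deductions. A matte surface obeys Lambert’s cosine law: its apparent brightness is proportional to the cosine of the angle between the surface normal and the light direction, so a smoothly turning surface produces a smooth brightness gradient. A toy Python illustration (nothing more than the cosine law evaluated at a few angles):

```python
import numpy as np

light = np.array([0.0, 0.0, 1.0])  # light coming straight at the viewer
for angle in np.radians([0, 30, 60, 90]):
    normal = np.array([np.sin(angle), 0.0, np.cos(angle)])
    # Lambert's cosine law: brightness falls with the normal/light angle
    print(f"{np.degrees(angle):3.0f} deg -> brightness {max(0.0, normal @ light):.2f}")
```

As the normal swings from facing the light to side-on, brightness slides smoothly from 1.00 down to 0.00 – exactly the gradient the brain reads as curvature.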

A compound lens for a prototype light-field camera by Adobe

To suggest that footage can be re-lit is to suggest that recorded light can somehow be separated from the underlying physical objects off which that light reflected. Now of course that is within the realms of today’s technology; you could analyse a filmed scene and build a virtual 3D model of it to match the footage. Then you could “re-light” this recreated scene, but it would be a hell of a lot of work and would, at best, occupy the Uncanny Valley.

Some day, perhaps some day quite soon, artificial intelligence will be clever enough to do this for us. Feed in a 2D video and the computer will analyse the parallax and light shading to build a moving 3D model to match it, allowing a complete change of lighting and indeed composition.

Volumetric capture is already a functioning technology, currently using a mix of infrared and visible-light cameras in an environment lit as flatly as possible for maximum information – like log footage pushed to its inevitable conclusion. By surrounding the subject with cameras, a moving 3D image results.

Sir David Attenborough getting his volume captured by Microsoft

Such rigs are a type of light-field imaging, a technology that reared its head a few years ago in the form of Lytro, with viral videos showing how depth of field and even camera angle (to a limited extent) could be altered with this seemingly magical system. But even Lytro was capturing light, albeit in a way that allowed for much more digital manipulation.

Perhaps movies will eventually be captured with some kind of radar-type technology, bouncing electromagnetic waves outside the visible spectrum off the sets and actors to build a moving 3D model. At that point the need for light will have been completely eliminated from the production process, and the job of the director of photography will be purely a postproduction one.

While I suspect most DPs would prefer to be on a physical set than hunched over a computer, we would certainly make the transition if that was the only way to retain meaningful authorship of the image. After all, most of us are already keen to attend grading sessions to ensure our vision survives postproduction.

The Lytro Illum 2015 CP+ by Morio – own work, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=38422894

But for the moment at least, lighting must be done on set; re-lighting after the fact is just not possible in any practical way. This is not to take away from the amazing things that a skilled colourist can do, but the vignettes, the split-toning, the power windows, the masking and the tracking – these are adjustments of emphasis.

A soft shadow can be added, but without 3D modelling it can never fall and move as a real shadow would. A face can be brightened, but the quality of light falling on it can’t be changed from soft to hard. The angle of that light can’t be altered. Cinematographers refer to a key-light as the “modelling” light for a reason: because it defines the 3D model which your brain reverse-engineers when it sees the image.

So if you’re ever tempted to leave the job of lighting to postproduction, remember that your footage is literally made of light. If you don’t take the time to get your lighting right, you might as well not have any footage at all.

Grading “Above the Clouds”

Recently work began on colour grading Above the Clouds, a comedy road movie I shot for director Leon Chambers. I’ve covered every day of shooting here on my blog, but the story wouldn’t be complete without an account of this crucial stage of postproduction.

I must confess I didn’t give much thought to the grade during the shoot, monitoring in Rec.709 and not envisaging any particular “look”. So when Leon asked if I had any thoughts or references to pass on to colourist Duncan Russell, I had to put my thinking cap on. I came up with a few different ideas and met with Leon to discuss them. The one that clicked with his own thoughts was a super-saturated vintage postcard (above). He also liked how, in a frame grab I’d been playing about with, I had warmed up the yellow of the car – an important character in the movie!

Leon was keen to position Above the Clouds’ visual tone somewhere between the grim reality of a typical British drama and the high-key gloss of Hollywood comedies. Finding exactly the right spot on that wide spectrum was the challenge!

“Real but beautiful” was Duncan’s mantra when Leon and I sat down with him last week for a session in Freefolk’s Baselight One suite. He pointed to the John Lewis “Tiny Dancer” ad as a good touchstone for this approach.

We spent the day looking at the film’s key sequences. There was a shot of Charlie, Oz and the Yellow Peril (the car) outside the garage from week one which Duncan used to establish a look for the three characters. It’s commonplace nowadays to track faces and apply individual grades to them, making it possible to fine-tune skin-tones with digital precision. I’m pleased that Duncan embraced the existing contrast between Charlie’s pale, freckled innocence and Oz’s dirty, craggy world-weariness.

Above the Clouds was mainly shot on an Alexa Mini, in Log C ProRes 4444, so there was plenty of detail captured beyond the Rec.709 image that I was (mostly) monitoring. A simple example of this coming in useful is the torchlight charity shop scene, shot at the end of week two. At one point Leo reaches for something on a shelf and his arm moves right in front of his torch. Power-windowing Leo’s arm, Duncan was able to bring back the highlight detail, because it had all been captured in the Log C.
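
To give a sense of how much highlight range the log curve holds, here’s a rough Python sketch of Arri’s published Log C (v3) decode formula at EI 800 – treat the numbers as illustrative. A Rec.709 display signal clips around code value 1.0, whereas decoding the same 0–1 Log C range recovers scene levels hundreds of times brighter than mid grey:

```python
# Published Alexa Log C (v3, EI 800) constants - illustrative only.
cut, a, b = 0.010591, 5.555556, 0.052272
c, d, e, f = 0.247190, 0.385537, 5.367655, 0.092809

def logc_to_linear(t):
    """Map a 0-1 Log C code value back to relative scene exposure (0.18 = mid grey)."""
    return (10 ** ((t - d) / c) - b) / a if t > e * cut + f else (t - f) / e

print(logc_to_linear(0.391))  # ~0.18, i.e. mid grey
print(logc_to_linear(1.0))    # ~55, far brighter than anything Rec.709 can show
```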

But just because all the detail is there, it doesn’t mean you can always use it. Take the gallery scenes, also shot in week two, at the Turner Contemporary in Margate. The location has large sea-view windows and white walls. Many of the key shots featured Oz and Charlie with their backs towards the windows. This is a classic contrasty situation, but I knew from checking the false colours in log mode that all the detail was being captured.

Duncan initially tried to retain all the exterior detail in the grade, by separating the highlights from the mid-tones and treating them differently. He succeeded, but it didn’t look real. It looked like Oz and Charlie were green-screened over a separate background. Our subconscious minds know that a daylight exterior cannot be only slightly brighter than an interior, so it appeared artificial. It was necessary to back off on the sky detail to keep it feeling real. (Had we been grading in HDR [High Dynamic Range], which may one day be the norm, we could theoretically have retained all the detail while still keeping it realistic. However, if what I’ve heard of HDR is correct, it may have been unpleasant for audiences to look at Charlie and Oz against the bright light of the window beyond.)

There were other technical challenges to deal with in the film as well. One was the infra-red problem we encountered with our ND filters during last autumn’s pick-ups, which meant that Duncan had to key out Oz’s apparently pink jacket and restore it to blue. Another was the mix of formats employed for the various pick-ups: in addition to the Alexa Mini, there was footage from an Arri Amira, a Blackmagic Micro Cinema Camera (BMMCC) and even a Canon 5D Mk III. Although the latter had an intentionally different look, the other three had to match as closely as possible.
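
For the curious, the principle behind that jacket fix looks something like this crude Python/OpenCV sketch – nothing like Duncan’s actual Baselight keyer, just the idea of qualifying one band of hues and shifting it:

```python
import cv2
import numpy as np

def requalify_hue(frame_bgr, target_hue, new_hue, tolerance=12):
    """Select pixels near one hue (a crude qualifier) and shift them to another."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV).astype(np.int16)
    h = hsv[..., 0]
    # circular hue distance (OpenCV's hue range is 0-179)
    dist = np.minimum(np.abs(h - target_hue), 180 - np.abs(h - target_hue))
    hsv[..., 0] = np.where(dist < tolerance, (h + new_hue - target_hue) % 180, h)
    return cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)

frame = cv2.imread("pickup_frame.png")                     # hypothetical frame
fixed = requalify_hue(frame, target_hue=170, new_hue=110)  # pink-ish back to blue-ish
```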

A twilight scene set in a rural village contains perhaps the most disparate elements. Many shots were done day-for-dusk on the Alexa Mini in Scotland, at the end of week four. Additional angles were captured on the BMMCC in Kent a few months later, both day-for-dusk and dusk-for-dusk. This outdoor material continues directly into indoor scenes, shot on a set this February on the Amira. Having said all that, they didn’t match too badly at all, but some juggling was required to find a level of darkness that worked for the whole sequence while retaining consistency.

In other sequences, like the ones in Margate near the start of the film, a big continuity issue is the clouds. Given the film’s title, I always tried to frame in plenty of sky and retain detail in it, using graduated ND filters where necessary. Duncan was able to bring out, suppress or manipulate detail as needed, to maintain continuity with adjacent shots.

Consistency is important in a big-picture sense too. One of the last scenes we looked at was the interior of Leo’s house, from weeks two and three, for which Duncan hit upon a nice, painterly grade with a bit of mystery to it. The question is, does that jar with the rest of the movie, which is fairly light overall, and does it give the audience the right clues about the tone of the scene which will unfold? We may not know the answers until we watch the whole film through.

Duncan has plenty more work to do on Above the Clouds, but I’m confident it’s in very good hands. I will probably attend another session when it’s close to completion, so watch this space for that.

See all my Above the Clouds posts here, or visit the official website.

9 Tips for Easier Sound Syncing

Colin Smith slates a shot on Stop/Eject. Photo: Paul Bednall

While syncing sound in an edit recently I came across a number of little mistakes that cost me time, so I decided to put together some on-set and off-set tips for smooth sound syncing.

On set: tips for the 2nd AC

  1. Get the slate and take numbers right. This means a dedicated 2nd AC (this American term seems to have supplanted the more traditional British clapper-loader), not just any old crew member grabbing the slate at the last minute.
  2. Get the date on the slate right. This can be very helpful for starting to match up sound and picture in a large project if other methods fail.
  3. Hold the slate so that your fingers are not covering any of the info on it.
  4. Make MOS (mute) shots very clear by holding the sticks with your fingers through them.
  5. Make sure the rest of the cast and crew appreciate the importance of being quiet while the slate and take number are read out. It’s a real pain for the editing department if the numbers can’t be heard over chit-chat and last-minute notes from the director.
  6. Speak clearly and differentiate any numbers that could be misheard, e.g. “slate one three” and “slate three zero” instead of the similar-sounding “slate thirteen” and “slate thirty”.
Rick Goldsmith slates a steadicam shot on Stop/Eject. Photo: Paul Bednall

For more on best slating practice, see my Slating 101 blog post.

Off set: tips for the DIT and assistant editor

  1. I recommend renaming both sound and video files to contain the slate and take number, but be sure to do this immediately after ingesting the material and on all copies of it. There is nothing worse than having copies of the same file with different names floating around.
  2. This should be obvious, but please, please, please sync your sound BEFORE starting to edit or I will hunt you down and kill you. No excuses.
  3. An esoteric one for any dinosaurs like me still using Final Cut 7: make sure you’ve set your project’s frame rate correctly (in Easy Setup) before importing your audio rushes. Otherwise FCP will assign them timecodes based on the wrong rate, leading to errors and sound falling out of sync if you ever need to relink your project’s media.
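
To see why the wrong rate causes havoc, consider that a timecode is just a frame count in disguise – the same timecode string represents different frame counts at different rates. A quick Python illustration:

```python
def timecode_to_frames(tc, fps):
    """Convert HH:MM:SS:FF timecode to an absolute frame count at a given rate."""
    h, m, s, f = (int(x) for x in tc.split(":"))
    return ((h * 60 + m) * 60 + s) * fps + f

print(timecode_to_frames("01:00:00:00", 25))  # 90000 frames at 25 fps
print(timecode_to_frames("01:00:00:00", 24))  # 86400 frames at 24 fps
# The same timecode is 3600 frames adrift if relinked at the wrong rate.
```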

Follow these guidelines and dual system sound will be painless – well, as painless as it can ever be!

5 Tips for Successful Pick-ups

Discussing the next set-up on the Ren pick-ups shoot with director Kate Madison. Photo: Michael Hudson

Recently I’ve been involved in pick-up shoots for a couple of projects I lensed last year: action-comedy feature The Gong Fu Connection and fantasy series Ren. Both pick-up shoots were strange experiences, featuring some very familiar aspects of the original shoot – locations, sets, costumes – but noticeably lacking others – certain actors, crew members and so on. The Ren pick-ups in particular were like re-living principal photography in microcosm, with stressful crowd shoots followed by more relaxed, smaller scenes and finally night shots with flaming arrows again!

A CTB-gelled Arrilite 1000 stands in for the 2.5K HMI used for backlight during principal photography on Ren! Photo: Michael Hudson

I’ve blogged previously about how a director/producer can prepare for pick-ups – by keeping certain key props and costumes, for example – but today I have a few thoughts from a DP’s perspective.

1. Keep a record of lighting plans. I have a pretty good memory for my lighting set-ups, but not everyone does, so keeping notes is a good idea. Your gaffer may even do this for you. I frequently use this blog as a means of recording lighting set-ups, and indeed tried to access it during the Ren pick-ups shoot but was foiled by dodgy wifi.

2. Keep camera logs. On a properly crewed shoot this will be the 2nd AC’s job. The logs should include at least the following info for each slate: lens, aperture, ASA, white balance and shutter angle. This can be useful in principal photography too, for example if you shoot the two parts of a shot-reverse at different ends of the day or on different days altogether, and need to make sure you use the same lens. (See the end of this post for a minimal example of such a log.)

Production assistant Claire Finn tends the brazier which provides smoke in the absence of the Artem smoke gun used during principal photography. Photo: Michael Hudson

3. Have the original scene handy when you shoot the pick-ups. Load the edit onto a laptop or tablet so that you can compare it on set to the new material you’re framing up.

4. Own a bit of lighting kit if you can. In the shed I have some battered old Arrilites and a few other bits and pieces of gear that have seen better days. On a proper shoot I would leave this at home and have the production hire much better kit. But for pick-ups, when there’s often no money left, this stuff can come in handy.

5. Keep gels. If you employ an unusual colour of gel during principal photography, try to keep a piece of it in case you need to revisit that lighting set-up in pick-ups. Production will have to pay for the gel once it’s been used anyway. On the Ren pick-ups shoot, after pulling all of my gels out of the plastic kitchen bin I keep them in, I was relieved to find that I still had two pieces of the Urban Sodium gel I used in the flaming arrows scene the first time around.

Urban Sodium gel provides the grungy orange light for the flaming arrows scene, just as it did last November. Photo: Hermes Contreras
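
Going back to point 2 for a moment, here’s one minimal way a camera log could be kept digitally – a Python sketch, with field names that are only my suggestion:

```python
import csv

FIELDS = ["slate", "take", "lens_mm", "aperture", "asa", "white_balance_k", "shutter_angle"]

with open("camera_log.csv", "a", newline="") as fh:
    writer = csv.DictWriter(fh, fieldnames=FIELDS)
    # writer.writeheader()  # uncomment the first time the file is created
    writer.writerow({"slate": 42, "take": 3, "lens_mm": 32, "aperture": "T2.8",
                     "asa": 800, "white_balance_k": 5600, "shutter_angle": 172.8})
```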

Converting Blackmagic Raw Footage to ProRes with After Effects

My 4K Blackmagic Production Camera

One of the big benefits of the Blackmagic cameras is their ability to shoot raw – lossless Cinema DNG files that capture an incredible range of detail. But encoding those files into a usable format for editing can be tricky, especially if your computer won’t run the processor-intensive DaVinci Resolve which ships with the camera.

You can usually turn to the Adobe Creative Suite when faced with intractable transcoding problems, and sure enough After Effects provides one solution for raw to ProRes conversion.

I’ll take you through it, step by step. Let’s assume you’ve been shooting on a Blackmagic Cinema Camera and you have some 2.5K raw shots which you want to drop into your edit timeline alongside 1080P ProRes 422 HQ material.

1. In After Effects’ launch window, select New Composition. A dialogue box will appear in which you can spec up your project. For this example, we’re going to choose the standard HDTV resolution of 1920×1080. It’s critical that you get your frame rate right, or your audio won’t sync. Click OK once you’ve set everything to your liking.

2. Now go to the File menu and select Import > File. Navigate to the raw material on your hard drive. The BMCC creates a folder for each raw clip, containing the individual Cinema DNG frames and a WAV audio file. Select the first DNG file in the folder and ensure that Camera Raw Sequence is ticked, then click OK.

3. You’ll then have the chance to do a basic grade on the shot – though with only the first frame to judge it by.

4. Use Import > File again to import the WAV audio file.

5. Your project bin should now contain the DNG sequence – shown as a single item – along with the WAV audio and the composition. Drag the DNG sequence into the main viewer window. Because the BMCC’s raw mode records at a resolution of 2.5K and you set your composition to 1080P, the image will appear cropped.

6. If necessary, zoom out (using the drop-down menu in the bottom left of the Composition window) so you can see the wireframe of the 2.5K image. Then click and drag the bottom right corner of that wireframe to shrink the image until it fits into the 1080P frame. Hold down shift while dragging to maintain the aspect ratio.

7. Drag the WAV audio onto the timeline, taking care to align it precisely with the video.

8. Go to Composition Settings in the Composition menu and alter the duration of the composition to match the duration of the clip (which you can see by clicking the DNG sequence in the project bin).

9. Go to the Composition menu again and select Add to Render Queue. The composition timeline will give way to the Render Queue tab.

10. Next to the words Output Module in the Render Queue, you’ll see a clickable Lossless setting (yellow and underlined). Click this to open the Output Module Settings.

11. In the Video Output section, click on Format Options… We’re going to pick ProRes 422 HQ, to match with the non-raw shots we hypothetically filmed. Click OK to close the Format Options.

12. You should now be back in Output Module Settings. Before clicking OK to close this, be sure to tick the Audio Output box to make sure you don’t end up with a mute clip. You should not need to change the default output settings of 48kHz 16-bit stereo PCM.

13. In the Render Queue tab, next to the words Output to you’ll see a clickable filename – the default is Comp1.mov. Click on this to bring up a file selector and choose where to save your ProRes file.

14. Click Render (top far right of the Render Queue tab). Now just sit back and wait for your computer to crunch the numbers.

I’d never used After Effects before this, so there are probably ways to streamline the process which I’m unaware of. Can anyone out there suggest any improvements to this workflow? Is it possible to automate a batch?
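
One avenue I haven’t tested: After Effects ships with a command-line renderer called aerender, which in principle could be driven from a script once the compositions are set up. A hypothetical Python sketch – the install path, the Output Module template name and the comp names are all assumptions:

```python
import subprocess
from pathlib import Path

# Typical Windows install location - adjust for your version and platform.
AERENDER = r"C:\Program Files\Adobe\Adobe After Effects\Support Files\aerender.exe"

def render_comp(project, comp_name, out_path):
    subprocess.run([
        AERENDER,
        "-project", str(project),
        "-comp", comp_name,
        "-OMtemplate", "ProRes422HQ",  # a user-saved Output Module template
        "-output", str(out_path),
    ], check=True)

for comp in ["Shot_001", "Shot_002", "Shot_003"]:  # hypothetical comp names
    render_comp(Path("bmcc_conversion.aep"), comp, Path(f"{comp}.mov"))
```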

Mixing Amelia’s Letter

Nico Metten works on the mix of Amelia’s Letter

Yesterday I attended the final sound mix for Amelia’s Letter, the short supernatural drama I directed last year for writer Steve Deery and producer Sophia Ramcharan. This is always one of my favourite parts of the filmmaking process; all the hard work of generating the material is done, and it’s just about arranging those materials in the right proportions to create a whole larger than the sum of its parts.

Mixing is harder the more tracks of sound you have. It took Neil Douek and me forever to wrangle the layers and layers of audio I’d laid into a decent mix for Soul Searcher (listen to a breakdown here), and The Dark Side of the Earth‘s pilot was a delicate balancing act with swordfight SFX, dialogue and a big orchestral score all going on at the same time (watch an interview with the mixer here). Stop/Eject, being quieter and less complex, was a breeze to mix (read the blog post here).

Amelia’s Letter was a little more complex than Stop/Eject, but not much. It was my third collaboration with gifted sound designer Henning Knoepfel, and my fourth with the equally gifted composer Scott Benzie, who both gave us excellent material to work with. In the pilot’s seat for the mix was Nico Metten of Picture Sound. Although I hadn’t worked with him before, he was very much in tune with what I wanted from the mix. In a nutshell, the brief was: make it scary.

If Amelia’s Letter succeeds, and I think it does, it should be by turns unnervingly scary and heart-breakingly sad. I did research the horror genre when I embarked on the project, but for the latter stages of preproduction and during the shoot (basically, whenever I was dealing with the actors) the important thing was that the characters worked and were empathetic; the sadness would naturally follow. I tried to avoid thinking of the film in horror terms at all during that stage.

Recording one last sound effect…

But once we got to post, it was time to start thinking about creeping out the audience, and downright scaring them. As the last stage in the audio chain, the mix needed to play a big part in this. Nico agreed, and had already added some extra creepy sounds by the time I arrived. As we went through, we added in more impacts to the jumpy moments, not forgetting to keep things quiet in the run-up to those moments to make them seem even louder by comparison.

Just as, during the picture edit, Tristan and I had been reminded of the power of NOT cutting, during the sound mix I was reminded of the power of subtracting sound, rather than always adding it. In a couple of key places we discovered that muting the first few bars of a music cue to let the SFX do the job made for much more impact when the music did come in.

But the mix wasn’t just about making it scary. The film climaxes with a sequence of flashbacks and revelations that was tricky to edit and still wasn’t quite doing what I wanted. It was only at the scoring and mixing stage that I was finally able to realise that a clear transition was needed halfway through the sequence; as I said to Nico, “At this point it needs to stop being scary and become sad.” In practice this meant dropping out the dissonant sounds and the ominous rumbling, even dropping out the ambience, and letting Scott’s beautifully sad music carry the rest of the scene.

It never ceases to amaze me how the story shines through in the end. You hack away at this lump of stone all through production and post, and at the end you’ve revealed a sculpture that – though in detail it may be different – follows all the important lines of the writer’s original blueprint.

Now begins the process of entering Amelia’s Letter into festivals…

Amelia’s Letter is a Stella Vision production in association with Pondweed Productions. Find out more at facebook.com/ameliasletter

Amelia’s Letter: The Edit Continues

Tristan, Steve and insufficient chairs.

A week after the test screening, I sat down with editor Tristan Ofield in a corner of Steve Deery’s book depot to take a final pass at Amelia’s Letter. Steve balanced on a pile of boxes beside us. Who says exec producers get all the luxury?

The main aim of the day was to make the film clearer. This became a fascinating exercise with notes from the test screening like, “I didn’t get that Barbara was a writer,” although she spends most of her screen-time sitting at a typewriter. How could we configure these images to more effectively tell the audience that Barbara is a writer, without the benefit of dialogue or ridiculous captions? And without showing her actually writing, because the whole crux of the film is that she’s suffering from writer’s block – and that needs to come across too. How? By really getting into the nuts and bolts of how motion picture editing tells a story, that’s how.

The previous evening I’d been watching 2 Reel Guys, a YouTube series about the creative filmmaking process. It’s incredibly cheesy, and a little bit soporific, but it does make some excellent points. Like how just two different shots can be edited together in three different ways for very different effects.

So how did we make it clearer that Barbara was a writer suffering from block? First, Tristan altered the scene to open on a shot of Barbara standing thoughtfully over the typewriter, with the machine dominant in frame. He held the shot for quite a while to let the audience take it all in. “A reminder of the power of not cutting,” he pointed out.

The Letter of Undue Importance

The second step was for us to really consider when to cut to the keyboard, or to the blank paper. The scene’s previous iteration had started on the blank paper, but I think that image failed to sink in for viewers, who were too busy trying to work out where they were and what was going on. Moving it later in the scene made it much more powerful.

It was also important not to cut to something else at the wrong time. There was a cutaway of a letter that had to be included somewhere for plot reasons, but I was convinced that if we showed that immediately before the typewriter CU then we would be telling the audience that Barbara was trying to compose a reply to the letter. Context is everything in editing. Put a different shot before or after a certain shot and you can completely change the meaning of that shot. By cutting to the letter as Barbara puts a teacup down next to it, Tristan was able to avoid it gaining undue importance.

Tristan’s got one of those proper, colour-coded editing keyboards. Cool.

Another big lesson/reminder of the day was: less is more. I had been feeling for a while that Amelia’s Letter had one too many layers of supernatural mystery. Would the film be clearer if one was removed?

Steve was sceptical, and understandably so. No writer loves having chunks of their material hacked out. But to his credit, he let Tristan and me try it. After watching this revised version through, all three of us were convinced it was the right decision. Everything else in the film had become stronger because this one thread had been removed. Minor characters gained more importance because they weren’t competing with the removed element, and major characters’ challenges and emotions shone through more clearly. And the audience would have a much better chance of solving the film’s two remaining mysteries without scratching their heads over the third one too.

At the end of the day, we left greatly satisfied with what we had accomplished. Soon Amelia’s Letter will enter the next phase of postproduction: sound design, music composition, grading and visual effects. Stay tuned.

Amelia’s Letter is written by Steven Deery, directed by me and produced by Sophia Ramcharan of Stella Vision Productions. Visit the Amelia’s Letter Facebook page.

The Visual Effects of The Abyss

It’s time for one of my occasional asides celebrating the world of traditional visual effects – miniatures, matte paintings, rear projection, stop motion and the like. For a film using all of those techniques, look no further than The Abyss (1989). Arguably James Cameron’s most underrated film, it can also be considered his most ambitious. Terminator 2 had bigger action scenes, Titanic had a bigger set and Avatar had more cutting-edge technology, but these all pale in comparison to the sheer difficulty of shooting so much material underwater.

The hour-long documentary Under Pressure makes the risks and challenges faced by Cameron and his crew very clear.

The Abyss won an Oscar for Best Visual Effects, and is remembered chiefly for the then-cutting-edge CG water tentacle. But it also ran the gamut of traditional effects techniques.

The film follows the crew of an experimental underwater drilling platform, led by Bud (Ed Harris), as they are roped into helping a team of navy divers, led by Lt. Coffey (Michael Biehn), investigate the sinking of a submarine. Underwater-dwelling aliens and Cold War tensions become involved, and soon an unhinged Coffey is setting off in a submersible to dispatch a nuke to the bottom of the Cayman Trench and blow up the extra-terrestrials.

When Bud and his wife Lindsey (Mary Elizabeth Mastrantonio) give chase in a second submersible, a visual effects tour de force ensues. The following methods were used to build the sequence:

  • Medium-wide shots of the actors in real submersibles shot in an abandoned power station that had been converted by the production into the world’s largest fresh-water filtered tank, equal in capacity to about eleven Olympic swimming pools.

  • Close-ups of the actors in a submersible mock-up on stage.

  • Over-the-shoulder shots of the actors in the submersible mock-up, with a rear projection screen outside the craft’s dome, showing miniature footage accomplished with…

  • Quarter-scale radio-controlled submarines, shot in a smaller tank. These miniatures were remarkably powerful and, due to the lights and batteries on board, weighed around 450lb (204kg). In order to see what they were doing, the operators were underwater as well, using sealed waterproof joysticks to direct the craft. The RC miniatures were used when the craft needed to collide with each other, or with the underwater landscape, and whenever the audience was not going to get a good look at the domes on the front of the submersibles and notice the lack of actors within.

One of the custom film projectors inserted into the miniature subs

  • Where a more controlled camera move was required, or the actors needed to be visible inside the subs, but it was not practical to shoot full-scale, motion control was used. This is the same technique used to shoot the spaceships in, for example, the original Star Wars trilogy: a computer-controlled camera moves around a static model (or vice versa), exposing film very slowly in order to maintain a large depth of field. The move is repeated several times for each vehicle under different lighting conditions, and all of the “passes” are then composited together on the optical printer in the desired ratios to achieve the final look. For The Abyss’s motion-control work, the illusion of being underwater was created with smoke. In shots featuring the submersibles’ robot arms, stop motion was employed to animate these appendages. But perhaps the neatest trick was in making the miniature subs appear to be inhabited: the models were fitted with tiny projectors which would throw pre-filmed footage of the actors onto a circular screen behind the dome.
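
Incidentally, the reason separate passes can simply be combined “in the desired ratios” is that light is additive. The digital equivalent is a weighted sum of the passes – a toy Python sketch with invented file names:

```python
import cv2
import numpy as np

# Weighted sum of separately filmed lighting passes from the same locked-off
# motion-control move - the digital analogue of combining passes on an
# optical printer. File names and weights are invented for illustration.
passes = {"beauty_pass.png": 0.8, "practicals_pass.png": 1.0, "smoke_pass.png": 0.4}

final = None
for name, weight in passes.items():
    layer = cv2.imread(name).astype(np.float32) * weight
    final = layer if final is None else final + layer

cv2.imwrite("composite.png", np.clip(final, 0, 255).astype(np.uint8))
```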

The sub chase demonstrates perfectly how visual effects should work: mixing a range of techniques so that the audience never has time to figure out how each one is done, and using an appropriate technique for each individual shot so that you’re making things no more and no less complicated than necessary to tell that little piece of the story.

My favourite effect in the sequence is near the end, when the dome of Coffey’s sub cracks under the water pressure. This was filmed over the shoulder, using rear projection for the view outside the dome. But the dome was taken from a real submersible, and as such was too thick and too valuable to be genuinely cracked. So someone – whoever he or she is, an absolute genius – came up with the idea of using an arrangement of backlit sellotape on the dome to create the appearance of a crack. A flag was then set in front of the backlight, rendering the sellotape invisible. On cue, the flag was slid aside, gradually illuminating the “crack”.

Now that, my friends, is thinking outside the box.
