LED lighting has found its way onto most sets now, but there is another off-shoot of LED technology which I see cropping up more and more in American Cinematographer articles. Sometimes it’s lighting, sometimes it’s a special effect, and often it’s both. I’m talking about LED screens: huge LED panels that, rather than emitting solid, constant light, display a moving image like a giant monitor.
I touched on LED Screens in my article about shooting on moving trains, and moving backgrounds do seem to be one of the most common uses for these screens. House of Cards has been in the news this week for all the wrong reasons, but it remains a useful example here. Production designer Steve Arnold describes the use of LED screens for car scenes in the political drama:
We had a camera crew go to Washington, D.C. to drive around and shoot plates for what you see outside when you’re driving. And that is fed into the LED screens above the car. So as the scene is progressing, the LED screens are synched up to emit interactive light to match the light conditions you see in the scenery you’re driving past (that will be added in post). All the reflections on the car windows, the window frames and door jambs are being shot while we’re shooting the actors in the car. Then in post the green screens are replaced with the synced up driving plates, and it works really well. It gives you the sense of light passing over the actors’ faces, matching the lighting that is in the image of the plate.
This appears to be the go-to method for shooting car scenes now, and more exotic forms of transport are using the technique as well. Rogue One employed “a massive array of WinVision Air 9mm LED panels” to create “an interactive hyperspace lighting effect” (American Cinematographer, February 2017).
Production designer Doug Chiang comments on the use of LED screens in the Death Star command centre:
We wanted to see things on the viewscreen where traditionally it would have been a giant bluescreen; we wanted the interactive reflective quality of what you would actually see. Even though we ultimately had to replace some of those images with higher-fidelity images in postproduction, they were enough to give a sense that the quality of light on the actors and the reflections on the set looked and felt very real.
One of the first major uses of LED screens for lighting was in the seminal stranded-in-space thriller Gravity. Concerned about blending the actors convincingly with the CGI backgrounds, DP Emmanuel Lubezki, ASC, AMC came up with a solution that was, at the time, cutting-edge: “I had the idea to build a set out of LED panels and to light the actors’ faces inside it with the previs animation.” (AC, November 2013)
Gravity also featured a scene in which Sandra Bullock’s character puts out a fire, and here once again LED panels provided interactive light. This is a technique that has since been used on several other films to simulate off-camera fires, including Christopher Nolan’s Dunkirk, and the true story of the BP oil rig disaster, Deepwater Horizon.
Traditionally, fire has been simulated with tungsten sources, often Maxibrutes, but on Deepwater Horizon these were relegated to background action, while foregrounds were keyed by a huge 42’x24’ video wall made up of 252 LED panels. DP Enrique Chediak, ASC had this to say (in AC, October 2016):
Fire caused by burning oil is very red and has deep blacks. You cannot get that with the substance that the special effects crews use – all those propane fires are yellow. Oil fire has a very specific quality, and I wanted to reach that. It was important to feel the sense of hell.
By playing back footage of real oil fires on the video wall, Chediak was able to get the realistic colour of lighting he wanted, while retaining authentic dynamics.
This technique isn’t necessarily confined to big-budget productions. In theory you could create interactive lighting with an iPad. Imagine a tight shot of an actor supposedly warming themselves by a fireplace: if you could get the iPad close enough, playing a video of flames, I imagine the result would be quite convincing. Has anyone out there tried something like this? Let me know if you have!
I’ll leave you with a music video I shot a few years back (more info here), featuring custom-built LED panels in the background.
The latest episode of Lighting I Like is out, analysing how the “Splinter Chamber” set is lit in time travel thriller 12 Monkeys. This adaptation of the Terry Gilliam movie can be seen on Netflix in the UK.
The publicity machine is ramping up for Kenneth Branagh’s Murder on the Orient Express remake, and it’s got me thinking about the challenges of a script set largely on a moving train. There are a number of ways of realising such scenes, and today I’m going to look at five movies that demonstrate different techniques. All of these methods are equally applicable to scenes in cars or any other moving vehicle.
1. For Real: “The Darjeeling Limited”
Wes Anderson’s 2007 film The Darjeeling Limited sees three brothers embarking on a spiritual railway journey across India. Many of the usual Anderson tropes are present and correct – linear tracking shots, comical headgear, Jason Schwartzman – but surprisingly the moving train wasn’t done with some kind of cutesy stop-motion. Production designer Mark Friedberg explains:
The big creative decision Wes made was that we were going to shoot this movie on a moving train. And all that does is complicate life. It makes it more expensive, it makes the logistics impossible. It made it incredibly difficult to figure out how many crew, what crew, what gear… but what it did do is it made it real.
Kenneth Branagh has stated that at least some of Murder on the Orient Express was shot on a real moving train too:
They painstakingly built a fully functioning period authentic locomotive and carriages from the Orient Express during the golden, glamorous age of travel. It was a train that moved… All of our actors were passengers on the train down the leafy lanes of Surrey, pretending to be the former Yugoslavia.
2. Poor Man’s Process: “The Double”
Although best known as The IT Crowd‘s Moss and the new host of the Crystal Maze, Richard Ayoade is also an accomplished director. His last feature was a darkly beautiful adaptation of Dostoyevsky’s classic identity-crisis novella The Double.
Unlike the other movies on this list, The Double only has short sequences on a train, and that’s a key point. So named because it’s a cheap alternative to rear projection (a.k.a. process photography), Poor Man’s Process is a big cheat. In order to hide the lack of motion, you keep the view outside your vehicle’s windows blank and featureless – typically a night sky, but a black subway tunnel or a grey daytime sky can also work. Then you create the illusion of motion with dynamic lighting, a shaky camera, and grips rocking the carriage on its suspension. Used judiciously, this technique can be very convincing, but you would never get away with it for a whole movie.
Poor Man’s works particularly well in The Double, the black void outside the subway car playing into the oppressive and nightmarish tone of the whole film. In an interview with Pushing Pixels, production designer David Crank explains how the subway carriage set was built out of an old bus. He goes on to describe how the appearance of movement was created:
We put the forks of a forklift under the front of the bus, and shook it… For the effect of moving lights outside the train, it was a combination of some spinning lights on stands, as well as lights on small rolling platforms which tracked back and forth down the outside of the bus.
3. Green Screen: “Source Code”
Duncan “Zowie Bowie” Jones followed up his low-budget masterpiece Moon with Hollywood sci-fi thriller Source Code, a sort of mash-up of Quantum Leap and Groundhog Day with a chilling twist. It takes place predominantly on a Chicago-bound commuter train, in reality a set surrounded by green screen. In the featurette above, Jones mentions that shooting on a real moving train was considered, but ultimately rejected in favour of the flexibility of working on stage:
Because we revisit an event multiple times, it was absolutely integral to making it work, and for the audience not to get bored, that we were able to vary the visuals. And in order to do that we had to be able to build platforms outside of the train and be able to really vary the camera angles.
In the DVD commentary, Jones also notes that the background plates were shot in post from a real train “loaded up with cameras”.
It’s difficult to make it feel like natural light is coming in and still get the sense of movement on a train… We worked with computer programs where we actually move the light itself, and brighten and dim the lights so it feels as if you are travelling… The lights are never 100% constant.
When I shot The Little Mermaid last year we did some train material against green screen. To make the lighting dynamic, the grips built “branch-a-loris” rigs: windmills of tree branches which they would spin in front of the lamps to create passing shadows.
4. Rear projection: “Last Passenger”
Perhaps the most low-budget film on this list, Last Passenger is a 2013 independent thriller set aboard a runaway train. Director Omid Nooshin and DP Angus Hudson wanted a vintage look, choosing Cooke Xtal anamorphic lenses and a visual effects technique that had long since fallen out of favour: rear projection.
Before the advent of optical – and later digital – compositing, rear projection was commonly used to provide moving backgrounds for scenes in vehicles. The principle is simple: the pre-recorded backgrounds are projected from behind onto a translucent screen, while the actors and the camera work on the other side of it.
Hudson goes into further detail on the technique as used for Last Passenger:
To capture [the backgrounds] within our limited means, we ended up shooting from a real train using six Canon 5D cameras, rigged in such a way that we got forward, sideways and rear-facing views out of the train at the same time. We captured a huge amount of footage, hours and hours of footage. That allowed us to essentially have 270 degrees of travelling shots, all of which were interlinked.
Because rear projection is an in-camera technique, Nooshin and Hudson were able to have dirt and water droplets on the windows without worrying about creating a compositing nightmare in postproduction. Hudson also notes that the cast loved being able to see the backgrounds and react to them in real time.
5. LED Panels: “Train to Busan”
Enabling the actors to see the background plates was also a concern for Yeon Sang-ho, director of the hit Korean zombie movie Train to Busan. He felt that green screen would make it “difficult to portray the reality”, so he turned to the latest technology: LED screens. This must have made life easier not just for the cast, but for the cinematographer as well.
You see, when you travel by train in the daytime, most of the light inside the carriage comes from outside. Some of it is toplight from the big, flat sky, and some of it is hard light from the sun – both of these can be faked, as we’ve seen – but a lot of the light is reflected, bouncing off trees, houses, fields and all the other things that are zipping by. This is very difficult to simulate with traditional means, but with big, bright LED screens you get this interactive lighting for free. Because of this, and the lack of postproduction work required, this technique is becoming very popular for car and train scenes throughout the film and TV industry.
This brings us back to Murder on the Orient Express, for which 2,000 LED screens were reportedly employed. In a Digital Spy article, Branagh notes that this simulated motion had an unintended side effect:
It was curious that on the first day we used our gimballed train sets and our LED screens with footage that we’d gone to great trouble to shoot for the various environments – the lowlands and then the Alps, etc… people really did feel quite sick.
I’ll leave you with one final point of interest: some of the above films designed custom camera tracks into their train carriage sets. On Last Passenger, for example, the camera hung from a dolly which straddled the overhead luggage racks, while The Darjeeling Limited had an I-beam track designed into the centre of the ceiling. Non-train movies like Speed have used the same technique to capture dolly shots in the confines of a moving vehicle.
Micro-filmmaker Magazine’s Jeremy Hanke recently got in touch and asked if I would review his book, “Green Screen Made Easy”. I used to make a lot of micro- and no-budget movies packed full of VFX, but I usually avoided green-screen because I could never make it look good. Although those kinds of projects are behind me, I agreed to the review because I figured that this book might help others succeed where I’d failed – and also I was interested to find out why I had failed!
What Jeremy and his co-author Michele Terpstra set out to do is to cover the entire process from start to finish: defining chromakeying, buying or building a green screen, lighting and shooting it, sourcing or shooting background plates, choosing keying software, and all aspects of the keying itself.
The book is aimed at no-budget filmmakers, hobbyists or aspiring professionals making self-funded or crowd-funded productions, those digital auteurs who are often their own producers, writers, DPs, editors, colourists and VFX artists. Perhaps you’ve tried green-screening before and been disappointed with the results. Perhaps you’ve always seen it as a bit too “techie” for you. Perhaps the unpaid VFX artist you had lined up for your sci-fi feature just pulled out. Or perhaps you’ve already reached a certain level of competency with keying and now you want to step up a level for your next production. If any of these scenarios ring true with you, I believe you’ll find this book very useful.
“Green Screen Made Easy” is divided into two halves, the first half (by Jeremy) on prepping and executing your green screen shoot, and the second half (by Michele) on the postproduction process. Both authors clearly write from extensive first-hand experience; throughout the text are the kind of tips and work-arounds that only come from long practice. By necessity there is a fair amount of technical content, but everything is lucidly explained and there’s a handy glossary if any of the terms are unfamiliar to you.
The section on lighting and shooting green screen material contained few surprises for me as a cinematographer – see my post on green screen for my own tips on this subject – but will be very useful to those newer to the field. The chapters on equipment are very thorough, considering everything from which camera and settings to choose to ensure the best key later on, to buying or building a mobile green screen, or even kitting out your own green screen studio – all with various alternatives to suit any budget.
The postproduction chapters revealed clearly why I struggled with keying in the past. Michele explains how the process is much more than simply pulling a single key, and can involve footage clean-up, garbage matting, a core key and a separate edge key, spill suppression, hold-out matting and light wrapping. The book guides you through all these steps, and outlines the pros and cons of the software and plug-in options for each step.
Once you’ve read this book, I’d say the only other thing you’ll need before you can start successfully green-screening is to watch some YouTube tutorial videos specific to your software. While the instructions in the book look pretty good (as far as I can tell without attempting to follow them), the medium of text seems a little restrictive for teaching what is inherently a visual process. There are explanatory images throughout “Green Screen Made Easy”, but in the ebook version at least I found it difficult to discern the subtle differences in some of the before-and-after comparisons.
Ultimately what will make you the best “green-screener” is practice, practice, practice, but by reading this book first you’ll give yourself a rock-solid foundation, an appreciation of the entire process from start to finish, and the insider knowledge to avoid a lot of time-sucking pitfalls. And keep it close by, because you’ll be sure to thumb through it and re-read those handy tips throughout your prep, production and post.
“Green Screen Made Easy” is available in paperback and ebook editions from Amazon.
Green screen work is almost unavoidable for a modern cinematographer. In an age when even the most basic of corporates might use the technique, and big blockbusters might never leave the green screen stage, knowing how to light for it is essential. The following tips apply equally to blue screen work…
1. Light the screen at key.
Or to put it another way, your screen should not be over- or under-exposed. If you use a light meter, you can hold it at various spots on the screen (taking care not to block any light with your body) and check that the reading always matches what the iris of your lens is set to. If your camera or monitor has a false colours option, you can use this to check the level and consistency of the exposure across the screen.
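If you have a frame grab rather than a meter, the same two checks – level and evenness – can be scripted. Here’s a rough sketch in Python; the function name, the normalised target value and the tolerance are my own invented illustrations, not industry figures:

```python
from statistics import mean, pstdev

def check_screen_exposure(samples, target=0.45, tolerance=0.05):
    """Given luminance samples (normalised 0-1) taken from spots
    across the green screen, check that the average level sits near
    the target exposure AND that the spread between spots is small,
    i.e. the screen is evenly lit with no hot or dark patches."""
    level_ok = abs(mean(samples) - target) <= tolerance
    even_ok = pstdev(samples) <= tolerance
    return level_ok and even_ok

# evenly lit screen at roughly the right level
print(check_screen_exposure([0.44, 0.46, 0.45, 0.47]))  # True
# one corner falling off badly drags the average down
print(check_screen_exposure([0.45, 0.45, 0.45, 0.20]))  # False
```

The same logic is effectively what a false-colour display lets you do by eye: one colour band for “at key”, others flagging patches that drift too far from it.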
2. Use soft sources.
Bouncing tungsten lamps off polyboard is a cheap and effective way to spread soft light across a green screen. Typically you will want two sources, one to each side of the screen. They will need to be well flagged so that their light does not spill onto the subject.
On larger budgets, Kino Flo Image 85s or 87s are often used to illuminate green screens. They are 4ft 8-bank units which put out a large amount of soft light. Ask your hire company to supply them with spiked green tubes; designed especially for green screen work, these tubes help to increase the colour saturation of the screen. (Spiked blue tubes are also available.)
3. Control spill.
As far as possible, reflected green light from the screen should not fall on the subject. The main way to ensure this is to put as much distance as possible between the screen and the subject.
I learnt a great tip recently which also helps reduce spill: once the exact camera position is known, bring in 4×4 floppy flags slightly behind the subject, one either side, just out of frame. These block much of the screen’s reflected light before it can wrap around onto the subject’s edges.
4. Avoid dark shadows.
Green spill will bleed most easily into the dark areas on your subject, especially if you’re shooting with a wide aperture. Clipped (or ‘crushed’) blacks are particularly undesirable. The solution is to use more fill light, even if this goes against the mood and contrast levels you’re using in non-VFX shots. If you use LUTs, you should consider creating a custom one for green screen work which pushes the contrast further to compensate for this flatter starting point. If not, you will have to work with the colourist in post to ensure that the shadows are restored to their usual levels once the VFX are complete.
5. Add tracking markers.
Camera movement against green screen isn’t the no-no that it used to be, with any VFX team worth their salt being able to deal with handheld shots, pans, tilts and push-ins. If there isn’t a VFX supervisor on set, you can help them out by taping crosses to a few points on the screen. There should always be at least one marker in shot throughout the camera move (more if it’s a multi-axis move), and markers shouldn’t linger behind any tricky edges (e.g. long hair).
When it comes to shooting elements for VFX, green-screen gets all the press. But certain kinds of elements can be tricky to key well, and sometimes it’s not the right look. In the last few days Kate Madison and I have needed to shoot last-minute elements for some shots in Ren: The Girl with the Mark, and we turned to monochromatic backgrounds.
Why? How does it work? Well certainly you can key out black or white just like you’d key out green, but the most powerful way to use these backgrounds is not with keying at all, but by a bit of basic maths. And don’t worry, the computer does the maths for you.
If you’ve ever used Photoshop, you’ll have noticed some layer modes called Screen and Multiply. Final Cut Pro has the same modes (it also has Add, which for most intents and purposes is the same as Screen) and so do all the major editing and FX packages.
Screen brightens each pixel of the layer underneath according to the brightness of the layer on top (strictly, the formula is result = 1 − (1 − a)(1 − b), which behaves like addition but can never clip past white). Since black has a brightness of zero, your black screen disappears, and the element in front of it is blended seamlessly into the background image, with its apparent solidity determined by its brightness.
Multiply, as the name suggests, multiplies the brightness of each pixel with the layer underneath. Since white has a brightness of one, and any number multiplied by one is that same number, your white screen vanishes. Whatever element is in front of your screen is blended into the background image, with darker parts of the element showing up more than lighter parts.
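In digital terms both modes are simple per-pixel formulas on values normalised to the 0–1 range. A minimal sketch in Python (single grey pixels rather than full images, for clarity):

```python
def screen(a, b):
    """Screen mode: 1 - (1-a)(1-b), per pixel, values normalised 0-1.
    A black pixel (0) in the element leaves the background untouched;
    brighter pixels lighten it, never clipping past white."""
    return 1.0 - (1.0 - a) * (1.0 - b)

def multiply(a, b):
    """Multiply mode: a * b. A white pixel (1) leaves the background
    untouched; darker pixels darken it."""
    return a * b

bg = 0.5                      # a mid-grey background pixel
print(screen(0.0, bg))        # 0.5  - black element pixel vanishes
print(screen(1.0, bg))        # 1.0  - peak white is fully opaque
print(screen(0.5, bg))        # 0.75 - mid-grey is semi-transparent
print(multiply(1.0, bg))      # 0.5  - white element pixel vanishes
```

Note how the mid-grey Screen result lands between the background and white: this is exactly why anything that isn’t peak white composites semi-transparently, as discussed below.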
One of the elements Kate and I needed to shoot was a flame, to be comped onto a torch. We lit a torch and clamped it to a stand, shooting at night with the pitch black garden in the background. It was the work of moments to comp this element into the shot using Screen mode.
Fire is the perfect partner for black-screen shooting, because it generates its own light and it’s not solid. Solid objects composited using Screen/Add or Multiply take on a ghostly appearance – perfect for, er, ghost effects – but not ideal in other situations; because of the way Screen mode works, anything that’s not peak white will be transparent to some degree.
We shot some fast-moving leaves and debris against black, but only the high level of motion blur allowed us to get away with it. In fact, if you know you’re going to have a lot of motion blur, black-screen might be the ideal method, because it will be tricky to get a clean key off a green-screen.
Other things that work well against black-screen are sparks, smoke and water/rain, again because they’re not solid. If you want to add rain or snow to a shot, black-screen is the way to go – check out my post about that here.
Yesterday Kate and I needed to shoot a whirlwind element. One of the VFX team suggested swirling sand in a vase of water. After a few experiments in the kitchen, we ended up using dirt from the garden. We used fluorescent softboxes for the background, ensuring we got a bright white background, and made weird arrangements of white paper to eliminate as many of the dark reflections in the vase as we could.
A few weeks back we shot hosepipe water against black, inverted it and used Multiply to superimpose it as blowing dirt.
With a little thinking outside the box, you can shoot all kinds of elements against white or black to meet your VFX needs. I’ll leave you with this featurette I made in 2006, breaking down the various low-tech FX – many of them black-screen – that I employed on my feature film Soul Searcher.
After countless viewings on VHS and DVD over my lifetime, I finally got to see Labyrinth on the big screen today. The imagination and detail in this film are just astonishing. Every scene has little puppet creatures wandering or flying about in the background to bring the sets to life. In today’s screening I noticed, for the first time, that there are two bottles of milk – presumably delivered by the Goblin Milkman – outside the door of Jareth’s castle. How brilliant is that?
Anyway, while there are many awesome things about Labyrinth, one of the techniques that I think is put to particularly good effect in the film is in-camera substitution. Typically this involves one type of puppet leaving frame briefly, and a second puppet – of the same character – reappearing in its place. Puppets are often limited in the actions that they can perform, and while scenes will commonly use different versions of the puppet in different shots to cover the full range of actions, Henson sometimes uses different versions of the puppet in the same shot to sell the illusion of a single, living creature. And though many of these effects are fairly obvious to a modern audience, you can still admire their ingenious design and perfect timing.
Skip through the movie to the timecodes listed below to see some of the best substitution effects.
1. Goblin Under the Bedclothes – 11:40
In the film’s first puppet scene, Sarah’s parents’ bedroom becomes infested with goblins, building up to David Bowie’s big oh-so-eighties entrance. One goblin crawls along the bed, under the sheets, before emerging. It looks like the initial crawling is achieved by pulling a rough goblin shape along on a wire under the sheets. The shape then drops out of the end of the bedclothes, behind a chest, and a moment later a puppet pops up from behind the same chest. This substitution effect obviates the need for a custom-built or chopped-up bed, which would have been necessary to permit the passage of the proper puppet and its puppeteer under the bedclothes.
2. Sir Didymus’ Acrobatics – 58:35
This shot appears to employ three different models of Sir Didymus, the honourable but fighting-crazed guardian of the bridge over the Bog of Eternal Stench. The first is a floppy version which is thrown behind some rocks by Ludo. After a practical puff of dust, a second Sir Didymus – this one in a more rigid, leaping position – is launched from some kind of catapult hidden behind the rocks. He flies out of frame, to be replaced a moment later by the Muppet-style hand- and rod-puppet which is used for the majority of Sir Didymus’ shots.
3. Cowardly Ambrosius – 1:16:25
To his infinite chagrin, Sir Didymus’ bravery is not matched by that of his canine steed, Ambrosius. During the battle with Humongous, the petrified pooch rears up, throwing off his valiant rider, and retires shamelessly into hiding. The rearing up is accomplished with a rather unconvincing puppet dog. After he drops back down out of frame (aided by a slight zoom in to help lose him), a real dog enters in the background, running into hiding.
4. Double David – 1:27:53
In the film’s finale number, “Within You”, David Bowie’s Goblin King messes with our sense of direction as he jumps and flips around the disorientating Escher artwork brought to life. Early in the sequence he jumps off a ledge, only to reappear simultaneously in a background doorway, now seemingly obeying a pull of gravity at 90° to that which acted on his leap. A shot of Bowie jumping off the ledge cuts to another of him coming through the doorway. The doorway is filmed with the camera on its side, and to finish the action of the first Bowie’s leap, a body double is pulled across frame on a dolly. This can be seen at 25:36 in the behind-the-scenes documentary:
This kind of low-tech but ingenious filmmaking is in danger of dying, as CGI is perceived as the only tool to create illusions. But with a little thought, a little planning, cunning framing, and a knowledge of how to use editing (or lack thereof) to your advantage, very effective illusions can still be created in camera.
It’s time for one of my occasional asides celebrating the world of traditional visual effects – miniatures, matte paintings, rear projection, stop motion and the like. For a film using all of those techniques, look no further than The Abyss (1989). Arguably James Cameron’s most underrated film, it can also be considered his most ambitious. Terminator 2 had bigger action scenes, Titanic a bigger set and Avatar more cutting-edge technology, but those achievements all pale in comparison to the sheer difficulty of shooting so much material underwater.
The hour-long documentary Under Pressure makes the risks and challenges faced by Cameron and his crew very clear.
The Abyss won an Oscar for Best Visual Effects, and is remembered chiefly for the then-cutting-edge CG water tentacle. But it also ran the gamut of traditional effects techniques.
The film follows the crew of an experimental underwater drilling platform, led by Bud (Ed Harris), as they are roped into helping a team of navy divers, led by Lt. Coffey (Michael Biehn), investigate the sinking of a submarine. Underwater-dwelling aliens and cold war tensions become involved, and soon an unhinged Coffey is setting off in a submersible to dispatch a nuke to the bottom of the Cayman Trench and blow up the extra-terrestrials.
When Bud and his wife Lindsey (Mary Elizabeth Mastrantonio) give chase in a second submersible, a visual effects tour de force ensues. The following methods were used to build the sequence:
Medium-wide shots of the actors in real submersibles, filmed in an abandoned power station that had been converted by the production into the world’s largest fresh-water filtered tank, equal in capacity to about eleven Olympic swimming pools.
Close-ups of the actors in a submersible mock-up on stage.
Over-the-shoulder shots of the actors in the submersible mock-up, with a rear projection screen outside the craft’s dome, showing miniature footage accomplished with…
Quarter-scale radio-controlled submarines, shot in a smaller tank. These miniatures were remarkably powerful and, due to the lights and batteries on board, weighed around 450lb (204kg). In order to see what they were doing, the operators were underwater as well, using sealed waterproof joysticks to direct the craft. The RC miniatures were used when the craft needed to collide with each other, or with the underwater landscape, and whenever the audience was not going to get a good look at the domes on the front of the submersibles and notice the lack of actors within.
Where a more controlled camera move was required, or the actors needed to be visible inside the subs, but it was not practical to shoot full-scale, motion control was used. This is the same technique used to shoot spaceships in, for example, the original Star Wars trilogy. A computer-controlled camera moves around a static model (or vice versa), exposing film very slowly in order to maintain a large depth of field. The move is repeated several times for each different vehicle under different lighting conditions, before all of the “passes” are composited together on the optical printer in the desired ratios to achieve the final look.
For The Abyss’s motion control work, the illusion of being underwater was created with smoke. In shots featuring the submersibles’ robot arms, stop motion was employed to animate these appendages. But perhaps the neatest trick was in making the miniature subs appear to be inhabited: the models were fitted with tiny projectors which would throw pre-filmed footage of the actors onto a circular screen behind the dome.
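Combining passes “in the desired ratios” is, in digital terms, just a weighted sum of exposures, clamped at white. A toy sketch in Python; the pass names, pixel values and weights are all invented for illustration:

```python
def combine_passes(passes, weights):
    """passes: dict of name -> list of pixel values (normalised 0-1).
    weights: dict of name -> exposure ratio. Each optical-printer pass
    adds light to the frame, so the composite is a per-pixel weighted
    sum, clamped at 1.0 (film can't get brighter than full exposure)."""
    names = list(passes)
    out = []
    for i in range(len(passes[names[0]])):
        total = sum(passes[n][i] * weights[n] for n in names)
        out.append(min(total, 1.0))
    return out

# hypothetical passes for one miniature-sub frame of three pixels
frame = combine_passes(
    {
        "beauty":  [0.5, 0.625, 0.75],   # key-lit model pass
        "smoke":   [0.25, 0.25, 0.25],   # 'underwater' atmosphere pass
        "windows": [0.0, 0.875, 0.0],    # projected-actor pass
    },
    {"beauty": 1.0, "smoke": 0.5, "windows": 1.0},
)
print(frame)  # [0.625, 1.0, 0.875] - middle pixel clamps at white
```

Dialling the weights up or down is the digital equivalent of changing each pass’s exposure time on the printer.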
The sub chase demonstrates perfectly how visual effects should work: mixing a range of techniques so that the audience never has time to figure out how each one is done, and using an appropriate technique for each individual shot so that you’re making things no more and no less complicated than necessary to tell that little piece of the story.
My favourite effect in the sequence is near the end, when the dome of Coffey’s sub cracks under the water pressure. This was filmed over-the-shoulder using rear projection for the view outside of the dome. But the dome was taken from a real submersible, and as such was too thick and too valuable to be genuinely cracked. So someone – an absolute genius, whoever he or she is – came up with the idea of using an arrangement of backlit sellotape on the dome to create the appearance of a crack. A flag was then set in front of the backlight, rendering the sellotape invisible. On cue, the flag was slid aside, gradually illuminating the “crack”.
Now that, my friends, is thinking outside the box.
The other day I watched a 1966 Doctor Who story called The Ark. It’s easy to look at a TV show that old and laugh at the stilted acting, rubber monsters and crude effects. But given the archaic and draconian conditions the series was made under back then, I can only admire the creativity displayed by the director and his team in visualising a script which was scarcely less demanding than a contemporary Who story.
In the sixties, each Doctor Who episode was recorded virtually as live on a Friday evening, following a week of rehearsals. BBC rules strictly limited the number of times the crew could stop taping during the 90-minute recording session, which was to produce a 22-minute episode. Five cameras would glide around the tightly-packed sets in a carefully choreographed dance, with the vision mixer cutting between them in real-time as per the director’s shooting script. (Interesting side note: some of Terminator 2 was shot in a very similar fashion to maximise the number of angles captured in a day.) It’s no wonder that fluffed lines and camera wobbles occasionally marred the show, as there was rarely time for re-takes.
But what’s really hard for anyone with a basic knowledge of visual effects to get their head around today is that, until the Jon Pertwee era began in 1970, there was no chromakey (a.k.a. blue- or green-screening) in Doctor Who. Just think about that for a moment: you have to make a science fiction programme without any electronic means of merging two images together, simple dissolves excepted.
So the pioneers behind those early years of Doctor Who had to be particularly creative when they wanted to combine miniatures with live action. One of the ways they did this in The Ark was through forced perspective.
Forced perspective is an optical illusion, a trick of scale. We’ve all seen holiday photos where a friend or relative appears to be holding up the Eiffel Tower or the Leaning Tower of Pisa. The exact same technique can be used to put miniature spaceships into a full-scale live action scene.
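The geometry behind the trick is simple: an object’s apparent size depends on the angle it subtends at the lens, which is governed by real size divided by distance. Scale both down by the same factor and the angle is unchanged. A quick sketch in Python (the craft dimensions are invented for illustration):

```python
import math

def apparent_angle(size_m, distance_m):
    """Angle (in degrees) that an object of a given size subtends
    at the lens from a given distance."""
    return math.degrees(2 * math.atan(size_m / (2 * distance_m)))

# an 8 m full-size craft, 40 m from the camera...
full = apparent_angle(8.0, 40.0)
# ...versus a quarter-scale miniature at a quarter of the distance
mini = apparent_angle(8.0 * 0.25, 40.0 * 0.25)

print(full == mini)  # True - both fill the same angle of view
```

This is why the miniature landing craft only has to sit much closer to the camera than the Monoid to read as full-size, and why the illusion collapses the moment the camera moves and the ratio of distances changes.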
In these frames from The Ark, two miniature landing craft are lowered into the background before the camera pans to a full-size craft in the foreground:
And in these later frames, another miniature craft is placed much closer to the camera than the Monoid (a.k.a. a man in a rubber suit). The miniature craft takes off, pulled up on a wire, I presume – a feat which time, money and safety would have rendered impossible with the full-size prop:
Of course, Doctor Who was not by any means the first show to use forced perspective, nor was it the last. This nineties documentary provides a fascinating look at the forced perspective work in the Christopher Guest remake of Attack of the 50 Ft. Woman, and other films…
And Peter Jackson famously re-invented forced perspective cinematography for the Lord of the Rings trilogy, when his VFX team figured out a way to maintain the illusion during camera moves, by sliding one of the actors around on a motion control platform…
So remember to consider all your options, even the oldest tricks in the book, when you’re planning the VFX for your next movie.
In this 2005 featurette I break down many of the visual effects in my feature film Soul Searcher, revealing how they were created using old school techniques, like pouring milk into a fishtank for apocalyptic clouds. Watch the shots being built up layer by layer, starting with mundane elements like the water from a kitchen tap or drinking straws stuck to a piece of cardboard.