Just a quick post to say that the latest special edition of Doctor Who Magazine, out now, features an article I wrote about the history of the venerable series’ cinematography. From the cathode-ray tube multi-camera studio shoots of 1963 to the latest ARRI Alexa/Cooke Anamorphic photography, the technology and techniques of lighting and lensing Doctor Who encapsulate the history of TV making over the last six decades. I had a great time combining two subjects I know quite a bit about and was very excited to see the article in print. Look out for it in your local newsagent!
Next week filming commences for Harvey Greenfield is Running Late, a comedy feature based on the critically acclaimed one-man play by Paul Richards. Paul reprises the title role in the film, directed by Jonnie Howard, who I previously worked with on A Cliché for the End of the World and The Knowledge.
The production is based locally to me in Cambridgeshire, and over the last couple of months I’ve attended recces, rehearsals and meetings. I’ve tried to approach it the same way I did Hamlet, reading each draft of the script carefully and creating a spreadsheet breakdown. Scene by scene, the breakdown lists my ideas for camerawork and lighting.
Harvey is a stressed and neurotic character who can’t say no to anything. The film takes place over a single day of his life when he finds himself having to attend a wedding, a funeral, a big meeting at his office, a school play and an appointment at a garage. Numerous scenes see him jogging from commitment to commitment (always running late in more ways than one) while taking phone calls that only add to the pressure. In the finest tradition of Alfie, Ferris Bueller and Fleabag, he also talks to camera.
Talking of finest traditions, the budget is very low but ambitions are high! With 100 script pages and 14 days the shoot will be more of a sprint than a marathon.
The UK film and TV industry is busier at present than I’ve ever known it, making up for lost time last year, so sourcing crew and kit has certainly been challenging. But thanks to generous sponsorship by Global Distribution and Sigma we will be shooting on a Red Ranger Gemini – which regular readers may recall I almost selected for Hamlet – with Sigma Cine primes and zooms. I will be working with a completely new camera team and gaffer.
One of the first things Jonnie told me was that he wanted to use a lot of wide lenses. This makes a lot of sense for the story. Wide lenses fill the background with more clutter, making the frame busier and more stressful for Harvey. They also put us into Harvey’s headspace by forcing the camera physically close to get a tighter shot. We shot some tests early on with Paul, primarily on the Sigma Cine 14mm, to start getting a feel for that look.
Influences include Woody Allen, the Coen brothers, Wes Anderson, Terry Gilliam and Napoleon Dynamite, and as usual, watching reference films has formed an important part of prep for me.
Based on the colour palette Nicole Stone has put together for her costumes, I’ve decided to use orange as Harvey’s stress colour and green when he’s calmer. For most of the film this will just be a case of framing in orange or green elements when appropriate, or putting a splash of the relevant colour in the background. For key scenes later in the story we may go so far as to bathe Harvey in the colour.
Right, I’d better get back to trying to sort out the lighting kit hire, which is still up in the air. Possibly this post should have been called Pre-production for “Harvey Greenfield” is running late.
Astera Titan Tubes seem to be everywhere at the moment, every gaffer and DP’s favourite tool. Resembling fluorescent tubes, Asteras are wireless, flicker-free LED batons comprising 16 pixels which can be individually coloured, flashed and programmed from an app to produce a range of effects.
Here are five ways in which I used Titan Tubes on my most recent feature, Hamlet. I’m not being sponsored by Astera to write this. I just know that loads of people out there are using them and I thought it would be interesting to share my own experiences.
1. Substitute fluorescents
We had a lot of scenes with pre-existing practical fluorescents in them. Sometimes we gelled these with ND or a colour to get the look we wanted, but other times it was easier to remove the fluorescent tube and cable-tie an Astera into the housing. As long as the camera didn’t get too close you were never going to see the ties, and the light could now be altered with the tap of an app.
On other occasions, when we moved in for close-ups, the real fluorescents weren’t in an ideal position, so we would supplement or replace them with an Astera on a stand and match the colour.
2. Hidden behind corners
Orientated vertically, Asteras are easy to hide behind pillars and doorways. One of the rooms we shot in had quite a dark doorway into a narrow corridor. There was just enough space to put in a vertical pole-cat with a tube on it which would light up characters standing in the doorway without it being seen by the camera.
3. Eye light
Ben Millar, Hamlet’s gaffer, frequently laid an Astera on the floor to simulate a bit of floor bounce and put a sparkle in the talent’s eye. On other occasions, when our key light was coming in at a very sidey angle, we would put an Astera in a more frontal position to ping the eyes again and wrap the side light very slightly.
4. Rigged to the ceiling
We had a scene in a bathroom that was all white tiles. It looked very flat with the extant overhead light on. Our solution was to put up a couple of pole-cats, at the tops of the two walls that the camera would be facing most, and hang Asteras horizontally from them. Being tubes they have a low profile so it wasn’t hard to keep them out of the top of frame. We put honeycombs on them and the result was that we always had soft, wrappy backlight with minimal illumination of the bright white tiles.
5. Special effects
One of the most powerful things about Titan Tubes is that you can programme them with your own special effects. When we needed a Northern Lights effect, best boy Connor Adams researched the phenomenon and programmed a pattern of shifting greens into two tubes rigged above the set.
On War of the Worlds in 2019 we used the Asteras’ emergency lights preset to pick up some close-ups which were meant to have a police car just out of shot.
When DSLR video exploded onto the indie filmmaking scene a decade ago, film festivals were soon awash with shorts with ultra-blurry backgrounds. Now that we have some distance from that first novelty of large-sensor cinematography we can think more intelligently about how depth of field – be it shallow or deep – is best used to help tell our stories.
First, let’s recap the basics. Depth of field is the distance between the nearest and farthest points from camera that are in focus. The smaller the depth of field, the less the subject has to move before they go out of focus, and the blurrier any background and foreground objects appear. On the other hand, a very large depth of field may make everything from the foreground to infinity acceptably sharp.
Depth of field is affected by four things: sensor (or film) size, focal length (i.e. lens length), focal distance, and aperture. In the days of tiny Mini-DV sensors, I was often asked by a director to zoom in (increase the focal length) to decrease the depth of field, but sometimes that was counter-productive because it meant moving the camera physically further away, thus increasing the focal distance, thus increasing the depth of field.
It was the large 35mm sensors of DSLRs, compared with the smaller 1/3” or 2/3” chips of traditional video cameras, that made them so popular with filmmakers. Suddenly the shallow depth of field seen in a Super-35 movie could be achieved on a micro-budget. It is worth noting for the purists, however, that a larger sensor technically makes for a deeper depth of field. The shallower depth of field associated with larger sensors is actually a product of the longer lenses required to obtain the same field of view.
Once a camera is selected and filming is underway, aperture is the main tool that DPs tend to use to control depth of field. A small aperture (large f- or T-number) gives a large depth of field; a large aperture (small f- or T-number) gives a narrow depth of field. What all those early DSLR filmmakers, high on bokeh, failed to notice is that aperture is, and always has been, a creative choice. Plenty of directors and DPs throughout the history of cinema have chosen deep focus when they felt it was the best way of telling their particular story.
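For the numerically minded, the interplay of all four factors can be sketched with the standard hyperfocal-distance approximation. This is a back-of-envelope illustration only: the 0.025mm circle of confusion is a commonly quoted figure for Super-35, and the focal lengths, stops and distances below are hypothetical.

```python
def depth_of_field(focal_mm, t_stop, distance_mm, coc_mm=0.025):
    """Approximate depth of field (in mm) from the hyperfocal formula.
    coc_mm is the circle of confusion; 0.025mm is often quoted for Super-35."""
    hyperfocal = focal_mm ** 2 / (t_stop * coc_mm) + focal_mm
    near = hyperfocal * distance_mm / (hyperfocal + (distance_mm - focal_mm))
    if distance_mm >= hyperfocal:
        return float("inf")  # everything to infinity is acceptably sharp
    far = hyperfocal * distance_mm / (hyperfocal - (distance_mm - focal_mm))
    return far - near

# Zooming in from the same camera position slashes depth of field...
print(depth_of_field(25, 2.8, 2000))  # ≈927mm
print(depth_of_field(50, 2.8, 2000))  # ≈219mm
# ...but stepping back to restore the framing largely cancels the gain,
# the counter-productive effect described earlier.
print(depth_of_field(50, 2.8, 4000))  # ≈894mm
```

Note how the last figure lands almost back where the first started: magnification, not focal length alone, is what chiefly drives depth of field.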
One of the most famous deep-focus films is 1941’s Citizen Kane, frequently voted the greatest movie ever made. First-time director Orson Welles came from a theatre background, and instructed DP Gregg Toland to keep everything in focus so that the audience could choose what to look at just as they could in a theatre. “What if they don’t look at what they’re supposed to look at?” Welles was apparently asked. “If that happens, I would be a very bad director,” was his reply.
Stanley Kubrick was also fond of crisp backgrounds. The infamous f/0.7 NASA lenses used for the candlelight scenes in Barry Lyndon were a rare and extreme exception born of low-light necessity. A typical Kubrick shot has a formal, symmetrical composition with a single-point perspective and everything in focus right into the distance. Take the barracks in Full Metal Jacket, for example, where the background soldiers are just as sharp as the foreground ones. Like Welles, Kubrick’s reasons may have lain in a desire to emulate traditional art-forms, in this case paintings, where nothing is ever blurry.
The Indiana Jones trilogy was shot at a surprisingly slow stop by the late, great Douglas Slocombe. “I prefer to work in the aperture range of T4-T4.5 when I am shooting an anamorphic film like Raiders,” he said at the time. “The feeling of depth contributed to the look.” Janusz Kamiński continued that deep-focus look, shooting at T8-T11 when he inherited the franchise for Kingdom of the Crystal Skull.
At the other end of the aperture scale, the current Hulu series The Handmaid’s Tale makes great creative use of a shallow depth of field, creating a private world for the oppressed protagonist which works in tandem with voiceovers to put the viewer inside her head, the only place where she is free.
A director called James Reynolds had a similar idea in mind when I shot his short film, Exile Incessant. He wanted to photograph closed-minded characters with shallow focus, and show the more tolerant characters in deep focus, symbolising their openness and connection with the world. (Unfortunately the tiny lighting budget made deep focus impossible, so we instead achieved the symbolism by varying the harshness of the lighting.)
One production where I did vary the depth of field was Ren: The Girl with the Mark, where I chose f/4 as my standard working stop, but reduced it to as little as f/1.4 when the lead character was bonding with the mysterious spirit inside her. It was the same principle again of separating the subject from the world around her.
Depth of field is a fantastic creative tool, and one which we are lucky to have so much control over with today’s cameras. But it will always be most effective when it’s used expressively, not just aesthetically.
Raiders of the Lost Ark, the first instalment in the blockbusting Indiana Jones franchise, burst onto our screens a scarcely-believable 40 years ago. But of course, it’s not the years, it’s the mileage…
The origin story of this legendary character is itself the stuff of Hollywood legend. Fleeing LA to escape the dreaded box office results of Star Wars (spoiler: he needn’t have worried), George Lucas and his friend Steven Spielberg were building a sandcastle on a Hawaiian beach when Lucas first floated the idea.
Like Star Wars, the tale of adventuring archaeologist Indiana Smith was inspired by the adventure serials of the 1930s and ’40s. Although Spielberg liked the first name (which came from Lucas’s dog, a reference that the third film would twist back on itself), he wasn’t so keen on Smith, and so Indiana Jones was born.
Rather than auditions, actors under consideration were invited to join Spielberg in baking bread. Tom Selleck was famously the first choice for the lead, but his contract with the TV series Magnum, P.I. precluded his involvement, and Spielberg instead suggested to a reluctant Lucas that they cast his regular collaborator Harrison Ford.
Raiders was shot at a breakneck pace, with Spielberg determined to reverse his reputation for going over schedule and over budget. Beginning in summer 1980, the animated red line of the film crew travelled across a map of the world from La Rochelle, France to England’s Elstree Studios (where Lucas had shot Star Wars) to Tunisia (ditto) to Hawaii, where it had all begun.
The film, and indeed the whole of the original trilogy, was photographed in glorious Panavision anamorphic by the late, great Douglas Slocombe, OBE, BSC, ASC. “Dougie is one of the few cinematographers I’ve worked with who lights with hard and soft light,” Spielberg commented. “Just the contrast between those styles within the framework of also using warm light and cool light and mixing the two can be exquisite.”
Location challenges included the removal of 350 TV aerials in the Tunisian town of Kairouan, so that views from Sallah’s balcony would look period-accurate, this being before the days of digital tinkering.
Digital tinkering was applied to the DVD release many years later, however, to remove a tell-tale reflection in a glass screen protecting Harrison Ford from a real cobra. Besides this featured reptile – which proved the value of the screen by spitting venom all over it – the production team initially sourced 2,000 snakes for the scene in which Indy and friends locate the Ark of the Covenant. But Spielberg found that “they hardly covered the set, so I couldn’t get wide shots.” 7,000 more snakes were shipped in to complete the sequence.
While the classic truck chase was largely captured by second unit director Michael Moore working to pre-agreed storyboards, Spielberg liked to improvise in the first unit. The fight on the Flying Wing, during which Ford tore a ligament after the plane’s wheel rolled over his leg, was made up as the filmmakers went along. When Indy uses the plane to gun down a troop of bad guys, the director requested a last-minute change from graphic blood sprays to more of a dusty look. Mechanical effects supervisor Kit West resorted to putting cayenne pepper in the squibs, which had the entire crew in sneezing fits.
“I would hear complaints,” said Kathleen Kennedy, who worked her way up the producer ranks during the trilogy, beginning as “associate to Mr. Spielberg”. “‘Well, Steven’s not shooting the sketches.’ But once you get into a scene and it’s suddenly right there in front of you, I only think that it can be better if changes are made then.”
Spielberg’s most famous improvisation, when a four-day sword-fight was thrown out and replaced with Indy simply shooting the swordsman dead, was prompted by the uncomfortable Tunisian heat and the waves of sickness that were sapping morale. “We couldn’t understand why the crew was getting ill, because we were all drinking bottled Evian water,” recalled Ford’s stunt double Vic Armstrong. “Until one day somebody followed the guy that collected the empties and saw him filling these Evian bottles straight out of the water truck.”
Production wrapped in early October, and effects house ILM, sound designer Ben Burtt and composer John Williams worked their world-class magic on the film. For the opening of the Ark, ILM shot ghost puppets underwater, while the demise of the Nazi Toht was accomplished with a likeness of actor Ronald Lacey sculpted out of dental alginate, which melted gorily when heated.
Amongst the sounds Burtt recorded were a free-wheeling Honda station wagon (the giant boulder), hands squelching in a cheese casserole (slithering snakes) and the cistern cover of his own toilet (the lid of the Ark). Williams initially composed two potential themes, both of which Spielberg loved, so one became the main theme and the other the bridge.
Although still great fun, and delivering a verisimilitude that only practical effects and real stunts can, some aspects of Raiders are problematic to the modern eye. The Welsh actor John Rhys-Davies playing the Egyptian Sallah, and a female lead who is continually shoved around by villains and heroes alike, make the film a little less of a harmless romp today than it was intended to be at the time.
Raiders was a box office hit, spawning two excellent sequels (and a third of which we shall not speak) plus a spin-off TV series, The Young Indiana Jones Chronicles, and even a shot-for-shot amateur remake filmed by a group of Mississippi teenagers over many years. It also won five Oscars in technical categories, and firmly established Steven Spielberg as the biggest filmmaker in Hollywood.
A fifth Indiana Jones film recently entered production, helmed by Logan director James Mangold with Spielberg producing. It is scheduled for release in July 2022.
What colour is moonlight? In cinema, the answer is often blue, but what is the reality? Where does the idea of blue moonlight come from? And how has the colour of cinematic moonlight evolved over the decades?
The science bit
According to universetoday.com the lunar surface “is mostly oxygen, silicon, magnesium, iron, calcium and aluminium”. These elements give the moon its colour: grey, as seen best in photographs from the Apollo missions and images taken from space.
When moonlight is viewed from Earth, Rayleigh scattering in the atmosphere scatters the bluer wavelengths out of the direct light. This is most noticeable when the moon is low in the sky: the large amount of atmosphere the light has to travel through turns the lunar disc quite red, just as with the sun, while at its zenith the moon merely looks yellow.
Yellow is literally the opposite (or complement) of blue, so where on (or off) Earth did this idea of blue cinematic moonlight come from?
One explanation is that, in low light, our vision comes from our rods, the most numerous type of receptor in the human retina (see my article “How Colour Works” for more on this). These cells are more sensitive to blue than any other colour. This doesn’t actually mean that things look blue in moonlight exactly, just that objects which reflect blue light are more visible than those that don’t.
In reality everything looks monochromatic under moonlight because there is only one type of rod, unlike the three types of cones (red, green and blue) which permit colour vision in brighter situations. I would personally describe moonlight as a fragile, silvery grey.
Blue moonlight on screen dates back to the early days of cinema, before colour cinematography was possible, but when enterprising producers were colour-tinting black-and-white films to get more bums on seats. The Complete Guide to Colour by Tom Fraser has this to say:
As an interesting example of the objectivity of colour, Western films were tinted blue to indicate nighttime, since our eyes detect mostly blue wavelengths in low light, but orange served the same function in films about the Far East, presumably in reference to the warm evening light there.
It’s entirely possible that that choice to tint night scenes blue has as much to do with our perception of blue as a cold colour as it does with the functioning of our rods. This perception in turn may come from the way our skin turns bluer when cold, due to reduced blood flow, and redder when hot. (We saw in my recent article on white balance that, when dealing with incandescence at least, bluer actually means hotter.)
Whatever the reason, by the time it became possible to shoot in colour, blue had lodged in the minds of filmmakers and moviegoers as a shorthand for night.
Early colour films often staged their night scenes during the day; DPs underexposed and fitted blue filters in their matte boxes to create the illusion. It is hard to say whether the blue filters were an honest effort to make the sunlight look like moonlight or simply a way of winking to the audience: “Remember those black-and-white films where blue tinting meant you were watching a night scene? Well, this is the same thing.”
Day-for-night fell out of fashion probably for a number of reasons: 1. audiences grew more savvy and demanded more realism; 2. lighting technology for large night exteriors improved; 3. day-for-night scenes looked extremely unconvincing when brightened up for TV broadcast. Nonetheless, it remains the only practical way to show an expansive seascape or landscape, such as the desert in Mad Max: Fury Road.
One of the big technological changes for night shooting was the availability of HMI lighting, developed by Osram in the late 1960s. With these efficient, daylight-balanced fixtures large areas could be lit with less power, and it was easy to render the light blue without gels by photographing on tungsten film stock.
Cinematic moonlight reached a peak of blueness in the late 1980s and early ’90s, in keeping with the general fashion for saturated neon colours at that time. Filmmakers like Tony Scott, James Cameron and Jan de Bont went heavy on the candy-blue night scenes.
By the start of the 21st century bright blue moonlight was starting to feel a bit cheesy, and DPs were experimenting with other looks.
Speaking of the ferry scene in War of the Worlds, Janusz Kamiński, ASC said:
I didn’t use blue for that night lighting. I wanted the night to feel more neutral. The ferryboat was practically illuminated with warm light and I didn’t want to create a big contrast between that light and a blue night look.
The invention of the digital intermediate (DI) process, and later the all-digital cinematography workflow, greatly expanded the possibilities for moonlight. It can now be desaturated to produce something much closer to the silvery grey of reality. Conversely, it can be pushed towards cyan or even green in order to fit an orange-and-teal scheme of colour contrast.
Dariusz Wolski, ASC made this remark to American Cinematographer in 2007 about HMI moonlight on the Pirates of the Caribbean movies:
The colour temperature difference between the HMIs and the firelight is huge. If this were printed without a DI, the night would be candy blue and the faces would be red. [With a digital intermediate] I can take the blue out and turn it into more of a grey-green, and I can take the red out of the firelight and make it more yellow.
My favourite recent approach to moonlight was in the Amazon sci-fi series Tales from the Loop. Jeff Cronenweth, ASC decided to shoot all the show’s night scenes at blue hour, a decision motivated by the long dusks (up to 75 minutes) in Winnipeg, where the production was based, and the legal limits on how late the child actors could work.
The results are beautiful. Blue moonlight may be a cinematic myth, but Tales from the Loop is one of the few places where you can see real, naturally blue light in a night scene.
If you would like to learn how to light and shoot night scenes, why not take my online course, Cinematic Lighting? 2,300 students have enrolled to date, awarding it an average of 4.5 stars out of 5. Visit Udemy to sign up now.
How were visual effects achieved before the advent of computer generated imagery (CGI)? Most of us know that spaceships used to be miniatures, and monsters used to be puppets or people in suits, but what about the less tangible effects? How did you create something as exotic as an energy beam or a dimensional portal without the benefit of digital particle simulations? The answer was often a combination of chemistry, physics, artistry and ingenuity. Here are five examples.
1. “Star Trek” transporters
The original series of Star Trek, which premiered in 1966, had to get creative to achieve its futuristic effects with the budget and technology available. The Howard Anderson Company was tasked with realising the iconic transporter effect that enables Kirk’s intrepid crew to beam down to alien planets. Darrell Anderson created the characteristic sparkles of the dematerialisation by filming backlit aluminium powder being sprinkled in front of a black background in slow motion. Hand-drawn mattes were then used to ensure that the sparkling powder appeared only over the characters.
2. “Ghostbusters” proton packs
The much-loved 1984 comedy Ghostbusters features all kinds of traditional effects, including the never-to-be-crossed particle streams with which the heroes battle their spectral foes. The streams consist of five layers of traditional cel animation – the same technique used to create, say, a Disney classic like Sleeping Beauty – which were composited and enhanced on an optical printer. (An optical printer is essentially two or more film projectors connected to a camera so that multiple separate elements can be combined into a single shot.) Composited onto the tips of the Ghostbusters’ guns were small explosions and other pyrotechnic effects shot on a darkened stage.
3. “Lifeforce” energy beams
This cult 1985 sci-fi horror film, most notable for an early screen appearance by Patrick Stewart, features alien vampires who drain the titular lifeforce from their victims. To visualise this lifeforce, VFX supervisor John Dykstra settled on a process whereby a blue argon laser was aimed at a rotating tube made of highly reflective mylar. This threw flowing lines of light onto a screen, where they were captured by the camera for later compositing with the live-action plates. The tube could be deliberately distorted or dented to vary the effects, and to add more energy to certain shots, brief flashes from a xenon bulb were mixed in.
4. “Big Trouble in Little China” portal
A mixture of chemical and optical effects were employed for certain shots in the 1986 action-comedy Big Trouble in Little China. Director John Carpenter wanted an effervescent effect like “an Alka-Seltzer tablet in water” to herald the appearance of a trio of warriors known as the Three Storms. After many tests, the VFX team determined that a combination of green paint, metallic powder and acetone, heated in a Pyrex jar on a hotplate, produced an interesting and suitable effect. The concoction was filmed with a fisheye lens, then that footage was projected onto a dome to make it look like a ball of energy, and re-photographed through layers of distorted glass to give it a rippling quality.
5. “Independence Day” cloud tank
By 1996, CGI was replacing many traditional effects, but the summer blockbuster Independence Day used a healthy mix of both. To generate the ominous clouds in which the invading spacecraft first appear, the crew built what they called the “Phenomenon Rig”. This was a semi-circle of halogen lights and metal piping which was photographed in a water tank. Paint was injected into the water through the pipes, giving the appearance of boiling clouds when lit up by the lamps within. This was digitally composited with a live-action background plate and a model shot of the emerging ship.
Cathode ray tube televisions, those bulky, curve-screened devices we all used to have before the rise of LCD flat-screens, already seem like a distant memory. But did you know that they were not the first form of television? John Logie Baird and his contemporaries began with a mechanical TV system more akin to Victorian optical toys than to the electronic screens that held sway for the greater part of the 20th century.
Mechanical television took several forms, but the most common type revolved, quite literally, around a German invention of 1884 called the Nipkow disc. This had a number of small holes around it, evenly spaced in a spiral pattern. In the Baird standard, developed by the Scottish inventor in the late 1920s, there were 30 holes corresponding to 30 lines of resolution in the resulting image, and the disc would revolve 12.5 times per second, which was the frame rate.
In a darkened studio, an arc light would be shone through the top portion of a spinning Nipkow disc onto the subject. The disc would create a flying spot – a spot of light that travelled horizontally across the scene (as one of the holes passed in front of the arc lamp) and then travelled horizontally across it again but now slightly lower down (as the next hole in the spiral pattern passed the lamp) and so on. For each revolution of the 30-hole disc, 30 horizontal lines of light would be scanned across the subject, one below the other.
A number of photocells would be positioned around the subject, continually converting the overall brightness of the light to a voltage. As the flying spot passed over light-coloured surfaces, more light would reflect off them and into the photocells, so a greater voltage would be produced. As the spot passed over darker objects, less light would reflect into the photocells and a smaller voltage would result. The voltage of the photocells, after amplification, would modulate a radio signal for transmission.
A viewer’s mechanical television set would consist of a radio receiver, a neon lamp and an upright Nipkow disc of a foot or two in diameter. The lamp – positioned behind the spinning disc – would fluctuate in brightness according to the radio signal.
The viewer would look through a rectangular mask fitted over the top portion of the disc. Each hole that passed in front of the neon lamp would draw a streak of horizontal (albeit slightly arcing) light across the frame, a streak varying in brightness along its length according to the continually varying brightness of the lamp. The next hole would draw a similar line just beneath it, and so on. Thanks to persistence of vision, all the lines would appear to the viewer as a single complete frame, followed by 11.5 more frames each second: a moving image.
A number of people were experimenting with this crude but magical technology at the same time, with Baird, the American Charles Francis Jenkins and the Japanese Kenjiro Takayanagi all giving historic public demonstrations in 1925.
The image quality was not great. For comparison, standard definition electronic TV has 576 lines and 25 frames per second in the UK, twice the temporal resolution and almost 20 times the spatial resolution of the Baird mechanical standard. The image was very dim, it was only an inch or two across, and it could only be viewed by a single person through a hood or shade extending from the rectangular mask.
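Those comparisons are simple arithmetic on the two standards' figures quoted above; a quick sketch makes the gulf between them concrete.

```python
def line_rate(lines_per_frame, frames_per_sec):
    """Horizontal lines scanned per second for a given TV standard."""
    return lines_per_frame * frames_per_sec

baird = line_rate(30, 12.5)   # Baird standard: 375 lines per second
pal_sd = line_rate(576, 25)   # UK standard definition: 14,400 lines per second

print(576 / 30)        # ≈19.2x the vertical resolution
print(25 / 12.5)       # 2x the frame rate
print(pal_sd / baird)  # ≈38.4x the line rate overall
```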
The BBC began transmitting a regular mechanical TV service in 1929, by which time several stations were up and running in the USA. An early viewer, Ohio-based Murry Mercier Jr., who like many radio enthusiasts built his own mechanical TV from a kit, described one of the programmes he watched as “about 15 minutes long, consisting of block letters, from the upper left to the lower right of the screen. This was followed by a man’s head turning from left to right.” Hardly Breaking Bad.
Higher resolutions and larger images required larger Nipkow discs. A brighter image necessitated lenses in each of the disc’s holes to magnify the light. Baird once experimented with a disc of a staggering 8ft in diameter, fitted with lenses the size of bowling balls. One of the lenses came loose, unbalancing the whole disc and sending pieces flying across the workshop at lethal speeds.
Other methods of reproducing the image were developed, including the mirror screw, consisting of a stack of thin mirrors arranged like a spiral staircase, one “step” for each line of the image. The mirror screw produced much larger, brighter images than the Nipkow disc, but the writing was already on the wall for mechanical television.
By 1935, cathode ray tubes – still scanning their images line by line, but by magnetically deflecting an electron beam rather than with moving parts – had surpassed their mechanical counterparts in picture quality. The BBC shut down its mechanical service, pioneers like Baird focused their efforts on electronic imaging, and mechanical TV quietly disappeared.
“These are small,” Father Ted once tried to explain to Father Dougal, holding up toy cows, “but the ones out there are far away.” We may laugh at the gormless sitcom priest, but the chances are that we’ve all confounded size and distance, on screen at least.
The ship marooned in the desert in Close Encounters of the Third Kind, the cliff at the end of Tremors, the runways and planes visible through the windows of Die Hard 2’s control tower, the helicopter on the boat in The Wolf of Wall Street, even the beached whale in Mega Shark Versus Giant Octopus – all are small, not far away.
The most familiar forced perspective effect is the holiday snap of a friend or family member picking up the Eiffel Tower between thumb and forefinger, or trying to right the Leaning Tower of Pisa. By composing the image so that a close subject (the person) appears to be in physical contact with a distant subject (the landmark), the latter appears to be as close as the former, and therefore much smaller than it really is.
Architects have been playing tricks with perspective for centuries. Italy’s Palazzo Spada, for example, uses diminishing columns and a ramped floor to make a 26ft corridor look 100ft long. Many film sets – such as the basement of clones in Moon – have used the exact same technique to squeeze extra depth out of limited studio space or construction resources.
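Underlying all of these tricks is simple similar-triangles geometry: an object scaled down by a factor of s, placed at s times the distance, subtends exactly the same angle at the lens. A quick sketch (the heights and distances here are purely illustrative):

```python
import math

def angular_size(height, distance):
    """Angle (in degrees) that an object of a given height subtends at the lens."""
    return math.degrees(2 * math.atan(height / (2 * distance)))

# A full-size 8ft object 80ft from the lens...
full_size = angular_size(8, 80)
# ...subtends the same angle as a 1/8-scale, 1ft miniature just 10ft away,
# so on camera the two are indistinguishable in size.
miniature = angular_size(1, 10)
```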
Even a set that is entirely miniature can benefit from forced perspective, with a larger scale being used in the foreground and a smaller one in the background, increasing the perceived depth. For example, The Terminator’s “Future War” scenes employ skulls of varying size, with background ruins on an even smaller scale.
An early cinematic display of forced perspective was the 1908 short Princess Nicotine, in which a fairy who appears to be cavorting on a man’s tabletop is actually a reflection in a distant mirror. “The little fairy moves so realistically that she cannot be explained away by assuming that she is a doll,” remarked a Scientific American article of the time, “and yet it is impossible to understand how she can be a living being, because of her small stature.”
During the 1950s, B movies featuring fantastically shrunk or enlarged characters made full use of forced perspective, as did the Disney musical Darby O’Gill and the Little People. VFX supervisor Peter Ellenshaw, interviewed for a 1994 episode of Movie Magic, remembered the challenges of creating sufficient depth of field to sell the illusion: “You had to focus both on the background and the foreground [simultaneously]. It was very difficult. We had to use so much light on set that eventually we blew the circuit-breakers in the Burbank power station.”
Randall William Cook was inspired years later by Ellenshaw’s work when he was called upon to realise quarter-scale demonic minions for the 1987 horror movie The Gate. Faced with a tiny budget, Cook devised in-camera solutions with human characters on raised foreground platforms, and costumed minions on giant set-pieces further back, all carefully designed so that the join was undetectable. As the contemporary coverage in Cinefex magazine noted, “One of the advantages of a well-executed forced perspective shot is that the final product requires no optical work and can therefore be viewed along with the next day’s rushes.”
A subset of forced perspective effects is the hanging miniature – a small-scale model suspended in front of the camera, typically as a set extension. The 1925 version of Ben-Hur used this technique for wide shots of the iconic chariot race. The arena of the Circus Maximus was full size, but in front of and above it was hung a miniature spectators’ gallery containing 10,000 tiny puppets which could stand and wave as required.
Doctor Who used foreground miniatures throughout its classic run, often more successfully than it used the yellow-fringed chromakey of the time. Earthly miniatures like radar dishes, missile launchers and big tops were captured on location, in camera, with real skies and landscapes behind them. The heroes convincingly disembark from an alien spaceship in the Tom Baker classic “Terror of the Zygons” by means of a foreground miniature and the actors jumping off the back of a van in the distance. A third-scale Tardis was employed in a similar way when the production wanted to save shipping costs on a 1984 location shoot on Lanzarote.
Even 60 years on from Ben Hur, Aliens employed the same technique to show the xenomorph-encrusted roof in the power plant nest scene. The shot – which fooled studio executives so utterly that they complained about extravagant spending on huge sets – required small lights to be moved across the miniature in sync with the actors’ head-torches.
The Aliens shot also featured a tilt-down, something only possible with forced perspective if the camera pivots around its nodal point (strictly, the lens’s entrance pupil) – the point about which the camera can rotate without any shift in perspective. Any other type of camera movement gives the game away due to parallax, the optical phenomenon which makes closer objects move through a field of view more quickly than distant ones.
The 1993 remake of Attack of the 50ft Woman made use of a nodal pan to follow Daniel Baldwin to the edge of an outdoor swimming pool which a giant Daryl Hannah is using as a bath. A 1/8th-scale pool with Hannah in it was mounted on a raised platform to align perfectly on camera with the real poolside beyond, where Baldwin stood.
The immediacy of forced perspective, allowing actors of different scales to riff off each other in real time, made it the perfect choice for the seasonal comedy Elf. The technique is not without its disadvantages, however. “The first day of trying, the production lost a whole day setting up one shot and never captured it,” recalls VFX supervisor Joe Bauer in the recent documentary Holiday Movies That Made Us.
Elf’s studio, New Line, was reportedly concerned that the forced perspective shots would never work, but given what a certain Peter Jackson was doing for that same studio at the same time, they probably shouldn’t have worried.
The Lord of the Rings employed a variety of techniques to sell the hobbits and dwarves as smaller than their human friends, but it was in the field of forced perspective that the trilogy was truly groundbreaking. One example was an extended cart built to accommodate Ian McKellen’s Gandalf and Elijah Wood’s supposedly diminutive Frodo. “You could get Gandalf and Frodo sitting side by side apparently, although in fact Elijah Wood was sitting much further back from the camera than Gandalf,” explains producer Barrie Osborne in the trilogy’s extensive DVD extras.
Jackson insisted on the freedom to move his camera, so his team developed a computer-controlled system that would correct the tell-tale parallax. “You have the camera on a motion-controlled dolly, making it move in and out or side to side,” reveals VFX DP Brian Van’t Hul, “but you have another, smaller dolly [with one of the actors on] that’s electronically hooked to it and does the exact same motion but sort of in a counter movement.”
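As a rough model of what that counter movement achieves – my own back-of-the-envelope sketch, not Weta’s actual motion-control maths – if the far actor sits at distance D2 from the lens but must appear to be at distance D1, then a sideways camera move of x requires his platform to translate by x·(1 − D2/D1), a negative number, i.e. a move in the opposite direction:

```python
def platform_offset(camera_x, apparent_dist, actual_dist):
    """Lateral movement of the far actor's platform that keeps his parallax
    consistent with the closer, apparent distance as the camera tracks."""
    return camera_x * (1 - actual_dist / apparent_dist)

# Camera tracks 1ft sideways; the actor sits 16ft away but must read as 8ft.
# The platform counter-moves by the same amount in the opposite direction.
offset = platform_offset(1.0, 8.0, 16.0)   # -1.0
```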
Forced perspective is still alive and kicking today. For Star Wars Episode IX: The Rise of Skywalker, production designer Kevin Jenkins built a 5ft sand-crawler for shooting in the Jordanian desert. “It was placed on a dressed table at height,” he explained on Twitter, “and the Jawa extras were shot at the same time a calculated distance back from the mini. A very fine powdery sand was dressed around for scale. We even made a roller to make mini track prints! Love miniatures :)”
Next month, Terminator 2: Judgment Day turns 30. Made by a director and star at the peaks of their powers, T2 was the most expensive film ever at the time, and remains both the highest-grossing movie of Arnold Schwarzenegger’s career and the sequel which furthest out-performed its progenitor. It is also one of a handful of films that changed the world of visual effects forever, signalling as it did – to borrow the subtitle from its woeful follow-up – the rise of the machines.
The original Terminator, a low-budget surprise hit in 1984, launched director James Cameron’s career and cemented Schwarzenegger’s stardom, but it wasn’t until 1990 that the sequel was green-lit, mainly due to rights issues. At the Cannes Film Festival that year, Cameron handed executive producer Mario Kassar his script.
Today it’s easy to forget how risky it was to turn the Terminator, an iconic villain, an unstoppable, merciless death machine from an apocalyptic future, into a good guy who doesn’t kill anyone, stands on one leg when ordered, and looks like a horse when he attempts to smile. But Kassar didn’t balk, granting Cameron a budget ten times what he had had for the original, while stipulating that the film had to be in cinemas just 14 months later.
Even with some expensive sequences cut – including John Connor sending Kyle Reese back through time in the heart of Skynet HQ, a scene that would ultimately materialise in Terminator Genisys – the script was lengthy and extremely ambitious. Shooting began on October 8th, 1990, with a schedule front-loaded with effects shots to give CGI pioneers Industrial Light and Magic the maximum time to realise the liquid metal T-1000 (Robert Patrick).
To further ease ILM’s burden, every trick in the book was employed to get T-1000 shots in camera wherever possible: quick shots of the villain’s fight with the T-800 (Schwarzenegger) in the steel mill finale were done with a stuntman in a foil suit; a chrome bust of Patrick was hand-raised into frame for a helicopter pilot’s reaction shot; the reforming of the shattered T-1000 was achieved by blowing mercury around with a hair dryer; bullet hits on the character’s torso were represented by spring-loaded silver “flowers” that burst out of a pre-scored shirt on cue.
Stan Winston Studio also constructed a number of cable-controlled puppets to show more extensive damage to the morphing menace. These included “Splash Head”, a bust of Patrick with the head split in two by a shotgun blast, and “Pretzel Man”, the nightmarish result of a grenade hit moments before the T-1000 falls to its doom in the molten steel.
Traditional models and rear projection are used throughout the film. A few instances are all too obvious to a modern audience, but most still look great and some are virtually undetectable. Did you know that the roll-over and crash of the cryo-tanker were shot with miniatures? Or that the T-800 plucking John off his bike in the drainage channel was filmed against a rear projection screen?
Plenty of the action was accomplished without such trickery. The production added a third storey to a disused office building near Silicon Valley, then blew it up with 100 gallons of petrol, to show the demise of Cyberdyne Systems. DP Adam Greenberg lit 5.5 miles of freeway for the car chase, and pilot Chuck Tamburro really did fly the T-1000’s police helicopter under a 20ft underpass.
Chaotic, confusing action scenes are the norm today, but it is notable that T2’s action is thrilling yet never unclear. The film sends somewhat mixed messages though, with its horrific images of nuclear annihilation and the T-800’s morality lessons from John juxtaposed with indulgent violence and a reverence for firearms. “I think of T2 as a violent movie about world peace,” Cameron paradoxically stated. “It’s an action movie about the value of human life.”
Meanwhile, 25 person-years of human life were being devoted by ILM to the T-1000’s metallic morphing abilities. Assistant VFX supervisor Mark Dippé noted: “We were pushing the limits of everything – the amount of disc space we had, the amount of memory we had in the computers, the amount of CPUs we had. Each shot, even though it only lasted about five seconds on the screen, typically would take about eight weeks to complete.”
The team began by painting a 2×2” grid on a near-naked Patrick and shooting reference footage of him walking, before laser-scanning his head at the appropriately named Cyberware Laboratory. Four separate computer models of the T-1000 were built on Silicon Graphics Iris 4Ds, from an amorphous blob to a fully-detailed chrome replica of Patrick, each with corresponding points in 3D space so that the custom software Model Interp could morph between them.
Other custom applications included Body Sock, a solution to gaps that initially appeared when the models flexed their joints, Polyalloy Shader, which gave the T-1000 its chrome appearance, and Make Sticky, with which images of Patrick were texture-mapped onto the distorting 3D model, as when he melts through a barred gate at the mental hospital.
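At its heart, morphing between models with corresponding points is linear interpolation on matching vertices. The sketch below is my own illustration of the principle, not ILM’s actual Model Interp code:

```python
def morph(points_a, points_b, t):
    """Blend two models sharing corresponding vertices: t=0 gives model A,
    t=1 gives model B, and values in between give the intermediate shapes."""
    return [
        tuple((1 - t) * a + t * b for a, b in zip(va, vb))
        for va, vb in zip(points_a, points_b)
    ]

# Two toy "models" with three corresponding vertices each.
blob    = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
replica = [(0.0, 0.0, 2.0), (2.0, 0.0, 2.0), (0.0, 2.0, 2.0)]

halfway = morph(blob, replica, 0.5)
# halfway == [(0.0, 0.0, 1.0), (1.5, 0.0, 1.0), (0.0, 1.5, 1.0)]
```

Animating t from 0 to 1 would play the blob-to-replica transformation over time, one in-between shape per frame.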
The film’s legacy in visual effects – for which it won the 1992 Oscar – cannot be overstated. A straight line can be drawn from the water tendril in Cameron’s The Abyss, through T2 to Jurassic Park and all the way on to Avatar, with which Cameron again broke the record for the highest-grossing film of all time. The Avatar sequels will undoubtedly push the technology even further, but for many Cameron fans his greatest achievement will always be Terminator 2: Judgment Day, with its perfect blend of huge stunts, traditional effects and groundbreaking CGI.