The Cinematography of “Doctor Who”

Just a quick post to say that the latest special edition of Doctor Who Magazine, out now, features an article I wrote about the history of the venerable series’ cinematography. From the cathode-ray tube multi-camera studio shoots of 1963 to the latest ARRI Alexa/Cooke Anamorphic photography, the technology and techniques of lighting and lensing Doctor Who encapsulate the history of TV making over the last six decades. I had a great time combining two subjects I know quite a bit about and was very excited to see the article in print. Look out for it in your local newsagent!

Mechanical TV: A Forgotten Format

Cathode ray tube televisions, those bulky, curve-screened devices we all used to have before the rise of LCD flat-screens, already seem like a distant memory. But did you know that they were not the first form of television, that John Logie Baird and his contemporaries first invented a mechanical TV system more akin to Victorian optical toys than the electronic screens that held sway for the greater part of the 20th century?

Mechanical television took several forms, but the most common type revolved, quite literally, around a German invention of 1884 called the Nipkow disc. This had a number of small holes around it, evenly spaced in a spiral pattern. In the Baird standard, developed by the Scottish inventor in the late 1920s, there were 30 holes corresponding to 30 lines of resolution in the resulting image, and the disc would revolve 12.5 times per second, which was the frame rate.

In a darkened studio, an arc light would be shone through the top portion of a spinning Nipkow disc onto the subject. The disc would create a flying spot – a spot of light that travelled horizontally across the scene (as one of the holes passed in front of the arc lamp) and then travelled horizontally across it again but now slightly lower down (as the next hole in the spiral pattern passed the lamp) and so on. For each revolution of the 30-hole disc, 30 horizontal lines of light would be scanned across the subject, one below the other.

A number of photocells would be positioned around the subject, continually converting the overall brightness of the light to a voltage. As the flying spot passed over light-coloured surfaces, more light would reflect off them and into the photocells, so a greater voltage would be produced. As the spot passed over darker objects, less light would reflect into the photocells and a smaller voltage would result. The voltage of the photocells, after amplification, would modulate a radio signal for transmission.
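
If it helps to think of that in modern terms, here is a little illustrative sketch in Python – the real system was entirely analogue, of course, and the number of samples per line here is invented – showing how one revolution of the disc flattens a two-dimensional scene into a single stream of brightness values, and how the receiver simply lays that stream back out, line by line.

```python
import numpy as np

LINES = 30     # holes in the Baird-standard Nipkow disc
SAMPLES = 70   # invented: the real signal was continuous along each line
FPS = 12.5     # one frame per revolution of the disc

def scan_frame(scene):
    """Camera end: the flying spot sweeps the scene line by line, and the
    photocells reduce each instant to a single brightness (voltage) value."""
    signal = []
    for line in range(LINES):
        for sample in range(SAMPLES):
            signal.append(scene[line, sample])   # one photocell reading
    return np.array(signal)                      # the 1-D signal that modulates the radio carrier

def display_frame(signal):
    """Receiver end: the neon lamp's brightness follows the signal while the
    viewer's own disc redraws the same line pattern, rebuilding the image."""
    return signal.reshape(LINES, SAMPLES)

# A toy "scene": a bright diagonal stripe on a dark background
scene = np.zeros((LINES, SAMPLES))
for i in range(LINES):
    scene[i, i * 2:i * 2 + 6] = 1.0

assert np.array_equal(display_frame(scan_frame(scene)), scene)
print(f"{LINES * SAMPLES} brightness values per frame, {FPS} frames per second")
```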

This picture from “Science and Invention”, November 1928, shows the radio receiver on the left and the Nipkow disc with its conical viewing shade on the right.

A viewer’s mechanical television set would consist of a radio receiver, a neon lamp and an upright Nipkow disc of a foot or two in diameter. The lamp – positioned behind the spinning disc – would fluctuate in brightness according to the radio signal.

The viewer would look through a rectangular mask fitted over the top portion of the disc. Each hole that passed in front of the neon lamp would draw a streak of horizontal (albeit slightly arcing) light across the frame, a streak varying in brightness along its length according to the continually varying brightness of the lamp. The next hole would draw a similar line just beneath it, and so on. Thanks to persistence of vision, all the lines would appear at once to the viewer as a single frame, which would be followed by 11.5 more sets of lines each second: a moving image.

A number of people were experimenting with this crude but magical technology at the same time, with Baird, the American Charles Francis Jenkins and the Japanese Kenjiro Takayanagi all giving historic public demonstrations in 1925.

The image quality was not great. For comparison, standard definition electronic TV has 576 lines and 25 frames per second in the UK, twice the temporal resolution and almost 20 times the spatial resolution of the Baird mechanical standard. The image was very dim, it was only an inch or two across, and it could only be viewed by a single person through a hood or shade extending from the rectangular mask.

The BBC began transmitting a regular mechanical TV service in 1929, by which time several stations were up and running in the USA. An early viewer, Ohio-based Murry Mercier Jr., who like many radio enthusiasts built his own mechanical TV from a kit, described one of the programmes he watched as “about 15 minutes long, consisting of block letters, from the upper left to the lower right of the screen. This was followed by a man’s head turning from left to right.” Hardly Breaking Bad.

John Logie Baird working on a mechanical TV set

Higher resolutions and larger images required larger Nipkow discs. A brighter image necessitated lenses in each of the disc’s holes to magnify the light. Baird once experimented with a disc of a staggering 8ft in diameter, fitted with lenses the size of bowling balls. One of the lenses came loose, unbalancing the whole disc and sending pieces flying across the workshop at lethal speeds.

Other methods of reproducing the image were developed, including the mirror screw, consisting of a stack of thin mirrors arranged like a spiral staircase, one “step” for each line of the image. The mirror screw produced much larger, brighter images than the Nipkow disc, but the writing was already on the wall for mechanical television.

By 1935, cathode ray tubes – still scanning their images line by line, but by magnetically deflecting an electron beam rather than with moving parts – had surpassed their mechanical counterparts in picture quality. The BBC shut down its mechanical service, pioneers like Baird focused their efforts on electronic imaging, and mechanical TV quietly disappeared.

How to Make High-end TV During a Pandemic

Many productions are up and running again, and a recent ScreenSkills seminar revealed how two high-end series were amongst the first to tackle TV-making during a global pandemic.

Death in Paradise is a long-running crime drama about fish-out-of-water British detectives – the latest played by Ralf Little – heading murder investigations on the fictional Caribbean island of Saint Marie. Production of the show’s tenth season, originally scheduled for April, commenced instead in late July.

The Pursuit of Love is a mini-series based on the novel by Nancy Mitford, set between the two world wars. Lily James and Emily Beecham star as women in quest of husbands, in an adaptation written and directed by Emily Mortimer. Filming likewise began in late July, in South West England.

What both productions have in common, and a key reason why they were able to start up ahead of so many others, is that their insurance was already in place before lockdown hit. The policies include producer’s indemnity, covering costs outside of the production’s control.

Co-executive producer Alex Jones of Red Planet Pictures explained that Death in Paradise had a few other things going for it too. Most obvious of these was the location, the French archipelago of Guadeloupe, which formed a natural bubble. All cast and crew were tested for Covid-19 before flying out, then again seven days after arrival and at the start of each filming block. Having been around for ten years made adapting the production easier than starting one from scratch, Jones believes.

Ian Hogan, line producer of The Pursuit of Love, did not have the advantage of an established machine. He said that a full-time health and safety adviser with a background in location management spent weeks working out Coronavirus protocols for the period drama. Crew members each received a copy of these, and were required to agree that they would not go out in their spare time except for exercise and essential shopping. Every day they must declare remotely that they have no symptoms of Covid-19 before they can receive a green pass which allows them through location security. They must then take a temperature test before accessing the set.

Both producers insist that age and underlying health problems are not a barrier to work. Cast and crew who are particularly vulnerable to Covid-19 are given a personalised risk assessment with mitigation steps to follow.

Death in Paradise chose to film using the “one metre plus” social distancing rule common to both France and England. A former assistant director was hired as a Covid supervisor, a role which sometimes involved helping to re-block scenes to avoid physical proximity.

But for The Pursuit of Love, as the title suggests, intimacy was crucial. The producers opted for a close-contact system, dividing personnel into cohorts. A mobile testing lab with a capacity of 70 a day is always on location, and everyone is checked at least once a week. The Director’s Cohort – consisting of Mortimer, the cast, and key on-set crew like the DP, boom op and focus puller – are tested twice a week.

A monitor signal is distributed wirelessly around the set to production iPads and personal devices, to prevent a crowded video village. The DIT sends this camera feed via a local wifi network using Qtake.

Both productions require face-coverings. At least one director of Death in Paradise switched from a mask to a visor so that their cast and crew could read their facial expressions, so important when giving notes.

Visors are also used for close-contact work like make-up and costume, the two departments perhaps most affected by the pandemic. Hogan hired extra make-up trucks so that the chairs could be sufficiently spaced, and both productions expanded their crews to obviate the need for dailies. Instead, extra MUAs and dressers might be engaged for eight weeks out of 12, but on an exclusive basis so that they don’t risk spreading the virus to or from other sets.

Wardrobe fitting for supporting artists is much more involved than usual, as the same costume cannot be tried on multiple people without cleaning in-between. Greater numbers of costumes must be hired, and measurements that are taken remotely are much more important.

All of this is expensive, of course. Jones estimates it has added 15 per cent to Death in Paradise’s budget, fortunately covered by the insurance. The pace of filming has slowed, but not as much as might be expected, with just two extra filming days per block, and slightly less coverage recorded than before.

Both Jones and Hogan praised the responsibility and enthusiasm with which their crews returned to work. They are positive about the future of TV production. While there have been fears that Coronavirus would shrink crews, Jones’s has actually grown, with a larger off-set support staff. “Our industry is booming,” he concluded, “and it will continue to boom when this is all over.”

This article first appeared on RedShark News.

Will “The Mandalorian” Revolutionise Filmmaking?

Last week, Greig Fraser, ASC, ACS and Baz Idoine were awarded the Emmy for Outstanding Cinematography for a Single-camera Series (Half-hour) for The Mandalorian. I haven’t yet seen this Star Wars TV series, but I’ve heard and read plenty about it, and to call it a revolution in filmmaking is not hyperbole.

Half of the series was not shot on location or on sets, but on something called a volume: a stage with walls and ceiling made of LED screens, 20ft tall, 75ft across and encompassing 270° of the space. I’ve written before about using large video screens to provide backgrounds in limited ways, outside of train windows for example, and using them as sources of interactive light, but the volume takes things to a whole new level.

In the past, the drawback of the technology has been one of perspective; it’s a flat, two-dimensional screen. Any camera movement revealed this immediately, because of the lack of parallax. So these screens tended to be kept to the deep background, with limited camera movement, or with plenty of real people and objects in the foreground to draw the eye. The footage shown on the screens was pre-filmed or pre-rendered, just video files being played back.

The Mandalorian‘s system, run by multiple computers simultaneously, is much cleverer. Rather than a video clip, everything is rendered in real time from a pre-built 3D environment known as a load, running on software developed for the gaming industry called Unreal Engine. Around the stage are a number of witness cameras which use infra-red to monitor the movements of the cinema camera in the same way that an actor is performance-captured for a film like Avatar. The data is fed into Unreal Engine, which generates the correct shifts in perspective and sends them to the video walls in real time. The result is that the flat screen appears, from the cinema camera’s point of view, to have all the depth and distance required for the scene.
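
To get a feel for the geometry, here is a small illustrative sketch – not Unreal Engine’s actual code, and the real stage has a curved wall rather than a flat one – of the classic “off-axis” projection that any camera-tracked screen relies on: given the tracked camera position and the corners of a flat screen, it computes the asymmetric viewing frustum that makes the screen render with correct parallax from that camera’s point of view.

```python
import numpy as np

def off_axis_frustum(eye, lower_left, lower_right, upper_left, near=0.1):
    """Asymmetric frustum extents (left, right, bottom, top at the near plane)
    for a flat screen viewed from an arbitrary eye/camera position.
    The eye and the screen corners are 3-D points in the same world units."""
    vr = lower_right - lower_left            # screen "right" direction
    vu = upper_left - lower_left             # screen "up" direction
    vr /= np.linalg.norm(vr)
    vu /= np.linalg.norm(vu)
    vn = np.cross(vr, vu)                    # screen normal, pointing towards the eye
    vn /= np.linalg.norm(vn)

    va = lower_left - eye                    # eye to each screen corner
    vb = lower_right - eye
    vc = upper_left - eye

    d = -np.dot(va, vn)                      # perpendicular eye-to-screen distance
    left = np.dot(vr, va) * near / d         # extents scaled back to the near plane
    right = np.dot(vr, vb) * near / d
    bottom = np.dot(vu, va) * near / d
    top = np.dot(vu, vc) * near / d
    return left, right, bottom, top

# A 4m-wide, 2.5m-tall wall section with the tracked camera 3m back and
# 1m left of centre: the frustum comes out lopsided, which is exactly the
# perspective shift the wall has to display for the parallax to look right.
eye = np.array([-1.0, 1.5, 3.0])
print(off_axis_frustum(eye,
                       lower_left=np.array([-2.0, 0.0, 0.0]),
                       lower_right=np.array([2.0, 0.0, 0.0]),
                       upper_left=np.array([-2.0, 2.5, 0.0])))
```

Feed those four numbers into a standard projection matrix and the renderer has everything it needs; the clever part on a real volume is doing this continuously, from live tracking data, fast enough to keep up with the camera operator.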

The loads are created by CG artists working to the production designer’s instructions, and textured with photographs taken at real locations around the world. In at least one case, a miniature set was built by the art department and then digitised. The scene is lit with virtual lights by the DP – all this still during preproduction.

The volume’s 270° of screens, plus two supplementary, moveable screens in the 90° gap behind camera, are big enough and bright enough that they provide most or all of the illumination required to film under. The advantages are obvious. “We can create a perfect environment where you have two minutes to sunset frozen in time for an entire ten-hour day,” Idoine explains. “If we need to do a turnaround, we merely rotate the sky and background, and we’re ready to shoot!”

Traditional lighting fixtures are used minimally on the volume, usually for hard light, which the omni-directional pixels of an LED screen can never reproduce. If the DPs require soft sources beyond what is built into the load, the technicians can turn any off-camera part of the video screens into an area of whatever colour and brightness are required –  a virtual white poly-board or black solid, for example.

A key reason for choosing the volume technology was the reflective nature of the eponymous Mandalorian’s armour. Had the series been shot on a green-screen, reflections in his shiny helmet would have been a nightmare for the compositing team. The volume is also much more actor- and filmmaker-friendly; it’s better for everyone when you can capture things in-camera, rather than trying to imagine what they will look like after postproduction. “It gives the control of cinematography back to the cinematographer,” Idoine remarks. VR headsets mean that he and the director can even do a virtual recce.

The Mandalorian shoots on the Arri Alexa LF (large format), giving a shallow depth of field which helps to avoid moiré problems with the video wall. To ensure accurate chromatic reproduction, the wall was calibrated to the Alexa LF’s colour filter array.

Although the whole system was expensive to set up, once up and running it’s easy to imagine how quickly and cheaply the filmmakers can shoot on any given “set”. The volume has limitations, of course. If the cast need to get within a few feet of a wall, for example, or walk through a door, then that set-piece has to be real. If a scene calls for a lot of direct sunlight, then the crew move outside to the studio backlot. But undoubtedly this technology will improve rapidly, so that it won’t be long before we see films and TV episodes shot entirely on volumes. Perhaps one day it could overtake traditional production methods?

For much more detailed information on shooting The Mandalorian, see this American Cinematographer article.

The Rise of Anamorphic Lenses in TV

Each month a digital copy of American Cinematographer arrives in my inbox, filled with illuminating (pun intended) articles about the lighting and lensing of the latest theatrical releases. As a rule of thumb, I only read the articles if I’ve seen the films. Trouble is, I don’t go to the cinema much any more – and that was true even before Coronavirus put a stop to it altogether.

Why? TV is better, simple as that. Better writing, better cinematography, better value for money. (Note: I include streaming services like Netflix and Amazon under the umbrella of “TV” here.) But whereas I can turn to AC to discover the why and how of the cinematography of a movie, there is no equivalent for long-form content. I would love to see a magazine dedicated to the beautiful cinematography of streaming shows, but until then I’ll try to plug the gap myself.

I’d like to start with a look at the increasing use of anamorphic lenses for the small screen. Let’s look at a few examples and try to discover what anamorphic imaging adds to a project.

Lenses with an anamorphic element squeeze the image horizontally, allowing a wider field of view to be captured. The images are restored to their correct proportions in postproduction, but depth of field, bokeh (out-of-focus areas), barrel distortion and lens flare all display different characteristics from those obtained with traditional spherical lenses.
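
The arithmetic of the squeeze is simple enough. Here is a minimal sketch, assuming the classic 2x squeeze factor (different lens families use different factors):

```python
def desqueezed_aspect(sensor_width, sensor_height, squeeze=2.0):
    """Aspect ratio of the image once the horizontal squeeze is undone."""
    return (sensor_width * squeeze) / sensor_height

# A 4:3 capture area with 2x anamorphic glass yields the classic
# ultra-widescreen frame of roughly 2.66:1 (usually delivered as 2.39:1).
print(round(desqueezed_aspect(4, 3, squeeze=2.0), 2))   # 2.67
```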

 

The cinematic look

“Doctor Who: The Woman Who Fell to Earth”, DP: Denis Crossan

The venerable Doctor Who, which started off shooting on 405-line black-and-white videotape more than half a century ago, has employed Arri Alexas and Cooke Anamorphic/i glass since the introduction of Jodie Whittaker’s 13th Doctor. “[Director Jamie Childs] suggested we shoot on anamorphic lenses to give it a more filmic look,” says DP Denis Crossan. “You get really nice background falloff and out of focus ellipses on light sources.”

While most viewers will not be able to identify these visual characteristics specifically, they will certainly be aware of a more cinematic feel to the show overall. This is because we associate anamorphic images – even if we do not consciously know them as such – with the biggest of Hollywood blockbusters, everything from Die Hard to Star Trek Beyond.

It’s not just the BBC who are embracing anamorphic. DP Ollie Downey contrasted spherical glass with vintage anamorphics to deliberate effect in “The Commuter”, an episode of the Channel 4/Amazon sci-fi anthology series Electric Dreams.

The story revolves around Ed (Timothy Spall) whose mundane but difficult life turns upside down when he discovers Macon Heights, a town that seems to exist in an alternate reality. “Tim Spall’s character is torn between his real life and the fantastical world of Macon Heights,” Downey explains on his Instagram feed. “We shot Crystal Express Anamorphics for his regular life, and Zeiss Super Speed Mk IIs for Macon Heights.”

The anamorphic process was invented as a way to get a bigger image from the same area of 35mm negative, but in today’s world of ultra-high-resolution digital sensors there is no technical need for anamorphics, only an aesthetic one. In fact, they can actually complicate the process, as Downey notes: “We had to shoot 8K on the Red to be able to punch in to our Crystal Express to extract 16:9 and still deliver 4K to Amazon.”
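
Here is the back-of-the-envelope version of that, with assumed numbers – an 8192×4320-photosite sensor and a 2x squeeze, which may not match exactly what Downey’s team were working with. Cropping a 16:9 frame out of the very wide desqueezed image discards a large slice of the sensor’s width, so you need plenty of photosites to begin with.

```python
def photosites_after_16x9_crop(sensor_w, sensor_h, squeeze=2.0):
    """Horizontal photosites that survive when a 16:9 region (using the full
    sensor height) is cropped out of the desqueezed anamorphic image."""
    crop_width_desqueezed = sensor_h * 16 / 9      # 16:9 crop at full height
    crop_width_photosites = crop_width_desqueezed / squeeze
    assert crop_width_photosites <= sensor_w       # the crop must fit on the sensor
    return crop_width_photosites

# Assumed 8K sensor (8192 x 4320) with a 2x squeeze:
print(photosites_after_16x9_crop(8192, 4320))      # 3840 -> enough for a 4K UHD master
# The same exercise starting from 4K (4096 x 2160):
print(photosites_after_16x9_crop(4096, 2160))      # 1920 -> effectively only 2K of detail
```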

“Electric Dreams: The Commuter”, DP: Ollie Downey

 

Evoking a period

Back at the BBC, last year’s John le Carré adaptation The Little Drummer Girl uses anamorphic imaging to cement its late 1970s setting. The mini-series revolves around Charmian, an actress who is recruited by Israeli intelligence via the mysterious agent Becker. The truth is distorted throughout, just as the wide anamorphic lenses distort every straight line into a curve.

Reviewing the show for The Independent, Ed Cumming notes that director Park Chan-wook “does not aim to be invisible but to remind you constantly that what you are seeing is a creation. Take the scene at a beachside taverna in Greece, where Charmian and Becker start talking properly to each other. The camera stays still, the focus snaps between him and her.” Such focus pulls are more noticeable in anamorphic because the subject stretches vertically as it defocuses.


The Little Drummer Girl is slavish in its recreation of the period, in camera style as well as production design. Zooms are used frequently, their two-dimensional motion intricately choreographed with the actors who step in and out of multiple planes in the image. Such shots were common in the 70s, but have since fallen very much out of fashion. When once they would have passed unnoticed, a standard part of film grammar, they now draw attention.

“The Little Drummer Girl”, DP: Woo-Hyung Kim

 

Separating worlds

Chilling Adventures of Sabrina, a Netflix Original, also draws attention with its optics. Charting the trials and tribulations of a teenaged witch, the show uses different makes of lenses to differentiate two worlds, just like “The Commuter”.

According to DP David Lazenberg’s website, he mixed modern Panavision G series anamorphics with “Ultragolds”. Information on the latter is hard to find, but they may be related to the Isco Ultra Star adapters which some micro-budget filmmakers have adopted as a cheap way of shooting anamorphic.

The clean, sharp G series glass is used to portray Sabrina’s ordinary life as a small-town teenager, while the Ultragolds appear to be used for any scenes involving witchcraft and magic. Such scenes display extreme blur and distortion at the edges of the frame, making characters squeeze and stretch as the camera pans over them.

“Chilling Adventures of Sabrina: Chapter Ten: The Witching Hour”, DP: Stephen Maier

Unlike the anamorphic characteristics of Doctor Who or “The Commuter”, which are subtle, adding to the stories on a subconscious level, the distortion in Sabrina is extreme enough to be widely noticed by its audience. “Numerous posts on Reddit speak highly of Chilling Adventures of Sabrina’s content and cinematography,” reports Andy Walker, editor of memeburn.com, “but a majority have a collective disdain for the unfocused effect.”

“I hate that blurry s*** on the side of the screen in Sabrina,” is the more blunt appraisal of Twitter user @titanstowerr. Personally I find the effect daring and beautiful, but it certainly distracted me just as it has distracted others, which forces me to wonder if it takes away more from the story than it adds.

And that’s what it all comes down to in the end: are the technical characteristics of the lens facilitating or enhancing the storytelling? DPs today, in both cinema and long-form series, have tremendous freedom to use glass to enhance the viewers’ experience. Yes, that freedom will sometimes result in experiments that alienate some viewers, but overall it can only be a good thing for the expressiveness of the art form.

For more on this topic, see my video test and analysis of some anamorphic lenses.

What Does “Cinematic” Mean?

Earlier this year I undertook a personal photography project called Stasis. I deliberately set out to do something different to my cinematography work, shooting in portrait, taking the paintings of Dutch seventeenth century masters as my inspiration, and eschewing traditional lighting fixtures in favour of practical sources. I was therefore a little disappointed when I began showing the images to people and they described them as “cinematic”.

An image from “Stasis”

This experience made me wonder just what people mean by that word, “cinematic”. It’s a term I’ve heard – and used myself – many times during my career. We all seem to have some vague idea of what it means, but few of us are able to define it. 

Dictionaries are not much help either, with the Oxford English Dictionary defining it simply as “relating to the cinema” or “having qualities characteristic of films”. But what exactly are those qualities?

Shallow depth of field is certainly a quality that has been widely described as cinematic. Until the late noughties, shallow focus was the preserve of “proper” movies. The size of a 35mm frame (or of the digital cinema sensors which were then emerging) meant that backgrounds could be thrown way out of focus while the subject remained crisp and sharp. The formats which lower-budget productions had hitherto been shot on – 2/3” CCDs and Super-16 film – could not achieve such an effect.
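
To put rough numbers on that, here is a sketch using the standard depth-of-field formulae. The focal lengths, stop and circle-of-confusion values are just illustrative choices, but they show how matching the framing on a format twice as wide roughly halves the depth of field at the same stop.

```python
def depth_of_field(focal_mm, f_number, coc_mm, distance_mm):
    """Total depth of field in mm, from the standard near/far-limit formulae."""
    hyperfocal = focal_mm ** 2 / (f_number * coc_mm) + focal_mm
    near = distance_mm * (hyperfocal - focal_mm) / (hyperfocal + distance_mm - 2 * focal_mm)
    if distance_mm >= hyperfocal:
        return float("inf")                        # far limit is at infinity
    far = distance_mm * (hyperfocal - focal_mm) / (hyperfocal - distance_mm)
    return far - near

SUBJECT = 3000   # subject 3m away, framed identically in both cases

# Super-16 (roughly 12.5mm wide): 25mm lens, circle of confusion ~ width/1500
s16 = depth_of_field(25, 2.8, 12.5 / 1500, SUBJECT)
# Super-35 (roughly 25mm wide): 50mm lens for the same field of view
s35 = depth_of_field(50, 2.8, 25.0 / 1500, SUBJECT)

print(f"Super-16: {s16 / 1000:.2f}m in focus, Super-35: {s35 / 1000:.2f}m in focus")
```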

Then the DSLR revolution happened, putting sensors as big as – or bigger than – those of Hollywood movies into the hands of anyone with a few hundred pounds to spare. Suddenly everyone could get that “cinematic” depth of field. 

My first time utilising the shallow depth of field of a DSLR, on a never-completed feature back in 2011.

Before long, of course, ultra-shallow depth of field became more indicative of a low-budget production trying desperately to look bigger than of something truly cinematic. Gradually young cinematographers started to realise that their idols chose depth of field for storytelling reasons, rather than simply using it because they could. Douglas Slocombe, OBE, BSC, ASC, cinematographer of the original Indiana Jones trilogy, was renowned for his deep depth of field, typically shooting at around T5.6, while Janusz Kaminski, ASC, when shooting Kingdom of the Crystal Skull, stopped down as far as T11.

There was also a time when progressive scan – the recording of discrete frames rather than alternately odd and even horizontal lines to make an interlaced image – was considered cinematic. Now it is standard in most types of production, although deviations from the norm of 24 or 25 frames per second, such as the high frame rate of The Hobbit, still make audiences think of reality TV or news, rejecting it as “uncinematic”.
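
For anyone hazy on what that difference looks like in the picture data itself, here is a tiny illustrative sketch: an interlaced frame is woven from two fields captured a fraction of a second apart, so anything that moves between them leaves the familiar combed edges, whereas a progressive frame is a single coherent snapshot.

```python
import numpy as np

def snapshot(object_x, height=8, width=16):
    """One instant in time: a bright 2-pixel-wide object on a dark background."""
    frame = np.zeros((height, width), dtype=int)
    frame[:, object_x:object_x + 2] = 1
    return frame

def interlace(field_a, field_b):
    """Weave two temporally offset captures: even lines from the first,
    odd lines from the second - the essence of an interlaced frame."""
    frame = field_a.copy()
    frame[1::2] = field_b[1::2]
    return frame

progressive = snapshot(object_x=4)                    # one coherent moment
interlaced = interlace(snapshot(4), snapshot(8))      # the object moved between fields

for row in interlaced:
    print("".join("#" if px else "." for px in row))  # note the combed edges
```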

Other distinctions in shooting style between TV/low-budget film and big-budget film have slipped away too. The grip equipment that enables “cinematic” camera movement – cranes, Steadicams and other stabilisers – is accessible now in some form to most productions. Meanwhile the multi-camera shooting which was once the preserve of TV, looked down upon by filmmakers, has spread into movie production.

A direct comparison may help us drill to the core of what is “cinematic”. Star Trek: Generations, the seventh instalment in the sci-fi film franchise, went into production in spring 1994, immediately after the final TV season of Star Trek: The Next Generation wrapped. The movie shot on the same sets, with the same cast and even the same acquisition format (35mm film) as the TV series. It was directed by David Carson, who had helmed several episodes of the TV series, and whose CV contained no features at that point.

Yet despite all these constants, Star Trek: Generations is more cinematic than the TV series which spawned it. The difference lies with the cinematographer, John A. Alonzo, ASC, one of the few major crew members who had not worked on the TV show, and whose experience was predominantly in features. I suspect he was hired specifically to ensure that Generations looked like a movie, not like TV.

The main thing that stands out to me when comparing the film and the series is the level of contrast in the images. The movie is clearly darker and moodier than the TV show. In fact I can remember my schoolfriend Chris remarking on this at the time – something along the lines of, “Now it’s a movie, they’re in space but they can only afford one 40W bulb to light the ship.” 

The bridge of the Enterprise D as seen on TV (top) and in the “Generations” movie (bottom).

It was a distinction born of technical limitations. Cathode ray tube TVs could only handle a dynamic range of a few stops, requiring lighting with low contrast ratios, while a projected 35mm print could reproduce much more subtlety.

Today, film and TV are shot on the same equipment, and both are viewed on a range of devices which are all good at dealing with contrast (at least compared with CRTs). The result is that, with contrast as with depth of field, camera movement and progressive scan, the distinction between the cinematic and the uncinematic has reduced.

The cinematography of “Better Call Saul” owes much to film noir.

In fact, I’d argue that it’s flipped around. To my eye, many of today’s TV series – and admittedly I’m thinking of high-end ones like The Crown, Better Call Saul or The Man in the High Castle, not Eastenders – look more cinematic than modern movies. 

As my friend Chris had realised, the flat, high-key look of Star Trek: The Next Generation was actually far more realistic than that of its cinema counterpart. And now movies seem to have moved towards realism in the lighting, which is less showy and not so much moody for the sake of being moody, while TV has become more daring and stylised.

A typically moody and contrasty shot from “The Crown”

The Crown, for example, blasts a 50kW Soft Sun through the window in almost every scene, bathing the monarchy in divine light to match its supposed divine right, while Better Call Saul paints huge swathes of rich, impenetrable black across the screen to represent the rotten soul of its antihero.

Film lighting today seems to strive for naturalism for the most part. Top DPs like recent Oscar-winner Roger Deakins, CBE, ASC, BSC, talk about relying heavily on practicals and using fewer movie fixtures, and fellow nominee Rachel Morrison, ASC, despite using a lot of movie fixtures, goes to great lengths to make the result look unlit. Could it be that film DPs feel they can be more subtle in the controlled darkness of a cinema, while TV DPs choose extremes to make their vision clear no matter what device it’s viewed on or how much ambient light contaminates it?

“Mudbound”, shot by Rachel Morrison, ASC

Whatever the reason, contrast does seem to be the key to a cinematic look. Even though that look may no longer be exclusive to movies released in cinemas, the perception of high contrast being linked to production value persists. The high contrast of the practically-lit scenes in my Stasis project is – as best I can tell – what makes people describe it as cinematic.

What does all of this mean for a filmmaker? Simply pumping up the contrast in the grade is not the answer. Contrast should be built into the lighting, and used to reveal and enhance form and depth. The importance of good production design, or at least good locations, should not be overlooked; shooting in a friend’s white-walled flat will kill your contrast and your cinematic look stone dead. 

A shot of mine from “Forever Alone”, a short film where I was struggling to get a cinematic look out of the white-walled location.

Above all, remember that story – and telling that story in the most visually appropriate way – is the essence of cinema. In the end, that is what makes a film truly cinematic.

5 Rebuffed Complaints About a Female Doctor Who

Reaction to Jodie Whittaker’s casting as the new Doctor pretty much broke the internet last month. While the majority appear to be in favour, a significant minority reacted with hostility.

At first glance, the haters did seem to have a reasonable point. The Doctor is a man, has always been a man, so it’s weird to regenerate them into a woman. After all, there are constants across every regeneration, different as it may be to its predecessors. For example, the Doctor always has a British accent. If the Doctor ever gained an American twang, there would be outrage; the Doctors’ Britishness is a fixed point of their ever-changing character. Is it so unreasonable for their gender to be another fixed point, something to anchor their character and reassure viewers that despite the new actor, this is still the Doctor you know and love?

But as soon as you start to think about it, this argument collapses completely. After all, Doctor Who’s 54-year history is littered with contradictions and continuity errors. The majority of the episodes produced under Steven Moffat were full of plot-holes, so to suggest that there is anything fixed, immutable and logical about the show is utterly ridiculous. It’s pure fantasy. Fantasy – that’s a key word that I’ll return to later.

Let’s consider some of the most common negative reactions that appeared online…

 

1. “It’s Not Doctor Who any more.”

People said that in 1966 when the Doctor first regenerated. They said it when he was exiled to Earth in the 70s. They said it when it got campy in the 80s. They said it when the American TV movie was made in 1996. They said it when Russell T. Davies resurrected the show in 2005. They said it when Tennant left in 2010. And now they’re saying it again.

Change, evolution, moving with the times – these are the reasons that Doctor Who is the longest-running sci-fi show on the planet. The world has changed enormously since William Hartnell first flickered onto the screen with his magic blue (grey) box. It’s the show’s ability to develop in step with the real world  that makes it a continued success. These changes are visible in the ever-improving VFX, the topical themes of the stories, the shifts in tone under new showrunners, and crucially through Who‘s groundbreaking concept of regeneration.

Doctor Who is change.

 

2. “We have lost an important male role model.”

I saw a post from a man who was angry and upset to lose what he saw as a crucial role model in his life. His argument was that male heroes are usually more physical and violent, whereas the Doctor’s more intelligent approach made him great for encouraging men into STEM (Science, Technology, Engineering, Maths) careers. Peter Davison, the fifth Doctor, expressed a similar concern.

But it is women who are under-represented in STEM industries, not men. And if you’re looking for other intelligent male role models, how about super-brainy Sherlock? Or engineering genius Tony “Iron Man” Stark? Or most of the Star Trek captains and science officers? Even if you reject every other film and TV show’s male heroes as not intellectual enough, you still have the other twelve Doctors. Can’t we let 50% of the population have one female Doctor in there to look up to?

 

3. “It’s a cynical move.”

It’s no secret that Doctor Who‘s ratings have been steadily declining in recent years, so some people have come to the conclusion that incoming showrunner Chris Chibnall cast a woman purely to generate controversy and draw attention to the show.

Undoubtedly Chibnall would have seen the press and social media interest as a bonus to casting a woman, but it can’t have been the sole or primary motivator. Chibnall is first and foremost a writer, and no writer would ever cast a lead actor to bring their character to life if they didn’t believe absolutely that that actor was right for the part. The first woman in the role is bound to attract a greater degree of scrutiny and criticism than another man when her episodes start screening, so if the show is to have a hope of impressing the critics then the Doctor has to be an excellent actor with an impeccable track record. And Whittaker is definitely that.

This move is far from cynical. It’s bold, refreshing and relevant, and for this fan at least it gives me more excitement about the next season than I have felt for some time.

 

4. “It’s political correctness gone mad.”

Political correctness has become a dirty phrase, but all it really means is being careful not to offend oppressed or minority groups unnecessarily. So to say that Whittaker’s casting is political correctness gone mad is to suggest that it’s placating people who have no valid complaint of oppression or under-representation.

Let me say it again: twelve of the thirteen Doctors are men. (Thirteen of fourteen if you count the War Doctor.) Only one is a woman. That’s less than 10%, compared with 50% of the population being female. That is the very definition of under-representation. And let’s not forget that Whittaker’s casting was announced after the men’s Wimbledon final, not the women’s, because we still live in a world where women, and all the things women do, are considered less important than their male counterparts.

Casting a female Doctor is not “political correctness gone mad”. It’s taking a small step towards correcting a huge imbalance.

 

5. “I won’t be watching any more.”

I suspect the men who wrote comments like this did not stop to consider the more limited choices their mothers, daughters and sisters have in this matter. If women threw their toys out of the pram every time a TV show or film came along with a male lead, they wouldn’t get much else done. Women have got used to watching stories led by the other gender; we men must learn to do the same.

To the people who still say, “but the Doctor is a man,” and suggest that casting female leads in new shows would be better than swapping the gender of an established character, you may be right. And when 50% of all big franchises have female leads there will be no need to do this kind of thing, but until then, it’s necessary. Until then, us men whining that we’ve lost something in this situation is like a millionaire crying because they dropped a penny down the drain.

 

Finally, let’s return to that keyword, fantasy. Because I think the most significant thing about Whittaker’s casting is the kids in the playgrounds who will grow up with choice. The girls won’t always have to play the kidnapped princesses, or the love interests, or the companions, while the boys get the roles with agency; they can play Rey, or Wonder Woman, or the Doctor. That can only be beneficial to the future of our society.

How “The Crown” Uses Broad Key Lighting to Evoke Tradition

Earlier this year I blogged about a visit to the National Portrait Gallery, studying the lighting in traditional portraits. I noted that, contrary to the current cinematographic trend for short key lighting, almost all of those paintings used broad key. And while watching the high-end Netflix series The Crown this week, I noticed the same thing. Why might this be?

Short key (left) vs. broad key (right). Photos from SLR Lounge

First of all, a reminder: a short key is a key light on the side of the face away from camera, while a broad key hits the side of the face towards camera. Short key is generally preferred amongst cinematographers because it gives better “modelling” – i.e. a better sense of the shape of the face – and focuses the viewer’s attention on the face, rather than the ear and the side of the head. A broad key, meanwhile, presents less shadow to the camera, and arguably shows the hairstyle and the shape of the head better – which may be reasons for the preponderance of broad key in classical portraiture, which was more concerned with overall appearance than with emotion/performance.

An array of broad key paintings at the National Portrait Gallery

But I don’t believe these direct pros and cons were the primary motivation in cinematographer Ole Bratt Birkeland’s decision to use broad key lighting in a crucial scene from The Crown.

The central themes of the series, which dramatises the early life of the Queen, are tradition and duty. Queen Mary often reminds her granddaughter Queen Elizabeth II of the long and noble lineage of the English royal family, a weight of history and responsibility which Elizabeth keenly feels. “The crown must always win,” Mary intones in the trailer.

In episode 4 the young Queen seeks advice, desperate to ensure she does not tarnish the monarchy’s centuries-old reputation. To symbolise this burden, Birkeland evokes the imagery of traditional portraiture – the subjects of which were always high-born individuals, often royals. Consider this frame grab from the scene, beneath an official portrait.

The official portrait (top) and the frame grab from the scene (bottom)

See how the light models the face the same way in both images? Note also the absence of backlight in the frame grab, another feature common to traditional paintings, which typically relied on a single window light source. Elizabeth’s dark hair blends into parts of the dark background.

Combined with the timeless regal production design, this lighting subtly places the Queen within the frame of an official portrait, trapping her within the overwhelming tradition of the monarchy. Can I say for certain that Birkeland did this deliberately? No, but I’d be very surprised if he hadn’t looked at royal portraits while prepping the show, and I’d be equally surprised if they hadn’t at least influenced him unconsciously.

Either way, this is a first-rate example of the power of cinematography to enhance theme and narrative by guiding the viewer to make subconscious associations. If you haven’t seen The Crown, I can highly recommend it; it’s not just the cinematography that’s top notch.

Forced Perspective

The Ark

The other day I watched a 1966 Doctor Who story called The Ark. It’s easy to look at a TV show that old and laugh at the stilted acting, rubber monsters and crude effects. But given the archaic and draconian conditions the series was made under back then, I can only admire the creativity displayed by the director and his team in visualising a script which was scarcely less demanding than a contemporary Who story.

Studio floor plan from the very first episode of Doctor Who, showing camera positions (coloured circles)

In the sixties, each Doctor Who episode was recorded virtually as live on a Friday evening, following a week of rehearsals. BBC rules strictly limited the number of times the crew could stop taping during the 90-minute recording session, which was to produce a 22-minute episode. Five cameras would glide around the tightly-packed sets in a carefully choreographed dance, with the vision mixer cutting between them in real time as per the director’s shooting script. (Interesting side note: some of Terminator 2 was shot in a very similar fashion to maximise the number of angles captured in a day.) It’s no wonder that fluffed lines and camera wobbles occasionally marred the show, as there was rarely time for re-takes.

But what’s really hard for anyone with a basic knowledge of visual effects to get their head around today is that, until the Jon Pertwee era began in 1970, there was no chromakey (a.k.a. blue- or green-screening) in Doctor Who. Just think about that for a moment: you have to make a science fiction programme without any electronic means of merging two images together, simple dissolves excepted.

Setting up a foreground miniature for a later Who story, Inferno (1970)

So the pioneers behind those early years of Doctor Who had to be particularly creative when they wanted to combine miniatures with live action. One of the ways they did this in The Ark was through forced perspective.

Forced perspective is an optical illusion, a trick of scale. We’ve all seen holiday photos where a friend or relative appears to be holding up the Eiffel Tower or the Leaning Tower of Pisa. The exact same technique can be used to put miniature spaceships into a full-scale live action scene.
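
The maths behind the trick is nothing more than similar triangles: apparent size is real size divided by distance, so a miniature placed proportionally closer to the lens subtends exactly the same angle as the full-size object. A quick sketch with invented numbers:

```python
import math

def angular_size_deg(real_size_m, distance_m):
    """Angle an object subtends at the camera, in degrees."""
    return math.degrees(2 * math.atan(real_size_m / (2 * distance_m)))

# A hypothetical 6m-wide landing craft seen 30m away...
full_scale = angular_size_deg(6.0, 30.0)
# ...fills exactly the same part of frame as a 1/12-scale,
# 0.5m miniature placed 2.5m from the lens.
miniature = angular_size_deg(0.5, 2.5)

print(f"{full_scale:.2f} degrees vs {miniature:.2f} degrees")
```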

In these frames from The Ark, two miniature landing craft are lowered into the background before the camera pans to a full-size craft in the foreground:

The camera pans from a miniature descending in the background to a full-scale craft in the foreground.

And in these later frames, another miniature craft is placed much closer to the camera than the Monoid (a.k.a. a man in a rubber suit). The miniature craft takes off, pulled up on a wire I presume – a feat which time, money and safety would have rendered impossible with the full-size prop:

The camera pulls focus from a foreground miniature taking off to an actor in the background. A greater depth of field would have made the shot more convincing, but the principle is sound.

Of course, Doctor Who was not by any means the first show to use forced perspective, nor was it the last. This nineties documentary provides a fascinating look at the forced perspective work in the Christopher Guest remake of Attack of the 50 Ft. Woman, and other films…

And Peter Jackson famously re-invented forced perspective cinematography for the Lord of the Rings trilogy, when his VFX team figured out a way to maintain the illusion during camera moves, by sliding one of the actors around on a motion control platform…

So remember to consider all your options, even the oldest tricks in the book, when you’re planning the VFX for your next movie.

The Miniature Effects of “The Day of the Doctor”

The cannon miniature

The fiftieth anniversary special of Doctor Who has been lauded for its cinema quality FX; indeed, I saw it in a cinema and at no point did I feel like I was just watching a TV show on a big screen. The Time War sequence was particularly impressive, and in amongst the CGI and special effects you may be surprised to learn there were some miniature effects which helped to up the ante. These were created by Mike Tucker and his team at The Model Unit, who a few years back did such a brilliant job of building the Wooden Swordsman for my Dark Side of the Earth pilot. This press release from the Model Unit reveals their contribution and how it was done.

The Model Unit’s involvement in Doctor Who: Day of the Doctor was for the Time War section of this historic episode, providing several cutaways of the Time Lord staser cannon (including its destruction) and a longer sequence showing John Hurt’s TARDIS crashing through a wall and destroying several Daleks that are unlucky enough to be in its path.

Model Unit supervisor Mike Tucker working on the Wooden Swordsman for The Dark Side of the Earth back in 2008

Following an initial discussion with producer Marcus Wilson to establish the sort of shots that might be needed, Miniature Effects Supervisor Mike Tucker met up with stereo supervisors Adam Sculthorp and David Wigram to work through the practicalities of shooting high speed miniature effects sequences in 3D – a first for a British television drama production.

A proof of concept test utilising an existing miniature established that the models shouldn’t be smaller than 1/6th scale, and ideally should be at 1⁄4 scale. Further research established that the miniature effects sequences for the Martin Scorsese movie ‘Hugo’ had been done at 1⁄4 scale and with the same Alexa high speed camera rigs that we were planning to use, and so we were able to proceed with a certain amount of confidence that what we were about to do was realistically achievable.

Blowing up the cannon

With a five-week lead-time and a two-day shoot in Cardiff in April of this year, model construction was split between several Model Unit regulars. Alan ‘Rocky’ Marshal was given the task of constructing the staser cannon, Nick Kool took on the TARDIS model and associated rigs and Colin Mapson worked with new recruit Paul Jarvis on the ruined Arcadian buildings and breakaway wall sections.

In a nod to past effects sequences, the Dalek miniatures were achieved in the time-honoured way by utilising off-the-shelf toys (in this case the 18-inch voice-interactive toys that had been produced by Character Options a few years back), albeit with a few careful modifications in order to match them more closely to the actual props. Further detail was added to the interiors, including a scaled model of the mutant creature.

Model Unit DoP Peter Tyler worked closely with main unit DoP Neville Kidd to establish a lighting design for the miniatures as, due to camera rig availability, we were shooting our miniatures in advance of the live action unit – a complete reversal of how things are usually done.

Close collaboration was also needed with the production design team with Mike and assistant art director Richard Hardy constantly swapping notes about the final design details of both Time Lord machinery and architecture to ensure a seamless blend with the location.

Day one of the shoot concentrated on the shooting of the cannon, allowing the more complex rig of the TARDIS to be set up and tested, whilst the second day took in several takes of the TARDIS shots. The 1⁄4 scale TARDIS miniature was fixed to a steel rig mounted on a trolley system that allowed us to fire it at the wall using bungee cord.

Filming the Tardis breaking through the wall

Two takes of each set up were shot on two high speed Alexa stereo rigs shooting at 120fps.

Mike and his crew watched the completed episode at the Doctor Who Celebration at Excel with an audience of 2000 fans.

Visit The Model Unit’s website at www.themodelunit.co.uk
