My second online cinematography course, Cinematography for Drama, is now out on Udemy. The course explains the role of a DP on set, from collaborating with the director in blocking the cast and choosing the camera angles, to lighting the scene with depth and mood.
Across the four modules of the course, I set up and shoot scenes in common contemporary locations: domestic banter in a sunny kitchen, a monologue in a dark bedroom, an awkward first date in a restaurant, and a walk-and-talk in an outdoor bar. Watch me try out different blocking and camera angles to get the most depth and interest in the frame, create movement using a slider and a gimbal, and work out the coverage needed to complete the scene. Then learn the secrets of cinematic lighting as I set up LED, tungsten and practical lights to create a look. Witness the camera rehearsals through to the final take, then sit back and watch the final edited scene. Every step of the way, I explain what I’m doing and why, as well as the alternatives you could consider for your own films.
This is a follow-up to my best-selling Udemy course Cinematic Lighting, which has over 3,600 students and a star rating of 4.5 out of 5. Here is some student feedback:
“Excellent. Informative and enjoyable to watch.” – 5 stars – David C.
“Thank you to Neil and his team for a fantastic course that gives a real insight into the thought process of a cinematographer.” – 5 stars – Dan B.
“Some great tips in this. Really enjoyed watching the decisions being made as and when the scenario is actually being lit, some good workarounds and nice in depth descriptions to why he’s doing what he is. Genuinely feels like your taking in some advice on set! Well worth taking the time to do this!” – 5 stars – Ed L.
You can get the new course for a special low price by using the code IREADTHEBLOG before April 2nd.
RedShark News recently published an article called “The DSLR is now dead”, based on the fact that the Canon 1D X Mark III will be the last flagship DSLR from the company and that mirrorless cameras are now first choice for most photographers. This prompted me to reflect on some of the things I learnt when I bought my first (and only) DSLR.
It was 2011, and I documented some of the challenges my new Canon 600D created for me in this blog post. But what the DSLR did really well was to introduce me to a workflow very similar in many ways to the bigger productions I’m working on now. Previously I had shot everything on prosumer camcorders, so the following things were new to me with DSLRs and have been constant ever since.
Shallow Depth of Field
I had been used to everything being in focus, so I never really thought about my aperture setting; I just turned the iris dial until the exposure looked right. My Canon 600D set me on a journey of understanding f-stops, and eventually choosing a target stop to shoot at for focus reasons and then using lighting or ND filters to achieve that stop.
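The relationship between a metered stop, a target stop and the ND needed to bridge them is simple arithmetic. Here is a rough sketch in Python; the f/11-to-f/2.8 scenario is my own illustrative example, not from any particular shoot:

```python
import math

def nd_stops_needed(metered_stop: float, target_stop: float) -> float:
    """Stops of ND needed to open up from the metered f-stop to a wider target.

    Exposure changes by one stop each time the f-number changes by a factor
    of sqrt(2), so the difference between two apertures is 2 * log2(N1 / N2).
    """
    return 2 * math.log2(metered_stop / target_stop)

# Example: the scene meters correctly at f/11, but I want f/2.8 for shallow focus.
stops = nd_stops_needed(11, 2.8)
print(round(stops, 1))                # ~3.9 (four nominal stops; marked f-numbers are rounded)
print(f"ND {round(stops * 0.3, 1)}")  # ND filters are rated at ~0.3 optical density per stop
```

The same function works in reverse for stopping down, where it simply returns a negative number of stops.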
Although for several years I owned a Canon XL1-S, which had interchangeable lenses, I only ever owned a couple of zooms for it. As far as I’m aware, no prime lenses to fit the XL1-S’s proprietary mount were ever made, so prime lenses were completely new to me when I got my 600D. As with aperture, it forced me to think about what field of view and degree of perspective or compression I wanted, select the appropriate lens, and then place the camera accordingly, rather than lazily zooming to get the desired framing.
It’s weird now to think that I used to be tethered to the sound recordist before I switched to DSLR shooting. At the time I was doing most of my own editing as well, so syncing the sound was a pain in the arse, but it was a valuable introduction to this industry-standard way of working. It’s also weird to think that clapperboards were optional for me before this.
Building a camera rig
All my cameras before the 600D had a built-in viewfinder, handgrip, shoulder mount (if the camera was large enough to need one) and lens (except the XL1-S), and there was no need to add an external battery plate or a follow-focus. The idea that a camera rig needed to be built, and that it could be customised to suit different operators and situations, was a novel one to me. I have to say that I still prefer cameras that have more things built in, like the Alexa Classic. A good part of the reason I rarely use Reds is because they don’t come with viewfinders. Why anyone ever thinks a viewfinder is an optional part of a camera is utterly beyond me. It’s an important point of stabilising contact for handheld work, and your face shields it completely from extraneous light, unlike a monitor.
The 600D was my first camera to record to memory cards rather than magnetic tape. It was certainly scary to have to wipe the cards during a shoot, being careful to back everything up a couple of times first. Data wrangling was a tricky thing to deal with on the kind of tiny-crewed productions I was usually doing back then, but of course now it’s completely normal. Just last week I shot my new cinematography course and had the fun of staying up until 2:30am after a long day of shooting, to make sure all the footage was safely ingested! More on that course soon.
Anyone else feel like this year was two steps forwards and two steps back? The current panic and looming threat of restrictions seems very much like how we all felt last year. All that’s needed to complete the effect is a last-minute U-turn to prevent Christmas mixing.
Anyway, I’m fortunate enough that the year as a whole has treated me quite kindly. In keeping with tradition, I’ll round it off with a list of my favourite blog posts.
This was originally written for RedShark News, a website about moving image technology and production news. The editor let me do a series of retrospectives about classic films and how they were made, most of which have subsequently made it onto this blog. Superman II was one of the first I did and is still one of my favourites. The story behind its production is so unique, with the first two films being initially shot back to back, then the second one being temporarily shelved due to budget overruns, the director being fired and much of it being re-shot. I had to cut a few hundred words out for the RedShark version, but you get the full-fat edition here on my site.
I first read about the Soviet probe Luna 3 in Giles Sparrow’s coffee-table book Spaceflight. I have been fascinated by space travel ever since watching all the programmes celebrating the 25th anniversary of the moon landing in 1994. When I discovered that Luna 3 had a photographic developing lab inside it, I knew it would make a great article. Again this appeared first on RedShark News.
I spent most of this February and March in prep for a feature adaptation of Hamlet starring Sir Ian McKellen, which was an absolute privilege to work on. Although I wasn’t allowed to name it at the time, I posted weekly blogs about the prep process, of which “Experimentation” is my favourite. This instalment covers the camera- and lens-testing process and includes a video of the results. Hamlet itself is likely still at least a year away from release, but rest assured that I have written a production diary and it will be posted when the film is out… or scroll down for a sneak preview!
This article gets to the core of what I like this blog to be about: sharing my own experiences of cinematography, analysing the decisions I made, and sharing the results. This is the story of Alder, a fairytale short filmed in a single, packed day!
Another piece that started life on RedShark News, this one looks at how VFX that nowadays would be computer-generated particle simulations were done in the pre-digital days. I’m fascinated by traditional VFX; I used to tape films and TV shows on VHS and use frame-by-frame playback to analyse how they were done. (One show that went under the microscope in this manner was the 1988 Doctor Who story “Remembrance of the Daleks”, and I was lucky enough this month to interview the digital matte-painter responsible. You’ll be able to read that piece in the January issue of Doctor Who Magazine.)
Every now and then I write what I think of as an “investigation” post; I dig into a concept like the Inverse Square Law, CRI or the Rule of Thirds and try to find out where it came from and whether it’s actually as useful or accurate as we tend to assume. In this particular post I try to find out where the idea of blue moonlight in cinema came from, and how the exact colour has developed over the years.
The second feature I shot this year was a micro-budget comedy based on a critically acclaimed Edinburgh Fringe show. I posted a production blog as we went along, and I’m picking week 2 as my favourite because it includes the crazy day we shot 11 scenes and over eight pages.
A tale from right back at the start of my filmmaking journey, this post brought up lots of fun memories as I was writing it. Quantum Leaper is an amateur spin-off of the 1980s-90s cult sci-fi series Quantum Leap, which my friend David Abbott and I made on a Video-8 camcorder in the mid ’90s. Sadly, not long after I wrote the piece, Quantum Leap star Dean Stockwell passed away, but I still hold hopes that Scott Bakula might one day appear in a sequel series to find out if Dr Sam Beckett ever returned home.
A Preview of Things to Come
I can’t say at present when Hamlet will be released, but when it is I’ll be publishing my diary from the shoot. I’ll leave you with a preview from Day 1…
We started with scenes at the stage door, one of the few spaces in the theatre that has natural light coming in. Gaffer Ben Millar and I considered trying to add artificial light outside to the main window which was backlighting the scene, but instead we opted to light through a little side window with a Fomex wrapped in unbleached muslin. After a hiccup about blocking and crew shows, we bashed through three set-ups including two using Wes Anderson-esque central framing and eye-lines very close to camera.
Next up was a scene in the substage, next to the boiler room. Here we installed a practical tungsten bulkhead light on the wall as our key, adding to the extant yellowy-green fluorescents that illuminated parts of the background, and the Fomex spilling down a staircase. Lots of black negative space in the frame added to the moody look.
After lunch – during which I sorted out the footage transcoding plan with line producer Stephen Cranny and data wrangler Max Quinton – we moved to the glamorous location of the gents’ toilets for Ian McKellen’s first scene. The location had been very flat and white originally, but Ben’s crew rigged three Astera tubes to the tops of two walls – the two walls that we were mainly shooting towards – and that created a nice wrappy backlit look. Director Sean Matthias embraced the weirder shots I had storyboarded, which I was very happy about!
This week issue 40 of Infinity magazine comes out, featuring a couple of articles I wrote, including one about the cult sci-fi series Quantum Leap. The show saw Dr. Sam Beckett (Scott Bakula) bouncing around time into other people’s bodies and striving to put right what once went wrong, while his holographic friend Al (Dean Stockwell) smoked cigars, letched, and relayed exposition from Ziggy the computer.
I end the article by wondering whether it’s time for someone like Netflix to bring the show back (it definitely is). What I don’t mention in the magazine is that – unbeknownst to almost everyone – Quantum Leap has already been rebooted once.
This, my loyal readers, is the story of Quantum Leaper.
Season One (1995)
As teenagers, my friend David Abbott and I were huge Quantum Leap fans, and were bereft when the show was axed in 1993. I was developing an interest in filmmaking, having dabbled in 2D computer animation on my Atari ST and borrowed my grandfather’s Video-8 camcorder on a couple of occasions. When I was given that camcorder for my 15th birthday, David and I decided that we would make our own version of Quantum Leap, which we imaginatively titled Quantum Leaper.
The first episode was called “Just What the Doctor Ordered” and saw my character – named, again with great imagination, Neil – leaping into a doctor just as his patient is flatlining. I don’t remember much about the plot, but I do remember that we climbed the nearby Malvern Hills to film a fight scene.
Dave played Albert, my holographic helper, communicating with Project Quantum Leap’s supercomputer Ziggy by means of a special hand-link, just like Dean Stockwell did. Unlike Dean Stockwell’s, this hand-link was a calculator.
The two of us also played all the supporting characters (often with the judicious addition of a hat or jacket) and operated the camera, unless we were both in shot, in which case it was locked off. Much of the editing was done in camera – rewinding the 8mm videotape, cueing it up to the exact moment the last piece of action ended, then hitting record and calling action simultaneously – and the rest I did tape-to-tape with two VCRs connected together. A cheap four-track disco mixer enabled the addition of music (badly composed by me) and sound effects (many of which were sampled from Quantum Leap itself). As YouTube was still years away, the only viewers for the series were our parents and friends, forced to sit down in front of the TV and watch it off VHS.
Episode two, “Boom!”, saw the fictional Neil as a bomb disposal expert supposedly in Northern Ireland in 1980, though like the first episode it was all shot in and around my house. My sister Kate was drafted in to play a journalist whose life Neil has to save.
“A Leap into the Blue” was the next episode, with Neil in the body of a parachutist. Scenes of characters in free-fall were shot with us standing in front of a white wall; I digitised the footage on my ST with a Videomaster cartridge and composited scrolling clouds into the background. The resolution of the Videomaster was very limited – maybe 320×240 – the frame rate was very low too, and it could only do black and white.
Next we shot a “pilot” episode explaining how Neil and Albert switched places with Sam and Al. I remember digitising shots of Scott Bakula and Dean Stockwell from Quantum Leap and compositing them atrociously into our own footage. At about 30 minutes long, the pilot was double the length of our other episodes.
Then we continued the series where we’d left off. Dave’s script “One Giant Leap” has Neil on a space shuttle mission, an episode that included NASA footage taped off the TV. We made almost no attempt to create sets; the space shuttle cockpit was a plain wall, a computer keyboard and a piece of card to cover an incongruous bookcase.
The next two episodes find Neil meeting (and shooting) an evil future version of himself, then leaping into the crazy future space year of 2017. The latter involves a flying car – my mum’s Citroen AX with the wheels framed out, intercut with an extremely crude CGI model.
Dave’s episodes “Virtual Leaping” and “Bullets Over Leaping” see Neil become a VR programmer (with a headset made of Lego) and then an actor (in a studio suspiciously like Dave’s shed).
My next episode has Neil leaping into himself and saving his father’s life. (My actual dad provided some splendidly wooden acting.) But doing this causes a paradox, and the season finale sees Neil and Albert swap places (as Sam and Al do in a classic Quantum Leap episode) and Neil having to restore the timeline to prevent the destruction of the universe.
We were ambitious. You can say that much for us.
Season Two (1996)
The following year, while doing our GCSEs, we began work on a second season. In between I’d made a bad 40-minute comedy, Bob the Barbarian, and an appalling feature-length sci-fi film, The Dark Side of the Earth, and I’d learnt a few things that would lift the production values of Season Two very slightly. I’d also nagged my parents into buying me a genlock which would let me superimpose CGI over analogue video, meaning I didn’t have to digitise footage and suffer the horrendous image degradation any more.
The actual Quantum Leaping effect from this era of the show is surprisingly decent given the equipment we were working with. We would lock the camera off and jump-cut to a blue filter being over the lens, then a white glow would creep over me – an animation I achieved in software called Deluxe Paint – followed by tendrils of electricity. The screen would then fade to white and a similar effect would play out in reverse to show the leap in.
Another improvement was that we managed to convince a few other friends to act in the series, including fellow Quantum Leap fan Lee Richardson, as well as Chris Jenkins, Conrad Allen, Matt Hodges, Si Timbrell and Jim McKelvie. Recognising my lack of musical talent at last, I abandoned composing and instead used soundtrack CDs from Star Trek: Deep Space Nine (Dennis McCarthy), the John Woo film Broken Arrow (Hans Zimmer), and the Doctor Who story “The Curse of Fenric” (Mark Ayres). Albert’s hand-link prop got an upgrade too, from a calculator to a custom Lego build with flashing lights.
Season Two opens with Dave’s episodes “Project Hijacked” and “Oh Brother, Where Art Thou?” which focus on events at Project Quantum Leap, supposedly a high-tech facility in the New Mexico desert in 2005. In reality it was a living room with a control console made out of painted cardboard boxes and Christmas lights. In an early manifestation of my cinematography leanings, I snooted the ceiling light with a rolled-up piece of silver card, lending a little bit of mood to the look.
At the time, Dave’s family were training a hearing dog, Louis, so I wrote an episode to feature him; “Silence is Golden” sees Neil leap into a deaf man, and was followed by the morbid “Ashes to Ashes” where he leaps into a corpse.
The next episode, Dave’s “Driven to Distraction”, is probably the best of the lot. For once there were few enough characters that no-one needed to confusingly play dual roles, and there is plenty of action to boot. (I uploaded this episode to YouTube so long ago that the ten-minute time limit still applied.)
The X-Files-inspired “Close Encounters of the Leaping Kind” comes next, with Neil as a ufologist bothered by a shadowy government agent. Then Neil becomes a teenager who must prevent a drugs overdose, then a one-armed man who must overcome prejudice to hold down a job. Cringingly entitled “Not So Armless”, this latter was shot in a newsagent’s owned by a friend’s parents, one of the series’ few non-domestic locations.
Like Quantum Leap we had a mirror shot in every episode where Neil would see the leapee’s reflection looking back at him. Sometimes Dave would track the camera behind my back and we’d hide a cut in the darkness to swap me with whoever was playing the reflection. Another time we pretended the serving hatch in Dave’s house was a mirror and the two of us synchronised our movements. For a fight scene in “Not So Armless” Chris hid one arm inside his t-shirt so that Neil’s mirror image could appear to punch the antagonist with an invisible fist!
The penultimate episode of the season features several brief leaps, ending with one to Hiroshima in 1945, where the A-bomb detonation (more footage off the TV) causes both Neil and Albert to leap simultaneously. In the finale, Albert becomes a mountaineer caught in an avalanche, while Neil is a member of the rescue team – a premise thieved from the Quantum Leap novel “Search and Rescue”. We started shooting it during snowy weather, but the snow thawed and the episode was never completed. The friends who had been appearing as supporting characters now had part-time jobs and couldn’t spare the time for filming.
We wrote all six episodes of a third season which would have explained how Neil became the evil future version of himself seen in an earlier episode, but nothing was ever filmed.
In 1997 we began a remake of the pilot using the experience we had gained since shooting the original, but again it was never completed. One part we did film was an action sequence with me on the roof rack of a car while the driver swerves around trying to throw me off. We shot this on Malvern’s Castlemorton Common and used a dummy of me for some of the wider and more dangerous shots. Its acting was probably better than mine. We remade the scene four years later as part of my Mini-DV feature The Beacon.
Today only five of the 20 Quantum Leaper episodes that we made survive, the rest having been callously taped over at some point in my late teens. That’s probably for the best, as most of it was hilariously bad, but making it taught me a hell of a lot about filmmaking. Without it, I doubt I’d have a career in cinematography today.
Just a quick post to say that the latest special edition of Doctor Who Magazine, out now, features an article I wrote about the history of the venerable series’ cinematography. From the cathode-ray tube multi-camera studio shoots of 1963 to the latest ARRI Alexa/Cooke Anamorphic photography, the technology and techniques of lighting and lensing Doctor Who encapsulate the history of TV making over the last six decades. I had a great time combining two subjects I know quite a bit about and was very excited to see the article in print. Look out for it in your local newsagent!
Cathode ray tube televisions, those bulky, curve-screened devices we all used to have before the rise of LCD flat-screens, already seem like a distant memory. But did you know that they were not the first form of television, and that John Logie Baird and his contemporaries first invented a mechanical TV system more akin to Victorian optical toys than to the electronic screens that held sway for the greater part of the 20th century?
Mechanical television took several forms, but the most common type revolved, quite literally, around a German invention of 1884 called the Nipkow disc. This had a number of small holes around it, evenly spaced in a spiral pattern. In the Baird standard, developed by the Scottish inventor in the late 1920s, there were 30 holes corresponding to 30 lines of resolution in the resulting image, and the disc would revolve 12.5 times per second, which was the frame rate.
In a darkened studio, an arc light would be shone through the top portion of a spinning Nipkow disc onto the subject. The disc would create a flying spot – a spot of light that travelled horizontally across the scene (as one of the holes passed in front of the arc lamp) and then travelled horizontally across it again but now slightly lower down (as the next hole in the spiral pattern passed the lamp) and so on. For each revolution of the 30-hole disc, 30 horizontal lines of light would be scanned across the subject, one below the other.
A number of photocells would be positioned around the subject, continually converting the overall brightness of the light to a voltage. As the flying spot passed over light-coloured surfaces, more light would reflect off them and into the photocells, so a greater voltage would be produced. As the spot passed over darker objects, less light would reflect into the photocells and a smaller voltage would result. The voltage of the photocells, after amplification, would modulate a radio signal for transmission.
A viewer’s mechanical television set would consist of a radio receiver, a neon lamp and an upright Nipkow disc of a foot or two in diameter. The lamp – positioned behind the spinning disc – would fluctuate in brightness according to the radio signal.
The viewer would look through a rectangular mask fitted over the top portion of the disc. Each hole that passed in front of the neon lamp would draw a streak of horizontal (albeit slightly arcing) light across the frame, a streak varying in brightness along its length according to the continually varying brightness of the lamp. The next hole would draw a similar line just beneath it, and so on. Thanks to persistence of vision, all the lines would appear to the viewer as a single image, followed by 11.5 more sets of lines each second: a moving image.
A number of people were experimenting with this crude but magical technology at the same time, with Baird, the American Charles Francis Jenkins and the Japanese Kenjiro Takayanagi all giving historic public demonstrations in 1925.
The image quality was not great. For comparison, standard definition electronic TV has 576 lines and 25 frames per second in the UK, twice the temporal resolution and almost 20 times the spatial resolution of the Baird mechanical standard. The image was very dim, it was only an inch or two across, and it could only be viewed by a single person through a hood or shade extending from the rectangular mask.
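As a quick sanity check on those figures, the comparison works out like this (the flying-spot line rate is my own derived number, simply lines multiplied by frames):

```python
baird_lines, baird_fps = 30, 12.5    # Baird 30-line mechanical standard
sd_lines, sd_fps = 576, 25           # UK standard-definition electronic TV

print(sd_fps / baird_fps)            # 2.0  -> twice the temporal resolution
print(sd_lines / baird_lines)        # 19.2 -> almost 20 times the spatial resolution
print(baird_lines * baird_fps)       # 375.0 -> lines the flying spot scanned per second
```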
The BBC began transmitting a regular mechanical TV service in 1929, by which time several stations were up and running in the USA. An early viewer, Ohio-based Murry Mercier Jr., who like many radio enthusiasts built his own mechanical TV from a kit, described one of the programmes he watched as “about 15 minutes long, consisting of block letters, from the upper left to the lower right of the screen. This was followed by a man’s head turning from left to right.” Hardly Breaking Bad.
Higher resolutions and larger images required larger Nipkow discs. A brighter image necessitated lenses in each of the disc’s holes to magnify the light. Baird once experimented with a disc of a staggering 8ft in diameter, fitted with lenses the size of bowling balls. One of the lenses came loose, unbalancing the whole disc and sending pieces flying across the workshop at lethal speeds.
Other methods of reproducing the image were developed, including the mirror screw, consisting of a stack of thin mirrors arranged like a spiral staircase, one “step” for each line of the image. The mirror screw produced much larger, brighter images than the Nipkow disc, but the writing was already on the wall for mechanical television.
By 1935, cathode ray tubes – still scanning their images line by line, but by magnetically deflecting an electron beam rather than with moving parts – had surpassed their mechanical counterparts in picture quality. The BBC shut down its mechanical service, pioneers like Baird focused their efforts on electronic imaging, and mechanical TV quietly disappeared.
Just a quick one to say that the Shakespearian film I have been blogging about for the past couple of months, and which we’re now halfway through shooting, has been announced to the world. It is none other than Hamlet starring Sir Ian McKellen. It is also opening as a stage play in June, and you can read about it on The Telegraph’s website.
As I write this, I’ve just got back from my first trip to the cinema in six months. Although they have been allowed to reopen in England since July 4th, the higher operating costs in the pandemic kept many cinemas dark well into August. On Friday the 21st, my local branch of the Light here in Cambridge finally opened its doors, and I went along to experience post-Covid cinema.
Studios have been shifting their release dates throughout the lockdown, with some films giving up on theatrical exhibition altogether, so the Light, like its competitors, has filled its screens with classics for now. I selected Jurassic Park, which I haven’t seen on the big screen since its original release in 1993.
When I arrived, the lobby was dark and almost empty. Like most public spaces, it had sprouted new signage and a one-way system since March, and it took me a couple of attempts to find the right lane. Once inside the main corridor though, little had changed except the odd hand sanitiser dispenser on the wall.
I found my screen and took a seat. As with everything from trains to swimming pools, pre-booking is now strongly recommended, due to the diminished capacity caused by social distancing. When you pick your seat, the website makes you leave two empties between your party and the next. You can even pre-purchase your popcorn and bucket of cola.
I needn’t have booked, however. In a screen of about 100 seats, exactly ten were occupied. It will take the general public a while to cotton on that cinema-going is an option again, even before they decide whether they feel comfortable doing so.
As I sat masked and expectant, my hands sticky from sanitiser that refused to evaporate, I was treated to a rare sight: a cinema employee inside the auditorium. He announced that they didn’t have any ads or trailers yet, so they would delay starting the film to give everyone a chance to arrive.
A few minutes later, the man reappeared and asked us all to decamp to the corridor. Apparently they had installed a new sound system, and they needed to test it, which could be very loud. Why they couldn’t have checked the system for eardrum bursting at some point in the last six months is beyond me.
The ten of us duly waited in the corridor. A snatch of the Imperial March from an adjacent screen betokened another classic being wheeled out. A woman with a spray bottle and a cloth, masked like all of her colleagues, worked her way down the corridor, cleaning the door handles. A group next to me (but, I hasten to add, appropriately distant) cracked jokes about the sex appeal of Jeff Goldblum’s Ian Malcolm. Another group, evidently missing the trailers, watched one on a phone. (If that doesn’t sum up the existential crisis facing cinema, I don’t know what does.)
At last we were readmitted. The lights dimmed, the sounds of a jungle faded up on the brand new sound system, and the Universal logo appeared. But the trademark globe looked like a deflated football. The film was being projected in the wrong aspect ratio. And not just slightly. It was almost unwatchably stretched, as if the flat 1.85:1 images were being shown through a 2:1 anamorphic lens.
By the time the first scene was dissolving away to Bob Peck’s cries of “Shoot her!” the problem hadn’t been corrected, so I stepped out to find a member of staff. The senior person on duty claimed that the problem lay with the file supplied by the distributor, not with the projection. “There’s nothing I can do,” he insisted, while I goggled over my mask in disbelief.
At this point, had I not had this article to write, I would have gone home and watched the film on Netflix, or even on DVD. (There’s that existential crisis again.) But I persevered, trying not to imagine Dean Cundey weeping tears of frustration into his beard.
Fortunately, Jurassic Park is such a great film that it could be appreciated even in the face of such technical incompetence. A larger audience to enjoy the scares and humour with would have been nice, though since screaming and laughing project dangerous droplets further, perhaps that’s less than ideal these days.
Overall, I must say that I found the experience of going to the cinema less altered than many other aspects of life. I’ve got used to wearing a mask, so much so that I was halfway home before I remembered to take it off, and I normally avoid peak times so the emptiness didn’t feel too unusual.
But with the rise in streaming subscriptions during lockdown, and the understandable caution that many feel about going out, cinemas will need to work much harder to get bums back on flip-up seats. The kind of technical troubles that the Light suffered tonight will only strengthen the case for staying at home, mask-free and pyjama-clad, where you can control both the virus and the aspect ratio.
A week after writing this, I went to a Showcase to see Tenet. The member of staff who took our tickets unequivocally told us that the printed screen number was wrong, and that we should go to another one. We did so. The ads and trailers finally started, fifteen minutes late. We were just wondering why they were trailing such kid-friendly movies when another member of staff came in and told us that Tenet was showing in the original screen after all, and by the way, you’ve missed the first couple of minutes.
Earlier this year I undertook a personal photography project called Stasis. I deliberately set out to do something different to my cinematography work, shooting in portrait, taking the paintings of seventeenth-century Dutch masters as my inspiration, and eschewing traditional lighting fixtures in favour of practical sources. I was therefore a little disappointed when I began showing the images to people and they described them as “cinematic”.
This experience made me wonder just what people mean by that word, “cinematic”. It’s a term I’ve heard – and used myself – many times during my career. We all seem to have some vague idea of what it means, but few of us are able to define it.
Dictionaries are not much help either, with the Oxford English Dictionary defining it simply as “relating to the cinema” or “having qualities characteristic of films”. But what exactly are those qualities?
Shallow depth of field is certainly a quality that has been widely described as cinematic. Until the late noughties, shallow focus was the preserve of “proper” movies. The size of a 35mm frame (or of the digital cinema sensors which were then emerging) meant that backgrounds could be thrown way out of focus while the subject remained crisp and sharp. The formats which lower-budget productions had hitherto been shot on – 2/3” CCDs and Super-16 film – could not achieve such an effect.
Then the DSLR revolution happened, putting sensors as big as – or bigger than – those of Hollywood movies into the hands of anyone with a few hundred pounds to spare. Suddenly everyone could get that “cinematic” depth of field.
Before long, of course, ultra-shallow depth of field became more indicative of a low-budget production trying desperately to look bigger than of something truly cinematic. Gradually young cinematographers started to realise that their idols chose depth of field for storytelling reasons, rather than simply using it because they could. Douglas Slocombe, OBE, BSC, ASC, cinematographer of the original Indiana Jones trilogy, was renowned for his deep depth of field, typically shooting at around T5.6, while Janusz Kaminski, ASC, when shooting Kingdom of the Crystal Skull, stopped down as far as T11.
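Those stop choices map directly onto the standard depth-of-field maths: stopping down shrinks the hyperfocal distance and widens the zone of acceptable sharpness. A minimal sketch using the usual thin-lens approximations (the 50mm focal length, 3m focus distance and 0.03mm circle of confusion are illustrative assumptions, not figures from those productions):

```python
def hyperfocal(f, n, c):
    """Hyperfocal distance in mm for focal length f (mm),
    f-number n and circle of confusion c (mm)."""
    return f * f / (n * c) + f

def depth_of_field(f, n, s, c):
    """Total depth of field in mm for a subject at distance s (mm),
    assuming s is closer than the hyperfocal distance."""
    h = hyperfocal(f, n, c)
    near = s * (h - f) / (h + s - 2 * f)  # near limit of sharpness
    far = s * (h - f) / (h - s)           # far limit of sharpness
    return far - near

# A 50mm lens focused at 3m: stopping down from T2.8 to T5.6
# roughly doubles the usable depth of field.
shallow = depth_of_field(50, 2.8, 3000, 0.03)
deep = depth_of_field(50, 5.6, 3000, 0.03)
```

The same formulae also show why large sensors earned their shallow-focus reputation: matching a field of view on a bigger format means a longer focal length, and depth of field falls off with the square of that.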
There was also a time when progressive scan – the recording of discrete frames rather than alternately odd and even horizontal lines to make an interlaced image – was considered cinematic. Now it is standard in most types of production, although deviations from the norm of 24 or 25 frames per second, such as the high frame rate of The Hobbit, still make audiences think of reality TV or news, and are rejected as “uncinematic”.
Other distinctions in shooting style between TV/low-budget film and big-budget film have slipped away too. The grip equipment that enables “cinematic” camera movement – cranes, Steadicams and other stabilisers – is accessible now in some form to most productions. Meanwhile the multi-camera shooting which was once the preserve of TV, looked down upon by filmmakers, has spread into movie production.
A direct comparison may help us drill to the core of what is “cinematic”. Star Trek: Generations, the seventh instalment in the sci-fi film franchise, went into production in spring 1994, immediately after the final TV season of Star Trek: The Next Generation wrapped. The movie shot on the same sets, with the same cast and even the same acquisition format (35mm film) as the TV series. It was directed by David Carson, who had helmed several episodes of the TV series, and whose CV contained no features at that point.
Yet despite all these constants, Star Trek: Generations is more cinematic than the TV series which spawned it. The difference lies with the cinematographer, John A. Alonzo, ASC, one of the few major crew members who had not worked on the TV show, and whose experience was predominantly in features. I suspect he was hired specifically to ensure that Generations looked like a movie, not like TV.
The main thing that stands out to me when comparing the film and the series is the level of contrast in the images. The movie is clearly darker and moodier than the TV show. In fact I can remember my schoolfriend Chris remarking on this at the time – something along the lines of, “Now it’s a movie, they’re in space but they can only afford one 40W bulb to light the ship.”
It was a distinction born of technical limitations. Cathode ray tube TVs could only handle a dynamic range of a few stops, requiring lighting with low contrast ratios, while a projected 35mm print could reproduce much more subtlety.
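Since each photographic stop doubles the light, a display's dynamic range in stops is simply the base-2 logarithm of its contrast ratio. A quick sketch (the contrast ratios are ballpark assumptions for illustration):

```python
import math

def stops(contrast_ratio):
    """Dynamic range in photographic stops: each stop doubles the luminance."""
    return math.log2(contrast_ratio)

# A CRT handling roughly 100:1 contrast yields only about 6.6 stops,
# while a projected print at roughly 1000:1 yields nearly 10.
crt = stops(100)
projected_print = stops(1000)
```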
Today, film and TV is shot on the same equipment, and both are viewed on a range of devices which are all good at dealing with contrast (at least compared with CRTs). The result is that, with contrast as with depth of field, camera movement and progressive scan, the distinction between the cinematic and the uncinematic has reduced.
In fact, I’d argue that it’s flipped around. To my eye, many of today’s TV series – and admittedly I’m thinking of high-end ones like The Crown, Better Call Saul or The Man in the High Castle, not EastEnders – look more cinematic than modern movies.
As my friend Chris had realised, the flat, high-key look of Star Trek: The Next Generation was actually far more realistic than that of its cinema counterpart. And now movies seem to have moved towards realism in the lighting, which is less showy and not so much moody for the sake of being moody, while TV has become more daring and stylised.
The Crown, for example, blasts a 50kW Soft Sun through the window in almost every scene, bathing the monarchy in divine light to match its supposed divine right, while Better Call Saul paints huge swathes of rich, impenetrable black across the screen to represent the rotten soul of its antihero.
Film lighting today seems to strive for naturalism for the most part. Top DPs like recent Oscar-winner Roger Deakins, CBE, ASC, BSC, talk about relying heavily on practicals and using fewer movie fixtures, and fellow nominee Rachel Morrison, ASC, despite using a lot of movie fixtures, goes to great lengths to make the result look unlit. Could it be that film DPs feel they can be more subtle in the controlled darkness of a cinema, while TV DPs choose extremes to make their vision clear no matter what device it’s viewed on or how much ambient light contaminates it?
Whatever the reason, contrast does seem to be the key to a cinematic look. Even though that look may no longer be exclusive to movies released in cinemas, the perception of high contrast being linked to production value persists. The high contrast of the practically-lit scenes in my Stasis project is – as best I can tell – what makes people describe it as cinematic.
What does all of this mean for a filmmaker? Simply pumping up the contrast in the grade is not the answer. Contrast should be built into the lighting, and used to reveal and enhance form and depth. The importance of good production design, or at least good locations, should not be overlooked; shooting in a friend’s white-walled flat will kill your contrast and your cinematic look stone dead.
Above all, remember that story – and telling that story in the most visually appropriate way – is the essence of cinema. In the end, that is what makes a film truly cinematic.
Recently, having put it off for as long as possible, I upgraded to macOS High Sierra, the first new OS not to support Final Cut Pro 7. It was a watershed moment for me. Editing used to comprise at least half of my work, and Final Cut had been there throughout my entire editing career.
I first heard of Final Cut in early 2000, when it was still on version one. The Rural Media Company in Hereford, which was my main client at the start of my freelance career, had purchased a copy to go with their shiny Mac G3. The problem was, no-one at the company knew how to use it.
Meanwhile, I was lobbying to get some time in the Avid edit suite (a much hallowed and expensive room) to cut behind-the-scenes footage from Integr8, a film course I’d taken part in the previous summer. The course and its funding were long finished, but since so much BTS footage had been shot, I felt it was a shame not to do something with it.
Being 19 and commensurately inexperienced, I was denied time on the Avid. Instead, the head of production suggested I use the G3 which was sitting idle and misunderstood in one of the offices. Disappointed but rising to the challenge, I borrowed the manual for Final Cut Pro, took it home and read it cover to cover. Then I came back in and set to work cutting the Integr8 footage.
Editing in 2000 was undergoing a huge (excuse the pun) transition. In the back of the equipment storeroom, Rural Media still had a tape-to-tape editing system, but it had already fallen almost completely out of use. Editing had gone non-linear.
In a room next to the kitchen was the Optima suite. This was a computer (I forget what type) fitted with a low resolution analogue video capture card and an off-line editing app called Optima. In this suite you would craft your programme from the low-rez clips, exporting an EDL (Edit Decision List) onto a floppy disc when you were done. This you took into the Avid suite to be on-lined – recapturing just the clips that were needed in full, glorious, standard definition. You could make a few fine adjustments and do a bit of grading before outputting the finished product back to tape.
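An EDL of that era was just plain text: one event per line, listing the reel, track, transition type and the source and record timecodes the on-line system needed for recapture. A hypothetical sketch of formatting a single CMX3600-style cut event (the reel name and timecodes are made up for illustration):

```python
def edl_event(num, reel, src_in, src_out, rec_in, rec_out):
    """Format one CMX3600-style video cut event line.

    num: event number; reel: source tape name; the remaining
    arguments are HH:MM:SS:FF timecode strings.
    """
    return (f"{num:03d}  {reel:<8} V     C        "
            f"{src_in} {src_out} {rec_in} {rec_out}")

# Event 001: five seconds from TAPE01 cut to the head of the programme.
line = edl_event(1, "TAPE01",
                 "01:00:10:00", "01:00:15:00",
                 "00:00:00:00", "00:00:05:00")
```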
It wasn’t practical to do the whole edit on the Avid because (a) hard drives big enough to store all the media for a film at full rez weren’t really available at that time, and (b) the Avid system was hellishly expensive and therefore time on it was charged at a premium rate.
As I edited the Integr8 BTS on Final Cut Pro, I believed I was using an off-line system similar to the Optima. The images displayed in the Viewer and Canvas were certainly blocky and posterised. But when I recorded the finished edit back to tape, I couldn’t quite believe what I was seeing. Peering through the viewfinder of the Mini-DV camera which I was using as a recording deck, I was astonished to see the programme playing at the exact same quality it had been shot at. This little G3 and the relatively affordable app on it were a complete, professional quality editing system.
I looked across the office to the sign on the Avid suite’s door. It might as well have read: “DINOSAUR”.
Within a few months I had invested in my own Mac – a G4, no less – and was using FCP regularly. The next year I used it to cut my first feature, The Beacon, and three more feature-length projects followed in the years after that, along with countless shorts and corporates. Using FCP became second nature to me, with the keyboard shortcuts hard-wired into my reflexes.
And it wasn’t just me. Final Cut became ubiquitous in the no-/low-budget sector. Did it have its flaws? Definitely. It crashed more often than Richard Hammond. I can think of no other piece of software I’ve screamed so much at (with the exception of a horrific early desktop publishing app which I masochistically used to create some Media Studies GCSE coursework).
And of course Apple shat all over themselves in 2011 when they released the much-reviled Final Cut Pro X, causing many loyal users to jump ship. I stayed well away from the abomination, sticking with the old FCP 7 until I officially quit editing in 2014, and continuing to use it for personal projects long after that.
So it was quite a big deal for me to finally let it go. I’ve got DaVinci Resolve installed now, for the odd occasion when I need to recut my showreel. It’s not the same though.
Timelines aren’t my world any more; light is. But whenever I look back on my years as an editor, Final Cut Pro’s brushed-aluminium interface will always materialise in my mind’s eye.