Inside a lens, amongst the various glass elements, is an ingenious mechanism which we call the iris. Just like your biological iris, it controls the amount of light passing through the pupil to form an image. I’ve written about the iris’s use to control exposure before, and its well-known side effect of controlling depth of field. But here are five things that aren’t so commonly known about irises.
1. f-stops and the entrance pupil
The f-number of a lens is the ratio of the focal length to the diameter of the aperture, but did you know that it isn’t the actual diameter of the aperture that’s used in this calculation? It’s the apparent diameter as viewed through the front of the lens. A lens might have a magnifying front element, causing the aperture to appear larger than its physical size, or a reducing one, causing it to appear smaller. Either way, it’s this apparent aperture – known as the entrance pupil – which is used to find the f-number.
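For the numerically inclined, the ratio is trivial to sketch in a few lines of Python; the 50mm/25mm figures below are just an illustration:

```python
def f_number(focal_length_mm: float, entrance_pupil_mm: float) -> float:
    """f-number = focal length / entrance pupil diameter.
    Note: the entrance pupil is the *apparent* aperture as seen through
    the front of the lens, not the physical size of the iris."""
    return focal_length_mm / entrance_pupil_mm

# A 50mm lens whose aperture *appears* 25mm wide is at f/2,
# regardless of the iris's actual physical diameter.
print(f_number(50, 25))  # 2.0
```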
2. No-parallax point
The no-parallax point of a lens is located at its entrance pupil. Sometimes called the nodal point, although that’s technically something different, this is the point around which the camera must pan and tilt if you want to eliminate all parallax. This is important for forced perspective work, for panoramas stitched together from multiple shots, and other types of VFX.
3. Focus
If you need to check your focal distance with a tape measure, many cameras have a handy Phi symbol on the side indicating where the sensor plane is located so that you can measure from that point. But technically you should be measuring to the entrance pupil. The sensor plane marker is just a convenient shortcut because the entrance pupil is in a different place for every lens and changes when the lens is refocused or zoomed. In most cases the depth of field is large enough for the shortcut to give perfectly acceptable results, however.
4. Bokeh shape
The shape of the entrance pupil determines the shape of the image’s bokeh (out of focus areas), most noticeable in small highlights such as background fairy lights. The pupil’s shape is determined both by the number of iris blades and the shape of their edges. The edges are often curved to approximate a circle when the iris is wide open, but form more of a polygon when stopped down. For example, a Cooke S4 produces octagonal bokeh at most aperture settings, indicating eight iris blades. Incidentally, an anamorphic lens has a roughly circular aperture like any other lens, but the entrance pupil (and hence the bokeh) is typically oval because of the anamorphosing effect of the front elements.
5. Diffraction spikes
When the edge of an iris blade is straight or roughly straight, it spreads out the light in a perpendicular direction, creating a diffraction spike. The result is a star pattern around bright lights, typically most visible at high f-stops. Every blade produces a pair of spikes in opposite directions, so the number of points in the star is equal to twice the number of iris blades – as long as that number is odd. If the number of blades is even, diffraction spikes from opposite sides of the iris overlap, so the number of apparent spikes is the same as the number of blades, as in the eight-pointed Cooke diffraction pictured above right.
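The odd/even rule can be sketched as a one-liner (Python, purely illustrative):

```python
def spike_count(blades: int) -> int:
    """Each straight-ish blade edge diffracts light into a pair of
    opposite spikes. With an odd blade count the pairs don't line up,
    giving 2*n spikes; with an even count opposite pairs overlap,
    so only n distinct spikes are visible."""
    return 2 * blades if blades % 2 else blades

print(spike_count(8))  # 8, e.g. the eight-pointed Cooke star
print(spike_count(5))  # 10
```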
This is the first in a series of posts where I will look in detail at the four means of controlling the brightness of a digital video image: aperture, neutral density (ND) filters, shutter angle and ISO. It is not uncommon for newer cinematographers to have only a partial understanding of these topics, enough to get by in most situations; that was certainly the case with me for many years. The aim of this series is to give you an understanding of the underlying mechanics which will enable you to make more informed creative decisions.
You can change any one of the four factors, or any combination of them, to reach your desired level of exposure. However, most of them will also affect the image in other ways; for example, aperture affects depth of field. One of the key responsibilities of the director of photography is to use each of the four factors not just to create the ideal exposure, but to make appropriate use of these “side effects” as well.
f-stops and t-stops
The most common way of altering exposure is to adjust the aperture, a.k.a. the iris, sometimes described as changing “the stop”. Just like the pupil in our eyes, the aperture of a photographic lens is a (roughly) circular opening which can be expanded or contracted to permit more or less light through to the sensor.
You will have seen a series of numbers like this printed on the sides of lenses:
1 1.4 2 2.8 4 5.6 8 11 16 22 32
These are ratios – ratios of the lens’ focal length to its iris diameter. So a 50mm lens with a 25mm diameter iris is at f/2. Other lengths of lens would have different iris diameters at f/2 (e.g. 10mm diameter for a 20mm lens) but they would all produce an image of the same brightness. That’s why we use f-stops to talk about iris rather than diameters.
But why not label a lens 1, 2, 3, 4…? Why 1, 1.4, 2, 2.8…? These magic numbers are f-stops. A lens set to f/1.4 will let in twice as much light as (or “one stop more than”) a lens set to f/2, which in turn will let in twice as much as one set to f/2.8, and so on. Conversely, a lens set to f/2.8 will let in half as much light as (or “one stop less than”) a lens set to f/2, and so on. (Note that a number between any of these f-stops, e.g. f/1.8, is properly called an f-number, but not an f-stop.) These doublings or halvings – technically known as a base-2 logarithmic scale – are a fundamental concept in exposure, and mimic our eyes’ response to light.
If you think back to high-school maths and the πr² formula for calculating the area of a circle from its radius, the reason for the seemingly random series of numbers will start to become clear. Letting in twice as much light requires twice as much area for those light rays to fall on, and remember that the f-number is the ratio of the focal length to the iris diameter, so you can see how square roots are going to get involved and why f-stops aren’t just plain old round numbers.
If you’re shooting with a cine lens, rather than a stills lens, you’ll see the same series of numbers on the barrel, but here they are T-stops rather than f-stops. T-stops are f-stops adjusted to compensate for the light lost in transmission through the lens. Two different lenses set to, say, f/2 will not necessarily produce equally bright images, because some percentage of light travelling through the elements will always be lost, and that percentage will vary depending on the quality of the glass and the number of elements. A lens with 100% light transmission would have the same f-number and T-number, but in practice the T-number will always be a little bigger than the f-number. For example, Cooke’s 15-40mm zoom is rated at a maximum aperture of T2 or f/1.84.
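As a rough Python sketch of the relationship – the transmission figure below is inferred from Cooke’s published f/1.84 and T2 numbers, not something Cooke states directly:

```python
from math import sqrt

def t_stop(f_number: float, transmittance: float) -> float:
    """T-number = f-number / sqrt(transmission). With 100% transmission
    T equals f; real glass always loses a little, so T is always bigger."""
    return f_number / sqrt(transmittance)

# The Cooke 15-40mm zoom: f/1.84 rated T2 implies roughly 85% transmission.
implied = (1.84 / 2) ** 2
print(round(implied, 2))            # 0.85
print(round(t_stop(1.84, implied), 2))  # 2.0
```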
Fast and slow lenses
When buying or renting a lens, one of the first things you will want to know is its maximum aperture. Lenses are often described as being fast (larger maximum aperture, denoted by a smaller f- or T-number like T1.4) or slow (smaller maximum aperture, denoted by a bigger f- or T-number like T4). These terms come from the fact that the shutter speed would need to be faster or slower to capture the same amount of light… but more on that later in the series.
Faster lenses are generally more expensive, but that expense may well be outweighed by the savings made on lighting equipment. Let’s take a simple example, and imagine an interview lit by a 4-bank Kino Flo and exposed at T2.8. If our lens can open one stop wider (known as stopping up) to T2 then we double the amount of light reaching the sensor. We can therefore halve the level of light – by turning off two of the Kino Flo’s tubes or by renting a cheaper 2-bank unit in the first place. If we can stop up further, to T1.4, then we only need one Kino tube to achieve the same exposure.
Side effects
One of the first things that budding cinematographers learn is that wider apertures make for a smaller depth of field, i.e. the range of distances within which a subject will be in focus is smaller. In simple terms, the background of the image is blurrier when the depth of field is shallower.
It is often tempting to go for the shallowest possible depth of field, because it feels more cinematic and helps conceal shortcomings in the production design, but that is not the right look for every story. A DP will often choose a stop to shoot at based on the depth of field they desire. That choice of stop may affect the entire lighting budget; if you want to shoot at a very slow T14 like Douglas Slocombe did for the Indiana Jones trilogy, you’re going to need several trucks full of lights!
There is another side effect of adjusting the aperture which is less obvious. Lenses are manufactured to perform best in the middle of their iris range. If you open a lens up to its maximum aperture or close it down to its minimum, the image will soften a little. Therefore another advantage of faster lenses is the ability to get further away from their maximum aperture (and poorest image quality) with the same amount of light.
Finally it is worth noting that the appearance of bokeh (out of focus areas) and lens flares also changes with aperture. The Cooke S4 range, for example, renders out-of-focus highlights as circles when wide open, but as octagons when stopped down. With all lenses, the star pattern seen around bright light sources will be stronger when the aperture is smaller. You should shoot tests – like these I conducted in 2017 – if these image artefacts are a critical part of your film’s look.
Next time we’ll look at how we can use ND filters to control exposure without compromising our choice of stop.
Learn how to use exposure practically with my Cinematic Lighting online course. Enter voucher code INSTA90 for an amazing 90% off.
Last autumn, after a few years away from it, I got back into 35mm stills photography. I’ve been reading a lot of books about photography: the art of it, the science and the history too. I’ve even taken a darkroom course to learn how to process and print my own black and white photos.
Shooting stills in my spare time gives me more opportunities to develop my eye for composition, my exposure-judging skills and my appreciation of natural light. Beyond that, I’ve discovered interesting parallels between electronic and photochemical imaging which enhance my understanding of both.
For example, I used to think of changing the ISO on a digital camera as analogous to loading a different film stock into a traditional camera. However, I’ve come to realise it’s more like changing the development time – it’s an after-the-fact adjustment to an already-captured (latent) image. There’s more detail on this analogy in my ISO article at Red Shark News.
The importance of rating an entire roll of film at the same exposure index, as it must all be developed for the same length of time, also has resonance in the digital world. Maintaining a consistency of exposure (or the same LUT) throughout a scene or sequence is important in digital filmmaking because it makes the dailies more watchable and reduces the amount of micro-correction which the colourist has to do down the line.
Anyway, this is all a roundabout way of explaining why I decided to make a pinhole attachment for my SLR this week. It’s partly curiosity, partly to increase my understanding of image-making from first principles.
The pinhole camera is the simplest image-making device possible. Because light rays travel in straight lines, when they pass through a very small hole they emerge from the opposite side in exactly the same arrangement, only upside-down, and thus form an image on a flat surface on the other side. Make that flat surface a sheet of film or a digital sensor and you can capture this image.
How to make a pinhole attachment
I used Experimental Filmmaking: Break the Machine by Kathryn Ramey as my guide, but it’s really pretty straightforward.
You will need:
an extra body cap for your camera,
a drill,
a small piece of smooth, non-crumpled black wrap, or kitchen foil painted black,
scissors,
gaffer tape (of course), and
a needle or pin.
Instructions:
Drill a hole in the centre of the body cap. The size of the hole is unimportant.
Use the pin or needle to pierce a hole in the black wrap, at least a couple of centimetres from the edge.
Cut out a rough circle of the black wrap, with the pinhole in the middle. This circle needs to fit on the inside of the body cap, with the pinhole in the centre of the drilled hole.
Use the gaffer tape to fix the black wrap tightly to the inside of the body cap.
Fit the body cap to your camera.
The smaller the pinhole is, the sharper the image will be, but the darker too. The first pinhole I made was about 0.1-0.2mm in diameter, but when I fitted it to my camera and looked through the viewfinder I could hardly make anything out at all. So I made a second one, this time pushing the pin properly through the black wrap, rather than just pricking it with the tip. (Minds out of the gutter, please.) The new hole was about 0.7mm but still produced an incredibly dark image in the viewfinder.
Exposing a pinhole image
If you’re using a digital camera, you can of course judge your exposure off the live-view screen. Things are a little more complicated if, like me, you’re shooting on film.
In theory the TTL (through the lens) light meter should give me just as reliable a reading as it would with a lens. The problem is that, even with the shutter set to 1 second, and ISO 400 Fujifilm Superia X-tra loaded, the meter tells me I’m underexposed. Admittedly the weather has been overcast since I made the pinhole yesterday, so I may get a useful reading when the sun decides to come out again.
Failing that, I can use my handheld incident-light meter to determine the exposure…. once I’ve worked out what the f-stop of my pinhole is.
As I described in my article on aperture settings, the definition of an f-stop is: the ratio of the focal length to the aperture diameter. We’re all used to using lenses that have a clearly defined and marked focal length, but what is the focal length in a pinhole system?
The definition of focal length is the distance between the point where the light rays focus (i.e. converge to a point) and the image plane. So the focal length of a pinhole camera is very simply the distance from the pinhole itself to the film or digital sensor. Since my pinhole is more or less level with the top of the lens mount, the focal length is going to be approximately equal to the camera’s flange focal distance (defined as the distance between the lens mount and the image plane). According to Wikipedia, the flange focal distance for a Pentax K-mount camera is 45.46mm.
So the f-stop of my 0.7mm pinhole is f/64, because 45.46 ÷ 0.7 ≈ 65, which rounds to the nearest standard stop of f/64. Conveniently, f/64 is the highest stop my light meter will handle.
The website Mr Pinhole has a calculator to help you figure this sort of stuff out, and it even tells you the optimal pinhole diameter for your focal length. Apparently this is 0.284mm in my case, so my images are likely to be quite soft.
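Here is a quick Python sketch of both calculations. The wavelength and the constant in the optimal-diameter formula are my assumptions – published versions of the formula use slightly different constants, which is presumably why my result differs fractionally from Mr Pinhole’s:

```python
from math import sqrt

FLANGE_MM = 45.46  # Pentax K-mount flange focal distance

def pinhole_f_number(focal_mm: float, pinhole_mm: float) -> float:
    """f-number = focal length / aperture diameter, as with any lens."""
    return focal_mm / pinhole_mm

def optimal_diameter_mm(focal_mm: float,
                        wavelength_mm: float = 0.00055,  # green light, assumed
                        c: float = 1.8) -> float:
    """d = c * sqrt(f * lambda); published constants range from ~1.56 to 2."""
    return c * sqrt(focal_mm * wavelength_mm)

print(round(pinhole_f_number(FLANGE_MM, 0.7)))   # 65, roughly the standard stop f/64
print(round(optimal_diameter_mm(FLANGE_MM), 3))  # 0.285, close to Mr Pinhole's 0.284mm
```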
Anyway, when the sun comes out I’ll take some pictures and let you know how I get on!
Exposing the image correctly is one of the most important parts of a cinematographer’s job. Choosing the T-stop can be a complex technical and creative decision, but fortunately there are many ways we can measure light to inform that decision.
First, let’s remind ourselves of the journey light makes: photons are emitted from a source, they strike a surface which absorbs some and reflects others – creating the impressions of colour and shade; then if the reflected light reaches an eye or camera lens it forms an image. We’ll look at the various ways of measuring light in the order the measurements occur along this light path, which is also roughly the order in which these measurements are typically used by a director of photography.
1. Photometrics data
You can use data supplied by the lamp manufacturer to calculate the exposure it will provide, which is very useful in preproduction when deciding what size of lamps you need to hire. There are apps for this, such as the Arri Photometrics App, which allows you to choose one of their fixtures, specify its spot/flood setting and distance from the subject, and then tells you the resulting light level in lux or foot-candles. An exposure table or exposure calculation app will translate that number into a T-stop at any given ISO and shutter interval.
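The arithmetic underneath those apps is just the inverse-square law. A minimal Python sketch, with a made-up candela rating purely for illustration:

```python
def lux_at_distance(candela: float, distance_m: float) -> float:
    """Inverse-square law: illuminance falls with the square of the
    distance from the source. Photometrics apps apply this per fixture,
    per spot/flood setting."""
    return candela / distance_m ** 2

# A hypothetical fixture rated at 16,000 candela on its current setting:
print(lux_at_distance(16000, 4))  # 1000.0 lux at 4m
print(lux_at_distance(16000, 8))  # 250.0 lux at 8m: double the distance, a quarter the light
```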
2. Incident meter
Some believe that light meters are unnecessary in today’s digital landscape, but I disagree. Most of the methods listed below require the camera, but the camera may not always be handy – on a location recce, for example. Or during production, it would be inconvenient to interrupt the ACs while they’re rigging the camera onto a crane or Steadicam. This is when having a light meter on your belt becomes very useful.
An incident meter is designed to measure the amount of light reaching the subject. It is recognisable by its white dome, which diffuses and averages the light striking its sensor. Typically it is used to measure the key, fill and backlight levels falling on the talent. Once you have input your ISO and shutter interval, you hold the incident meter next to the actor’s face (or ask them to step aside!) and point it at each source in turn, shading the dome from the other sources with your free hand. You can then decide if you’re happy with the contrast ratios between the sources, and set your lens to the T-stop indicated by the key-light reading, to ensure correct exposure of the subject’s face.
3. Spot meter (a.k.a. reflectance meter)
Now we move along the light path and consider light after it has been reflected off the subject. This is what a spot meter measures. It has a viewfinder with which you target the area you want to read, and it is capable of metering things that would be impractical or impossible to measure with an incident meter. If you had a bright hillside in the background of your shot, you would need to drive over to that hill and climb it to measure the incident light; with a spot meter you would simply stand at the camera position and point it in the right direction. A spot meter can also be used to measure light sources themselves: the sky, a practical lamp, a flame and so on.
But there are disadvantages too. If you spot meter a Caucasian face, you will get a stop that results in underexposure, because a Caucasian face reflects quite a lot of light. Conversely, if you spot meter an African face, you will get a stop that results in overexposure, because an African face reflects relatively little light. For this reason a spot meter is most commonly used to check whether areas of the frame other than the subject – a patch of sunlight in the background, for example – will blow out.
Your smartphone can be turned into a spot meter with a suitable app, such as Cine Meter II, though you will need to configure it using a traditional meter and a grey card. With the addition of a Luxiball attachment for your phone’s camera, it can also become an incident meter.
The remaining three methods of judging exposure which I will cover all use the camera’s sensor itself to measure the light. Therefore they take into account any filters you’re using as well as transmission loss within the lens (which can be an issue when shooting on stills glass, where the marked f-stops don’t factor in transmission loss).
4. Monitors and viewfinders
In the world of digital image capture, it can be argued that the simplest and best way to judge exposure is to just observe the picture on the monitor. The problem is, not all screens are equal. Cheap monitors can misrepresent the image in all kinds of ways, and even a high-end OLED can deceive you, displaying shadows blacker than any cinema or home entertainment system will ever match. There are only really two scenarios in which you can reliably judge exposure from the image itself: if you’ve owned a camera for a while and you’ve become very familiar with how the images in the viewfinder relate to the finished product; or if the monitor has been properly calibrated by a DIT (Digital Imaging Technician) and the screen is shielded from light.
Most cameras and monitors have built-in tools which graphically represent the luminance of the image in a much more accurate way, and we’ll look at those next. Beware that if you’re monitoring a log or RAW image in Rec.709, these tools will usually take their data from the Rec.709 image.
5. Waveforms and histograms
These are graphs which show the prevalence of different tones within the frame. Histograms are the simplest and most common. In a histogram, the horizontal axis represents luminance and the vertical axis shows the number of pixels which have that luminance. It makes it easy to see at a glance whether you’re capturing the greatest possible amount of detail, making best use of the dynamic range. A “properly” exposed image, with a full range of tones, should show an even distribution across the width of the graph, with nothing hitting the two sides, which would indicate clipped shadows and highlights. A night exterior would have a histogram crowded towards the left (darker) side, whereas a bright, low contrast scene would be crowded on the right.
A waveform plots luminance on the vertical axis, with the horizontal axis matching the horizontal position of those luminance values within the frame. The density of the plotting reveals the prevalence of the values. A waveform that was dense in the bottom left, for example, would indicate a lot of dark tones on the lefthand side of frame. Since the vertical (luminance) axis represents IRE (Institute of Radio Engineers) values, waveforms are ideal when you need to expose to a given IRE, for example when calibrating a system by shooting a grey card. Another common example would be a visual effects supervisor requesting that a green screen be lit to 50 IRE.
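If you are curious what a histogram actually computes, here is a toy Python version run on a tiny made-up “frame” of luminance values:

```python
def histogram(pixels, bins=8, max_val=255):
    """Count how many pixels fall into each luminance bin -
    the horizontal axis of a histogram display."""
    counts = [0] * bins
    for row in pixels:
        for p in row:
            counts[min(p * bins // (max_val + 1), bins - 1)] += 1
    return counts

# A tiny 2x4 'night exterior': dark values crowd the left-hand bins,
# with one bright highlight at the far right.
frame = [[10, 20, 30, 240],
         [15, 25, 35, 50]]
print(histogram(frame))  # [5, 2, 0, 0, 0, 0, 0, 1]
```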
6. Zebras and false colours
Almost all cameras have zebras, a setting which superimposes diagonal stripes on parts of the image which are over a certain IRE, or within a certain range of IREs. By digging into the menus you can find and adjust what those IRE levels are. Typically zebras are used to flag up highlights which are clipping (theoretically 100 IRE), or close to clipping.
Exposing an image correctly is not just about controlling highlight clipping however, it’s about balancing the whole range of tones – which brings us to false colours. A false colour overlay looks a little like a weather forecaster’s temperature map, with a code of colours assigned to various luminance values. Clipped highlights are typically red, while bright areas still retaining detail (known as the “knee” or “shoulder”) are yellow. Middle grey is often represented by green, while pink indicates the ideal level for Caucasian skin tones (usually around 55 IRE). At the bottom end of the scale, blue represents the “toe” – the darkest area that still has detail – while purple is underexposed. The advantage of zebras and false colours over waveforms and histograms is that the former two show you exactly where the problem areas are in the frame.
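As a rough sketch of the idea, here is an illustrative IRE-to-colour mapping in Python. The exact thresholds vary from camera to camera, so treat these bands as examples of the scheme, not any manufacturer’s spec:

```python
def false_colour(ire: float) -> str:
    """Map an IRE value to a typical false-colour band (illustrative
    thresholds only - real cameras each use their own)."""
    if ire >= 100:
        return "red"       # clipped highlights
    if ire >= 90:
        return "yellow"    # knee/shoulder: bright but retaining detail
    if 52 <= ire <= 58:
        return "pink"      # Caucasian skin tone (~55 IRE)
    if 38 <= ire <= 48:
        return "green"     # around middle grey
    if ire <= 2:
        return "purple"    # underexposed
    if ire <= 10:
        return "blue"      # toe: darkest area with detail
    return "grey"          # everything else left untinted

print(false_colour(55))   # pink
print(false_colour(101))  # red
```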
I hope this article has given you a useful overview of the tools available for judging exposure. Some DPs have a single tool they rely on at all times, but many will use all of these methods at one time or another to produce an image that balances maximising detail with creative intent. I’ll leave you with a quote from the late, great Douglas Slocombe, BSC who ultimately used none of the above six methods!
I used to use a light meter – I used one for years. Through the years I found that, as schedules got tighter and tighter, I had less and less time to light a set. I found myself not checking the meter until I had finished the set and decided on the proper stop. It would usually say exactly what I thought it should. If it didn’t, I wouldn’t believe it, or I would hold it in such a way as to make it say my stop. After a time I decided this was ridiculous and stopped using it entirely. The “Raiders” pictures were all shot without a meter. I just got used to using my eyes.
After seeing Barry Lyndon (1975) on the big screen this week, I felt compelled to write a blog post about its cinematography. But what aspect of the cinematography? The painterly look? The many zooms? The use of natural light?
What I knew for certain is that I should definitely not write about the entirely candlelit scenes lensed on f/0.7 Nasa glass, because everyone knows that story. However, reading the vintage American Cinematographer article and some other material, I found the details surrounding this groundbreaking use of high-speed lenses so interesting that I decided to do it anyway.
The Vision
Barry Lyndon is the 18th century tale of a low-born Irishman who strives – through various misadventures, and ups and downs of fortune – to become a gentleman. The key visual influences of director Stanley Kubrick and DP John Alcott, BSC were the great painters of the story’s era, such as Vermeer.
Next week’s post will look at this painterly influence in Barry Lyndon more closely, but for now the important thing is the use of candlelight on those classical canvases, and Kubrick’s desire to replicate that look. According to lens expert Ed DiGuilio, who was tasked with adapting the f/0.7 glass for Lyndon, Kubrick “wanted to preserve the natural patina and feeling of these old castles at night as they actually were”.
Typically in movies, a candle in frame may motivate the lighting, but most of the illumination on the actors actually comes from an orange-gelled lamp just out of frame. Kubrick wasn’t interested in shooting Lyndon that way. He wanted all the light in those night interior scenes to genuinely come from the candles themselves.
The Problem
How much light does a candle shed? Conveniently, there is a unit of illumination called the foot-candle. One foot-candle is the amount of light received from a standard candle one foot away. Without going into the detail of what a “standard” candle is, it is enough for our purposes to say that the scene below has a key light of about three foot-candles…
… because there are three candles, about a foot away from the actor’s face. (The level of your key light, and consequently where you set your aperture, is almost always measured at your subject’s face, as that is usually the focus of the shot and the most important thing to get correctly exposed. This is why we DPs are always waving light meters in actors’ faces.)
If we look at an exposure table, such as this one, we can see that a three foot-candle key can be correctly exposed with an aperture of T1.4 and an EI (exposure index) of 800. Today that would be no problem, with many digital cameras having a native EI of 800, and the availability of fast lenses like Zeiss Master Primes and Super Speeds.
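You can reproduce that table lookup with the standard incident-light exposure equation. A Python sketch, assuming a 180° shutter at 24fps (1/48th of a second) and the common incident-meter calibration constant of 250 – both assumptions on my part:

```python
from math import sqrt

def stop_from_footcandles(fc: float, ei: float,
                          shutter_s: float = 1 / 48,  # 180° shutter at 24fps
                          c: float = 250) -> float:
    """Incident-light exposure equation: N^2 = E * S * t / C,
    with E in lux and C a meter calibration constant (~250 for lux).
    One foot-candle is about 10.76 lux."""
    lux = fc * 10.76
    return sqrt(lux * ei * shutter_s / c)

# Three foot-candles of candlelight at EI 800:
print(round(stop_from_footcandles(3, 800), 2))  # 1.47, i.e. about T1.4
```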
In the mid-seventies however, long before the advent of digital cameras, things were not so simple. Kubrick and Alcott had little choice but to shoot on Eastman Kodak 100T 5254. Those first three digits denote the film stock’s exposure index: 100. Alcott pushed the stock (brought the brightness up during processing) one stop, re-rating it to an EI of 200. But it still needed four times more light, or two stops more light than our modern-day Alexa or Red. (Check out my post on f-stops and T-stops if you’re getting lost.)
If we’re losing two stops on the EI, we need to gain two stops on the aperture to compensate. And two stops up from T1.4 is T0.7. You may notice that T0.7 isn’t on that table I linked to. This is because a lens with such a large relative aperture pretty much doesn’t exist.
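The stop arithmetic, sketched in Python:

```python
from math import log2, sqrt

# EI 800 (modern sensor) versus EI 200 (pushed 5254 stock):
ei_modern, ei_pushed = 800, 200
stops_lost = log2(ei_modern / ei_pushed)  # 2.0 stops slower

# Each stop wider divides the f-number by sqrt(2):
aperture_needed = 1.4 / sqrt(2) ** stops_lost
print(stops_lost, round(aperture_needed, 2))  # 2.0 0.7
```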
Pretty much…
The Solution
Kubrick obsessively researched the problem. He eventually discovered that Nasa had commissioned Carl Zeiss to build ten Planar 50mm f/0.7 stills lenses in the sixties, which were used to take photos of the dark side of the moon. (I was unable to find out the T-stop of these lenses, but I’ll assume it was close enough to T0.7 for it to make little difference to my calculations above.) The developments leading to these lenses stretched back through Nazi military applications during WW2 all the way to the late Victorian era, when the double-Gauss cell at the core of the lenses was first invented.
Anyway, Kubrick promptly bought three of the Zeiss Planars. He liked to own equipment himself, rather than hire it in, and to this end he had also purchased at least one Mitchell BNC camera. As befits Kubrick’s perfectionism, these were perhaps the world’s most precisely engineered cameras, previously used for special effects work.
This is where Ed DiGuilio comes in: “[Kubrick] called one day to ask me if I thought I could fit a Zeiss lens he had procured… to his BNC.” It wasn’t simply a case of the f/0.7 glass having the wrong mount. The rear element was so large and needed to be so close to the film plane that DiGuilio had to extensively modify the camera, literally cutting parts out of it.
Once this was done, extensive testing ensued. The focus scale (distances marked on the barrel) had to be calibrated from scratch, and indeed the focus ring was re-engineered to allow the precision focusing that the lens’ tiny depth of field would require. Whereas the focus ring on a stills lens will turn about 90° to go from infinity to close focus, and the ring on a cine lens might turn 270°, the rings on these unique Planars now turned a whopping 720° – two whole revolutions!
50mm is a very useful lens length for close-ups, but Kubrick understandably wanted a wider option as well. Accordingly, DiGuilio located an adapter designed to adjust the throw of cinema projector lenses. Mounted onto one of the 50s, it gave an effective focal length of 36.5mm with only very minor light loss. A 24mm version was also tested, but Kubrick disliked the amount of distortion in its images, and rejected it.
The Execution
The colour brown and the trousers of Doug Milsome, Barry Lyndon’s focus puller, cannot have been strangers to each other. Imagine trying to hold focus on this dolly-back at f/0.7!
By my calculations (which were difficult, because most depth of field tables/calculators don’t go to f/0.7!) an MCU on Kubrick’s 50mm Planar with the subject at 2.5m (8.2ft) and the iris wide open would have had a depth of field of about 43mm (1.7″). To get this same depth of field at f/2.8, a popular working stop for cinematographers today, the subject would have to be just 1m (3.3ft) from the sensor plane, which would be a biggish close-up. And remember that focus monitors, peaking and Cine Tape did not exist in the seventies.
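Here is the arithmetic behind that estimate as a Python sketch. The 0.0125mm circle of confusion is an assumption – a deliberately strict criterion chosen to illustrate the formula:

```python
def depth_of_field_mm(focal_mm: float, f_number: float,
                      subject_mm: float, coc_mm: float) -> float:
    """Approximate total depth of field: 2 * N * c * u^2 / f^2.
    Valid while the subject is well inside the hyperfocal distance,
    as it certainly is at f/0.7."""
    return 2 * f_number * coc_mm * subject_mm ** 2 / focal_mm ** 2

# 50mm Planar wide open at f/0.7, subject at 2.5m, strict 0.0125mm CoC:
print(round(depth_of_field_mm(50, 0.7, 2500, 0.0125)))  # 44 (mm)
```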
To give Milsome a fighting chance, a unique system of focus assist was developed. While the main camera shot an actor from the front, a CCTV camera captured them in profile. This profile image was piped to a monitor, over which a grid was placed. This grid was marked off with distances so that Milsome could see how much the actor had moved by, far more accurately than judging it by eye from beside the lens.
Another problem thrown up by the low-light cinematography was with the viewfinder. Interestingly, the Mitchell BNC was a non-reflex camera, meaning that it didn’t have a mirror on the shutter, reflecting the image to the viewfinder when the shutter was closed. Instead, the camera body racked over to one side to allow the viewfinder to get an image during line-ups and rehearsals, and when it was actually rolling the operator got their images from a side viewfinder with its own lens – just like in a disposable 35mm stills camera. The original prism-based viewfinder on Kubrick’s Mitchell BNC suffered from far too much light loss for a candlelit image to be visible through it, so it was replaced with a mirror-based viewfinder adapted from a Technicolor camera.
The shots resulting from all of these technical challenges are quite soft to the modern eye, but I think that only adds to their beauty. Barry Lyndon captured the exquisite fragility of candlelight, and 42 years on the images are still unique and captivating.