The Normal Lens

Today I’m investigating the so-called normal (a.k.a. standard) lens, finding out exactly what it is, the history behind it, and how it’s relevant to contemporary cinematographers.

 

The normal lens in still photography

A normal lens is one whose focal length is equal to the measurement across the diagonal of the recorded image. This gives an angle of view of about 53°, which is roughly equivalent to that of the human eye, at least the angle within which the eye can see detail. If a photo taken with a normal lens is printed and held up in front of the real scene, with the distance from the observer to the print being equal to the diagonal of the print, then objects in the photo will look exactly the same size as the real objects.
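
For anyone who wants to check that 53° figure, here is a minimal sketch of the standard angle-of-view formula in Python, using the 43.3mm diagonal of a 35mm stills frame (discussed below) as the example:

```python
import math

# Angle of view across a given image dimension:
# AOV = 2 * arctan(dimension / (2 * focal_length))
def angle_of_view(focal_length_mm, dimension_mm):
    return 2 * math.degrees(math.atan(dimension_mm / (2 * focal_length_mm)))

# A normal lens has a focal length equal to the image diagonal,
# e.g. 43.3mm on a 35mm stills frame.
print(round(angle_of_view(43.3, 43.3), 1))  # 53.1 degrees
```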

Asahi Pentax-M 50mm/f1.4 – a normal lens for 35mm stills

Lenses with a shorter focal length than the normal are known as wide-angle. Lenses with a greater focal length than the normal are considered to be long lenses. (Sometimes you will hear the term telephoto used interchangeably with long lens, but a telephoto lens is technically one which has a focal length greater than its physical length.)

A still 35mm negative is 43.3mm across the diagonal, but this got rounded up quite a bit — by Leica inventor Oskar Barnack — so that 50mm is widely considered to be the normal lens in the photography world. Indeed, some photographers rarely stray from the 50mm. For some this is simply because of its convenience; it is the easiest length of lens to manufacture, and therefore the cheapest and lightest. Because it’s neither too short nor too long, all types of compositions can be achieved with it. Other photographers are more dogmatic, considering a normal lens the only authentic way to capture an image, believing that any other length falsifies or distorts perspective.

 

The normal lens in cinematography

SMPTE (the Society of Motion Picture and Television Engineers), or indeed SMPE as it was back then, decided almost a century ago that a normal lens for motion pictures should be one with a focal length equal to twice the image diagonal. They reasoned that this would give a natural field of view to a cinema-goer sitting in the middle of the auditorium, halfway between screen and projector (the latter conventionally fitted with a lens twice the length of the camera’s normal lens).

A Super-35 digital cinema sensor – in common with 35mm motion picture film – has a diagonal of about 28mm. According to SMPE, this gives us a normal focal length of 56mm. Acclaimed twentieth century directors like Hitchcock, Robert Bresson and Yasujiro Ozu were proponents of roughly this focal length, 50mm to be more precise, believing it to have the most natural field of view.
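
As a quick sketch of the arithmetic behind the two conventions (using the approximate figures quoted above):

```python
import math

# Photographic convention: normal = image diagonal.
# Old SMPE cine convention: normal = 2 x image diagonal.
stills_diagonal = math.hypot(36, 24)   # 35mm stills frame: ~43.3mm
super35_diagonal = 28.0                # "about 28mm", as quoted above

print(f"Stills normal: ~{stills_diagonal:.0f}mm (rounded to 50mm in practice)")
print(f"SMPE cine normal for Super 35: ~{2 * super35_diagonal:.0f}mm")
```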

Of course, the 1920s SMPE committee, living in a world where films were only screened in cinemas, could never have predicted the myriad devices on which movies are watched today. Right now I’m viewing my computer monitor from a distance about equal to the diagonal of the screen, but to hold my phone at the distance of its diagonal would make it uncomfortably close to my face. Large movie screens are still closer to most of the audience than their diagonal measurement, just as they were in the twenties, but smaller multiplex screens may be further away than their diagonals, and TV screens vary wildly in size and viewing distance.

 

The new normal

To land in the middle of the various viewing distances common today, I would argue that filmmakers should revert to the photography standard of a normal focal length equal to the diagonal, so 28mm for a Super-35 sensor.

Deleted scene from “Ren: The Girl with the Mark” shot on a vintage 28mm Pentax-M

According to Noam Kroll, “Spielberg, Scorsese, Orson Welles, Malick, and many other A-list directors have cited the 28mm lens as one of their most frequently used and in some cases a favorite [sic]”.

I have certainly found lenses around that length to be the most useful on set.  A 32mm is often my first choice for handheld, Steadicam, or anything approaching a POV. It’s great for wides because it compresses things a little and crops out unnecessary information while still taking plenty of the scene in. It’s also good for mids and medium close-ups, making the viewer feel involved in the conversation.

When I had to commit to a single prime lens to seal up in a splash housing for a critical ocean scene in The Little Mermaid, I quickly chose a 32mm, knowing that I could get wides and tights just by repositioning myself.

A scene from “The Little Mermaid” which I shot on a 32mm Cooke S4

I’ve found a 32mm useful in situations where coverage was limited. Many scenes in Above the Clouds were captured as a simple shot-reverse: both mids, both on the 32mm. This was done partly to save time, partly because most of the sets were cramped, and partly because it was a very effective way to get close to the characters without losing the body language, which was essential for the comedy. We basically combined the virtues of wides and close-ups into a single shot size!

In addition to the normal lens’ own virtues, I believe that it serves as a useful marker post between wide lenses and long lenses. In the same way that an editor should have a reason to cut, in a perfect world a cinematographer should have a reason to deviate from the normal lens. Choose a lens shorter than the normal and you are deliberately choosing to expand the space, to make things grander, to enhance perspective and push planes apart. Select a lens longer than the normal and you’re opting for portraiture, compression, stylisation, maybe even claustrophobia. Thinking about all this consciously and consistently throughout a production can add immeasurably to the impact of the story.


6 Ways to Judge Exposure

Exposing the image correctly is one of the most important parts of a cinematographer’s job. Choosing the T-stop can be a complex technical and creative decision, but fortunately there are many ways we can measure light to inform that decision.

First, let’s remind ourselves of the journey light makes: photons are emitted from a source, they strike a surface which absorbs some and reflects others – creating the impressions of colour and shade; then if the reflected light reaches an eye or camera lens it forms an image. We’ll look at the various ways of measuring light in the order the measurements occur along this light path, which is also roughly the order in which these measurements are typically used by a director of photography.

 

1. Photometrics data

You can use data supplied by the lamp manufacturer to calculate the exposure it will provide, which is very useful in preproduction when deciding what size of lamps you need to hire. There are apps for this, such as the Arri Photometrics App, which allows you to choose one of their fixtures, specify its spot/flood setting and distance from the subject, and then tells you the resulting light level in lux or foot-candles. An exposure table or exposure calculation app will translate that number into a T-stop at any given ISO and shutter interval.
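
As a rough illustration of what that last step involves, here is a sketch based on the standard incident-light exposure equation; the calibration constant and the example figures are my own assumptions, not values from any particular app:

```python
import math

# N^2 = E * S * t / C, where E = illuminance in lux, S = ISO, t = shutter
# time in seconds and C = a meter calibration constant (~250 for a flat
# receptor; real meters and apps vary slightly).
def stop_from_lux(lux, iso=800, shutter_seconds=1/48, calibration=250):
    return math.sqrt(lux * iso * shutter_seconds / calibration)

# e.g. a fixture delivering 1000 lux at the subject, ISO 800, 180° shutter at 24fps
print(round(stop_from_lux(1000), 1))  # ~8.2, i.e. roughly a T8
```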

 

2. Incident meter

Some believe that light meters are unnecessary in today’s digital landscape, but I disagree. Most of the methods listed below require the camera, but the camera may not always be handy – on a location recce, for example. Or during production, it would be inconvenient to interrupt the ACs while they’re rigging the camera onto a crane or Steadicam. This is when having a light meter on your belt becomes very useful.

An incident meter is designed to measure the amount of light reaching the subject. It is recognisable by its white dome, which diffuses and averages the light striking its sensor. Typically it is used to measure the key, fill and backlight levels falling on the talent. Once you have input your ISO and shutter interval, you hold the incident meter next to the actor’s face (or ask them to step aside!) and point it at each source in turn, shading the dome from the other sources with your free hand. You can then decide if you’re happy with the contrast ratios between the sources, and set your lens to the T-stop indicated by the key-light reading, to ensure correct exposure of the subject’s face.
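
As a quick sketch of the ratio arithmetic (the readings here are hypothetical):

```python
# Each stop doubles the light, and f-numbers advance by a factor of √2 per
# stop, so the ratio between two incident readings is (key / fill) squared.
def contrast_ratio(key_f_number, fill_f_number):
    return (key_f_number / fill_f_number) ** 2

print(contrast_ratio(4.0, 2.0))  # key reads f/4, fill reads f/2 -> 4.0, i.e. 4:1 (two stops)
```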

 

3. Spot meter (a.k.a. reflectance meter)

Now we move along the light path and consider light after it has been reflected off the subject. This is what a spot meter measures. It has a viewfinder with which you target the area you want to read, and it is capable of metering things that would be impractical or impossible to measure with an incident meter. If you had a bright hillside in the background of your shot, you would need to drive over to that hill and climb it to measure the incident light; with a spot meter you would simply stand at the camera position and point it in the right direction. A spot meter can also be used to measure light sources themselves: the sky, a practical lamp, a flame and so on.

But there are disadvantages too. A spot meter assumes that whatever it reads is middle grey, reflecting roughly 18% of the light hitting it. If you spot meter a Caucasian face, you will get a stop that results in underexposure, because a Caucasian face reflects considerably more light than middle grey. Conversely, if you spot meter an African face, you will get a stop that results in overexposure, because it reflects less light than middle grey. For this reason a spot meter is most commonly used to check whether areas of the frame other than the subject – a patch of sunlight in the background, for example – will blow out.
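
To put rough numbers on that middle-grey behaviour: the error in stops is just a log ratio of reflectances. The reflectance figures below are ballpark assumptions for illustration only; real skin varies a lot.

```python
import math

def stops_from_middle_grey(reflectance):
    # How far a surface sits above or below the meter's 18% assumption.
    return math.log2(reflectance / 0.18)

print(round(stops_from_middle_grey(0.35), 1))  # ~+1.0: a bright face, meter closes down -> underexposure
print(round(stops_from_middle_grey(0.09), 1))  # ~-1.0: a dark face, meter opens up -> overexposure
```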

Your smartphone can be turned into a spot meter with a suitable app, such as Cine Meter II, though you will need to configure it using a traditional meter and a grey card. With the addition of a Luxiball attachment for your phone’s camera, it can also become an incident meter.

The remaining three methods of judging exposure which I will cover all use the camera’s sensor itself to measure the light. They therefore take into account any filters you’re using, as well as transmission loss within the lens (which can be an issue when shooting on stills glass, where the marked f-stops don’t factor in that loss).

 

4. Monitors and viewfinders

The letter. Photo: Amy Nicholson

In the world of digital image capture, it can be argued that the simplest and best way to judge exposure is to just observe the picture on the monitor. The problem is, not all screens are equal. Cheap monitors can misrepresent the image in all kinds of ways, and even a high-end OLED can deceive you, displaying shadows blacker than any cinema or home entertainment system will ever match. There are only really two scenarios in which you can reliably judge exposure from the image itself: if you’ve owned a camera for a while and you’ve become very familiar with how the images in the viewfinder relate to the finished product; or if the monitor has been properly calibrated by a DIT (Digital Imaging Technician) and the screen is shielded from light.

Most cameras and monitors have built-in tools which graphically represent the luminance of the image in a much more accurate way, and we’ll look at those next. Beware that if you’re monitoring a log or RAW image in Rec.709, these tools will usually take their data from the Rec.709 image.

 

5. Waveforms and histograms

These are graphs which show the prevalence of different tones within the frame. Histograms are the simplest and most common. In a histogram, the horizontal axis represents luminance and the vertical axis shows the number of pixels which have that luminance. It makes it easy to see at a glance whether you’re capturing the greatest possible amount of detail, making best use of the dynamic range. A “properly” exposed image, with a full range of tones, should show an even distribution across the width of the graph, with nothing hitting the two sides, which would indicate clipped shadows and highlights. A night exterior would have a histogram crowded towards the left (darker) side, whereas a bright, low contrast scene would be crowded on the right.
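
Here is a minimal sketch of what the camera does when it draws that graph, assuming the frame is available as an 8-bit greyscale NumPy array (hypothetical data):

```python
import numpy as np

# counts[v] = number of pixels with luminance value v (0-255).
def luminance_histogram(frame_8bit):
    counts, _ = np.histogram(frame_8bit, bins=256, range=(0, 256))
    return counts

# Pixels piled up against either end indicate clipping,
# e.g. counts[0] for crushed blacks, counts[255] for blown highlights.
```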

A waveform plots luminance on the vertical axis, with the horizontal axis matching the horizontal position of those luminance values within the frame. The density of the plotting reveals the prevalence of the values. A waveform that was dense in the bottom left, for example, would indicate a lot of dark tones on the lefthand side of frame. Since the vertical (luminance) axis represents IRE (Institute of Radio Engineers) values, waveforms are ideal when you need to expose to a given IRE, for example when calibrating a system by shooting a grey card. Another common example would be a visual effects supervisor requesting that a green screen be lit to 50 IRE.
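
And a corresponding sketch of a waveform, again on a hypothetical greyscale frame normalised 0–1: each column of the output is a histogram of the IRE values found in that column of the picture.

```python
import numpy as np

# Rows = IRE value (0-100), columns = horizontal screen position,
# cell value = how many pixels in that column sit at that level.
def waveform(frame_normalised):
    ire = np.clip(frame_normalised * 100, 0, 100)
    columns = [np.histogram(ire[:, x], bins=101, range=(0, 100))[0]
               for x in range(ire.shape[1])]
    return np.stack(columns, axis=1)
```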

 

6. Zebras and false colours

Almost all cameras have zebras, a setting which superimposes diagonal stripes on parts of the image which are over a certain IRE, or within a certain range of IREs. By digging into the menus you can find and adjust what those IRE levels are. Typically zebras are used to flag up highlights which are clipping (theoretically 100 IRE), or close to clipping.

Exposing an image correctly is not just about controlling highlight clipping, however; it’s about balancing the whole range of tones, which brings us to false colours. A false colour overlay looks a little like a weather forecaster’s temperature map, with a code of colours assigned to various luminance values. Clipped highlights are typically red, while bright areas still retaining detail (known as the “knee” or “shoulder”) are yellow. Middle grey is often represented by green, while pink indicates the ideal level for Caucasian skin tones (usually around 55 IRE). At the bottom end of the scale, blue represents the “toe” – the darkest area that still has detail – while purple is underexposed. The advantage of zebras and false colours over waveforms and histograms is that the former two show you exactly where the problem areas are in the frame.
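
The exact bands differ from camera to camera, but as a sketch of the idea (the IRE thresholds below are illustrative, not any particular manufacturer’s):

```python
# Map an IRE value to a false colour. Thresholds are illustrative only.
def false_colour(ire):
    if ire >= 100:      return "red"     # clipped highlights
    if ire >= 90:       return "yellow"  # knee/shoulder: bright, detail retained
    if 52 <= ire <= 58: return "pink"    # typical target for lighter skin tones
    if 38 <= ire <= 48: return "green"   # around middle grey
    if ire < 3:         return "purple"  # underexposed, no detail
    if ire <= 10:       return "blue"    # toe: darkest areas with detail
    return None                          # everything else shown as normal greyscale
```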

I hope this article has given you a useful overview of the tools available for judging exposure. Some DPs have a single tool they rely on at all times, but many will use all of these methods at one time or another to produce an image that balances maximising detail with creative intent. I’ll leave you with a quote from the late, great Douglas Slocombe BSC, who ultimately used none of the above six methods!

I used to use a light meter – I used one for years. Through the years I found that, as schedules got tighter and tighter, I had less and less time to light a set. I found myself not checking the meter until I had finished the set and decided on the proper stop. It would usually say exactly what I thought it should. If it didn’t, I wouldn’t believe it, or I would hold it in such a way as to make it say my stop. After a time I decided this was ridiculous and stopped using it entirely. The “Raiders” pictures were all shot without a meter. I just got used to using my eyes.


Anamorphic Lens Tests

Anamorphic cinematography, first dabbled with in the 1920s, was popularised by Twentieth Century Fox in the fifties as CinemaScope. Television was growing in popularity and the studios were inventing gimmicks left, right and centre to encourage audiences back into cinemas. Fox’s idea was to immerse viewers in an image far wider than they were used to, but with minimal modifications to existing 4-perf 35mm projectors. They developed a system of anamorphic lenses containing elements which compressed the image horizontally by a factor of two. By placing a corresponding anamorphosing lens onto existing projectors, the image was unsqueezed into an aspect ratio of 2.55:1, or later 2.39:1.

Since those early days of CinemaScope, anamorphic cinematography has become associated with the biggest Hollywood blockbusters. Its optical features – streak flares, oval bokeh and curved horizontal lines – have been seared into our collective consciousness, indelibly associated with high production values.

I’ve not yet been fortunate enough to shoot anamorphic, but I was able to test a few lenses at Arri Rental recently, with the help of Rupert Peddle and Bex Clives. Last week I wrote about the spherical lenses which we tested; our anamorphic tests followed the same methodology.

Again we were shooting on an Alexa XT Plus in log C ProRes 4444 XQ, this time in 4:3 mode, a resolution of 2048×1536. Since all of the lenses had a standard 2:1 anamorphosing ratio, the images unsqueezed to a super-wide 2.66:1 ratio. (This is because the lenses were designed to be used on 35mm film with space left to one side for the optical soundtrack.) You can see the full width of this ratio in the first split-screen image in the video, at 2:08, and in the second image below, but otherwise I have horizontally cropped the footage to the standard 2.39:1 ratio.

We tested the following glass:

Series            Length  Speed  CF*  Weight
Hawk V            35mm    T2.2   30″   5.6kg
Cooke Xtal        30mm    T2.8   ?     3kg
Kowa Mirrorscope  40mm    T2.2   36″   1.15kg
Kowa Mirrorscope  30mm    T2.3   ?     ?

* CF = close focus

For consistency with the spherical lenses, we used lengths around 32mm, but in the anamorphic format this is a pretty wide lens, not a mid-range lens. We shot at T2.8, again for consistency, but I hear that many anamorphics don’t perform well wider than T4.

We were only able to test what Arri Rental happened to have on the shelves that day. The biggest and presumably most expensive was the Hawk V-series. Next in size and weight was the Cooke Xtal – pronounced “crystal” – a 1970s lens based on the much-loved Speed Panchros. The smallest and lightest was the Kowa Mirrorscope, with a list price of £1,200 per week for a set of four. (Sorry, I couldn’t find any pricing info for the others online.) Note that there isn’t really a 30mm Mirrorscope; to get this length you put a wide-angle adapter on the 40mm. As this extra element decreases the optical performance, we tested it with and without, hence the two lengths.

Here’s the video…

 

Skin tones

Click on the image to see it at full quality.

To my eye, the Hawk has a fairly rich, warm skin tone, while the Cooke – as with the spherical S4 tested last week – seems a little grey and flat. The Kowa is inexplicably brighter than the other two lenses, which makes it hard to compare, but perhaps it’s a little cooler in tone?

 

Sharpness

Focus is more critical with anamorphic lenses than spherical ones. From a forum posting by Max Jacoby:

Anamorphic lenses have what is known as a “curved field of focus” that works similarly to the curved movie screens in some large Cinerama theatres. This is one reason that one needs to expose these lenses at a deeper stop. If one doesn’t, the curved field will not be covered by depth of field and either the edges or centre of the frame will be soft.

One day I’d like to re-test these lenses at a lower stop, T4 or T5.6, where they will all undoubtedly perform much better. But in this T2.8 test, on Bex’s face in the centre of frame, the Hawk V and the Kowa Mirrorscope 40mm – both almost a full stop from their maximum apertures – are clearly the sharpest of the bunch. The Cooke Xtal, which is wide open, is unsurprisingly softer. The 30mm adapter on the Mirrorscope completely destroys the image, not only making it very soft but also introducing colour aberration.

Now let’s look at the checkerboard at the side of frame and see if we can spot any differences in sharpness there…

It seems to me that the Kowa, both with and without the adapter, has a greater difference in sharpness between the centre and edges of frame than the Hawk and Cooke. With the latter two lenses, the checkerboard is reasonably sharp, at least on the lefthand side, with some ghosting/blur visible towards the righthand side. The same thing can be observed on the chart in the flare tests at the end of the video.

 

Breathing & Bokeh

All of these lenses have a noticeable degree of breathing, which I suppose is to be expected from anamorphics. The Hawk V has roughly oval bokeh, the Cooke’s is more circular, while the Mirrorscope has interesting D-shaped bokeh.

 

Flare

The Hawk V doesn’t flare much at all, which is apparently due to the anamorphic element being in the middle of the lens, rather than at the front. The Kowa has a nice streak and glow around the light source, with a funky purple artefact on the opposite side of frame. But it’s the Cooke Xtal which provides the most classic lens flare, with a horizontal line across most of the frame and a partial star pattern around the source, despite the lens being wide open.

At the end of the video you can see how the flares develop on each lens as the light source moves horizontally across frame.

 

Distortion

A bulging effect is very obvious on all of these lenses, due to the focal lengths being quite wide for anamorphic. Notice how at 40mm on the Kowa Mirrorscope this curvature of the image is significantly reduced.

It’s hard to compare the levels of distortion because none of the focal lengths are exactly the same, except for the Cooke Xtal and the Kowa Mirrorscope with the 30mm adapter on. The Cooke’s top right and bottom left corners appear to be stretched away from the centre relative to the other two corners. I suppose that strange and funky stuff like this is exactly why you choose vintage glass.

Interestingly, the Cooke’s image appears a little tighter than the Kowa’s, which combined with my inability to find any evidence online of the existence of a 30mm Xtal, leads me to suspect we may have been given a mislabelled 32mm.

 

Conclusions

When we got to the end of our spherical tests and started putting the anamorphics on, I was shocked by the drop in sharpness. But as noted earlier, this is because anamorphics really need to be used with a smaller aperture than the T2.8 I often shoot at. If I learnt nothing else from this test, I learnt that anamorphic needs more light!

I would love to put the Cooke Xtal’s lovely flares and general vintage look to good use on a period movie one day. The Hawk V would be a good choice if I wanted the anamorphic look with warm, dynamic skin tones. The Kowa system seemed a little cheap and cobbled-together, but could well be a good solution for anamorphic on a budget, as long as I stayed away from the 30mm adapter!

I hope you’ve found these tests useful. Thanks again to 1st AC Rupert Peddle, 2nd AC Bex Clives and Arri Rental UK for making them possible.


Spherical Lens Tests

The other week I spent a day at Arri Rental in Uxbridge, in the Bafta Room no less, conducting various camera and lens tests. I’ve done a number of productions now where I wanted to test but there wasn’t the time or money, so for a while I’ve been meaning to go into Arri on my own time and do some general tests for my education and edification. An upcoming short provided the catalyst for me to get around to it at last.

Aided by 1st AC Rupert Peddle and 2nd AC Bex Clives, I tested a dozen lenses, some spherical, some anamorphic. Today I will cover the spherical lenses; next time I’ll look at the anamorphics.

 

Method

We shot on an Alexa XT Plus in log C ProRes 4444 XQ at 3.2K. In the video the image has been downscaled to 1080P and a standard Rec.709 LUT has been added.

I set the Alexa to ISO 800 and lit Bex to a T2.8 using a 650W tungsten fresnel bounced off poly. For fill I caught a little of the spill from the fresnel with a matte silver bounce board on the opposite side of camera. I placed fairy lights in the background to observe the bokeh (out of focus areas) and turned on a 100W globe during each take to see what the flare did.

We shot all the lenses at 2.8 – the stop I most commonly use – and also wide open (compensating with the shutter angle), but the direct 2.8 comparison proved most useful, so that’s mainly what you’ll see in the video. We tested a single length: 35mm or the closest available to it.

What we didn’t do was shoot grey-scale or colour charts, or do any testing of vignettes or distortion. (The day after doing these tests, Shane Hurlbut ASC published an Inner Circle post about how to test lenses, so I immediately learnt what my omissions were!)

We tested the following lenses:

Series                                       Length  Speed  CF*  Weight  Price
Leica Summilux-C                             29mm    T1.4   18″   1.7kg   £27K
Arri/Zeiss Master Prime                      35mm    T1.3   14″   2.2kg   £16K
Cooke S4                                     32mm    T2     6″    1.85kg  £14K
Leica Summicron-C                            35mm    T2     14″   1.3kg   £13K
Zeiss High Speed (a.k.a. Superspeed Mk III)  35mm    T1.3   14″   0.79kg  £12K (refurb)
Arri/Zeiss Ultra Prime                       32mm    T1.9   15″   1.1kg   £10K
Zeiss T2.1                                   32mm    T2.1   24″   0.45kg  £4K (used)
Canon                                        35mm    T1.5   12″   1.1kg   £3K

* CF = close focus

Here’s the video…

 

Skin tones

Click the image to see it at best quality.

The Arri/Zeiss Master Prime and the two Leicas seem to have the most vibrant skin tones. To my eye, the Leicas have a slight creaminess that’s very pleasing. The Canon looks just a little cooler and less dynamic. I was surprised to find that the Cooke S4, the lens I’ve used most, appears to have a grey, flat skin tone compared with the Master Prime, Leicas and Canon. I would rank the Ultra Prime and Superspeed next, on a par except that the Ultra Prime has a noticeable magenta cast. My least favourite skin tones are on the Zeiss T2.1, which comparatively makes poor Bex look a little bit ill!

Some of the nuances will be lost in the YouTube and JPEG compression, but this is a very subjective assessment anyway, so feel free to completely disagree with all of the above. Any of the differences noted here could, to some extent, be corrected in the grade. But remember that the lens is at the very start of the light’s journey from set to screen, and any wavelengths that don’t get through it are lost forever. It’s like fluorescent lamps with colours missing from the spectrum; you can’t put those colours back in post.

 

Sharpness

I have to say, I’m unable to detect any difference in sharpness between the Master Prime, Cooke S4, Canon and Leicas. The Ultra Prime and Superspeed both look a hair softer, while the T2.1 is very soft.

 

Breathing

Breathing is the slight zooming effect that you get with some lenses when you pull focus. Looking at 4:44 in the video you can clearly see the differences in breathing between the eight lenses. Because this part of the video is showing a crop of the bottom left corner of the image, the breathing manifests as a shift to the left (zoom in) as the lens is racked closer (goes soft) and a shift to the right (zoom out) as it’s racked deeper (goes sharp).

All the Zeiss lenses except the Master Prime have a significant amount of breathing when seen in isolation like this, but not enough to be noticeable to an audience in most real-world situations. The Cooke S4 has a little bit of breathing, and the Canon a hair less. The Master Prime and the Leicas are rock solid.

 

Bokeh

Small points of light, when thrown out of focus, most clearly demonstrate the bokeh pattern of a lens. The shape of the bokeh is determined by the number of iris blades and the shape of those blades. Generally a circle is preferred, because it’s a natural shape, but for certain stories a more unusual shape might be appropriate. The shape of the iris changes with the T-stop, hence the T2.8 and wide open images above.

Immediately noticeable is the difference in the Cooke S4’s bokeh between wide open (circular) and T2.8 (octagonal). All of the other lenses have round bokeh at T2.8, apart from the Superspeed, which has heptagonal (seven-sided) bokeh.

It’s entirely subjective which bokeh you prefer. The only other thing I’ll point out is that the Canon’s bokeh wide open is very fuzzy, with noticeable colour aberration, though this may be due to the bright highlight rather than the defocusing.

 

Flare

Flare patterns also vary with aperture. The smaller the aperture, the more of a star effect you will get, as the light interacts with the corners where the iris blades meet. The Summilux shows this most clearly, with a pronounced star at T2.8 (two stops down from its maximum aperture) and almost none when wide open. The Cooke S4 also has a nice star pattern at T2.8. With the other lenses it’s much more subtle, and the Canon has almost none.

 

Conclusions

The real revelations in these tests, for me, were the Leicas. The Summilux in particular is a beautiful lens, with rich, dynamic skin tones, nice bokeh, no breathing, plus the bonus of nice star flares. I will definitely be looking to work with this glass in the future, although given the price tag that may be optimistic!

The Summicron also performed incredibly well, matching the more expensive Summilux and Master Prime in every respect except speed. I can see this becoming my new go-to lens.

The Master Prime of course produced a beautiful, sharp, clean image, but it lacks character. It might work nicely for science fiction, a drama requiring a neutral look, or something where filtration was being used to give the image character.

The Canon impressed me too – no mean feat given that it’s the cheapest lens we tested. With nice skin tones and attractive flares, I could see this working well for a romantic movie.

The Zeiss T2.1 did not appeal to me, with poor sharpness and cold, washed-out skin tones, so I would avoid it.

The Superspeed is a decent lens, but in most cases I’d plump for an Ultra Prime instead. Ultra Primes are certainly easier to work with for the 1st AC, and have proven to be a good workhorse lens for drama. (I shot Above the Clouds on them.)

The Cooke S4 has been my go-to glass up to now, and while it will probably remain my first choice for period pieces, due to its gentle focus fall-off, I’m excited to try some of the other glass in this test on other productions.

I’ll say it one last time: this is all subjective. Our visual preferences are what make every director of photography unique.

Tune in next week when I’ll look at the anamorphic lenses: Hawk-V, Cooke Xtal and Kowa Mirrorscope.


Choosing an ND Filter: f-stops, T-stops and Optical Density

A revised and updated version of this article can be found here (aperture) and here (ND filters).

Imagine this scenario. I’m lensing a daylight exterior and my light meter gives me a reading of f/11, but I want to shoot with an aperture of T4, because that’s the depth of field I like. I know that I need to use a .9 ND (neutral density) filter. But how did I work that out? How on earth does anyone arrive at the number 0.9 from the numbers 11 and 4?

Let me explain from the beginning. First of all, let’s remind ourselves what f-stops are. You have probably seen those familiar numbers printed on the sides of lenses many times…

1      1.4      2      2.8      4      5.6      8      11      16      22

They are ratios: ratios of the lens’ focal length to its iris diameter. So a 50mm lens with a 25mm diameter iris is at f/2. If you close up the iris to just under 9mm in diameter, you’ll be at f/5.6 (50 divided by 5.6 is 8.93).

A stills lens with its aperture ring (top) marked in f-stops

But why not label a lens 1, 2, 3, 4? Why 1, 1.4, 2, 2.8…? These magic numbers are f-stops. A lens set to f/1 will let in twice as much light as (or ‘one stop more than’) one set to f/1.4, which in turn will let in twice as much as one set to f/2, and so on. Conversely, a lens set to f/2 will let in half as much light as (or ‘one stop less than’) one set to f/1.4, and so on.

 

If you think back to high school maths and the Pi r squared formula for calculating the area of a circle from its radius, the reason for the seemingly random series of numbers will start to become clear. Letting in twice as much light requires twice as much area for those light rays to fall on, and doubling the area of a circle means multiplying its diameter by the square root of two (about 1.414). Since the f-number is the ratio of the focal length to the iris diameter, each stop in the series is simply the previous f-number multiplied by 1.414 – which is why f-stops aren’t just plain old round numbers.
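
To make that concrete, here is a tiny sketch that regenerates the series from that square-root-of-two relationship:

```python
import math

# Successive f-numbers are powers of the square root of two.
stops = [round(math.sqrt(2) ** i, 1) for i in range(10)]
print(stops)
# [1.0, 1.4, 2.0, 2.8, 4.0, 5.7, 8.0, 11.3, 16.0, 22.6]
# (conventionally rounded and marked as 1, 1.4, 2, 2.8, 4, 5.6, 8, 11, 16, 22)
```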

A Zeiss Compact Prime lens with its aperture ring marked in T-stops

Now, earlier I mentioned T4. How did I get from f-stops to T-stops? Well, T-stops are f-stops adjusted to compensate for the light transmission efficiency. Two different f/2 lenses will not necessarily produce equally bright images, because some percentage of light travelling through the elements will always be lost, and that percentage will vary depending on the quality of the glass and the number of elements. A lens with 100% light transmission would have the same f-number and T-number, but in practice the T-number will always be a little higher than the f-number. For example, Cooke’s 15-40mm zoom is rated at a maximum aperture of T2 or f/1.84.

So, let’s go back to my original scenario and see where we are. My light meter reads f/11, but I expressed my target stop as a T-number, T4, because I’m using cinema lenses and they’re marked up in T-stops rather than f-stops. (I can still use the f-number my meter gives me; in fact, if my lens were marked in f-stops my exposure would be slightly off, because the meter does not know the transmission efficiency of my lens.)

By looking at the series of f-numbers permanently displayed on my light meter (the same series listed near the top of this post, or on any lens barrel) I can see that f/11 (or T11) is 3 stops above f/4 (or T4) – because 11 is three numbers to the right of 4 in the series. I can often be seen on set counting the stops like this on my light meter or on my fingers. It is of course possible to work it out mathematically, but screw that!

A set of Tiffen 4×4″ ND filters

So I need an ND filter that cuts 3 stops of light. But we’re not out of the mathematical woods yet.

The most popular ND filters amongst professional cinematographers are those made by Tiffen, and a typical set might be labelled as follows:

.3      .6      .9      1.2

Argh! What do those numbers mean? That’s the optical density, a property defined as the base-10 logarithm of the ratio of the quantity of light entering the filter to the quantity of light exiting it on the other side. A .3 ND reduces the light by half because 10 raised to the power of -0.3 is 0.5, or near as damn it. And reducing light by half, as we established earlier, means dropping one stop.

If that fries your brain, don’t worry; it does mine too. All you really need to do is multiply the number of stops you want to drop by 0.3 to find the filter you need. So to drop three stops you pick the .9 ND.

And that’s why you need a .9 ND to shoot at T4 when your light meter says f/11. Clear as mud, right? Once you get your head around it, and memorise the f-stops, this all becomes a lot easier than it seems at first glance.

Here are a couple more examples:

  • Light meter reads f/8 and you want to shoot at T5.6. That’s a one stop difference. (5.6 and 8 are right next to each other in the stop series, as you’ll see if you scroll back to the top.) 1 x 0.3 = 0.3 so you should use the .3 ND.
  • Light meter reads f/22 and you want to shoot at T2.8. That’s a six stop difference (scroll back up and count them), and 6 x 0.3 = 1.8, so you need a 1.8 ND filter. If you don’t have one, you need to stack two NDs in your matte box that add up to 1.8, e.g. a 1.2 and a .6.
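
If you’d rather not count on your fingers, here is a sketch of the whole calculation; the function names are my own and the rounding to whole stops is just a convenience (the marked f-numbers are rounded anyway).

```python
import math

# Stop difference between a metered f-number and a target T-stop, and the
# ND optical density needed to bridge it (0.3 of density per stop).
def nd_needed(metered_f, target_t):
    stops = round(2 * math.log2(metered_f / target_t))  # e.g. f/11 vs T4 -> ~3
    return stops, round(stops * 0.3, 1)

def transmission(optical_density):
    # Fraction of light an ND passes: 10^-OD (a .3 ND passes ~0.5, i.e. one stop).
    return 10 ** -optical_density

print(nd_needed(11, 4))    # (3, 0.9) -> use a .9 ND
print(nd_needed(8, 5.6))   # (1, 0.3) -> use a .3 ND
print(nd_needed(22, 2.8))  # (6, 1.8) -> a 1.8, or stack a 1.2 + .6
print(round(transmission(0.9), 3))  # ~0.126, i.e. roughly 1/8 of the light (three stops)
```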

 


Depth of Field

Although I use Ebay quite a bit, I rarely bid in the auctions. It annoys me too much how the price always seems so low and then jumps up exponentially in the closing minutes of the auction as everyone leaves bidding until the last possible moment. But when I saw a Sigma 20mm/f1.8 EF lens I couldn’t help myself and it was me that pounced at the last minute with my bid and won the lens.

The Sigma 20mm/f1.8 on my Canon 600D

What’s so great about this lens? I already have a Canon 18-55mm zoom – what’s wrong with that? The answer is: it’s all about depth of field.

Every filmmaker knows what depth of field is – the range of depth within an image which is in focus. Those of us cursed by tiny budgets to shoot on prosumer video formats have spent many years bemoaning how everything’s always in focus. Then HD-DSLRs came along and suddenly it all changed. Now you can control your depth of field. Now you can throw your background beautifully out of focus and keep your subject crisp and sharp, just like in real movies. But you can’t just turn on your DSLR and expect to get stunning depth of field straight away. So how can you make sure you’re always getting the shallowest possible focal depth? (Not that that is always the best look for every shot, but it’s nice to have the option.)

Let’s go back to basics and look at what affects depth of field. Most of us learnt all this when we first started making films, but let it drain from our brains over the years as our photographic dreams were crushed by the obstinately sharp backgrounds of a thousand Mini-DV frames.

1. Image size.  The larger the image, the smaller the depth of field. That’s why DV cameras with their tiny image sensors give such large depth of field, while at the other end of the scale a 35mm celluloid frame will permit lovely narrow focal depth. It’s also why a “full frame” DSLR like the Canon 5D Mark II will supply smaller depth of field than a “crop chip” DSLR like my Canon 600D.

2. Lens length.  The longer the lens, the smaller the depth of field. We all know this one well enough. How many times when DPing on DV have I heard the director ask me to zoom right in so the background goes nicely out of focus? But in the DV days it never went as out of focus as we wanted it to.

3. Subject distance.  People commonly forget this one. The closer the subject is to the lens, the smaller the depth of field. This is why sometimes you can achieve shallower focal depth by using a wide lens and placing the camera close to your subject than by zooming right in and moving the camera back. It’s also why miniatures will have a tell-tale small depth of field (the distance between lens and subject is miniature, just like everything else in the set-up) unless you take steps to counter it.

Depth of field varying with aperture
At f5.0 (left) almost all of the DeLorean is in focus, but at f1.8 (right) the depth of field is much smaller.

4. Aperture size.  The larger the aperture (i.e. the smaller the f-stop number) the smaller the depth of field. This is the crucial one with DSLRs. This is why I jumped on the Sigma f1.8 lens and why the f1.4 I borrowed on Field Trip was so beautiful. Of course, if you’re shooting in a bright environment then an aperture of f1.8 will give you a very over-exposed image, even with your camera on the lowest ISO. (Remember that you can’t compensate by changing the shutter speed, because that will also change the amount of motion blur in your footage, which unless you’re remaking Saving Private Ryan you normally don’t want to do.) The solution is to use an ND (neutral density) filter to cut down the amount of light entering the lens.
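
For anyone who wants to see how these four factors interact numerically, here is a rough sketch using the standard hyperfocal-distance approximation. The circle-of-confusion value is an assumption for an APS-C sensor like the 600D’s, and it’s where image size enters the calculation.

```python
# Standard hyperfocal-distance approximation. All distances in millimetres.
def depth_of_field(focal_mm, f_number, subject_mm, coc_mm=0.019):
    hyperfocal = focal_mm ** 2 / (f_number * coc_mm) + focal_mm
    near = subject_mm * (hyperfocal - focal_mm) / (hyperfocal + subject_mm - 2 * focal_mm)
    far = (subject_mm * (hyperfocal - focal_mm) / (hyperfocal - subject_mm)
           if subject_mm < hyperfocal else float("inf"))
    return near, far

# e.g. the 20mm Sigma at f/1.8 vs f/5.0, subject 1.5m away
print(depth_of_field(20, 1.8, 1500))  # ~ (1331mm, 1717mm): about 39cm in focus
print(depth_of_field(20, 5.0, 1500))  # ~ (1110mm, 2313mm): about 1.2m in focus
```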

Of course there are far more technical details behind all of this, which frankly I don’t understand but fortunately I don’t need to in order to make films. I hope this post has refreshed your memory or tied together what fragments you already knew. I’ll let you know how I get on with the 20mm Sigma in the field. No pun intended. Well, maybe a little.
