
Really, the Zone System is an attempt to decide, at the moment you take a picture, what you want it to look like after you've finished printing it. So you take a lot of notes on where your spotmeter says the darkest and brightest areas fall, and then put extra work into developing a few sheets of film, all so you don't need to futz around in the darkroom wasting expensive paper until you're satisfied. The Zone System is a very technical process (though a simplification of the field of sensitometry) that encourages an artist to slow down and produce something really good, something that will hopefully put them on the map or keep their clients coming back. For others, it's a boring distraction, like reading a high school math book, so they say go have fun shooting instead and your prints are going to look fine. It worked best for photographers who followed the advice from their teachers: choose a single film and get totally dialed into it, so you know what it will do and you won't mess up an important assignment. That line of thinking is equally applicable to digital photography. If you want to take an image, analyze it as fitting ten zones, and call it the Zone System, then most photographers won't argue with you. Personally, I do film testing for fun (see attachment). 75% of my photography is on film or early historic processes; 90% of the rest is on an iPhone, but I do use a Sony a7 when my client (usually my wife) insists.
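The ten-zone idea above is simple enough to sketch in code. This is a minimal illustration, not anyone's official tool: it assumes the usual convention that Zone V is middle gray and that each zone is one stop apart, and maps a spot-meter reading (expressed as stops above or below middle gray) to a zone number.

```python
def zone_for_reading(ev_offset_from_middle_gray: float) -> int:
    """Map a spot-meter reading to a zone on the 0..X scale.

    Convention assumed here: Zone V is middle gray, and each zone
    is exactly one stop apart. Readings beyond the scale are clamped.
    """
    zone = round(5 + ev_offset_from_middle_gray)
    return max(0, min(10, zone))

# A shadow metered 3 stops below middle gray falls on Zone II:
print(zone_for_reading(-3))  # 2
# A bright wall metered 3 stops above middle gray falls on Zone VIII:
print(zone_for_reading(+3))  # 8
```

With a spot meter and this mapping, "taking a lot of notes" amounts to recording which zone each important part of the scene lands on, then deciding whether exposure or development needs to shift them.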

I think the analogy between "expose for the shadows, develop for the highlights" and "expose to the right, and let the shadows fall where they will" is pretty accurate, and good advice for a technically careful photographer who wants to get the best out of their gear. The "why it works" of the analogy is that the default matrix meter in most digital cameras exposes for middle gray, which can leave sometimes the shadows, sometimes the highlights clipped. (If you are set to JPG, you are stuck with the dynamic range of that color profile; with RAW, you have the entirety of the dynamic range available in post, especially in the shadows.) The analogy is a real thing, for a few reasons:

1. The scene you're shooting might actually fit in the dynamic range, but if you meter for the 18% gray default, you might still clip even in RAW.

2. Effectively all digital sensors (and film too) have more noise in the shadows than in the highlights. When shooting in RAW, it might look right after shooting like you have clipped the shadows, but you might discover that the DR of your expensive camera has saved your ass and you can recover them; it's much less likely you can get the highlights back.

3. ETTR is a great tool, because it lets you use the least-noisy part of your sensor while minimizing the risk of clipping. It has something to do with the physics of light triggering electrical currents.

The only trade-off is that you have to do at least a basic grade in post, and that only sucks if your client expects to see your best right out of camera and there is no time for post-production. The only place the analogy doesn't fit in the DR discussion of film vs. digital is that with film, you control the dynamic range during the development phase by deciding how long you let the chemicals act on the latent image, while with digital, you're locked in.
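The shadow-noise argument for ETTR in point 2 and 3 can be made concrete with a toy noise model. This is a sketch under assumed numbers, not a measurement of any real sensor: it models photon shot noise as the square root of the collected signal, adds a hypothetical read-noise floor, and compares the signal-to-noise ratio of the same shadow patch at the metered exposure versus two stops "to the right."

```python
import math

READ_NOISE_E = 3.0  # hypothetical read noise, electrons RMS (assumed value)

def shadow_snr(signal_e: float) -> float:
    """SNR of a patch: photon shot noise sqrt(N) and read noise, in quadrature."""
    noise = math.sqrt(signal_e + READ_NOISE_E ** 2)
    return signal_e / noise

base = 100.0       # electrons collected in a deep shadow at the metered exposure
ettr = base * 4.0  # the same patch exposed 2 stops to the right

print(round(shadow_snr(base), 1))
print(round(shadow_snr(ettr), 1))  # roughly double the SNR of the base exposure
```

Pulling the ETTR frame back down in post scales signal and noise together, so the improved ratio survives the grade; that is the whole point of using the least-noisy end of the sensor.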

Sometimes it can be difficult to recognize the different highlights in a color photo. By temporarily converting it to black and white, it might become possible to successfully use the zone system of Ansel Adams again. You can read more about this method in my previous article.

Jan Kruize, you're right that once you've gone past the dynamic range of your sensor, the data is indeed lost. Also, it's more of a strict cliff than analog, where old films had a gentle fall-off in the "shoulder" or "toe" that still held some data, which you could use as-is or compensate for by changing the development time, at least for the highlights.
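The cliff-versus-shoulder point is easy to see with two toy response curves. Neither function models a real film stock or sensor; the film curve here is just an assumed saturating exponential chosen to show a gentle shoulder, against a linear sensor that clips hard at full scale.

```python
import math

def digital_response(exposure: float) -> float:
    """Linear sensor: a hard cliff at full scale; everything above 1.0 is lost."""
    return min(exposure, 1.0)

def film_response(exposure: float) -> float:
    """Toy film curve with a gentle shoulder: highlights compress, not clip."""
    return 1.0 - math.exp(-exposure)

for ev in (0.5, 1.0, 2.0, 4.0):
    print(ev, digital_response(ev), round(film_response(ev), 3))
```

At 2x and 4x full-scale exposure the digital response returns the same 1.0 for both, so those highlights are indistinguishable and unrecoverable, while the film curve still separates them, which is what development-time compensation could exploit.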

The detail in the shadows is present, thanks to the Expose to the Right method. When using Lightroom you can use the highlight and shadow sliders to manipulate the shades of gray. The black point and white point sliders let you manipulate the boundaries, and local adjustments make it possible to optimize any part of the photo to your liking. With proper post-processing you will end up with perfect contrast in your black and white photo. It is almost as if we stepped into the darkroom of Ansel Adams again.

How about color? The zone system of Ansel Adams was invented for black and white photography, of course, but it can be used for color photography as well.

With just four sliders in Lightroom, we can manipulate the contrast in the photo, and the local adjustments make it possible to make small corrections within the photo.
