
Expose to the right and many other absurdities: don't tell me you really believed it.

Over the past 20 years, the digital photography market has been distorted by false myths and populist simplifications that have lowered the technical level of the discipline. Fueled by an amateur vision of photography and aggressive industry marketing, these myths, such as blind reliance on the histogram or the misunderstanding of ISO sensitivity, have led to wrong practices and a superficial understanding of the photographic medium. Digital photography instead requires technical awareness, an understanding of the real nature of sensors, and careful management of raw data, without empirical shortcuts that risk degrading the final quality of the images.

Over the past 20 years, many self-styled experts have manipulated the digital imaging market with misleading information, devoid of logic or scientific basis, that has dragged photography down to a very low technical level and left room for myths and popular beliefs to take root.

In reality, these false myths were born from an amateur vision of photography, akin to the idea that the artist needs no technique, only creative ideas. In practice, the history of art and its evolution over the last 3,000 years have been thrown in the bin in favor of a mere ideal of popularity and simplified dissemination. Photographic populism has become a trend that has elected questionable characters as gurus and experts, and has relegated creativity and research to a simple annoyance that adds nothing to the final result.

It all started with a very simple idea: to spread digital photography without doing harm, let's make software for managing digital backs that is photographer-proof, because photographers don't understand and are only capable of doing damage. So, starting in 1998, software began to be simplified so that photographers could 'touch' as little as possible. Imacon, Sinar, Jenoptik and the whole gang decided that the photographer should be given no other tasks, otherwise he would have ruined the file, and the life of those who really managed the images. And to make sure the market became popular, and populist, several companies began to fill the gaps that existed at the time: software, cameras, backs, lights, lenses, accessories, all destined for the nascent digital photography market, where a simple D or a "Digital" badge made everything attractive. A colossal, well-structured marketing operation that continues to this day.

It was too difficult to explain to a photographer that a digital back or camera does not have an ISO sensitivity. In fact, there are six different models for converting the exposure index of a digital system into an ISO sensitivity rating, but the market long ago adopted a standard (SOS) based on reading the JPEG developed by the camera in sRGB or, alternatively, a more discretionary standard (REI) based on the manufacturer's own judgment. Let me simplify the concept for you: there is no ISO sensitivity of the digital sensor. It is a convention, and not one that all manufacturers follow in the same way; before 2006 it was even worse, a real battlefield where discretion was total.

In 2003, digital photography was marked by large gaps between the expected result and the one obtained. It was a phase in which the empirical calibration of one's own system was worth more than any technical datum and, above all, there were no exposure meters or advanced profiling systems dedicated to digital photography. Precisely at that stage, while people were still looking for a way to correctly manage a complex system, someone began to theorize various absurdities based on empirical simplifications of complex technical processes.

One of these is ETTR Expose To The Right, better known as Expose Too Much - So Much Restyling.

The idea that the histogram represents a universal solution for correct exposure is one of the biggest misconceptions of modern digital photography (Fantozzi would have said otherwise). This false myth, often perpetuated by self-styled experts, has led many photographers to believe that it is enough to move as much information as possible to the bright areas of the histogram to obtain higher quality images, with less noise and better color rendering. However, this approach completely ignores the real context of the scene, the characteristics of the sensor, and the workflow necessary to translate the raw data into a developed image.

In-camera, the histogram is nothing more than a graphic representation of the JPEG preview generated by the camera. That file undergoes compression, processing and adaptation into a limited color space, making the histogram unreliable as the sole reference for evaluating the distribution of raw data. The idea of using it as an absolute guide for exposure is an empirical shortcut that clashes with the technical complexity of the photographic medium. Without a detailed analysis of the scene and a thorough understanding of the sensor's limitations, the histogram becomes more of an obstacle than an aid.

A further widespread misunderstanding concerns ISO sensitivity. Many photographers believe that digital cameras have an intrinsic sensitivity similar to that of analog film. In reality, the ISO parameter in digital cameras is a simulation that translates the sensor signals into terms understandable to the photographer. Changing the ISO does not affect the sensor's sensitivity to light, but it acts on the electronic amplification of signals, which can result in loss of dynamic range and increased noise.
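As a back-of-the-envelope illustration of this point, here is a Python sketch of ISO as pure gain. Every number in it (full-well capacity, base ISO) is an assumption made up for the sketch, not data from any real camera:

```python
import math

# Illustrative model of ISO as signal amplification; all constants are
# assumptions for this sketch, not measurements from any real camera.
FULL_WELL = 60000   # electrons one photosite can hold (assumed)
BASE_ISO = 100      # gain that maps a full well to raw clipping (assumed)

def electrons_at_clipping(iso):
    """Electron count at which the raw value saturates for a given ISO.
    Doubling ISO doubles the gain, so clipping is reached with half the
    light: one stop of highlight headroom is lost per ISO doubling."""
    gain = iso / BASE_ISO
    return FULL_WELL / gain

for iso in (100, 200, 400, 800):
    e = electrons_at_clipping(iso)
    lost = math.log2(FULL_WELL / e)
    print(f"ISO {iso:4d}: clips at {e:7.0f} e-, {lost:.0f} stop(s) less headroom")
```

The photosites collect exactly the same light at every setting; only the mapping of electrons to raw values changes, which is why raising ISO trades highlight headroom for apparent brightness.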

The idea that moving as much information as possible to the bright areas of the histogram guarantees higher quality is just as misleading. Digital sensors record data linearly, and light tones actually require more information than dark tones to represent shades consistently. However, insisting on assigning too much data to bright areas can compromise overall image quality, causing irreversible clipping in highlights and loss of critical detail in shadows.
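The linear allocation of levels described above can be made concrete with a short Python sketch, assuming a 14-bit raw file:

```python
# Sketch of how a linear 14-bit raw file distributes its discrete levels:
# each stop down from clipping holds half as many values as the stop above.
BITS = 14

def levels_in_stop(stop_below_clipping):
    """Discrete raw values inside a given stop (1 = brightest stop)."""
    total = 2 ** BITS                                 # 16384 levels overall
    top = total // (2 ** (stop_below_clipping - 1))
    bottom = total // (2 ** stop_below_clipping)
    return top - bottom

for stop in range(1, 7):
    print(f"stop {stop} below clipping: {levels_in_stop(stop):5d} levels")
```

Half of all the raw values sit in the single brightest stop, which is exactly why light tones need, and receive, far more information than dark tones.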

Beyond this, the histogram does not show real data, so it does not let us verify clipping against the color profile that will be used in the development phase. Another major problem goes undetected: tones tend to twist when a compressed range of values is stretched back to a wider progression, meaning the colors in dark areas carry fewer distinct variations than those in light areas, which inevitably produces chromatic differences and, in some cases, posterization. This, in short, is why there is less information in the dark areas: less is needed!

In addition to these limitations, other known variables come into play: the RAW file has linear contrast that is converted into a logarithmic or gamma curve during development, and even when color twist is kept under control, this conversion cannot guarantee maximum saturation throughout the development process.
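For reference, the linear-to-gamma step can be sketched with the standard sRGB transfer function; an actual raw developer applies a profile-specific tone curve, so this is only an illustration of the principle:

```python
# Sketch of the linear-to-gamma step using the standard sRGB transfer
# function; a real raw developer applies a profile-specific tone curve,
# so this only illustrates the principle.
def srgb_encode(linear):
    """Map a linear value in [0, 1] to gamma-encoded sRGB in [0, 1]."""
    if linear <= 0.0031308:
        return 12.92 * linear
    return 1.055 * linear ** (1 / 2.4) - 0.055

# ~18% linear reflectance (middle grey) lands near 46% after encoding,
# which is why untouched linear raw data looks very dark on screen.
print(round(srgb_encode(0.18), 3))
```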

OK, this is a very simplified technical explanation, but I think for most people it is already complex enough, so let me summarize it in simple, understandable points:

- The sensor has no ISO sensitivity

- The RAW file contains only raw data and development preferences (metadata)

- The information provided by the camera (histogram, clipping, etc.) is not linked in any way to the RAW file

- The camera exposure has no correlation with the RAW file

- Varying the ISO on the camera does not increase sensitivity but corresponds to the amplification or contraction of the signal

- In short: your camera does not provide useful information for deciding how to place data in the histogram, because what it shows does not match the data you are actually storing.

But what if I connect the camera to the computer and work tethered, using the histogram of my development software?

In this case, something does improve, but since you are already in the development phase you must work on real data, not hypothetical data: you apply a profile and the related corrections to obtain a definitive image, ideally aware that the histogram is only an indication, not a measurement. In most cases, though, the software lacks some essential tools that would limit the error:

• The Waveform analyzes the distribution of brightness across the entire image, showing in detail any clipping areas and helping ensure correct exposure of the midtones.

• The Vectorscope focuses on chroma, highlighting the position of the hues and the saturation of the colors. It's essential for balancing colors, especially in workflows that require color accuracy.

• The RGB Parade separates the color channels (red, green, blue) to evaluate their distribution and balance, essential for achieving neutrality and tonal consistency.

In advanced workflows, such as log video, tools like the Waveform, Vectorscope and RGB Parade are in fact indispensable for analyzing and managing image data. These instruments provide a precise view of light and color distribution, going far beyond the capabilities of the histogram. Thanks to them, a log recording can be checked by applying an appropriate conversion LUT and inspecting the entire spectrum of the resulting file, so as to establish the actual brightness gain that allows adequate recovery in post-production. This is just as essential when working in photography.
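For the curious, here is a conceptual Python sketch of what a waveform monitor computes; a real scope renders this as a density plot per column, and the tiny two-by-two "frame" below is invented data purely for illustration:

```python
# Conceptual sketch of a waveform monitor's computation (a real scope
# renders this as a density plot); the 2x2 frame is invented data.
def waveform(image):
    """image: rows of (r, g, b) tuples in 0..255. Returns, per column,
    the sorted Rec.709 luma values found in that column."""
    cols = len(image[0])
    out = []
    for x in range(cols):
        lumas = [0.2126 * r + 0.7152 * g + 0.0722 * b
                 for (r, g, b) in (row[x] for row in image)]
        out.append(sorted(round(v) for v in lumas))
    return out

frame = [[(255, 255, 255), (18, 18, 18)],
         [(255, 250, 250), (20, 20, 20)]]
# Column 0 sits at or near clipping, column 1 deep in the shadows.
print(waveform(frame))
```

Unlike the histogram, this keeps the spatial dimension: you can see *where* in the frame the clipped or crushed values live, not just that they exist.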

But let's now see why ETTR cannot work in modern photography and, honestly, didn't work back then either:

Tone curves compared: TheSpack Photo Standard • Adobe Color • TheSpack Linear

Each color profile corresponds to a Tone Curve that allows linear data to be transformed into data with a specific range. This curve also affects the average gray point and, consequently, the average exposure value. In addition to this, it also acts on the values of dark tones and light tones and can consequently produce alterations during development. The tonal curve of the profile is therefore directly responsible not only for the correct exposure but also for the relative densities of the sensor data. This means that the incidence of the profile curve is higher than any creative shooting intervention since it intervenes directly on the distribution of image data.
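How much a profile's tone curve can move the rendered density is easy to show with a minimal Python sketch. Both curve shapes below are assumed purely for illustration; real DCP profile curves are more complex:

```python
# Two hypothetical tone curves (shapes assumed purely for illustration;
# real DCP profile curves are more complex) applied to the same linear
# middle grey: the profile, not the exposure, fixes the rendered density.
def curve_soft(x):
    return x ** (1 / 2.2)      # gentle gamma-like curve (assumed)

def curve_contrasty(x):
    return x ** (1 / 1.8)      # stronger curve (assumed)

mid_grey = 0.18                # identical raw value in both cases
print(f"soft curve:      {curve_soft(mid_grey):.3f}")
print(f"contrasty curve: {curve_contrasty(mid_grey):.3f}")
```

The same captured value renders to visibly different greys, which is the sense in which the profile's curve outweighs most creative interventions made at shooting time.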

Therefore, to think that an empirical rule disconnected from any conscientious application is well-founded is already implausible. Add to this that, in high-end photography, exposure is measured with external exposure meters capable of taking average and spot readings and, with the most modern products, of developing an adequate sensor response curve able to return the full dynamics of each individual device. This curve should be developed against the base DCP profile used in the development phase, in order to compensate the exposure and to carry out the correct N+X and N-X calibrations for the image.

Starting from the left, a comparison of the same file developed with an Adobe profile at +2.00 EV • +1.00 EV • 0.00 EV • -1.00 EV • -2.00 EV, with the related compensation.

Every exposure variation involves a change in the color rendering of the image and an alteration of colors in both highlights and shadows. As a rule, under standard shooting conditions the sensor, regardless of the color profile used, does not tolerate deviations from the correct exposure. Increasing exposure, and then recovering it during development, increases the color twist in the highlights and eventually produces clipping; overexposing in beauty photography is therefore outside any logical context, and in many other cases it can be even more harmful. Reducing exposure, on the other hand, increases the noise generated in the shadows during development; underexposing during an event can produce shadows so dense that, once lifted, they generate noise unsuitable for the intended use of the image.

Any alteration of exposure results in an alteration of the image and, consequently, lower color fidelity. The obvious example is the recovery of skies or deep shadows: operating well outside the correct exposure range, above +2.00 EV we get a chromatic gap that desaturates light colors and shifts their hue; conversely, at -2.00 EV we get an increase in noise in dark tones strong enough to make the disturbance evident and to generate spurious texture and grain. What is not said, however, is that overcoming noise in the shadows is easier than solving clipping in the highlights. Using a cinematographic light, built for the shadows, and maintaining a very pronounced relative contrast in them, we can recover the data without disturbance, because the relative contrast of the light source allows a recovery of more EV than diffused light does in the brightest areas.

Ultimately, using a low-power flash to lighten shadows, while underexposing them at capture, paradoxically reduces noise and extends the usable tonal range of the sensor more than overexposing and then pulling back in development. Photography is based on complex, articulated technical elements that for years have distinguished professionals; simplifying the laws of physics does not solve the problem, it sharpens it.
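The trade-off just described can be sketched numerically. The model below is deliberately crude (photon shot noise plus a fixed read noise, with every value assumed for the sketch), but it shows why adding light at capture beats lifting shadows in development:

```python
import math

# Crude noise model (photon shot noise + fixed read noise; all numbers
# are assumptions for this sketch) comparing a shadow lifted in
# development with the same shadow filled by a low-power flash at capture.
READ_NOISE = 3.0               # electrons RMS (assumed)

def snr(electrons):
    """Signal-to-noise ratio at capture; development gain scales signal
    and noise together, so lifting shadows later cannot improve it."""
    return electrons / math.sqrt(electrons + READ_NOISE ** 2)

shadow = 100                   # electrons in a deep shadow (assumed)
with_fill = shadow * 4         # +2 EV of fill light added at capture

print(f"pushed in development: SNR = {snr(shadow):.1f}")
print(f"fill flash at capture: SNR = {snr(with_fill):.1f}")
```

Pushing in development multiplies signal and noise by the same factor, while fill light adds real photons, which is the author's point about the flash.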

Ah, do you want to know the other story? Or rather, the other stories?

The shutter speed is 1/(focal length)... You've heard it before, haven't you? To take sharp photos handheld, the advice goes, use a minimum shutter speed equal to the reciprocal of the focal length, so with a 600mm you must use at least 1/600s. In digital photography this is not the case; quite the contrary. With a stabilized body and lens you can use longer times and still avoid micro-blur, but bear in mind that full-frame sensors above 40 megapixels require a much shorter shutter speed, at least one if not two stops. Other factors also influence micro-movement: the solidity of the shooting position, how the lens is held, and the environment. Therefore, if you can give maximum stability to the lens, the ideal shutter speed to avoid micro-blur is 1/(focal length × 3).
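The two rules compared in this paragraph reduce to a one-line calculation each; the sketch below simply restates them in Python, with no camera specifics assumed:

```python
# The two handheld shutter-speed rules from the text as one-liners.
def classic_rule(focal_mm):
    """Traditional minimum handheld shutter speed: 1/(focal length)."""
    return 1 / focal_mm

def high_res_rule(focal_mm):
    """Stricter guidance from the text for >40 MP full-frame sensors,
    given maximum stability: 1/(focal length x 3)."""
    return 1 / (focal_mm * 3)

focal = 600
print(f"classic:  1/{focal} s")
print(f"stricter: 1/{focal * 3} s")
```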

You must always follow the rule of thirds. Another triviality, along with the golden ratio and the golden spiral. Composition is not based on a single technical dictate but results from an analysis of the elements of the scene and of the solids and voids they form. It can be a framing, a Phi grid, a golden spiral, or it can arise from combinations of these elements or from the diagonals of the frame. All elements must be balanced to build an image in which solids and voids serve the final dynamics of the scene.

Always use low ISO! False: in digital photography, as mentioned, there are no low or high ISOs; they are all sensor settings. There is only one correct ISO value, the native value of the sensor, i.e. the value at which the sensor returns its best performance and stability. All other values set in camera are calibrated from the nominal value and therefore introduce basic errors into the sensor response.

The f/8.0 rule for maximum lens quality. Wrong: each lens has a different optimal aperture, which may be above or below f/8.0. Moreover, each lens is designed for different types of images, so it is not necessarily optimized at wider or narrower apertures.

There are many myths, and it would be worth exploring them all; for now we have only delved into the most harmful ones, without getting too technical.

Leica Q (Typ 116) - Profile comparison

Sometimes a picture is worth a thousand words. On the left, a photograph taken in an environment with obvious lighting complexities, developed with the Adobe Color profile; on the right, the same image with the TheSpack profile. For this comparison, second-generation profiles were used, optimized in 2021, so still far from subsequent progress. This image is particularly critical because of a nuance in saturation which, if not properly normalized, generates irregularities. Often, the result obtained with the Adobe profile leads to a negative judgment on the quality of the file and of the camera itself. While using a similar tonal curve for contrast, the TheSpack profile produced a much better result: greater chromatic consistency, more extensive detail and better legibility in all areas of the image. The noise and granularity evident with Adobe have been reduced thanks to the structure of the TheSpack profile, designed to correctly balance the output channels. This limit of the Adobe profiles often causes a drop in quality that is wrongly attributed to the technical medium. The better detail, superior tonal rendering and absence of irregularities are not the result of post-production corrections, but of a carefully studied and developed color profile.

Panasonic S1R - Imperceptible defects

We are often used to looking at an image as a whole, losing sight of the detail that defines it. This reflection might seem out of place, considering that photography is based on visual perception, on the impact that a subject, the light, the interpretation and the dynamics of a scene transmit to us. It would therefore be natural not to focus on details. And yet here comes a great paradox: we invest in expensive lenses, glorifying their performance; we try to correct aberrations, chase resolution, apply textures and contrast masks to emphasize detail; and yet we often forget one fundamental element, the color profile, which can destroy all this work.

Look now at the enlarged detail of a photograph developed with the Adobe Color profile and the same image with TheSpack. The choice of how to build a color profile, which parameters to consider and how to optimize the rendering of a sensor inevitably has consequences for the final quality of the image. It can even frustrate the work of the engineers and designers who created the highest-quality optics. In the image developed with the Adobe Color profile, the light of a neon sign is dispersed, leaving an obvious halo around the light source. This phenomenon reduces contrast in the highlights, compromising texture and detail and altering the overall quality of the photo. A small defect that nevertheless has a heavy impact on the performance of the lenses, and it manifests throughout the image, regardless of the lighting conditions.

Obviously, this consideration stems from the fact that a color profile can be generated taking different parameters into account, including those that determine the variation of hue and saturation as brightness changes. For this reason, we have chosen to segment our system to make it effective in a wide range of situations, implementing specific solutions for each individual camera so as to obtain impeccable results regardless of the shooting conditions. This approach allows us to guarantee consistent, accurate color rendering, minimizing the deviations that can compromise image quality.
