Colorimetry and the Leica M9
The human visual system is extremely efficient at color perception, but it is also very flexible and easily fooled. Any color can be characterized by three quantities: brightness, hue and saturation. The physical basis of color is that different wavelengths produce different hues. The colors we see depend on the spectral composition of the illuminant, the nature of the object (which wavelengths are absorbed and which are reflected) and the physiological and psychological make-up of the viewer. This combination accounts for the widely diverging opinions on the color appreciation and evaluation of a given scene. Two observers of the same picture (nowadays almost invariably viewed on a computer screen) may see different skin colors in the same face, and one may conclude that the reproduction of the skin is wrong. That conclusion is a psychological one and depends on a wide range of variables, the specific illuminant being only one of them. Even the viewer's fatigue, color memory and the specific spectral sensitivity of the retina, as well as the spectral balance of the computer system, influence the color perception.
Given this wide variety of conditions, it is logical that color scientists have established standards for every component in the reproduction chain. It would be convenient if wavelength could be related directly to color sensation, but that is not the case. The CIE (or ICI) commission first defined a standard observer with fixed primary colors (blue, green and red with wavelengths of 435.8, 546.1 and 700 nm) and fixed intensities in lumens. Secondly, there have to be standard illuminants with a well-described spectral composition. The CIE has defined three standards: A, B and C, representing tungsten light, direct sunlight and overcast sky.
The standard illuminants are derived from a blackbody radiator whose spectral power distribution is known and can be expressed as a color temperature in kelvins.
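As an aside, this relation can be made concrete: Planck's law gives the spectral power distribution of a blackbody directly from its temperature. Below is a minimal sketch in Python (my illustration, not part of the original chain of standards); illuminant A corresponds to a Planckian radiator at about 2856 K.

```python
import math

# Physical constants (SI units)
H = 6.62607015e-34   # Planck constant, J*s
C = 2.99792458e8     # speed of light, m/s
K_B = 1.380649e-23   # Boltzmann constant, J/K

def planck_spd(wavelength_nm: float, temp_k: float) -> float:
    """Spectral radiance of a blackbody radiator (Planck's law)
    for a wavelength in nanometres and a temperature in kelvins."""
    lam = wavelength_nm * 1e-9  # nanometres to metres
    return (2.0 * H * C**2 / lam**5) / (math.exp(H * C / (lam * K_B * temp_k)) - 1.0)

# Relative spectrum of illuminant A (tungsten, ~2856 K) at the CIE primaries
for nm in (435.8, 546.1, 700.0):
    print(f"{nm:6.1f} nm: {planck_spd(nm, 2856):.3e}")
```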
It is an established fact that three well-chosen primary colors can generate almost any other color. Two popular systems are the additive system (RGB), used in solid-state imaging, and the subtractive system (cyan, magenta, yellow), used in silver halide color photography.
It is simple to create a triangle with red, green and blue as its corner points. On the green-red line we find all mixtures of red and green from yellow-green to orange, on the red-blue line we find all shades of purple and on the green-blue line there are all blue-green hues. This triangle is called the RGB Chromaticity Diagram.
In silver halide technology the situation is quite simple: we know the illuminants, the chromaticity diagram and the chemical behavior of the color dyes in the emulsion layers. With this information we can create films with the required color representation. There are two options here: accurate or pleasing representation, and most manufacturers go for the second option.
If the light source is not one of the standard sources, which is most often the case, we can use CC filters to change the color temperature. Using a color temperature meter we can find the exact difference in kelvins and select the appropriate filter (in fact the meter will tell us which CC filter to use). The u-v diagram shows the chromaticity of the full radiators (the Planckian locus) as a black line, and it is constructed such that the distance from a color to that line can be used to estimate (calculate) the required correction.
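Filter arithmetic is usually done in mireds (one million divided by the kelvin value), because equal mired shifts correspond to roughly equal visual differences. A small sketch of the calculation (the filter interpretation in the comment is my illustration):

```python
def mired(temp_k: float) -> float:
    """Convert a color temperature in kelvins to mireds (10^6 / K)."""
    return 1e6 / temp_k

def required_shift(source_k: float, target_k: float) -> float:
    """Mired shift a conversion filter must provide to move the
    source illuminant to the target illuminant."""
    return mired(target_k) - mired(source_k)

# Example: tungsten light (3200 K) corrected towards daylight (5500 K)
shift = required_shift(3200, 5500)
print(f"Required shift: {shift:+.0f} mired")  # about -131 mired,
# i.e. a bluish conversion filter (roughly the classic 80A)
```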
The need to work with exact numerical values arose when color television was introduced. A color television system has color filters and light sources at both the transmitter and the receiver, and they must match over the whole chain. So numerical specifications of a color that define hue and saturation are required (brightness is left out because this simplifies the calculations). These specifications are based on the chromaticity diagram and the standard observer.
The spectral points that have been chosen imply that not all spectral colors can be represented (see the shaded part in the picture below). The triangle is a two-dimensional construct, and every point can be found and calculated with simple algebra and, where necessary, with matrix operations.
Given the coordinates and the reference points, one can define a new set of coordinate axes and transform from one system to another. The RGB diagram cannot handle all colors, so a new one has been defined: the XYZ chromaticity diagram. This one can represent all spectral colors but has one disadvantage: a color is described as a pair of numbers, say 0.3 and 0.7, which gives no clue as to which wavelength is represented (in this case about 550 nm). Therefore yet another system is used, one based on dominant wavelength, purity and luminosity. The advantage of this system is that it can be transformed into two signals, chroma and luminance, and this is the basis of color processing in digital cameras.
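To make the pair-of-numbers description concrete: the x,y chromaticity coordinates are simply the XYZ tristimulus values normalized by their sum, so that brightness drops out and only hue and saturation remain. A minimal sketch:

```python
def xy_chromaticity(X: float, Y: float, Z: float) -> tuple[float, float]:
    """Project XYZ tristimulus values onto the 2-D chromaticity diagram:
    x = X / (X + Y + Z), y = Y / (X + Y + Z). Brightness is divided out."""
    total = X + Y + Z
    return X / total, Y / total

# A saturated green near the spectral locus around 550 nm
print(xy_chromaticity(0.3, 0.7, 0.0))  # -> (0.3, 0.7)
```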
It is in fact amazing how much television and video technology is implanted in the digital photography processes and techniques.
Color information must be coded and normalized if we want to ensure that we get the colors we want. Basically we choose the primary colors (coordinates in the XY system) and the white point. The white point is the more difficult part: in the diagram it is called white C, but for the recording system we use white C at maximum luminance (if the luminance is less than maximum we have a grey shade). Finally we need to normalize the luminance signal. This approach assumes that the transmission system is linear. It is not, and we need yet another correction in the system: the gamma correction, which influences saturation and sharpness.
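A gamma correction is, in its simplest form, a power-law transfer between linear light and the coded signal. A minimal sketch, assuming a plain power law (real standards such as sRGB add a small linear segment near black):

```python
def gamma_encode(linear: float, gamma: float = 2.2) -> float:
    """Encode a linear light value (0..1) with a power-law transfer."""
    return linear ** (1.0 / gamma)

def gamma_decode(encoded: float, gamma: float = 2.2) -> float:
    """Recover the linear light value from the encoded (0..1) value."""
    return encoded ** gamma

# A mid-grey of 18% reflectance is coded well above 0.18
print(f"linear 0.18 -> encoded {gamma_encode(0.18):.3f}")  # about 0.459
```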
The above description is valid for television systems, but needs some additional information to apply to digital camera systems.
The M9 uses a CCD sensor with a Bayer pattern color filter array (CFA). Each pixel records only a luminance value for the particular color of the CFA pattern at that location, and a full color image requires a method of color interpolation.
Stage 1: The raw output from the sensor is a mosaic of red, green and blue pixels with different luminosities/intensities. An RGB image requires that every pixel has numeric values for red, green and blue, so that the color can be located exactly with coordinates in the chromaticity diagram.
Stage 2: Interpolation: The camera employs specialized demosaicing algorithms to calculate the missing values for every pixel. The basic idea is to use a 2 x 2 array as one full-color spot (see the sketch after the comparison below). This works fine, but has a major drawback: the resolution in the horizontal and vertical directions is halved! One could use overlapping 2 x 2 arrays to extract more image information. The number of algorithms is quite large and every one has its own balance of qualities. But an algorithm can be tricked: fine image detail near the resolution limit can produce moiré artifacts in several patterns, depending on the original texture and the software used to develop the raw image.
Below: a comparison between the scene and the Bayer representation
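A toy version of the 2 x 2 approach in Python with NumPy, assuming an RGGB filter layout (a deliberately naive sketch; the M9's actual layout and Leica's algorithms are more refined):

```python
import numpy as np

def superpixel_demosaic(raw: np.ndarray) -> np.ndarray:
    """Naive demosaic: treat each 2x2 RGGB cell as one full-color pixel.
    The output resolution is halved in both directions, as noted above."""
    r  = raw[0::2, 0::2]            # red sites
    g1 = raw[0::2, 1::2]            # first green site
    g2 = raw[1::2, 0::2]            # second green site
    b  = raw[1::2, 1::2]            # blue sites
    g  = (g1 + g2) / 2.0            # average the two green samples
    return np.dstack([r, g, b])     # (H/2, W/2, 3) RGB image

# Toy 4 x 4 mosaic becomes a 2 x 2 full-color image
mosaic = np.arange(16, dtype=float).reshape(4, 4)
print(superpixel_demosaic(mosaic).shape)  # (2, 2, 3)
```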
Stage 3: AWB: the color values after interpolation must be adjusted so that they reproduce natural-looking images that please the human visual system. The image is adjusted as if it were taken under a canonical (reference) illumination, often daylight; the automatic white balance does this. But the AWB performance and efficiency depend on the quality of the demosaicing algorithm (DA), and vice versa: the final image will differ depending on whether we first do AWB and then DA or the other way around. The AWB values are adjustments of the blue and red channels in the image (green is always set to 1). These parameters are referred to as gains: changing the gain of the red and blue channels changes the white balance.
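As one concrete, deliberately simple example of how such gains can be estimated, the classic gray-world method assumes the scene averages to neutral and scales the red and blue means to the green mean. This is an illustration only, not the M9's actual algorithm:

```python
import numpy as np

def gray_world_gains(rgb: np.ndarray) -> tuple[float, float]:
    """Estimate red and blue gains under the gray-world assumption:
    the scene averages to grey, so all channel means should match green."""
    r_mean, g_mean, b_mean = rgb.reshape(-1, 3).mean(axis=0)
    return g_mean / r_mean, g_mean / b_mean   # green gain is fixed at 1

def apply_gains(rgb: np.ndarray, r_gain: float, b_gain: float) -> np.ndarray:
    """Apply white-balance gains to the red and blue channels."""
    out = rgb.astype(float).copy()
    out[..., 0] *= r_gain
    out[..., 2] *= b_gain
    return out
```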
Stage 4: Color rendering: the color signal values processed in the previous stages must be mapped onto a defined color space to acquire physical meaning. This color space (Farbraum in German) is crucial for the understanding of color management. The mapping is constructed as a 3 x 3 matrix whose numerical values are chosen by the manufacturer. There are two options: scene-referred image data and output-referred image data. The first tries to reproduce the original colors (tristimulus values) of the scene; the second tries to produce user-preferred colors. Most cameras, including the M9, use the second approach: the color space is specific to the Leica M9 and designed by the engineers to combine the sensor and filter characteristics with their idea of what a good color picture should be. The usual color test, finding color value differences between a test target (Macbeth chart) and the digital image, therefore has only limited value, as the color spaces often contain deliberate changes to the color reproduction.
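The mapping itself is just a matrix multiplication per pixel. A sketch with a made-up rendering matrix (the values are hypothetical; the real matrix is the manufacturer's choice and is tuned to the sensor and filters):

```python
import numpy as np

# Hypothetical 3x3 rendering matrix; each row sums to 1 so that
# neutral greys stay neutral after the mapping
CAMERA_TO_OUTPUT = np.array([
    [ 1.20, -0.15, -0.05],
    [-0.10,  1.25, -0.15],
    [ 0.02, -0.22,  1.20],
])

def render_colors(rgb: np.ndarray, matrix: np.ndarray = CAMERA_TO_OUTPUT) -> np.ndarray:
    """Map white-balanced camera RGB into the output color space by
    multiplying every pixel (a row vector) with the 3x3 matrix."""
    return rgb @ matrix.T   # (H, W, 3) @ (3, 3) -> (H, W, 3)
```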
An additional complication is the exposure setting of the camera: an exposure that is optimal for raw processing is too dark for normal processing.
Stage 5: the pre-processed raw data are stored in a file that can be read by a raw developer program. A simple but very effective one is dcraw; more common ones are Lightroom, Aperture and so on. Leica stores the data in the DNG format.
Below you see a portion of the EXIF data as stored in the DNG file.
Color Matrix 1 (3 x 3, row by row):
 0.856  -0.2034 -0.0066
-0.424   1.36    0.292
-0.074   0.247   0.898
Color Matrix 2 (3 x 3, row by row):
 0.626  -0.1019 -0.047
-0.373   1.145   0.193
-0.1409  0.295   0.621
Camera Calibration 1: identity matrix (1 0 0 / 0 1 0 / 0 0 1)
Camera Calibration 2: identity matrix (1 0 0 / 0 1 0 / 0 0 1)
As Shot Neutral: 0.4599663111 1 0.7648926237
Baseline Exposure: -0.5
The tag “AsShotNeutral” holds the camera values of a neutral grey; the reciprocals of its red and blue entries are the gain values for the red and blue components (green is normalized to 1).
Above you see the values for the Automatic setting in the M9. Below you see the values for a manually chosen Kelvin setting.
As Shot Neutral: 0.4614301405 1 0.7568366593
The values of the color matrix and as-shot-neutral tags are used by the raw programs to calculate the final colors. Every program has its own interpretation and calculation, and results will differ.
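A minimal sketch of how a raw developer can use these two tags, following the DNG convention that the color matrix maps XYZ to camera space (so XYZ is recovered through the inverse) and that the gains are the reciprocals of the as-shot-neutral values; the simplifications are mine:

```python
import numpy as np

# Values copied from the M9 DNG shown above
AS_SHOT_NEUTRAL = np.array([0.4599663111, 1.0, 0.7648926237])
COLOR_MATRIX_1 = np.array([              # XYZ -> camera, row by row
    [ 0.856,  -0.2034, -0.0066],
    [-0.424,   1.36,    0.292 ],
    [-0.074,   0.247,   0.898 ],
])

# White-balance gains: reciprocal of the neutral, green stays at 1
gains = 1.0 / AS_SHOT_NEUTRAL
print(gains)   # roughly [2.174, 1.0, 1.307]

def camera_to_xyz(cam_rgb: np.ndarray) -> np.ndarray:
    """Map white-balanced camera RGB (row vectors) to XYZ by inverting
    the color matrix. Real converters also interpolate between
    ColorMatrix1 and ColorMatrix2 by illuminant and render further."""
    return cam_rgb @ np.linalg.inv(COLOR_MATRIX_1).T
```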
I made a simple test with a Kodak Grey Card and a Sekonic color meter. The meter measured the ambient light at 5950 K. I set the camera to the full range of options for white balance, including the Kelvin values.
Below you find a table of the white balance settings of the camera in kelvins and the results as calculated by some popular programs. Note that RAW Developer, based on dcraw, produced the most accurate results.
With every program it is possible to change the Kelvin values and therefore the white balance. We now know that the Kelvin values are not registered in the EXIF file and that the internal processes of the camera change the gain values when you select different white balance options.
The basic question now is what sense it makes to change the white balance settings when taking the picture, as the camera records only the gain values and the raw programs apply their own (undisclosed) interpretations.
For the moment we can conclude that the camera settings give approximate values, and that the raw programs apply their own interpretations of the internal color space and the gain settings.
More in the next article.