A wide range of topics is discussed here. Take your time to roam through this information.
M3 rangefinder issue (October 25, 2003)
I noted in my FAQ that the rangefinder of the M3 will get a yellow tint at a certain age.
Here is my quote:
"The optical construction of the M3 range/viewfinder is different from that of all successor models. It is more elaborate to build, more sensitive to shocks and abuses and it will get a yellow-tinted color cast when aging."
A reader noted that on another website a different explanation has been given. The quote is below:
"Another difference, one that is not immediately apparent, is that newer Ms have aluminized (or platinized) beamsplitters. The M3 finder, which has a gold-coated beamsplitter, looks like it has a bluish tint, with a gold rangefinder spot. This is intentional, and designed to increase contrast. The newer finders are brighter, but do not have the same snap."
Which explanation is the correct one?
Mine is and here is why:
The beamsplitter (in fact both prism faces) is joined in the M3 with Canada balsam, which ages over time and turns yellow, eventually with quite a strong yellowish tint, so much so that it becomes difficult to see both rangefinder images for alignment. It is true that the beamsplitter face in the M3 was a bit darker than in the M6. But the M6 no longer uses Canada balsam to cement both parts together and has a silvered prism side. The story about the gold plating is, as so often in the Leica world, a myth to boost the fame of the M3. It is plain Canada balsam, which will get a yellowish colour over time.
Finding a clean M3 becomes ever more difficult, as all M3 cameras show the effects of the ageing of the Canada balsam. It is also hardly possible to dismantle the prism and apply new balsam.
The Nokton 1.2/35mm Aspherical review (October 25, 2003)
This review has been the subject of some discussion on various newsgroups. As happens with every review I write, there is the standard remark by some persons that my report is biased in the extreme and not worth reading. My first reaction would be: I have been writing reports on my website since 1997, and the same comment has been heard from that very moment and from the same persons. My loyal critics will always comment that I am biased, whatever I write.
My second reaction would be a bit more serious. If I am biased, I must get my facts wrong: being biased means that you press the facts into a preconceived frame and are not able to see them with a fresh and balanced eye.
When doing the Nokton test, I borrowed the lens from a fellow reviewer, who writes (as I do) for the Dutch magazine "Camera Magazine". He is a Pentax man (he wrote a book about these cameras) and does not like Leica cameras. We are in friendly competition, but respect each other, and we decided to share the test results. He did the Nokton test in his usual way and I did mine in my way. We both made practical tests on slide film (slow speed and high speed). This was done independently. Then we shared results and compared notes. His verdict was remarkably close to mine. I had more measurements than he had, but his conclusions fitted well with my facts. His report in the Dutch magazine has the same conclusions as I describe in my review.
I did this on purpose. It is always preferable to exchange findings and compare conclusions to see where you may have been wrong or biased. And whenever possible I do share my results with others before I write my own review.
Why do I not present pictures to document my conclusions? For the same reason that Geoffrey Crawley of BJP fame, CdI and others do not present pictures: in this case a picture is mostly misleading and prone to erroneous interpretations. There are situations where numbers and words do a better job than pictures.
Perspective, focal length, depth of field and digital sensors
There seems to be a lack of insight into the relationship between perspective and focal length. The Digital Back for the R8/9 series is now on sale and its sensor area creates a shift in effective focal length by a factor of 1.37. The sensor area has the size of the APS film format. A 90mm lens, when used with the Digital Back, operates as a lens with a focal length of 123.3mm. But is it a 123mm lens in perspective and viewing angle?
Perspective and viewing field
The situation is not new. Rolleiflex users had the possibility to change from roll film to the miniature format long ago. The focal length of a lens depends on the size of the image and the field of view. We could dispense with some confusion if we worked with the field of view as the parameter, but the focal length is now the customary figure. The size of the image detector (or image size) limits the field of view of the system. When the size of the detector becomes smaller, so does the field of view of the system, and this is independent of the lens in front of the sensor. The reduction of the field of view is identical to enlarging only the centre portion of your miniature negative.
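To make this concrete, here is a minimal sketch in Python; the 1.37 factor and the 90mm example are taken from the text above, and the function is simply the standard diagonal angle-of-view formula:

    import math

    def angle_of_view(focal_length_mm, sensor_diagonal_mm):
        """Diagonal angle of view in degrees for a lens focused at infinity."""
        return 2 * math.degrees(math.atan(sensor_diagonal_mm / (2 * focal_length_mm)))

    full_frame_diagonal = math.hypot(24, 36)            # about 43.3mm
    digital_back_diagonal = full_frame_diagonal / 1.37  # about 31.6mm

    # The focal length of the lens does not change; only the recorded field shrinks.
    print(angle_of_view(90, full_frame_diagonal))    # about 27 degrees
    print(angle_of_view(90, digital_back_diagonal))  # about 20 degrees
    # A 123mm lens on the full frame covers the same field as 90mm on the small sensor:
    print(angle_of_view(90 * 1.37, full_frame_diagonal))  # about 20 degrees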
The visual perspective is not in any way related to the focal length of a lens. Perspective is governed only by the standpoint of the viewer (the person who looks at a scene). It is well known that two objects of the same size, but at different distances from the observer, will look larger or smaller depending only on the distance difference. This is the normal perspective, technically known as the central perspective. The relative size of the objects as reproduced on the negative or sensor area is decided fully by the choice of standpoint. The only influence of the focal length is on the size of the image, not on the relative sizes of the different parts of the image on the negative. Whether a lens covers a wide or small field determines how much of the scene we can capture, but has NO effect on the perspective. We can easily verify this by taking two pictures from the same standpoint, one with a 21mm lens and one with a 135mm lens. When we enlarge the negative taken with the 21mm lens to the same size as the one taken with the 135mm lens, we notice that the relative sizes of the objects are not changed by the use of the different focal length.
The second type of perspective is the telecentric perspective. Now objects of the same size appear on the detector with equal size, independent of their distances from the observer position. We can achieve this telecentric perspective by a virtual relocation of the normal central perspective point to an infinity position: the rays that in the normal case converge to a point are now refocused as parallel rays. The entocentric perspective is the same as the central perspective, but with the size relations changed to the opposite dimensions.
Let us briefly return to the topic of the 1.37 factor of the digital back. When using a 90mm lens with miniature film or with the digital back from the same position, there will be no change in perspective. The only visible effect is the larger size of the subject in the frame when using the digital back. If you want the identical size as with miniature film, you need to step back some distance to get the same object size.
Depth of field
Depth of field (DoF) is not caused by optical aberrations or diffraction phenomena. DoF is related only to the limited ability of the human eye to resolve small details. The DoF equations depend on the size of the circle of confusion (CoC), and this size is defined conventionally and with some arbitrariness. The size of the CoC depends on the size of the sensor or film area. For 35mm film (24x36mm) the CoC has a value of 0.033mm. For 18x24mm it is 0.025mm. If you look at a picture from a distance equal to the diagonal of the sensor area, the finest details subtend an angle at the eye that corresponds to the angular resolution of the eye. If you enlarge the picture, the viewing distance will have to be proportionally larger.
It is customary to define a lens with a focal length equal to the length of the diagonal as the normal or standard lens. When the size of the sensor area decreases, the DoF increases. If we compare the DoF of a Summicron 50mm on a negative area of 24x36mm with the lens of the Digilux-2 at the equivalent focal length of 50mm, corresponding to a physical focal length of 12.5mm (the diagonal of the sensor area is four times smaller), we discover that the DoF of the Digilux lens is four times larger at the same apertures. Or: we need to stop down the Summicron by four stops to get the same DoF.
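A short sketch illustrates the four-stop rule; it uses the standard thin-lens DoF formulas (an assumption on my part, the text only cites DoF tables) and scales the CoC with the sensor diagonal as described above:

    def dof_limits(focal_mm, f_number, distance_mm, coc_mm):
        """Near and far limits of sharpness from the standard thin-lens DoF formulas."""
        f2 = focal_mm ** 2
        x = f_number * coc_mm * (distance_mm - focal_mm)
        near = distance_mm * f2 / (f2 + x)
        far = distance_mm * f2 / (f2 - x) if f2 > x else float("inf")
        return near, far

    # Summicron 50mm on 24x36mm film (CoC 0.033mm), subject at 5 metres:
    print(dof_limits(50, 5.6, 5000, 0.033))
    # Digilux-2: physical focal length 12.5mm, CoC scaled down four times:
    print(dof_limits(12.5, 5.6, 5000, 0.033 / 4))
    # Stopping the 50mm down four stops (f/5.6 -> f/22) roughly matches
    # the small-sensor result (both now reach to infinity):
    print(dof_limits(50, 22, 5000, 0.033))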
In this case the diagonal of the sensor area, the physical focal length and the CoC size are connected. When we enlarge the picture from the 35mm negative and from the small Digilux sensor to A4 format, we need a bigger enlargement factor for the Digilux picture, which means that the advantage of the large DoF is a bit smaller. Still, when comparing pictures made with the Digilux-2 and the M7, we see more DoF in the Digilux case. We cannot explain this by referring to the usual DoF equations alone; we may have to adjust these equations.
The Digital Back for the R8/9, and the announced M-digital camera.
For both systems the sensor area is smaller than the full-format 35mm negative. When you use lenses designed for the 35mm format on these cameras, there will be a magnification factor of the image of 1.37 or somewhat smaller. See above for the implications for perspective. The focal length of the Leica lens itself does not change. Because of the smaller sensor size, we simply cover a smaller part of the scene. The magnification does not change when we look at the subject on the sensor area itself. A smaller sensor, when everything else is the same, acts as a mask set before the 35mm negative area.

Imagine this: we have the R8/9 with film inside and the usual 24x36mm area, defined by the film gate. If we were able to reduce the film gate to a smaller area, we would have a smaller negative area, but not a larger size of the object. Look at the normal 35mm negative and tape off all four sides with tape that is 5mm wide. The resulting negative area will be 14x26mm, close to the sensor format of the Digital Back. But, and this is really important, the size of the scene on the negative has not changed in either situation. All we have done is reduce the size of the negative area. From this perspective, it is wrong to speak of a magnification factor of 1.37. We do not magnify the objects in the scene at the capturing stage: they are still the same size, but projected onto a smaller capture area.

What does happen is an apparent magnification, because we will enlarge the smaller negative/sensor area by a bigger enlargement factor compared to the 35mm negative. If we want an A4-size print (length of diagonal is 36cm) from both negative/sensor sizes, we need to enlarge the 35mm negative by a factor of 36/4.3 = 8.4 and the digital sensor by 36/3.13 = 11.5. The size of the objects in the scene is bigger because of the higher magnification at the printing stage, not at the capture stage!

How does this affect the DoF? Theoretically there should be no change: same focal length, same distance, same CoC, only a smaller sensor area. But experiments show that a smaller sensor area is accompanied by a larger DoF. On the other hand, a larger magnification ratio implies an increase in the size of the CoC. And last but not least, we have to put into the equation the eye as its own image processor.
To be specific: let us put a 50mm lens on the R9 with film inside. The focal length will be 50mm and the DoF as specified in the DoF tables, depending on distance, aperture and focal length. Let us say the distance is 5 meters and the aperture 5.6. Now switch to the Digital Back. The only change is the smaller negative/sensor area. Assume we make a picture of a person's face that will occupy a height of 10mm on the negative in the normal position. If the face is centered in the middle of the negative area, we have a blank space of 7mm above and 7mm below the face. When capturing the same scene with the sensor, the face is still 10mm in size, but now the blank space above and below the face is only 3mm on each side. Enlarging both pictures to A4 gives a face size of 85mm in the case of the negative area (35mm film) and of 115mm in the case of the sensor. The result is a larger face when we compare both A4 prints, and so an apparent magnification effect. Indeed, there is a magnification effect, but only because of the bigger enlargement at the printing stage. DoF would be smaller in the case of the bigger enlargement, because of the increased size of the CoC, but the size of the CoC in the case of the sensor capture is smaller. In the end, the DoF would be the same. We will have to wait for real-life comparisons to see the effects of the equation, where the eye and the CoC are the most important (subjective) ingredients.
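The print arithmetic of this example can be condensed into a few lines (a sketch using the rounded diagonals from the text):

    # Reproducing the enlargement arithmetic above (all lengths in mm).
    A4_DIAGONAL = 360          # diagonal of the print, as used in the text
    film_diagonal = 43         # 24x36mm negative, rounded as in the text (4.3cm)
    sensor_diagonal = 31.3     # DMR-sized sensor (3.13cm)

    film_factor = A4_DIAGONAL / film_diagonal      # about 8.4x
    sensor_factor = A4_DIAGONAL / sensor_diagonal  # about 11.5x

    face_on_capture = 10  # the face occupies 10mm on negative and sensor alike
    print(face_on_capture * film_factor)    # about 85mm on the print from film
    print(face_on_capture * sensor_factor)  # about 115mm on the print from the sensor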
We use the 1.4/50 ASPH on the announced M-digital with an assumed reduction factor of 1.37 for the sensor diagonal (the same as the DMR). The lens is still a 50mm with the same magnification as in the case with film. But we have to enlarge the capture area by a factor of 1.37 more to fill the same print area. The size of the objects on the print will then be 1.37 times larger, and the observer will assume that we used a lens with a focal length of 69mm.
The upshot is this:
When we use our normal Leica lenses (that is, lenses designed for the 35mm (24x36mm) negative format) on cameras with a capture area (sensor or film) smaller by a factor of 1.37 or 1.6, we will have to enlarge the capture area by a factor of 1.37 or 1.6 more than the full-frame negative area to fill the same print size. This additional enlargement at the printing stage produces a larger size of the subjects on the print and gives the illusion that we have used a lens with a longer focal length.
Zeiss and resolution and fairy tales (October 15, 2004)
In the Zeiss magazine 'Camera Lens News' issue 20 we found this amazing statement: "ZM-Objektive bilden auf Gigabitfilm Strukturen ab mit feinen Details von 400 Linienpaaren pro Millimeter!" Or in translation: "ZM lenses are able to reproduce on Gigabit film structural detail with a resolution of 400 linepairs per millimetre." ZM refers to the new line of lenses for the Cosina based Zeiss-Ikon rangefinder camera, introduced at Photokina 2004. This is a remarkable statement and most certainly untrue. Let us see why this claim belongs in the realm of the fairy tale.
The Gigabit film has been discussed widely on the internet. It is a repackaging of the Agfa Copex document film. Agfa itself has extensive documentation about the film. Agfa states that the film is capable of resolving about 600 lines/mm. Agfa tells you that this resolution can only be achieved when developing the film to a contrast index of 3.0, which in effect reduces the film to a line-copy film, suitable for the reproduction of black-on-white lettering (as is intended when making microfiches). This high contrast and the steep black/white gradient reduce the flare effect around image points and so can optimize resolution. Agfa also states that for continuous-tone reproduction the resolution figures are much lower, as can be expected. There is some confusion whether this value of 600 refers to lines (as separate bars) or to optical lines (a set of one black and one white line, or a linepair). If we calculate with linepairs, we get a linewidth of 0.0008mm! The Airy disc radius of 0.001mm needed to match this resolution requires an f/2 lens and a wavelength of 0.412 micrometer, in the far blue region. To be on the safe side, it would be better to interpret the Agfa value as 600 separate lines or 300 linepairs (or cycles/mm). The Gigabit marketing documentation states that a special developer formulation can bring 700+ lines with a low-contrast negative suitable for continuous-tone pictorial photography. On the assumption that we are talking cycles/mm here, we get a linewidth of 0.0007mm. This would be a physical miracle: to add, by chemical means, resolution beyond the inherent resolution the film emulsion is capable of would be a revolution in photographic chemistry. And now Zeiss even claims a resolution of 400 linepairs/mm or 800 lines/mm on film(!). Zeiss and Gigabit both claim that these resolution figures are realistic and have been documented with real photographs. I asked both companies for proof, but never got it. These claims cannot be realistic! If we look at the smallness of details on the film, we are talking about sizes of 0.001mm. The circle of confusion on film is normally taken as 0.03mm. The Copex film plus Zeiss lens would then be able to record details 30 times smaller.
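The linewidths quoted above follow from simple arithmetic; a short sketch of the calculations:

    def linewidth_mm(linepairs_per_mm):
        """Width of a single (black or white) line at a given spatial frequency."""
        return 1 / (2 * linepairs_per_mm)

    print(linewidth_mm(600))  # 600 read as linepairs/mm -> 0.00083mm per line
    print(linewidth_mm(300))  # the safer reading: 600 lines = 300 linepairs/mm
    print(linewidth_mm(700))  # the 700+ claim read as cycles/mm -> 0.0007mm

    # Rayleigh radius of an f/2 lens in far blue light (0.412 micrometer),
    # the order of magnitude needed to match such linewidths:
    print(1.22 * 0.000412 * 2)  # about 0.001mm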
The confusion about resolution
There are a number of concepts that should be clearly understood. In discussions and literature you may encounter lines per millimetre, line pairs per millimetre, cycles per mm and abbreviations like l/mm, lp/mm, cy/mm, lppmm. A line is a white or a black bar of some width and height. The well-known bar-line test chart of the USAF 1951 is a good example. Here we have a pattern of three black and two white lines. A bar plus adjacent space is an "optical line" or a "line-pair" or a "spatial cycle". Optical designers mix these terms freely, as they understand that a line always means a bar plus adjacent space (the bar and the space each take half the width of the cycle: the total width is that of one black and one white bar together). When they say that a lens can resolve 50 lines, line-pairs or cycles, they refer to the same phenomenon. The factual width of any line (black or white) is in this case 0.01mm. Fifty cycles/mm implies 100 separate lines, and every line is 0.01mm.
Resolution is expressed as spatial frequency in cycles per millimetre and is the measure for resolving power.
This is easy. But in a number of cases we are not sure that we are talking about the same thing. In television engineering a line is either a dark or a white line; resolution here is twice as high as in the photographic world. And it is also common practice to refer to the white bar as a line and to the black bar as a space. If we see the word 'line', we cannot be sure whether it refers to the white bar or to the optical line.
The classical measure of resolving power was the ability to separate closely spaced objects (stars). In an aberration-free lens the image of a point object (a star) is a bright central zone surrounded by annular, alternating dark and light rings of varying width. This pattern is called the Airy disc. The Rayleigh criterion tells you that for a two-point resolution problem (two closely spaced stars) the limit is reached at the radius R of the first dark ring, measured from the centre of the bright spot.
R= 1.22 x Lambda x Aperture.
The radius is not the diameter, which is calculated by
D= 2.44 x Lambda x Aperture.
The radius is the distance from the centre of the bright spot of one object to the first dark ring, where the centre of the bright spot of the second object is located. The resolving power of the lens is then the reciprocal of R, or
S = 1/R in cy/mm = linepairs/mm
The Rayleigh criterion is only valid for one specified wavelength and assumes an unobstructed pupil (no vignetting). To be on the safe side, it is often better to calculate with the D-equation, which will halve the value of the resolving power.
Let us assume that the claims are indeed true. Is this physically possible? We know that the theoretical limit of resolution is defined by the diffraction limit. A perfect lens, without any aberration, will reproduce a point source as a patch of light with a certain dimension, as defined by the Airy disc. I am sure that Zeiss is capable of producing excellent lenses, but I really question their ability to produce aberration-free lenses for the Zeiss-Ikon camera. Again, let us assume Zeiss can do it. What, then, are the theoretical limits? The size of the image spot, as limited by diffraction effects, depends on wavelength and aperture, nothing else. The wavelength is measured in micrometers and designated as Lambda. The aperture is denoted with K, and the formula then becomes as follows:
R= 1.22 x Lambda x Aperture.
The visible spectrum runs from blue to red (0.45 micrometer to 0.70 micrometer, with green in the middle at 0.55 micrometer). For easy calculation we use the green light in our equation. The apertures of the new ZM lenses run from f/2 to f/2.8. The results follow directly from the R-equation.
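A minimal sketch (assuming green light of 0.55 micrometer) computes the limiting resolving power for the relevant apertures; the small difference against the 526 cy/mm quoted below comes down to the exact wavelength chosen:

    WAVELENGTH_MM = 0.00055  # green light, 0.55 micrometer

    for aperture in (2.0, 2.8, 4.0):
        radius_mm = 1.22 * WAVELENGTH_MM * aperture  # Rayleigh radius R
        print(f"f/{aperture}: R = {radius_mm:.5f}mm, "
              f"limit = {1 / radius_mm:.0f} cy/mm")
    # f/2.0: about 745 cy/mm
    # f/2.8: about 532 cy/mm
    # f/4.0: about 373 cy/mm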
From these figures we can see that a diffraction-limited lens of aperture 2.8 has a limiting resolving power of 526 cy/mm = linepairs/mm. Using the R-equation assumes that the brightness difference between the white core and the dark ring is 26%. This brightness difference is good enough for telescopes and microscopes to differentiate between two small and very bright spots (stars).
These figures are in the same league as the claims of both Gigabit and Zeiss. But we assumed that the lens is perfect and that there is no loss of resolution in the imaging chain from lens to film. Both assumptions are wrong, as we all know too well!
Based on the purely theoretical calculations for aberration-free lenses, the Zeiss/Gigabit claims seem to be right. But both transfer these theoretical values into practical photography without any reduction.
The D-equation would be far more realistic for the wider apertures and the wide angle of view of the ZM lenses, and then the Zeiss/Gigabit claims are too high, even in this theoretical exercise.
MTF based resolution
For extended objects (as is the case with photographic motifs), the Rayleigh criterion based on the radius is not the best approach. Extended objects have spots of varying size and varying brightness. In this case the light distribution in an extended image is derived mathematically by the two-dimensional convolution of the object with the point spread function. Here we calculate the Fourier transform to get the OTF, from which we derive the MTF. We have to distinguish two cases:
Diffraction-limited MTF (without aberrations) and geometrical MTF (with aberrations). Values from the first type of MTF are almost identical to what you get when using the R-equation. Values of the second type are much lower and are probably the best representation of real-world lenses in practical use.
D-MTF and G-MTF values can be found with optical design programs. As an example, for an f/4 lens I got the following results: the limiting value (D-MTF) for an f/4 lens is about 400 cy/mm and the G-MTF for the same conditions is 170 cy/mm. I selected f/4 because at this aperture we may expect the best quality, based on the lowest amount of residual aberrations.
Remember also that we are discussing the limiting frequency at an MTF value of zero. To see details on film, we need an MTF of at least 15%, and this shifts the limiting frequency to much lower values. In both cases the practical values are lower. With values of 150 to 200 cy/mm for outstandingly good lenses we talk sense. If we now combine these lens values with film MTF values of 100 to 200 cy/mm, we get the results I have referred to in my lens reports, where the best film/lens combinations deliver resolving powers between 80 cy/mm and 130 cy/mm.
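A sketch of this reasoning, using the standard formula for the diffraction-limited MTF of a circular, unobstructed pupil (an assumption on my part; the text does not give the formula), shows how the 15% threshold pulls the limiting frequency down from the theoretical cutoff:

    import math

    def diffraction_mtf(freq_cy_mm, wavelength_mm, f_number):
        """Diffraction-limited MTF for incoherent light and a circular pupil."""
        cutoff = 1 / (wavelength_mm * f_number)
        u = freq_cy_mm / cutoff
        if u >= 1:
            return 0.0
        return (2 / math.pi) * (math.acos(u) - u * math.sqrt(1 - u * u))

    WAVELENGTH_MM = 0.00055  # green light
    cutoff = 1 / (WAVELENGTH_MM * 4)
    print(cutoff)  # f/4 cutoff: about 455 cy/mm, the same order as the 400 quoted

    # Locate the frequency where the MTF has dropped to 15%, by bisection:
    lo, hi = 0.0, cutoff
    while hi - lo > 0.1:
        mid = (lo + hi) / 2
        if diffraction_mtf(mid, WAVELENGTH_MM, 4) > 0.15:
            lo = mid
        else:
            hi = mid
    print(round(lo))  # about 340 cy/mm, and real (aberrated) lenses sit far lower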
When we try to be realistic and base our calculations on practical assumptions:
no lens with the wide angles of view of the ZM lenses is really aberration-free and diffraction-limited at f/2, f/2.8 or even f/4,
the Gigabit film can resolve around 200 linepairs/mm in continuous-tone reproduction, and
there will be a strong drop in resolution in the whole chain of reproduction from lens to film to enlarger,
we have to conclude that Zeiss and Gigabit want us to believe in fairy tales.
The internet has generated an enormous amount of information floating freely around the world. In the academic and scientific world there is a tradition (not always adhered to) of backing up statements with facts, research and peer review. At the limit there is always a theory that checks the wildest fantasies, unless you design your own theory of course. In the Leica world there is no such thing. Real knowledge and valuable experience are mixed indiscriminately with myths and explanations that do not have even a remote connection to facts and/or accumulated, time-honoured knowledge.
Leica users quite often have strong opinions and views about the product, the technique behind it (mechanical, optical and photographic) and the results that can be achieved (the pictures). Photography is for the most part workmanship and for the rest art, even when both are grounded on a scientific base. It is natural that workmanship is oriented to tradition and traditional values and knowledge. Good workmanship has evolved out of trial and error, and the secrets of the trade and the really important facts are closely guarded. Photographic workmanship follows the same rules. A photographer who has success with a certain approach and technique will not easily be convinced to share his secrets with anybody. It is part of his trade success and money-earning power.
This attitude, fine in the world of workmanship, has migrated into the Leica world, where so many 'secrets' abound. The famous Leica 'glow' (as an example) has never been convincingly demonstrated, but is part of the Leica myth. You must become a believer to see it. And to protect this elusive characteristic, which for some might be the motivation to buy and use or admire a Leica, all kinds of defensive acts are played: if you cannot measure or even compare this characteristic, then either you cannot test Leica lenses in the usual way (with an optical bench or an MTF analysis), or you are not qualified, or you lack Leica experience, and so on. All of this is very common, but if you want to search for the true nature of the Leica and to research the best combination of tools to elaborate on that nature, that attitude is killing. When I recently noticed that the Tri-Elmar lens at the 50mm position delivered better images than the Summicron 50mm at aperture f/4.0, some individuals immediately tried to discredit me with every trick of the trade. The obvious course of action would have been to do a test and see if my statement was indeed correct. But in the Leica world, facts are not always appreciated. Authority, self-imposed or not, is valued more highly. Facts have a nasty habit: when they are found to be true, you are invited to change your mind and opinions. Authority is much more comfortable: you are not bothered by facts, and whatever you say can be repeated forever, as long as you wish. And you are not forced to change your mind, which for some is very pleasant.
The Leica world is a difficult one. It is partly populated by collectors, whose fascination with the instrument generates a different kind of appreciation and knowledge than that of a user whose approach is to exploit the mechanical and optical capabilities of that same instrument. And there are numerous users who admire the usability of that instrument for making better photographs, or at least more inspiring pictures, because it is the best tool for the job of the photographer. The Leica is a most fascinating and very potent instrument for taking and making photographs. The art and nature of this high-precision engineered product have not yet been explored fully: neither its basic properties (mechanical, optical) nor its capabilities (the pictures).
This FAQ is dedicated to the ongoing research into these aspects of the Leica camera, its lenses and related techniques. Every topic is based on the best of my knowledge at the moment of writing. But knowledge grows and changes, and therefore some answers may evolve over time. If you do not want to learn and occasionally change your mind, there are many opportunities on the internet and elsewhere where the status quo is preserved and defended. (TOP)
What are the most important sources of image degradation?
Movement of the camera should be number one. Using a camera without support, that is handholding, is the most important cause, especially when combined with slow shutter speeds. Even at a speed of 1/250, pressing the camera against your forehead will generate more degradation than using a full second on a stable tripod. The validity of the classical rule that a safe shutter speed is the reciprocal of the focal length (1/50 for a 50mm lens, 1/250 for a 200mm lens) has never been demonstrated. My testing indicates that the minimum is 1/250 for focal lengths from 15 to 50mm and at least 1/250 for 75 and 90mm. Longer focal lengths, and of course the vario lenses, demand 1/500 to 1/1000 for best performance. This is defined as a high-quality print at 12 times enlargement or a slide projection one meter wide. For R-lenses a mirror lock-up and a very stable support that fixes body and lens with one mechanism are imperative.
Inaccurate focusing is a second major cause: a slight defocus is already visible as a drop in contrast.
Wrong exposure (over- and under-) is a third major source. Overexposure is the worse of the two. For best optical performance a half-stop underexposure is beneficial, but this has to be balanced against shadow-detail definition. When using colour negative film this rule does not hold: it is best to overexpose colour negative film by one stop for best performance (the same goes for chromogenic films like Ilford XP2 Super).
A small aperture is also a potential cause of image degradation. For current Leica lenses the range of f/2.8 to f/4 will be best for optimum results. Beyond f/5.6 or f/8 image quality will degrade, as can be seen in a drop of overall contrast and a loss of definition in small details and textures.
High-speed film and colour negative film in general will not support the best optical performance. Use slow-speed B&W film and slide film for best results.
Filters, when not accurately plane, are also a source of problems, but less so than the above sources. (TOP)
What is a diffraction limited lens?
Wow! Well, you asked for it. Light has wave-like characteristics, and normally light rays travel along the straight paths indicated by ray optics. But edges of aperture blades, edges of lenses and other optical obstacles cause the rays to deviate from their path. Thus light will spread into the shadow of an object or away from a point image. So even the tiniest point source of light will be recorded as a very small spot of high intensity surrounded by a series of rings of diminishing luminance. A point of light thus always has a certain fuzziness, which cannot be improved and which sets a limit to what can be observed. The Airy disc pattern and Rayleigh's criterion quantify the smallest point that can be seen or recorded. So a tiny spot of light (or luminous energy) will be recorded as a patch of light that is always surrounded by a fuzzy edge. When two such spots/patches are very close in space, their fuzzy edges start to overlap and make it impossible to separate the two original spots. We have reached the limit of resolution.
In a normal lens the optical aberrations also deform the spot of light into a patch of various shapes. In most cases the aberrations produce a much larger patch than the diffraction effects do. When an optical designer creates a system where the aberrations are so well corrected that the patch becomes small enough for the diffraction effects to be visible, we call such a lens a diffraction-limited lens. Leica has several such lenses in the program. One of them is the Apo-Telyt-R 1:4/280mm.
Such a lens approaches the theoretical limit of resolution and is aberration free.
The diffraction effects are most visible when the area of obstruction of the light path is small in relation to the wavelength of the light and the light intensity is high. Stopping down to f/11 will produce diffraction effects for the monochromatic light of the longer wavelengths. Quite small of course, but the edges of very fine detail will get blurred and so will mash together and produce noise.
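A quick calculation of the Airy disc diameter against the conventional 0.03mm circle of confusion shows why f/11 starts to matter (a sketch, using one of the longer wavelengths mentioned above):

    WAVELENGTH_MM = 0.00065  # red light, 0.65 micrometer

    for aperture in (4, 8, 11, 16):
        diameter_mm = 2.44 * WAVELENGTH_MM * aperture
        print(f"f/{aperture}: Airy disc diameter {diameter_mm:.4f}mm")
    # At f/11 the diameter is about 0.017mm: still below the conventional CoC of
    # 0.03mm, but no longer negligible against the much finer detail a good
    # lens/film combination records.

(TOP)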
Should I use filters?
A hotly debated topic, this one. Any filter in front of the lens will add one additional airspace and two additional surfaces. So by definition image quality should be degraded. How visible will this be? One obvious case: when strong light sources are shining directly or obliquely into the lens + filter, severe flare and secondary (ghost) images will be detected. Even when we are taking pictures in situations where contrast between dark and light areas is very strong, some degradation can be expected.
These effects will also be stronger when we are using the wider apertures. Stopped down, the flare will be less noticeable, but the ghost images will still be visible. Whether this is objectionable to you depends on subject matter and your own criteria.
A filter will be useful for protecting the lens surface. Leica front lenses are hard-coated, but not invulnerable to dust and chemical reactions. So I prefer to use a filter when I am sure image quality is not degraded by its use. In sensitive cases I just remove the filter. In low-contrast situations, landscapes, reportage etc, everywhere the use of a filter is acceptable from an image-quality view and helps to keep the front lens clean and protected, I use a filter. With B&W some filters must be used to get the correct tonal reproduction (TechPan for instance).
You should realize that the degrading effect of a filter is, in most situations, much smaller than that of using a shutter speed of 1/15 sec. Many Leica users feel no inhibition about using slow shutter speeds, but are afraid to use a filter because of its impact on image degradation.
Stopping down to f/11, using a speed of 1/30 or an inaccuracy in rangefinding produces more disaster than a filter (except in the cases first mentioned, of course). Let us keep things in perspective and first attack the big causes of image degradation before going after the smaller evils.
Should I then forget about tests and do my own testing?
Most Leica photographers are very sensitive about the performance of their lenses. Arguments about lens performance and the relative merits of one lens as compared to others constitute a large part of the discussion whenever Leica photographers gather.
These discussions are for the most part futile from a technical view, although from a social standpoint they might be very desirable.
First of all, every person has his own personal set of test criteria, and the testing itself is never done in a consistent way, that is, keeping all parameters under control and comparable. If you 'test' lens A with a 400 ISO colour negative film and lens B with a 160 ISO colour negative film with a different characteristic curve, any valid conclusion is out of the question. Take pictures without a tripod and again you lose every possibility of a meaningful test. Use your 90mm for a portrait and the 24mm for a landscape and again no comparison is possible.
Of course you can argue that you are not interested in test data, but want to know how a certain lens performs in the typical deployment situations for this lens. Well, then you are conducting an enquiry to see if this lens satisfies your needs. That is a very good and valid way of looking at a lens. A lens that satisfies your needs or lives up to your expectations is always a good buy. These criteria, valid as they are, are not to be confused with the procedure of testing a lens. If a user tells you that he is very happy with a certain lens, you can look at his photographs to see if his pictures match your demands. If this user tells you that he finds this lens the best there is, he is overstretching his credibility.
Testing and comparing lenses is quite different from enjoying a lens and feeling pleased with its performance.
Hardly any user of Leica lenses is able to extract the full image quality and recording capabilities of the better Leica lenses. If we give the optical performance potential of a lens a value of 100, we are on safe ground when stating that most users extract at most 60 to 70% of that potential from the lens. Many would use only 40 to 50% of the potential. That is a pity, as getting the best out of Leica lenses is a very rewarding activity.
Now assume lens A has a performance value of 80 on a scale from 30 to 100. The second lens B has a value of 95. Assume user X gets 50% of the performance out of lens B, that is about 48 points. Now this same user also has lens A, and here he gets 55% of the performance out of the lens, that is 44 points. Based on these results he might be inclined to say that both lenses have equal performance.
To study the true maximum performance of a lens you need to have equipment, knowledge and procedures to extract the full potential out of a lens.
It is true that there are photographic magazines whose lens testing is on the same level as the user acceptability procedure outlined above.
The best advice
* Stop trying to test lenses and
* look for resources that are reliable in their conclusions when selecting lenses and
* enjoy taking pictures and
* improve on your technique
Someone told me that Leica argues against the use of the MTF?
In the eighties Leitz indeed published a few articles stating that the use of MTF graphs might give wrong information about the performance of Leica lenses. In fact they noted that the publication of a few selected figures without background information would be partially misleading. Two arguments were used, one correct and one not so correct. The MTF data will be totally different depending on the choice of focal plane and contrast threshold. See above, where I argue along the same lines.
The second argument is the basis for much discussion among Leica users. Leitz remarked that Leica lenses in many tests get lower figures than in photographic practice. MTF tests and the then often-used resolution test charts are based on a flat (two-dimensional) test object. But Leica lenses are designed for recording solid objects with depth, and when recording three-dimensional objects some aberrations, like astigmatism and curvature of field, will not be noticed or can even enhance the quality of the image. These aberrations are easily detected on a flat test object or in an MTF measurement. So Leica lenses get low marks because of aberrations that are not detectable in photographic practice.
There is a grain of truth here. When a designer finds he has to accept a certain amount of residual aberrations, a choice has to be made which ones to correct and which ones to accept or to balance to a certain degree.
But the argument of Leitz in those days, that MTF data are not representative of the optical performance in the field because of the difference between a real-life three-dimensional object and a flat test chart, does not sound convincing.
How is the MTF measured or generated?
Many magazines (Popular Photography, Chasseurs d'Image, Photodo etc) use the Ealing optical bench to generate MTF data. The Ealing projects a small slit of, for example, 0.01mm through the lens to be tested, and the lens produces an image of that slit on a screen or a CCD capturing instrument. The brightness difference from the center of the slit image to the edges is measured. The steeper the slope and the higher the difference from the maximum to the minimum reading, the higher the MTF value. By varying the width of the slit you can generate data for several spatial frequencies.
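In principle the computation behind such an instrument is simple: the MTF is the normalized modulus of the Fourier transform of the measured slit profile (the line spread function). A minimal sketch, with a made-up Gaussian profile standing in for a real measurement:

    import numpy as np

    def mtf_from_slit_profile(profile, sample_spacing_mm):
        """MTF as the normalized modulus of the Fourier transform of the
        line spread function (the brightness profile of the imaged slit)."""
        spectrum = np.abs(np.fft.rfft(profile))
        freqs = np.fft.rfftfreq(len(profile), d=sample_spacing_mm)  # in cy/mm
        return freqs, spectrum / spectrum[0]

    # A hypothetical slit profile: a Gaussian blur of 0.005mm standard
    # deviation, sampled every 0.001mm (a real bench measures this on its CCD).
    x = np.arange(-0.05, 0.05, 0.001)
    profile = np.exp(-0.5 * (x / 0.005) ** 2)

    freqs, mtf = mtf_from_slit_profile(profile, 0.001)
    for f in (10, 20, 40):
        idx = int(f / 10)  # frequencies fall on a 10 cy/mm grid here
        print(f"{f} cy/mm: MTF {mtf[idx]:.2f}")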
It is not possible to compare MTF data from different sources, unless you know they are made under identical parameter settings. Some of the most important, but never disclosed, parameters are the spectral composition and weighting of the white light that is used and the spatial frequency that is used for focusing.
Most published data do not give the raw figures, but use an average to get a single merit figure that is supposedly easy to interpret. The averaging method can be very strange, as is the case with the Photodo figures. They use the MTF data from two apertures (f/4 and f/8) that are weighted, and center performance again gets a strong weighting. The choice of f/4 and f/8 is made to be able to compare the many zoom lenses that have modest apertures. But for the modern Leica lenses, where optimum performance quite often starts at f/2 and is at its height around f/2.8, this choice is bad, as the best part of the data is not put into the equation.
Chasseurs d'Image also uses the Ealing equipment, but the generated figures are fed into the merit figure indirectly, as this magazine has grouped the lenses into categories, uses a reference lens per category, and also uses several more test targets and methods. All these are mixed together and the resulting figure is an expert assessment of the lens. How 'expert' the judgement is cannot be known, as we do not know what balancing act the testers have performed. Again the corners of the lens figure quite heavily in the judgement, which might distort the conclusions.
All these methods then filter the data to get a more or less convenient merit figure for easy comparison. But comparison is not possible and the filtering is quite peculiar, so the conclusions from the data are a bit thinly supported. The link from these merit figures to the performance in the field is not always strong.
As Leica users we are lucky. Leica publishes the MTF data for all recently introduced lenses and these data are very closely linked to the real performance the user can expect.
It is also possible to generate MTF data from the design data computed by the program employed to design a lens. These data are as accurate as, if not better than, the experimentally derived values, provided the subsequent engineering and quality control at the production stage are good enough.
(TOP)
Why are MTF graphs important? How to interpret them?
Unless a lens is fully diffraction-limited, there is a certain amount of residual aberrations present in the optical system that will degrade the image away from the perfect image. The MTF graph relates the spatial frequency (a measure of resolution) to the contrast of the lens at all apertures and over the whole image field. It presents the most comprehensive information about the optical performance of the lens and correlates very well with the perceived image quality. But MTF graphs are difficult to interpret, and when generated by different methods they are not comparable.
Of course the MTF does not tell you a word about distortion, colour rendition and flare. So it is neither perfect nor comprehensive. A well-trained individual can interpret the curves and may infer many properties of the lens in question. Just looking at the curves and comparing them to other curves is a bit dangerous.
A very well corrected lens will always produce a good MTF. But it is also possible to design a lens that will generate a good MTF while not being well corrected optically. To get the real picture of the performance of a lens, you would have to analyse a whole family of MTF graphs, which are however not published.
Still the MTF graph is at this moment the best 'picture' one can get of the optical performance of a lens. Generally the 5 and 10 linepairs/mm curves give an indication of the overall contrast and the contrast of subject outlines. Preferably the contrast figure here should be above 95%. If the figure drops below 90% we have a low-contrast lens that gives a soft image.
The 20 and 40 linepairs/mm curves define the clarity and crisp rendition of very fine detail. This level of fine detail is so small that it takes some photographic craft to get these details on film. Here we should look for contrast figures above 60%, but if the 40 lp/mm curve is around 40%, a very good image can still be expected. The variation of the percentage is quite high in the 40 lp/mm graph: a plus or minus of 5% will hardly make any difference.
The solid lines are the most important and should be as straight as possible. Do not be overly alarmed if the curves drop steeply at the edges of the image area: the corners are mostly outside the image field when enlarging or projecting. If the solid lines and the dotted lines are close together, the image quality is excellent, as many aberrations (the chromatic ones and astigmatism) are very well corrected. If the solid and dotted lines differ widely, more aberrations are left in the system. Again, do not be alarmed: it depends on the designer what the real image quality will be.
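These rules of thumb can be written down compactly (a sketch encoding only the thresholds just mentioned; real judgement needs the whole family of curves, as noted above):

    def judge_mtf(c10, c40):
        """Rough reading of two MTF values, per the rules of thumb above:
        contrast at 10 lp/mm for overall contrast, 40 lp/mm for fine detail."""
        overall = ("high overall contrast" if c10 >= 0.95 else
                   "acceptable contrast" if c10 >= 0.90 else
                   "low contrast, soft image")
        detail = ("crisp fine detail" if c40 >= 0.60 else
                  "still a very good image" if c40 >= 0.40 else
                  "weak fine detail")
        return overall + ", " + detail

    print(judge_mtf(0.96, 0.45))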
(TOP)
How meaningful are resolution figures for films? Kodachrome versus Velvia
A comment has been made that Velvia is better than Kodachrome because the resolution figure of Velvia is higher than that of Kodachrome (±160 versus ±125). No doubt the cited figures are in the data sheets. Have they any relevance?
No. I will as usual give a solid explanation why not.
Film manufacturers produce data sheets with information about resolution. They give resolution figures for low (1:80) and high (1:1000) contrast targets. And they give an MTF graph. For optical analysis the resolution data are completely obsolete, and for the same reasons they should also be buried for film emulsions. I do not have to recall these reasons, as they are amply documented.

Now look at the high-contrast figure. What does it mean? The test pattern is the well-known and much abused bar-line target: black and white lines of diminishing width produce a test pattern of ever increasing spatial frequency (more lines per mm). This target is illuminated in such a way that the luminance difference between adjacent black and white lines is 1:1000, or 10 stops of contrast difference. This type of contrast you might encounter when taking a silhouette against a blue sky. But then we have a low-resolution target (only the silhouette line has that contrast figure). It is nigh impossible to find such contrast in high-resolution targets. Look at any picture you took in the last decade, find a detail with very fine structures in it and ask yourself: do I see two adjacent very small object details that differ in luminance by more than 10 stops? You will not find any! So the high-contrast figure is meaningless. If you are in need of a figure, go for the low-contrast value, and then we see that Velvia and Kodachrome are identical. No advantage for either one.

Take a look at the MTF graph and now you see a big difference. The Kodachrome graph tells you that from 1 to 20 lp/mm the MTF value is far above 100%! The same values for Velvia are lower. So in the critical areas where sharpness matters, Kodachrome wins. Why the threshold of 20 lp/mm? That is almost the value Leica lenses are calibrated for!

Why then is Kodachrome for many purposes the better film? It is grain-based where Velvia is dye-cloud-based. Recall that a dye-cloud image is generated by artificially restraining the growth of clumps of grain and replacing them by dye clouds of about the same dimension at about the same location. Note the vagueness here? A grain image is an exact replica of the optical image falling onto the emulsion. The dye-cloud image is a chemical interpretation of this image. The capture of fine detail is better preserved with the grain-based image and its 'hard' edges than with the finer (smaller) dye-cloud-based image with its soft edges.
This is the same reason why fine-grain developers in fact kill fine detail and acutance developers enhance fine detail up to the limit of grain noise. Recall the Rodinal discussion?
As in optical evaluation, we must become accustomed to the fact that resolution figures are out and MTF graphs are in. That's reality.
(TOP)
What about the 'edge spread function'?
Recently a very interesting discussion has been launched about the difference in characteristics between current and earlier generations of Leica lenses. Some of these differences are undoubtedly real. Someone valued the characteristics of the older lenses for many reasons, and I would not in the least question his opinion or choice. We have to be very careful around this topic. To like certain characteristics is not the same as stating that these characteristics are virtues, or to imply that they are desirable in themselves. The discussion now is focused on a perceived change in Leica design philosophy as exemplified by statements from Mr. Osterloh and Mr. Laney. The core argument seems to revolve around Laney's statement that "They started from the proposition that the subjects we photograph very rarely consist of grids of black and white lines on flat sheets of cards." The corollary is that one can design lenses to do very well on standard test subjects, and lenses that are designed for real-life subjects, where these latter designs invariably (by design and subject matter) perform poorly on standard test subjects.

First of all: this argument is one big fallacy. Every lens designer and every lens ever designed is designed for real-life subjects. The only exception might be special reproduction lenses, where flatness of field is of utmost importance. All lens designers assume in their design that we have in front of the lens an object in three-dimensional space. As this object has to be recorded on a flat plane (the emulsion), and because light rays behave a bit weirdly when eased through an optical system of several glass elements, we encounter aberrations. These aberrations have to be corrected and balanced. This is the task of the designer, and he can do this with more or less creativity and expertise. There do not even exist any design rules or computer programs that are tuned to two-dimensional objects such as black and white bar lines. Furthermore: Zeiss engineer Mr. Hanson introduced in 1943(!) the notion of contrast (what later became MTF) and its exact correlation between optical quality and human sharpness perception. And Zeiss have always followed that lead. I would not be surprised if the 'defensive line' of Leitz/Leica in the past had a lot to do with the image quality of Zeiss lenses.

We have two types of testing equipment. One group checks for production tolerances and is used in factories to assure prescribed tolerances. The second group is designed to test lenses for their capabilities in recording real-life objects. Be it a bar-line test, an MTF graph or a beautiful girl, each one tries to find out the characteristics of the lens in question for recording 3-D objects. It is, by the way, remarkable that the Elmarit-M 2.8/28 (fourth generation) has been referred to as the last lens of the great old generation, while it in fact is one of the first of the NEW generation. Perception is a very quirky business. Also, the notion that modern Leica lenses move too far in the direction of high resolution and therefore lose many of their former unique characteristics is not substantiated in practice. Yes, modern Leica lenses have a superb clarity of very fine image detail, a stunning suppression of veiling glare and a very high correction of spherical aberration and many lateral chromatic aberrations, all characteristics that give 3-D objects a very faithful rendering in a flat plane. In fact, when comparing real-life pictures (not bar lines) taken with the old and new Summilux-M 1,4/35 (asph and non-asph), every viewer, not just me, commented on the sparkling, lifelike representation of the asph version. Let us not try to create false dichotomies where none exist. Specimens of older Leica lens generations are very good and sometimes surprisingly competent lenses. And one should admire the perseverance and competence of the designers of these lenses. Newer generations incorporate more research and more knowledge about the way an object in space should be recorded as an image on film.

The Noctilux does not perform poorly on any test object (girls or bar lines or MTF graphs). It performs quite well on all test objects. If a tester would note that the Noctilux has less contrast, and whatever else he would like to mention, than a Summicron, he is probably correct. If he would conclude that it is therefore a bad lens, he only proves his own incompetence. A lens with the specs of a Noctilux cannot ever produce the image quality of the Summicron. No tester can use one yardstick and evaluate all lenses in a one-dimensional way. Laney refers to a so-called 'edge spread width criterion', and some persons now infer that this is a criterion that favours older lenses, while asph versions can cope better with normal test targets. Let me be quick and merciful: this 'edge spread width criterion' does not exist. We have the point spread function, the line spread function and the acutance measurement. What Laney does is copy the contents of a research paper that is just that: a research paper. An "Ansatz", as the Germans would say, that never was followed up and went the way many research papers do: they evaporate.
Are working photographers better judges of image quality than professional testers?
My brother Hubert is a conservative and a pragmatist. He does not believe that progress is real and he cannot imagine that anything written since 1970 has any value. He also refuses to consider facts or logical reasoning that contradict what he believes is true. But he is deeply involved in modern gadgetry (that is the pragmatism). He uses these instruments as he pleases, whatever the intention of the designer might have been. He correctly justifies his approach with two arguments: it is my money and I like what I like. He constantly urges me to stop trying to update conventional wisdom in photographic lore to the level required by the actual state of the art of the photographic and optical sciences.

Follow the mainstream, he pleads, then your life is easy. Repeat the sales reps, the marketing brochures and every snippet of Leica lore that is floating around in this info-soaked world. He made these remarks after my recent visit to Solms and Oberkochen, where I was inundated with MTF graphs, spot diagrams, ray fans and a plethora of optical aberrations that need to be corrected in a very subtle and artistic way in order for us mortals to start raving about Leica lenses. It took me three full years of testing and thousands of pictures (yes, real pictures of real-life subjects) to come to grips with the M-line of Leica optics.

Now the daunting and very exciting task is to engage myself with the R-line. Another three years? Hubert tells me to use my spare time for more pleasant activities, such as studying the theoretical base of Star Trek physics. I think we are at an intellectual and moral low point in history. Is there a chance that the classical Greek ideals of beauty, rational discourse and lust for truth will survive? At least in Star Trek, I presume. I am very pleased that Data has a cat (or the other way around: the cat has Data). If ever a creature had the basic characteristics of a Leica M camera, it is the cat. This species survives in even the harshest and unfriendliest environments, it never loses its character, it does not compromise, it is the most effective small predator in evolution and it is a thing of beauty. And above all: you can study it for years and not know anything about it. It still is a mystery, but a very nice one.

These thoughts have been inspired by the recent discussion around that most elusive of topics: image quality. This topic pops up quite often and then fades out without any real progress. Still, the topic is of the utmost importance. So let me try to make some observations. Sharpness does not exist in any objective way. It is a subjective impression of the eye/brain mechanism and cannot be measured or defined. Note that I never use the word 'sharpness' in my reports. I use the optically correct terms 'contrast of fine detail' and 'edge contrast' to describe image characteristics. Generally we may gather both terms under the umbrella word "clarity".
When a designer creates an optical system, he or she always has a clear purpose in mind about what the lens has to accomplish image-wise. As any lens is always a compromise between many demands and variables, the resulting imaging characteristics can be related to the ideal lens: a lens without any defect that reproduces the object in front of the lens with 100% faithfulness in three dimensions. Let us make very clear that Leica designers NEVER assume that their lenses should or could be optimized for flat, two-dimensional test targets of whatever configuration.

It is an undeniable fact that any real lens has only one, I repeat only one, plane of accurate focus. That is by definition a plane of very thin depth. Thus evaluating a lens in the optical sense of the word is studying the characteristics of image points as they are projected onto this plane of focus. It so happens that MTF graphs are a very good analytical tool for just this: studying the point characteristics in the image plane. It is far beyond the truth to imply that a study of the flat image plane (which is what designers and serious testers do) disregards the three-dimensionality of real-life objects. To imply, as some postings do, that a test of a flat object is irrelevant to practical photography is to misunderstand the fundamental laws of optics. There are many methods, some very complex, some very secret, to make sure that the image characteristics that are defined in the plane of accurate focus can be extended to a three-dimensional area, part of which is constrained by the angle of view of the lens and part of which is constrained by the depth of field of the aperture used.

We have to make some distinctions. Some tests that are proposed in common practice, photographing a newspaper page or photographing one of many test charts (bar lines in many configurations), are not very meaningful. This is not the fault of their inherent two-dimensionality, but because the target as such is not well correlated to the actual imaging characteristics of a lens. In the case of standardized test patterns, we have the problem that most if not all users lack the necessary background to interpret what they see or think they see. Again, we should not ridicule a methodology because it is falsely used.
An expert designer can predict with a high level of accuracy how a lens will perform, just by looking at the figures churned out by the computer. We should bring ourselves to a higher level of awareness: the confrontation between the two-dimensional test target and three-dimensional reality is not a meaningful distinction. We have bad tests and good tests, and bad testers and good testers.
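As an aside for the numerically inclined: the theoretical ceiling against which any measured contrast transfer is judged is the diffraction-limited MTF, a standard textbook formula. A minimal sketch in Python; the wavelength and aperture are illustrative values of my choosing, not data from any specific lens:

    import math

    def diffraction_mtf(freq_lpmm, f_number, wavelength_mm=0.00055):
        # MTF of an aberration-free lens in incoherent light, at a
        # spatial frequency given in line pairs per millimeter.
        cutoff = 1.0 / (wavelength_mm * f_number)  # cutoff frequency in lp/mm
        x = freq_lpmm / cutoff
        if x >= 1.0:
            return 0.0
        return (2.0 / math.pi) * (math.acos(x) - x * math.sqrt(1.0 - x * x))

    # A perfect lens at f/8: contrast transfer at photographically
    # relevant frequencies (values stay close to 1 at low frequencies)
    for nu in (5, 10, 20, 40):
        print("%3d lp/mm -> %.2f" % (nu, diffraction_mtf(nu, 8.0)))

Any real lens plots below this curve; how far below, and at which frequencies, is exactly what the MTF graphs in a test report express.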
Now the issue of evaluation. Many postings assume that testing a lens in any objective way (that is, using tests that produce contrast or resolution data) has no relevance to the demands and requirements of working photographers. First of all, I hope to have made clear that measurement of image performance is MUCH more than looking at 'sharpness' (which is a non-issue as it does not exist). A responsible designer and his companion the tester will choose very carefully those measurable characteristics that are highly relevant to the image as required by working photographers. To assume that testers are other-worldly and/or insensitive to the needs of real photographers is doing them an injustice. Again we have good and bad.....
There does not exist any evaluation method that can capture in one merit figure all the many characteristics of a lens. And here indeed the acceptance of the working photographer has the last word. The role of the tester is to bring in figures, based on carefully conducted tests, to support or inform the user on the validity of his final decision. Or to help him make a meaningful choice.
There is no us-versus-them situation. A tester might be a very good photographer him/herself and a working photographer can be a lousy tester. It is the role of a tester (at least that is the way I see and practice it) to inform a working photographer of the potential image qualities of a lens based on scientific testing of, yes, two AND three dimensional objects. Whether a working photographer likes or needs or is interested in these characteristics is purely his non-debatable choice.
The image qualities I try to investigate are very relevant to practical photography. Again: some of my test objects are indeed bar-line patterns on a flat plane. If you really know how to translate these results into the demands of photographing cats and girls and snowstorms and emotions, then there is nothing wrong with this method.
Sufficient or maximum quality.
There is a nice but untrue story about a Rolls Royce salesman who answered a customer's question about the exact amount of horsepower of the engine with the words: "enough, Sir!"
It might be argued, and some persons do indicate this, that all Leica lenses have 'enough' potential image quality for all but the most exacting needs, or for the far-fetched demands of some out-of-synch-with-reality testers.
In a sense I do agree. Many pictures of most everyday scenes, including night shots, have acceptable or pleasing quality for most viewers.
There seems to be a certain consensus that modern Leica lenses have in fact many of the characteristics of older Leica generations, just with a bit more contrast. This is a simplification and a misguided one. The overall character of Leica lenses has changed significantly since the designers at Solms attacked the more stubborn aberrations in a more effective way, at larger apertures and over a larger part of the image area. Just looking at resolution or overall contrast misses the point. I have elaborated on these traits in many earlier posts and will not repeat them here.

Being involved at the moment in an extensive test of many modern transparency films in the ISO 100 to 200 class reminded me anew of this contrast between sufficient and maximum image quality. None of these films will produce a bad image, all are fine grained (some more, some less), none gives correct colors (as measured with a computer-assisted colorimeter for CIELab values of the MacBeth color checker); that is, most of them produce sufficient imagery for most needs. Still some stand out by producing maximum quality, that is, matching the best image quality of the Leica lenses. A good test is to take a close-up of an object with extremely fine textures and shadow details, three-dimensional in nature, with strong specular highlights and some very sharply outlined color patches. If you do not find one such object, try to find several that combine these characteristics. Then step back one meter and take a picture of the object again. Repeat this several steps back, every step one meter.
The trick is to look at the various magnifications and assess what happens first: loss of image quality by the lens or by the film. Then look at the character of the image gradation. Then you get a feeling for the potential of modern Leica lenses.
Lens comparisons between brands
This topic is a classical one in every photo magazine and gathering of photographers. It is also a futile one IMHO.
Generally speaking, the big four (C, L, N, Z) really do know how to design quality optics. The design of an optical system in essence means balancing aberrations that cannot be eliminated or canceled out. These aberrations are known to a Leica designer as well as to a Nikon designer. How a designer does this balancing act is partly art and partly specific know-how of the effect of residual aberrations on the image quality. The definition of image quality differs per company/design staff. Again, part of it can be quantified (the merit function and its resultants MTF/OTF), but part is art (a spot diagram is easily generated, but analyzing this form with respect to potential image quality is difficult). Leica is a master of this art and that is why we all like their products.
An optical system is too complex to be evaluated on a one-dimensional scale such as sharpness or contrast or whatever. So any discussion of whether Nikon is sharper or not vis-à-vis Leica is void. Leica differs from Canon and Nikon and Zeiss, as all the others differ from each other, because every designer has a different set of priorities and a different weighting of the residual aberrations.
Instead of discussing top-class lenses in simple terms of better or worse, it would be more instructive and enlightening to evaluate lenses as having certain optical characteristics that can support your type of imagery.
I really wish we would stop this simple comparison game. If we could set up a complete list of lens characteristics we might end up with a list of 50 or more personality traits. Let us not be so naive as to assume that any lens from whatever manufacturer can score on all characteristics.
Are older lenses better for artistic pictures?
The 2/35 ASPH improves visibly on its predecessor, the 2.8/24 is simply stunning in all kinds of hand-held shooting, the 2/50 Summicron and the 2,8/50 are of sparkling clarity, the 2,8/90 brings almost any film over the edge, the new APO 90 is of superb quality, as is the 3,4/135. Every transparency shot with any of these lenses stands out from the crowd head and shoulders. If you do not see it, you simply do not want to see it.
It is true that the utmost of textural detail will be seen only when a few films of high capability are used. The improved contrast, the much greater clarity and flare suppression, the fine shades of light in specular highlights, the flatness of field etc. can be clearly observed when using any 100-ISO transparency film or 100 or 400 ISO B&W film. These current lenses are not just theoretically (or by a few percentage points) improved; they are on a different level.
Would pictures by HCB be better if made with modern lenses? That is not a relevant question. Would Matisse have painted better pictures if he had used different quality paints or a different quality canvas? Artistic or aesthetic aspects are not at stake here. Why not ask whether Salgado's pictures would be worse when made with older equipment?
The question has been asked whether a modern Leica lens is more capable of rendering beauty than an older one. Or, is a modern Leica lens capable of rendering more beauty than an older one?
That is difficult but interesting to answer. Beauty is part emotion, part impression, but primarily it is always a feeling. Some socio-biologists will claim that the appreciation of beauty (of women) is universally imprinted in our DNA. But even accepting this, it will hardly correlate with optical quality. Beauty can be captured and represented with every possible artistic means and instrument (poems, paintings, movies etc.). And so also with Leica lenses. The goal of optical designers is to produce optical systems with a very small amount of residual aberrations, in order to represent reality as faithfully as possible. Leica lenses add their special flavor because these guys and girls really know which aberrations are important for photographic purposes. So if and when beauty can be objectively captured in reality, modern Leica lenses will do this job more truthfully than older lenses. As long as beauty is ephemeral, any lens will do. Remember David Hamilton? Lartigue? Atget?
Older lenses have their peculiar fingerprints and some people love these characteristics. An old Harley motorcycle also evokes sympathy and admiration. Here one should walk softly. I myself have never seen in an older lens any characteristic that has not been improved upon in newer versions. Even that most elusive aspect of three-dimensionality is better represented in modern lenses. Bokeh might be different between old and current, but that is another story.
Again, older lenses can and should be admired (at least some of them), but I would like to ask the persons who favor the qualities of the older lenses to take comparative pictures, with an older and a newer lens, of their favorite scene, one that really evokes the qualities of older lenses supposedly not present in the current ones, and then point out the differences, if any.
I have made these comparative shots again and again and never saw anything in the rendition of the older lenses that has not been improved in the newer versions. The unsharpness area included. But I admit that the rendition of shapes and outlines in the unsharpness area is a matter of taste.
Are older Leica lenses as good as current ones?
Sometimes it has been proposed that Leica users should stop searching for the best in image quality, as the example of HCB 'proves' that great masterpieces can be made with the older equipment. Again this argument is not valid. The value content of HCB's pictures is their representation of the human condition in its geometrical forms. HCB never was interested in any special optical qualities of Leica lenses. His priorities were quite simply of a different order. The argument that what was good enough for HCB should be good enough for every Leica user is a very thin one. Why should HCB's imagery be the norm for everybody? Again it boils down to the position that only a certain class of photographers is allowed to define the proper use of a Leica, as they claim to use the instrument in its proper way. This argument is circular, of course. HCB simply used the equipment available to him at his time. As did Eisenstaedt. It would be a bit rash to claim that some masters of the Leica (artistically speaking) should be used as an example to limit the quest for the ultimate image quality.
At least the Leica optical designers still think that the potential for improvements in optical quality is very real.
In this same category we find the often quite forcefully stated claim that, photographically, old and new lenses perform on the same level. And, by implication, that the quest for improved image quality is futile or at least unnecessary. Or it is said that the improvements are not worth the trouble. In any case we note a Luddite attitude here. Modern Leica lenses have a generally much higher level of aberration correction than earlier versions, much smaller blur circles and a very different balance of residual aberrations, including the secondary spectrum. You can see this in every conceivable image characteristic. It is a bit disappointing to note that some observers dismiss the improvements as irrelevant for contemporary photography.
Of course, if you shoot with the high sun at your back, use apertures of 1:5,6 and smaller and make small-scale color prints, the differences will be small. Still, the knowledgeable observer will note a higher overall contrast and a much crisper rendition of textural detail with the new lenses. And not every lens shows these improvements to the same degree. As an example, the second generation of the Summicron 50mm (from 1969) exhibits more aberrations than the third (current) generation. In practical shooting the chance that you will note these 'image defects' is very real. And if you happen to take pictures of objects with many very fine, obliquely oriented textural details under high flare conditions, you will see the difference. That is the point of the current improvements: to get optimum results whatever the level of subject detail or flare or contrast.
Sometimes these differences will only become visible under controlled and comparative test sessions. And this brings us to the next story.
Lens testing should not be representative of the demands of real life photographers in real life photo shooting sessions.
I am not sure where this myth comes from, as its supporters never explain what exactly they mean. It seems that the traditional test pattern (the two-dimensional black-and-white bar-line patterns as used in the USAF charts) is the scapegoat. Now, no serious tester will base his conclusions on such a test pattern unless he is suitably educated in its interpretation. Used as a rough form of MTF-related information it still has merits. Used as a simple resolution chart it is not of great use. The claim that you may not extrapolate from a two-dimensional test pattern to a three-dimensional reality is refuted by all optical handbooks and all optical design programs.
Every design program I know of is based on the plane of sharpness, which is infinitesimally 'thin', that is, two-dimensional. One of the best test patterns still is a large black piece of paper with very small holes punched into it. Lit from behind, the shapes and color fringes around the holes give you much information about the aberrations left in the design. These aberrations will show up in every picture, whether taken of a flat or of a three-dimensional object. The impression of three-dimensionality may be based on the residual aberrations left in the optical system, but there is not one single theoretical or practical argument why sinusoidal test patterns or point spread functions could not represent the real world faithfully. Any reliable test procedure (at least mine does) will take into account all these theoretical topics and practical inferences, to ensure that the results are useful for any user of Leica equipment who wishes to select lenses on the basis of real image qualities.

The discussion should include the unsharpness area, or depth of field. It is often stated that the elusive Leica glow is part of the answer, as this characteristic is visible to anyone but not testable by normal methods. Well, I am not agent Mulder, so I am unable to comment on this 'glow'. What I do know is that the characteristics of the unsharpness area are not specifically designed into the system, but are simply the result of the optimization of the aberrations as defined for the focal plane. Nothing mysterious here.
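For scale: even a perfect lens cannot image a punched hole as a true point. A minimal sketch (Python, standard Airy-disc formula, with illustrative apertures of my choosing) of the size of the ideal point image; whatever a real lens adds beyond this core is aberration or flare:

    def airy_core_radius_um(f_number, wavelength_um=0.55):
        # Radius of the first dark ring of the ideal (diffraction-only)
        # point image; 1.22 * wavelength * f-number is the textbook result.
        return 1.22 * wavelength_um * f_number

    for n in (2.0, 2.8, 5.6, 8.0):
        print("f/%.1f -> core radius %.1f micrometer" % (n, airy_core_radius_um(n)))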
The so-called 'light box' test might be preferred as a tool to quickly convince editors, to select images on subjective and impressionistic impulses and to verify that the basic image quality (sharpness and color rendition) is good enough. To study the finer points of image quality and the real differences between several Leica lenses, a much more elaborate suite of equipment is necessary.
Do we want or need to study these differences?
Some would say no, others are inclined to say yes. Persons with an engineering bias will enjoy discussing these finer points and in doing so try to find out which are the best Leica lenses available today. There is nothing wrong with such an attitude. The measurement and comparative assessment of lens characteristics is a fascinating part of the Leica world. Anyone who does not wish to indulge in this activity can choose to leave it and study the language of the great Leica photographers. The language of art and the language of optics and engineering may be different, but both can be mastered at the same time. That is the real challenge in my view. And Leica users are fortunate that the Leica products support both views and even integrate them in a fascinating manner.
The engineering standards to which Leicas are designed and constructed are very high indeed and I think it a legitimate pleasure to enjoy and study these standards. I also think that sensing the high level of precision engineering while taking pictures greatly enhances the pleasure of using a Leica.
Of course these pictures may not be great art, but I think it a bit narrow-minded to assume that Leicas may only be used in the proper way by 'vision-people'. I am a great admirer of HCB, but I could not make one picture in his style and quality. I still can admire craftsmanship in an instrument and use it accordingly.
The upshot of this long story, then, is quite simple: Leica M systems have been designed and constructed as precision engineering instruments dedicated to taking pictures in the style of the artless art of the snapshot. You can admire and enjoy both aspects of this fine instrument or you can choose to address only one of them. In any case it is up to the user to make up his/her mind. There is not one proper way to use a Leica, nor a canonized way to take pictures with a Leica.
There is only the pleasure of owning and using and studying this very remarkable instrument of photographic technique.
Bokeh is mentioned often as a lens performance qualifier. What is the bottom line here? Bokeh is a very elusive concept. It is related to the shape of out-of-focus object details and the light-energy distribution within the unsharpness patches. It might be measured scientifically, but no one knows how, and thus subjective interpretations abound.
Bokeh is basically a function of spherical aberration and the number of diaphragm blades. Clearly the out-of-focus areas in front of and behind the sharpness plane are different, depending on the overall aberration correction, which involves much more than just the correction of spherical aberration. Bokeh is not (and here I differ from almost anyone) a conscious design decision. Lens designers focus all their creativity on the plane of best focus and try to get an image quality that is consistent with their goals. As a general statement I would say that the ideal is the clear rendition of extremely fine detail with high contrast and excellent shape preservation, over the whole image area and over all distances and apertures. This is not easy to accomplish and so compromises have to be made. A certain 'residue' of aberrations will be present in every lens. What this residue is composed of depends on the compromise made. Now it is easy to understand that the way the plane of sharpness is defined has a bearing on the unsharpness areas in front of and beyond this plane. So the unsharpness rendition is a direct function of the degree of correction of the sharpness plane.
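The energy distribution inside the blur patch follows from the residual aberrations, but the size of the patch is pure geometry. A minimal thin-lens sketch (Python; the focal length, aperture and distances are illustrative values of mine):

    def blur_circle_mm(f_mm, f_number, focus_m, object_m):
        # Geometric (aberration-free) blur-circle diameter on film for a
        # point at object_m when the lens is focused at focus_m.
        s, d = focus_m * 1000.0, object_m * 1000.0
        v_s = f_mm * s / (s - f_mm)   # image distance of the focused plane
        v_d = f_mm * d / (d - f_mm)   # image distance of the defocused point
        return (f_mm / f_number) * abs(v_d - v_s) / v_d

    # 50mm lens at f/2 focused at 2 m: blur of points at 1.5 m and at 3 m
    for d in (1.5, 3.0):
        print(d, "m ->", round(blur_circle_mm(50.0, 2.0, 2.0, d), 3), "mm")

How such a 0.2 mm disc is filled with light, evenly, with a bright rim, or with a soft core, is where the residual aberrations, and thus the 'bokeh', come in.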
Current Leica lenses have a very rigorously corrected sharpness plane, based on a secret design formula, which differs quite a bit from the older design philosophy of Leitz. But even the older Leitz lenses were not designed with any idea of bokeh in mind.
So bokeh might be detectable in older Leitz lenses, but this is not a design decision, just the result of the overall correction.
Modern lenses indeed show less bokeh, as I understand the idea: they are corrected to a much higher degree than older Leica lenses.
When comparing older and recent Leica lenses to study bokeh I noticed these aspects:
Modern lenses have a steeper slope from sharp to unsharp, and older lenses, while not as good in the sharpness plane as current ones, seem to hold on better in the unsharpness area. This, however, is a deception of the eye. If you look at those distances where the degree of unsharpness is equivalent, the modern lenses have better shape preservation. But again, this is not bokeh.
Are the characteristics of modern Leica lenses the result of cost reduction?
Some will tell you that it is easy to make a lens with high contrast, but allegedly difficult and costly to produce a lens that gives roundness and resolution at the same time. Reference is made to cheap and contrasty Japanese lenses and to expensive, but softer and rounder, German lenses of older vintage. And it is stated (without any evidence, as is often the case) that current Leica lenses owe their improved contrast to a reduction of production cost. Or the other way around: because production costs have to be lowered, Leica designers have to accept the easy design of a high contrast lens and do not have the time or resources to do an elaborate redesign for lower contrast!
In reality, a lens that exhibits high contrast and high resolution at the same time is very sensitive to the smallest deviation from the correct design values. Production cost must be higher, as the machining of parts, the polishing and coating of lenses, the centering and positioning of lens elements and the quality checks during assembly are all much more demanding. Some current Leica lenses demand a hundred hours of work from start to finish.
Classical European lenses give a unique flavor? True or false?
Some will tell you that European lenses (Zeiss and the older Leitz lenses) produce a distinct imagery that is sometimes described as the Leica glow, or in more descriptive terms as roundness, soft skin tones, balanced color richness and a sense of 3-dimensionality.
None of these terms can be quantified and measured and belong to the vocabulary of the art critic. As said in another question, the analysis of a photographic image as art is different from an analysis of a photographic image as a function of optical performance. It is to be regretted that so many persons mix up these areas.
If these concepts cannot be measured, could they still be true descriptions of the photographic image and point to different ways of rendering reality onto the emulsion? No. Roundness, or the sense of 3-dimensionality or plasticity, is related to our world of three dimensions, with depth in space and solid objects that extend in three dimensions. Our binocular vision allows for the depth clues. Close one eye and all depth in space is gone. A picture (which is flat) will evoke this impression of reality when the representation is as close to the original as possible. Modern Leica lenses give a very high fidelity representation of reality and are able to record the finest details and shades of tone, much more so than older lenses. So their capability of recording the real world is higher, and thus the impression of reality is also better.
Give an example!
Take a picture, very classical!, of a girl with bare shoulders in a three-quarter pose. Position the light in such a way that the shoulder gets some highlights to bring in depth and the impression of roundness. A modern lens will record all the tiny details in the skin and will also render with high fidelity all the fine shades of white in the highlight area, and, when expertly exposed, also in the specular highlights. This gives a true impression of the roundness of the shoulder. An older Leica lens will wash out the finer shades of white and the outlines of the shoulder will be softer, blending more into the background. So the visual clues for a realistic appearance are less well recorded.
But will a higher contrast lens not crush the fine shades of tone and the finer details? As said before, a higher contrast lens (everything else being equal, of course) will record finer details than the lower contrast lens. Highlight and shadow detail is also rendered to a higher degree, as the higher contrast lens will give a bit more contrast to tonal areas that are very close in luminance and thus makes them just noticeable, where the lower contrast lens will record these shades below the threshold of vision.
Canadian lenses are better than Wetzlar or Solms or Portuguese lenses?
A notorious topic on every list. Canadian lenses, as a generic term, are not inherently better or worse than lenses made elsewhere. Nor is a lens manufactured in Portugal by origin any less capable than one from Wetzlar. The quality of a lens is its design in combination with the manufacture and the quality control. A sloppy design from Canada and lax quality control there will not give you the optimum quality you expect. And meticulous manufacture and a very creative design in Portugal will give you the best of Leica quality. The parameters here are creativity and art in optical design, excellent choice of materials and glass, accurate production processes and dedicated human labour. These can be found in Portugal and in Solms and in Canada. But no single geographical location has an inherent advantage that is not to be found elsewhere.
But high aperture lenses are specifically designed for high contrast and lower resolution, aren't they?
One of those floating myths again. One of the aberrations that is most difficult to correct in a high aperture lens (1:2.0, 1:1.4 or wider) is spherical aberration. The effect of this aberration is a fuzzy image over the whole image field. The wider the lens, the more spherical aberration, and thus the greater the tendency to soften the details and lower the contrast. In this particular case, the tradeoff between the plane of best contrast and the plane of best resolution is at its most marked. The designer will almost always choose to focus on the plane of best contrast, as this helps detail rendition in low contrast situations, typically the ones such a lens is used in. But the contrast of such a wide aperture lens (1:1.4 as an example), while relatively speaking quite good, is always lower than the contrast of a well-corrected 1:2.0 lens. With the balance between the plane of contrast and the plane of resolution now shifted far in the contrast direction, resolution will suffer. Again, not a design goal, but a best offer in a worst case situation.
The tradeoff here has quite a wide margin. So in this case we see many constructions, ranging from higher resolution, lower contrast designs to higher contrast, lower resolution ones. Note however that all this higher-lower is relative within a design.
As soon as designers could break out of the grip of spherical aberration, a new generation of high contrast, high resolution wide aperture lenses could emerge. Witness the Summilux-M 1:1,4/35 asph as the first of a new generation.
There is a persistent claim that the Noctilux lenses (in the 1.2 and 1.0 versions) are designed only for wide open performance and will deliver worse imagery when stopped down. This is absolute nonsense! Wide open, both lenses perform admirably, given the aperture and the respective design constraints. But they do improve when stopping down.
What are the sources of high and low contrast in a lens, in terms of optical aberrations? Is there something more going on than simple flare in the lens? Yes, there is more to it (as usual). One obvious source of low contrast is the unwanted reflection of light rays from every glass surface of a lens element. The more lens elements, the more reflections (non-image-forming illumination). Another source is mechanical reflection, due to bad internal construction.
A most important source, hardly ever mentioned, is more difficult to explain. In general we have several different planes of sharpness in any lens system: the plane where we see the highest contrast and the plane where we see the highest resolution. The designer will generally choose a compromise that gives high contrast and useful resolution. Note that I do not say 'low' resolution. In the plane of highest resolution we find a very small core of concentrated light energy, representing a point source of the object, but surrounded by a large and fuzzy circle of diminishing light energy. Here the light rays are randomly distributed and produce a kind of soft halo around the core. The resulting effect is a low contrast image.
The plane of highest contrast will give spots of a larger diameter, but with a steeper edge gradient and less fuzzy haze around the spot. The contrast is much higher, but the core is a bit larger. So the very smallest detail is not recorded. In the other case (highest resolution) this same small detail might be recorded, but it will never be visible because the contrast difference is below the detection ability of the eye. Anyway, the small shift away from maximum resolution gives a much higher useful resolution, which will be visible with great clarity. Current Leica thinking in lens design is to opt for high contrast and high resolution together, and many of their lenses clearly show the advantages of this approach.
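A numerical toy model may make this compromise tangible. Below, the 'highest resolution' plane is modeled as a tiny Gaussian core carrying half the light plus a wide faint halo, and the 'highest contrast' plane as a single, somewhat larger, clean spot; the sigma values are invented for illustration, not measured data. The MTF of a Gaussian spot has a simple closed form, so the contrast each spot delivers at a given frequency is easy to compare:

    import math

    def gaussian_mtf(nu_lpmm, sigma_mm):
        # Contrast transfer of a Gaussian blur spot at nu line pairs/mm.
        return math.exp(-2.0 * (math.pi * sigma_mm * nu_lpmm) ** 2)

    def core_plus_halo_mtf(nu_lpmm):
        # 'Highest resolution' plane: sharp core (sigma 2 micron, 50% of
        # the light) surrounded by a faint halo (sigma 20 micron, 50%).
        return 0.5 * gaussian_mtf(nu_lpmm, 0.002) + 0.5 * gaussian_mtf(nu_lpmm, 0.020)

    for nu in (5, 10, 20, 40, 100):
        clean = gaussian_mtf(nu, 0.004)  # 'highest contrast' plane: clean spot, sigma 4 micron
        print("%3d lp/mm  core+halo %.2f   clean spot %.2f" % (nu, core_plus_halo_mtf(nu), clean))

At the photographically dominant frequencies the clean spot wins by a wide margin; only around 100 lp/mm does the tiny core pull ahead, at contrast levels the eye can no longer use. That is the tradeoff described above, in two functions.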
Older lenses had a lower contrast, and thus a lower resolution, not because of particular design goals, but because the state of the art in those decades did not allow for better imagery.
Contrast and resolution are two different and even conflicting phenomena, most people will say. True or false?
If ever a myth should qualify as the number one in photographic technique, this would be a strong candidate.
According to popular opinion (which is based on the discussions in many books about the Leica), high contrast implies low resolution and low contrast is often accompanied by high resolution. Many Leica users will have read about the distinction between lenses with low contrast and high resolution and lenses with high contrast and low resolution. It is commonly stated that the older (that is, pre-1980) Leica lenses exhibit the lc/hr character, while all Japanese lenses (old and new) are of the hc/lr character. Some observers currently state that Leica lenses are as good as the competition in the resolution area, but are of the high contrast type. Others will note that modern Leica lenses are of the Japanese type, that is, of a still more pronounced hc/lr character.
In fact, high contrast is always correlated with high resolution. Resolution refers to the ability to distinguish closely spaced lines or points as individual entities. This is expressed as spatial frequency, the number of lines (black and white) that are squeezed into one millimeter. A spatial frequency of 10 lines/mm tells you that 10 separate lines of alternating black and white tone are packed into one millimeter. Each line then has a width of 0.1mm and the distance between two black lines is also 0.1mm. These 10 lines are referred to as 5 linepairs/mm.
Contrast refers to the relative luminance difference between the blackest and whitest areas in an object or negative or print or transparency. The whitest surface in nature reflects about 99% of the incident light and the blackest surface (black velvet) about 1%. That gives a contrast of nearly 1.0; theoretically we can approach a contrast of exactly 1.0. If contrast drops to 0.7, the black area reflects more and the white area reflects less light. If contrast is zero, the black and white areas are equally gray and we are not able to distinguish the two areas or lines or points. It is clear that the higher the contrast, the easier we can detect the separation between closely spaced lines or points.
Therefore high contrast and high resolution are closely related. It is not possible to have low contrast and truly high resolution: the eye would not be able to separate two closely spaced lines of almost equal brightness.
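To put numbers on this relation: the usual measure is the modulation (Michelson) contrast, and fine gratings vanish once their modulation falls below the eye's threshold (a few percent is the commonly quoted order of magnitude). A minimal sketch with illustrative luminance values:

    def modulation(l_max, l_min):
        # Michelson contrast: (max - min) / (max + min), between 0 and 1.
        return (l_max - l_min) / (l_max + l_min)

    print(round(modulation(0.99, 0.01), 2))  # near-ideal black/white target: 0.98
    print(round(modulation(0.55, 0.45), 2))  # same grating imaged with poor contrast: 0.10
    print(round(modulation(0.51, 0.49), 2))  # 0.02: around the visibility threshold;
                                             # the lines may be 'resolved' on film,
                                             # yet the eye sees only an even gray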
High resolution: how important is this as a performance parameter?
The interesting topic of the alleged differences between human perception and physical test parameters has been extensively discussed by many engineers since the introduction of modern high contrast lenses in the mid-sixties.
These new lenses were the result of a fresh look at those physical parameters within lens design that really could have direct relevance for the human perception of optical qualities.
It has been noted that contrast around 10 lp/mm has the most impact on the perception of sharp subject outlines and adds to the general impression of sharpness in a picture. It was also noted that a very high resolution can in fact detract from the perception of image clarity and the clear rendition of fine detail, including subtle gradations in small object areas. Again, the fine delineation of subject matter in strong highlights and deep shadows adds to the impact of a picture. But these aspects cannot be captured by notions such as resolution, bokeh or whatever.
High resolution is an ambiguous reference. In the past some lenses have been analyzed and recorded above 300 lines/mm (150 linepairs/mm). This figure for some unknown reason has been fixed in the minds of many Leica users as a benchmark figure.
In reality it has been established that 40 lp/mm (80 lines/mm) exceeds the image recording capacity of the film-lens-camera system. Of greater importance is the contrast with which these lines are recorded. Many lenses are capable of recording 150 lp/mm, but the contrast is so low that we do not see the white-black grating with any clarity, just a mushy gray noise.
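The old photographic rule of thumb, that the reciprocals of the component resolutions add, shows why system resolution lands so far below the lens-only figures. It is an empirical approximation, not exact theory, but the tendency is right; the input figures here are illustrative:

    def system_resolution_lpmm(lens_lpmm, film_lpmm):
        # Rule of thumb: 1/R_system = 1/R_lens + 1/R_film (empirical).
        return 1.0 / (1.0 / lens_lpmm + 1.0 / film_lpmm)

    print(round(system_resolution_lpmm(150, 100)))  # superb lens on fine film -> 60
    print(round(system_resolution_lpmm(80, 60)))    # good lens on good film  -> 34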
We should forget about resolution figures when not related to information about contrast.
Are Leica lenses expensive?
The most obvious explanation for the high cost of Leica lenses is quite simple: volume, and not necessarily better QC.
The cost of designing a lens (given the ubiquitous computer programs) and putting it through an elaborate testing program is the same for Leica as for the other main marques. The Leica design process adds an additional step, namely the matching of the assembly and production tolerances to the requirements of the optical designer.
If a certain optical parameter cannot be guaranteed by the subsequent assembly or production line, the design has to be changed. This matching and fine-tuning costs money. When a lens has a fixed design and production can start (QC in place etc.), then we have the famous economies of scale.
Small volumes and small production runs are invariably more expensive (and not in themselves better) than larger ones. So all the costs of a lens (that is design, production, documentation, PR, overhead etc.) have to be spread over a small volume.
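The arithmetic is elementary but worth seeing once; the figures below are invented purely for illustration:

    def unit_cost(fixed_costs, variable_cost_per_lens, volume):
        # Fixed costs (design, tooling, documentation, PR, overhead)
        # are spread over the whole production run.
        return variable_cost_per_lens + fixed_costs / volume

    # Hypothetical: 2 million in fixed costs, 300 per lens in labour/materials
    for volume in (2000, 20000, 200000):
        print(volume, "lenses ->", round(unit_cost(2e6, 300.0, volume)), "per lens")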
It is quite clear that the cost of glass is not the factor that fully determines the cost of a lens. That is again one of many myths around Leica lenses.
Cheap glass at the moment is $80 per unit and expensive glass is $800 per unit, but many lenses can be made out of one unit. So even extremely expensive glass would add no more than $400 to the cost of a lens.
Of course expensive machine tooling must be amortized over a smaller production volume, many QC checks are manual and add to the cost, etc.
So my view of the high cost of Leica lenses is simply this. A low volume necessitates a high price (see also the more exotic Nikon or Canon or Zeiss lenses). If the cost price is by necessity high because of small production runs, and therefore the selling price must be high as well, then superior optical quality and a stringent and elaborate QC program may be the only way of economic survival.
Another factor comes into play. Older Leica lens generations lasted for at least a decade, some for more than two decades. Costs could be spread over a longer production run. Now a Leica lens generation lasts less than ten years.
Leica needs to invest in ever more elaborate production machinery and also needs to educate the workforce to a higher degree than elsewhere.
So while QC is certainly part of the answer for the high cost, there is more to know about the production process. Leica lenses are designed and engineered to very small tolerances and the selection and subsequent processing of the glass (grinding, polishing and coating and centering) are as important as the elaborate QC stages to ensure that the tolerances are as required.
Sales of bodies versus lenses
There is a persistent story that Leica mostly sells lenses and that the bodies are just vehicles to put the lenses on. A 1 to 3 relation is often quoted (three lenses for every body).
The economic results tell another story. In the three years from 1993 to 1996, M bodies sold 9322, 11208 and 10171 units; lenses respectively 18009, 19170 and 21186. That is roughly two lenses per body, not three. The average body price was 3400 DM and the average lens price 1600 DM. Remember that the body figures include the Minilux and the Digilux. Roughly 2,700,000 bodies have now been produced and close to 4,000,000 lenses, a lifetime ratio of about 1.5 lenses per body.
Still many people will insist on a strong relation between the artist's interpretation and the image quality of a lens and/or film?
Any object in front of the lens (or the eye) is just a random pattern of patches of varying brightness (and color), shapes and areas. That is what reaches the retina or the emulsion. The prime directive of the lens designer is to ensure that this random pattern is recorded as faithfully as possible. No more, no less.
BTW: this pattern is also the basis for exposure metering and much of the discussion on reflective/incident metering would benefit from this perspective.
For the eye this pattern is the starting point. The pattern recognition mechanism of the mind will interpret the random patches as a cat, a Baywatch girl or the interior of the Louvre. The next step is another cognitive one: we attach emotions to what we interpret. We dislike the girl, we like the cat. This is part of our cultural training and our sense of symbolism.
This cultural interpretation is the subject of a vast literature of scholarly works. It has basically nothing to do with the designer's prime directive.
In the course of history, lens designers have tried to fulfill this directive more or less successfully and in different ways. But at bottom no designer works from a different directive or even a different paradigm.
It is a fact that the degree of faithfulness can be objectively ascertained. This, again, has nothing to do with cultural influence or personal opinion. You can like or dislike the way this faithful recording has been accomplished. Witness the discussion between admirers of the Sonnar way and the Summar way.
This discussion, then, is limited to the measurable part: how closely the prime directive has been fulfilled. Some choices are necessary. But in essence no emotions are involved. The perfect lens not having been invented, there is a certain bandwidth of choices and balances (Leica versus Canon versus Zeiss).
This approach to lens design is valuable and objective. It has not yet any relation to the way a picture can be interpreted culturally.
Now the prime directive of a photographer is to create pictures with meaning and purpose within the cultural context in which the pictures are likely to be viewed. The second part of the directive is to develop a visual language and vocabulary in order to express oneself more eloquently. Here we are in the realm of language and symbolism. Ever heard Cartier-Bresson say anything about the quality of Leica lenses?
Of course a photographer, in following the directive, can opt for a lens system with certain optical characteristics, but still we can clearly distinguish between the optical and expressive part.
Now it is a matter of debate whether lenses with certain optical characteristics may add to or detract from the clarity of the visual statement a photographer is trying to make. This, I presume, is what Alf is referring to when he speaks of certain lenses as being better suited to his way of photography.
I do not feel qualified to add anything substantial to this kind of reasoning. I do however see the need for this discussion.
I feel more at home discussing the optical designer's prime directive without the additional topics of visual language or interpretations of these statements.
The discussion can be complicated, but it will be purely random unless we learn to separate these several, equally interesting topics.
If the physical parameters support one side, and the emotional view supports another side, isn't the test missing something essential which is required for human perception?
Human perception is a very complicated topic, involving psychology, neurology and the brain sciences. A number of facts about the way the eye-brain tandem works when processing visual stimuli have been recorded. As an example, when the eye detects a difference in luminance (as when crossing a white-black border), it scans this border many times in rapid succession; the nerves registering the white line are stimulated more and more, thus enhancing the impression of a high contrast edge. In black and white photography the well-known adjacency (neighbor) effect is based on an analogous phenomenon.

We can explain many visual phenomena by referring to science. Some areas, the cultural and esthetic ones in particular, are not so easy to study in scientific or experimental terms. The question refers to the art of seeing and interpreting photographs. The pictures of Henri Cartier-Bresson are masterpieces of vision and invoke emotions far beyond the technicalities of image quality. The world of seeing and appreciating the artistic and human dimension in a photograph is a totally different one from the world of analyzing image quality as an optical parameter. Both worlds are valid, but they should not be confused.
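This border enhancement can be mimicked with a toy center-surround model: push each point away from its local average, loosely analogous to what the eye's repeated scanning achieves. The kernel, gain and signal are invented for the demonstration, not a model of any real retina:

    def enhance_edges(signal, radius=2, gain=1.5):
        # Center-surround sketch: each point is pushed away from its local
        # mean, which exaggerates luminance steps (Mach-band-like overshoot).
        out = []
        for i in range(len(signal)):
            lo, hi = max(0, i - radius), min(len(signal), i + radius + 1)
            local_mean = sum(signal[lo:hi]) / (hi - lo)
            out.append(signal[i] + gain * (signal[i] - local_mean))
        return out

    step = [0.2] * 6 + [0.8] * 6   # a plain luminance step
    print([round(v, 2) for v in enhance_edges(step)])
    # The output dips below 0.2 just before the edge and overshoots 0.8
    # just after it: the edge is perceived with more contrast than it has.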
Many people refer to the HCB and Eisenstaedt pictures and note that they used Leica lenses of the older generations. This is a truism, as they could not use the newer ones: these were not available. Claiming the optical superiority of the older generations by referring to their use by HCB or AE is a fallacy. In this case you would be confusing artistry with optical engineering. While optical engineering is more of an art than most people know, the art of seeing and the language of images is a cultural domain. The content of a picture is of course related to the technique of photography. The representation of the world in front of the camera is mostly mechanical, as the optical and chemical processes of lens and film work independently of the artist's intentions. Neither HCB nor AE was interested in the technical part of photography, and they certainly were not trying to find the limits of the optical performance of the lenses they used.
Optical testing versus appreciation of lens quality
This is one of the most hotly debated topics. As the glass in front of the camera determines to a large degree the quality of the image, the performance of the lens is most important.
As noted above, there is a big difference between the perception of a physical property and the measurement of that property. The relation between the two can be very complex. The perception of 'sharpness' and the measurement of 'sharpness' do not necessarily correlate, as the concepts may differ. In this particular example we say that sharpness is a subjective impression, and that its physical correlate is edge contrast or acutance.
We should be very careful here to be clear and to define any concept we use. Otherwise a confusion of tongues will result and in fact does exist.
Optical testing when done properly gives the most reliable results for the assessment of a lens.
There is a story that optical testing is not suited for real photography because we take pictures of three-dimensional objects while optical tests are based on flat (two-dimensional) test charts. This is a myth and totally unfounded. Any optical system produces one and only one plane of focus, which should coincide with the film plane. There we have the best optical performance. Anything before and behind this plane of focus becomes progressively out of focus and blurred. The out-of-focus areas are nowadays studied for their perceived 'bokeh', but that is not important here. So when photographing an object extended in space, only one vertical slice of that object lies in the plane of focus. Presumably it is that part of the object we have focused upon with our M rangefinder or R SLR screen. Every other part of the object is more or less blurred.
Indeed. When testing a lens on an optical bench, we look at the plane of focus to assess the image quality. But it is very easy to defocus slightly before and behind the plane of focus. The tester can then simulate the out-of-focus areas quite well by looking at the image while defocusing in small increments. Any optical design program can do this too, and this method is an important instrument for tolerance analysis. So a serious test method will do a through-focus assessment to study the out-of-focus behavior.
A well conducted test, based on modern optical evaluation methods and image quality criteria will produce objective results about the optical performance.
Optical performance is not to be confused with the perception of an image. A lens that is optically superior to another one might give imagery that is perceived as less pleasing or less good than that of the other one.
When talking about image perception we walk into a totally different realm of lens evaluation. Here personal opinions abound and every opinion is as good as any other. Of course someone will claim a higher status and a more valuable perception based on his experience, his prolonged use of a certain lens or whatever argument suits his position.
Be on red alert here. While some persons do have valuable insights to share, many talk without any substance.
Some persons will tell you that they have conducted real life tests, which are of course claimed to be much preferable to any laboratory test. As the variables are never strictly controlled, such a 'test' can tell you nothing meaningful.
Measurement methods and subjective evaluations
Personal experience is by definition subjective and relative. It spans the whole range from incidental impressions to hard-won experience based on thousands of hours of photographic practice with Leica equipment. Obviously the practical experience of long-time Leica users, whose photographs have to meet the expectations of demanding customers, is very interesting and valuable.
But photographers are by nature technically conservative (they dislike experiments that might jeopardize their assignments) and they try to find their personal style as artists or craftsmen.
This experience however is expressed in a vocabulary that is quite imprecise and difficult to evaluate.
Every person will perceive the tonal quality of a print or the contrast of a lens in a different (personal) way. If one speaks of a 'rich tonality' or a 'better highlight separation' or a lens with 'high sharpness', it is the perception of this person, not the physical property, that is being discussed.
If some other person notes less sharpness or a 'smooth tonality', we have no objective evidence to compare these statements, nor to find out which one is correct.
If we could measure the tonality we would proceed as follows: we measure the highest and lowest luminance in a subject and find values of 0.5 c/ft^2 for the shadows and 500 c/ft^2 for the highlights. This is a contrast of 1:1000, or log 3. If we have a print of this subject and measure the reflectance, we find values like logD 2.16 for the shadows and logD 0.05 for the highlights.
This is a density range of a little above logD 2.0. So we note that the subject tonal range has been compressed. If another print has a range from 2.30 down to 0.06, we clearly note less compression, and if we would measure all tones in small groups (darkest shadows, deep shadows, dark gray areas etc.) we could make comparable statements about the whole tonality range.
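The arithmetic of that procedure, in a small sketch using the figures above:

    import math

    def subject_log_range(high_luminance, low_luminance):
        # Subject contrast as log10 of the luminance ratio.
        return math.log10(high_luminance / low_luminance)

    print(round(subject_log_range(500.0, 0.5), 1))   # 1000:1 subject -> log 3.0

    # Reflection densities measured on two prints of the same subject:
    for d_shadow, d_highlight in ((2.16, 0.05), (2.30, 0.06)):
        print("print density range:", round(d_shadow - d_highlight, 2))
    # 2.11 versus 2.24: the second print compresses the subject's
    # log 3.0 range less than the first one does.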
Of course the scientific and subjective evaluations should complement each other. But they should not be mixed up. Perceptions are very important for the assessment of the final result (the print or transparency). But a perception is no substitute for a measurement. Nor can it be equated with a measurement.
Elmar 3.5/50 red scale
There is some discussion about whether the so-called red scale Elmar 3.5/50mm is an improved version compared to previous versions.
All books note that around serial number 905000 a recomputed and improved Elmar was introduced. The alleged recomputation is based on a comparison of the curvature of the front element, which is slightly less strongly curved than in the prewar type. It has also been found that some other versions had still different curvatures (not as flat as the red scale, but flatter than the prewar or postwar black scale versions).
The original 1926 Elmar had glass types (front to back: SK7, F5, BK7, SK15).
The red scale is supposed to have SK14, F8, LF5 and BaSF10 (data from an article in LHS England #48).
First of all: small changes in curvature, certainly of the front element of the Elmar, are often of insignificant value for the design.
This element does not carry the burden of aberration correction; the second lens element does. And lens bending, as it is called, is often done to accommodate other changes. If bending is needed for optical correction, it is for coma and spherical aberration, which are difficult to optimize in a 4-element triplet.
Secondly: a change in glass types is not always done to improve the performance. Sorry to destroy another myth.
Sometimes a glass type becomes extinct and a new glass has to be found, requiring some recomputation. But if the lens is already very good (or cannot be improved), a recomputation to accommodate the new glass does not imply better performance. Sometimes a glass type is changed for easier production or coating purposes, or for better or smoother grinding. None of these in itself improves the quality measurably. The 4-element triplet is quite resistant to improvements when changing glass types. What you need is a high-index crown glass with a matching negative flint glass.
SK7 has 1.61, BK7 has 1.52, and SK15 has 1.62 (rounded figures).
SK14 has 1.60, BaSF10 has 1.65, but is not a crown but a flint.
Kingslake, in Lens Design Fundamentals, uses several sets of glass types to correct a 4-element triplet, but notes that it does not make big changes unless all refractive indices are above 1.60. This is already the case with the 1926 version (except for the BK7).
So we cannot conclude from these data alone that there is an improvement in the design. Schott indicates in the glass catalogue two classes of glass (preferred and not preferred). The newer glasses are mostly of the preferred type, the previous ones of the not-preferred type.
I do not believe that the change gave much improved performance. And I am not ruling out a small drop in performance in the outer zones for the type with the preferred glass.
Now the serial number. I have these numbers in my database:
Elmar 3.5/50 from 904.001 to 910.000, allocated in 1951 (note: allocated, NOT necessarily produced, in 1951).
The next batch is 941.001 to 950.000, allocated in 1952.
It is highly unlikely that Leitz would change a design in the middle of a production run.
I can declare almost with certainty that the indicated serial number of 905000 for a change in design is wrong, or at least not based on any plausible facts or authoritative written sources.
Studying the available documents, there is indeed a change in glass types (which does not necessarily mean a change to an improved design), and I would locate that change at serial number 955001. (Might it be that the original author of the #905001 claim misread the German handwriting, which is indeed very hard to decipher?) It is a guess, but the change could have occurred as a preparation for the bayonet version from #1140xxx.
Now the 4-element design has an interesting additional story. The 4-element lens is just capable of correcting the 7 basic aberrations. Read my book for details.
To deliver the expected performance, the production tolerances may not vary at all, as the design is extremely sensitive to assembly and manufacturing errors.
A 5- or 6-element design can give the same or even somewhat better performance with a much wider latitude in tolerances. Is this the explanation for the fact that the Japanese in the 1950s, when they introduced their cameras, used 5- or 6-element designs that were not much better than the 4-element German ones, but much easier to produce in large quantities?
Why it is hard to get high image quality
Today I shot 13 B&W films in less than 2 hours. With a beautiful model, that is no problem at all. Even without a motordrive I managed a film in half a minute. I have used APX 25, Panatomic-X 64, Maco UP25 (replacement of the old and very famous Adox KB 14), Maco 64 (replacement of Adox KB 17), Maco Ortho 25 (replacement of Agfaortho 25), APX 100 and Maco 100 (replacement of Adox KB 21).
In addition to my previous tests of Agfa Copex, TechPan, D100, TM100 and Fuji Acros 100, I am slowly covering and testing all of the slower speed B&W films.
These films are needed when the Leica quality needs to be demonstrated. Of course the Leica characteristics can be spotted when using ISO 400 and above, but then you need a very trained eye to discern the significant differences.
As I am also using a slew of developers, I can comment on the final quality of the films used. My initial impression is that the developer is much less important than correct exposure, correct focusing and the reduction of vibration. Still, a TechPan print does give you a visual edge when compared with D100. But I still claim that I can get to medium format quality with a 35mm negative when it is put behind a Leica lens.
I also "discovered" that a 30x40cm print is the minimum needed to show leica quality. But at this enlargement any personal failure of technique is remorselessly exposed.
When carefully studying my bigger prints, I noticed that it is very difficult to get a resolution of 30 to 40 lp/mm onto the negative/print. It is indeed very difficult to jump from 20 lp/mm to the 80 lp/mm that is technically possible.
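A bit of enlargement arithmetic shows why these figures are already demanding. A 30x40cm print from a 24x36mm negative is roughly an 11x linear enlargement, and every line pair wanted in the print must be carried 11 times finer on the film. The print-sharpness thresholds below are the commonly quoted order of magnitude, not my measurements:

    def negative_lpmm_needed(print_lpmm, print_long_side_cm, neg_long_side_mm=36.0):
        # Linear enlargement = print size / negative size; the negative
        # must resolve the print frequency multiplied by that factor.
        magnification = (print_long_side_cm * 10.0) / neg_long_side_mm
        return print_lpmm * magnification

    # 40cm long side from a full 36mm negative: about 11x
    for p in (3, 5, 8):  # print sharpness from 'acceptable' to 'critical'
        print(p, "lp/mm in the print needs", round(negative_lpmm_needed(p, 40.0)), "lp/mm on film")

So a critically sharp 30x40cm print already asks for something near the 80 lp/mm ceiling mentioned above, and camera shake, focus error and film all eat into that budget first.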
Anyone who assumes that 100 lp/mm (let alone 200 lp/mm) is a piece of cake and easily possible with a Summicron DR should think twice before making such a statement. When such claims are heard, they occasionally come from people who do not take pictures themselves. To be frank: it is simply impossible!! A lens that can capture 100 lp/mm on film is a highly unusual specimen. Most people are not aware of the effort and quality needed to produce such a lens. Of course there are lenses that are fully aberration-free. Look at the lenses made for wafer chip reproduction. Zeiss makes them and they are fully, 100% aberration-free. BUT: such a lens weighs a ton, has 20 and more lens elements, and EVERY element is individually adjustable, laterally and longitudinally, to compensate for the inevitable manufacturing errors.

If we reflect on this, we may assume that a 17-element zoom lens can have better image quality than a 5-element fixed focal length lens. Why? The 17-element lens can be corrected to a higher degree and can spread the sensitivity of the aberration correction over 17 elements (compared to the 5-element lens); therefore production tolerances are less critical, and cheaper production is possible. In a 5-element lens all of the aberrations have to be taken care of by a mere 5 elements, where the zoom can do it with 17. That is the reason why many Cosina/Voigtlander lenses have so many elements: production tolerancing is eased, and quality can be held at an acceptable level, even if production has a wider bandwidth of errors.
To be fair: Voigtlander lenses are excellent value for money. But dismantle one and you will see the cost cutting: plastics, wide tolerancing etc. As they say: water boils at 100 degrees in Germany, and in Japan too!
I am now focused on low speed B&W film. I am testing a new emulsion that is closely related to the famous Adox KB 14. This film has a very thin emulsion layer and a high silver content. The pictures are very convincing, but of course not up to TechPan quality. Still, it is a challenge to match certain Leica lenses to this film and see what happens to picture quality. My Japanese friends may be pleased to note that film characteristics and lens characteristics (beyond the bokeh issue) can be matched to deliver a unique personality to the print, reminiscent of the drawings of the Japanese masters of previous centuries.
Being in close contact with the emulsion factory in the former Yugoslavia (Efke) that produces the KB 14, I can give additional information about their way of thinking and producing emulsions. More of that later. Some of you have been inquiring how I can take pictures with an ISO 25 film. With the sunny-16 rule, an ISO 25 film will be exposed at 1/30 at f/16 in a sunny environment. That is equal to 1/500 at f/4 in the sun, or 1/250 at f/2 in the darker shades. Quite good for handheld shooting. Even 1/60 at f/1.4 is OK, and then you are in really dark surroundings.
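For those who want to verify such exposure equivalences, a minimal sketch of the two rules involved, sunny-16 and the square law between f-number and shutter time; the printed values land on the nearest standard settings quoted above:

    def sunny16_shutter_s(iso):
        # Sunny-16 rule: in bright sun at f/16, shutter time is about 1/ISO.
        return 1.0 / iso

    def equivalent_shutter_s(shutter_s, f_number, new_f_number):
        # Same exposure: time scales with the square of the f-number ratio.
        return shutter_s * (new_f_number / f_number) ** 2

    base = sunny16_shutter_s(25)                      # 1/25 s -> set 1/30
    print("f/16: 1/%.0f s" % (1.0 / base))
    print("f/4 : 1/%.0f s" % (1.0 / equivalent_shutter_s(base, 16.0, 4.0)))  # ~1/400 -> set 1/500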
Essential differences between R and M viewing
Quite often you read about people remarking that some Leica lens will obscure part of the rangefinder window, the Noctilux 50, Elmarit 24mm or Tri-Elmar being cited as examples. This is true, no discussion about it. But the interesting question is this: are we using the M-camera as it is intended? Let us start with the viewfinder of the typical SLR. We see through the lens and view the object at a size that correctly relates to the focal length, and by nature we select the part of the object that is most interesting to us. With the SLR, then, we select, frame and compose through and with the viewfinder. In fact the SLR method is not different from using a view camera, where we throw a black cloth over our head and study the exact image on the ground glass. This is a static process: we detach from the world at large and selectively focus on one element that interests us.
With the Leica M camera we are in a completely different way of looking at the world. Generally when using an M camera we are using a mental technique that some writers have called the stream of consciousness approach. We are in the centre of the scene we are attracted to and our eyes/senses absorb everything. Then in a split second we are triggered emotionally by an event or a juxtaposition of objects (the decisive moment), and we raise our M to the eye and frame with the finder while still being fully immersed. We have already mentally captured the whole scene; the framing is an act of selection, not of composition or of studying the subject. The fact that some parts of the scene are obscured by the lens is then of no importance, as we already know what the full picture will be. We can mentally fill in the blocked-out parts, because we are so immersed in the scene that we can see the whole scene. As soon as we use the framelines in the M as a precise selection mechanism for the scene to be framed and composed, we are using the M as a version of the SLR way of looking at a scene.
The M frame lines are a tool to slice spatially through the stream-of-consciousness dimension of time. They are not a substitute for SLR-type viewing and selecting of topics. We need to abandon the SLR style of viewing when using the M. Then the technical shortcomings of the M (frame lines not exact, obscured by lenses etc.) can be disregarded. The M style of photography is not a better type of SLR photography (like a better mousetrap), but a distinct type of photography, more generic than comparable to the SLR. If you grasp this meaning, M-photography will become a new experience.
On the true difference between the R and M systems.
To start with a short history: in the '30s the system camera was born, with a large array of interchangeable lenses and all kinds of medical, scientific and copying accessories. The Leica and Contax systems expanded from the rangefinder body and had trouble providing accurate viewing and framing. They solved it with the Visoflex. The Exacta Varex started with a mirror box and reflex viewing, but had a problem with fast and dynamic photography. Basically we see here already the tension between the two base systems, one tuned for a dynamic style of photography, the other tuned for static photography. In the fifties the Japanese cameras, like the Pentax and specifically the Nikon F, tried to bridge both concepts and provided tools that were ergonomically well designed and had improved reflex systems with clear viewing, and so for a long period they became the universal photographic tool. Based on 35mm film, they added the capabilities of motor drives and the quick shooting of a full 36 exposures in a few seconds. This was their approach to dynamic photography. The big advantage of the SLR species was the ease of lens change combined with the correct view in the finder. And it could include zoom lenses too. The Hasselblad on the other hand became the preferred tool of the studio photographer and of the art and nature photographer who needed careful and dedicated composing and demanded excellent print quality.
But a system of lenses was a burden and the limitations of rollfilm (12 shots, dubious film flatness) restricted its usefulness. The Zeiss Contarex tried to combine the quality aspirations of medium format with the ease of use and expandability of the 35mm SLR. Zeiss engineers assumed that 35mm photography had matured to a stage where careful composition, accurate framing to the edge of the image, exploiting as much of the small area of the 35mm negative as possible, and excellent optical quality would be seen as advancing the art of 35mm photography.
The market was not ready for this philosophy; Zeiss stopped production and with the Contax RTS joined the mainstream of thinking. The next stage was the incorporation of the AF module, and now we can even take pictures without looking through the finder at all.
But the basic tension between a dynamic, mobile system and a static, tripod-bound system did not disappear. The reflex system is indeed at its best when doing macro photography or studio work, or when using lenses with a very long focal length or unusual perspectives, like the 15mm or the PC control lenses. In short: everywhere the accurate match between what you see and what will be captured on film is needed.
The current R-system: if we study the features of the R8, we see that the incorporation of flash automation, the several exposure systems, the range of high quality lenses, and the provision of extenders and Elpro lenses all indicate that the R is designed for the static type of photography, where the small format can be used to its advantage: relatively compact bodies and lenses, optical quality to diffraction limits, and zoom lenses without a loss in image quality. The studio flash provision in the R8 is often overlooked, but to me this item is very important as it indicates the direction of photography for which the R8 designers created the system (including the lenses). The superb 2/180 is not usable without a strong tripod, nor is the zoom 70-180 or the 4/280, let alone the 2.8/400. Even the 2.8/100 macro will be used on a tripod to get the best imagery. And indeed, when you see the R8 in this perspective, the true value of the viewing system can be appreciated. The viewing screen isolates the photographer from his subject and the subject is seen frozen in the small confined space of the groundglass, without relation to the surroundings. You see the final print or slide as you want it or as it will be captured on film. And if you use this screen as intended, you compose, arrange and create the picture.
You need time and dedication to do this, but that follows from the choice of subject and your method of interpretation. The R-system then is Leica's answer to the needs of the medium format photographer who wants to use the advantages of the 35mm format without compromising the ultimate print quality.
The current M-system: the rangefinder could never compete with the reflex screen in framing accuracy and in the possibility of careful composing, of seeing exactly what you will capture on film, whatever the lens or accessory. But its small body and compact lenses (compromised in optical quality compared to the best R-lenses because of volume considerations) allow a different style of photography: to be involved in the scene and have your sensory system wide open to freeze a moment in time. You sense this from your emotion or relation to the subject or scene. You cannot compose carefully or create the picture: you have to wait and hunt and then act in a swift movement of fleeting aiming and shooting. With a rangefinder you aim and shoot in an instant. If you take the M and R at face value you are missing the basic difference.
Both are systems, both have a range of lenses that overlap in focal lengths, both have the same type of facilities (TTL, 35mm film etc). Both can be used for a wide range of photographic tasks: reportage, landscape, portraits, you name it. So if you look at the systems from this viewpoint, you become confused as both systems seem to compete in the same type of photography. If you follow my approach to distinguish between static and dynamic photography and composition versus involvement, you have two dimensions along which you can make a satisfactory selection.
Optical topics (1)
90% of the pictures I took have been made with the Tri-Elmar and Kodachrome 64 (amateur version). I am still convinced that the Tri-Elmar is a most underrated lens. Its optical qualities are outstanding and its ease of use is exemplary.
Most people are put off by the allegedly low maximum aperture, but here the stubborn Leica myth interferes that you have to take pictures at apertures of at least f/2 for the result to be considered a true Leica photo. Leica myths still abound, as the latest Viewfinder issue demonstrates. And it is true that talking esoterically about Leica is more fun than extracting hard-won facts from a universe that is not well understood. Optics is a most difficult area, be it discussing lens facts or myths, interpreting MTF graphs, or even doing one's own tests. One of the books I have been reading was written by a professor of theoretical physics and had as its theme a layman's introduction to the nature of light. Supposedly a simple book, it started on page 1 with an avalanche of formulae of double and treble integrals and differentiations, and continued on such a high level that I needed days of reflection to understand what is going on. Lucky is the farmer who just sits on his tractor and wonders where to get the water for his crop. After 50 books on optics I am amazed that you can still learn new facts every time, and that it is indeed very rash to assume you know your stuff after digesting a few articles.
This professor tells the reader that all image formation is a diffraction pattern and can be condensed into one arcane equation. He insists that even in a lens that is limited by its geometrical aberrations, the diffraction effects distort the image severely. This is not a new fact. Abbe of Zeiss fame knew it in the late 19th century, when he noted that physically small lenses deform the higher spatial frequencies more than large lenses do. The basics? The image of an object is this diffraction pattern, and by Fourier analysis the original object can be reconstructed from the image/diffraction pattern. BUT: a small lens captures only a part of the full pattern, and so the resulting image is only a fraction of the original. The reason that R-lenses can be optically better than the M-versions is now clear. An R-lens has the ability to capture a bigger portion of the diffraction pattern and so gives a more faithful representation of the object.
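One can make Abbe's point tangible with a toy computation: treat the aperture as a filter that truncates the object's spatial frequency spectrum. A minimal numpy sketch (the cutoff values are invented, chosen only to show the trend):

    import numpy as np

    # An "object": a coarse bar and a fine 8-pixel detail.
    obj = np.zeros(256)
    obj[64:128] = 1.0
    obj[160:168] = 1.0

    def image_through_aperture(o, cutoff):
        spectrum = np.fft.rfft(o)
        spectrum[cutoff:] = 0.0      # a small lens truncates the pattern
        return np.fft.irfft(spectrum, n=o.size)

    large = image_through_aperture(obj, cutoff=100)
    small = image_through_aperture(obj, cutoff=10)

    # Contrast remaining in the fine detail: the "large lens" keeps it,
    # the "small lens" smears it away.
    print(round(float(np.ptp(large[160:168])), 2))
    print(round(float(np.ptp(small[160:168])), 2))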
Why are multi-element optics inherently better? The basic idea of a lens is the bending of the light rays coming from a distant object. An object at 10 meters distance sends rays that are close to parallel to the optical axis, and these need to be bent quite sharply to focus on a plane at a mere 5 cm from the lens. It is well known that rays that pass straight through a lens are aberration free. So if we can arrange a system of lens elements such that every element produces only a small deflection of the ray, we have small aberrations. If the total deflection has to be done by 5 elements, every step is a substantial deflection, and introduces aberrations. A lens for microchip production has 30 elements and every step is very small, resulting in an aberration-free system. But the alignment of 30 elements and the tolerances are extremely costly. So a 5 element system may be the better option. Again I am very impressed by the Leica designers, who get such performance from M lenses that have few elements and are physically small.
All theory is against such a design. I do think that there is generally a lack of knowledge of what it takes to design and manufacture a high-class lens.
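A back-of-the-envelope sketch of this "many small bendings" argument (the numbers and the cubic scaling are illustrative assumptions, not design data; third-order aberration contributions grow roughly with the cube of the ray angle at a surface):

    from math import atan, degrees

    f = 50.0    # focal length in mm
    h = 12.5    # marginal ray height for f/2 (25mm aperture diameter)

    total_bend = degrees(atan(h / f))   # total convergence angle, ~14 degrees

    for elements in (5, 17):
        per_element = total_bend / elements
        # assumed cubic growth of aberration with the bending angle per surface
        relative_aberration = elements * per_element**3
        print(elements, round(per_element, 2), round(relative_aberration, 1))

With the bending shared over 17 elements instead of 5, the summed aberration contribution in this crude model drops by an order of magnitude, which is the intuition behind the tolerance argument above.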
On production cost.
Studies have shown that the reduction of the overall tolerances from, let us say, 0.03mm to 0.01mm trebles the cost of producing the lens, even if it may be difficult to see a much improved image. Here we see the Leica problem: users expect the best quality, which is to say that production tolerances must be very small, which in turn boosts the costs. BUT many a Leica user will never see, or be able to reach the level of expertise to appreciate, the image quality that is possible. A Cosina/Voigtlander lens of 1000 dollars may bring the same results as a 3000 dollar Leica lens that is manufactured to three times tighter tolerance standards: without command of the imaging chain or a suitable subject where the differences do show, this theoretical advantage may be lost.
My whole point is that to appreciate and exploit Leica imagery we should move beyond the obvious or traditional topics (is the Summicron DR the best 50mm lens?) and try to master the whole imaging chain and choose its several elements carefully. For me it is of much more importance to find the optimum film (Provia 100F or Delta 100) or exposure, and to study subjects that do justice to the Leica lens philosophy.
But as long as the Leica scene is dominated by spin doctors who draw smoke screens around the really important topics, there is hardly a chance that we will ever move forward in the evolution of the photographic image. No wonder that digital image capture attracts such a large audience. It is an effortless undertaking, gives fast and pleasurable results, and is image oriented where the classical Leica discussion is myth oriented. No wonder too that analogue photography is on the defensive. If this type of photography is killed off in a few years, we know whom to blame. Chasseurs d'images notes that in June 2001 60% of the sales of photographic apparatus were digital. Here we see another instance of careless interpretation. What is being compared statistically: value or number of units? What is the comparison base: SLR sales, or all sales of photography including single use cameras? It is easy to get a false impression here. But numbers are rarely used to illuminate facts; more often they are used to manipulate opinion.
The question then is what level of quality we can appreciate or see in normal circumstances. While an MTF graph may give valuable information to the optical designer, in the hands of an untrained user the interpretation of the curves may be hopelessly inadequate and give rise to conclusions that are wholly off track. And the relation between an MTF graph and the resulting picture may not be obvious or clearly demonstrable. There is a fragile relationship between visual images and mathematical computations such as an MTF graph. Even when done on a bench there are many variables to look at. Why are the Photodo results different from the Leica results? Even a different width of the slit used to measure the edge gradient has significant influence on the data that are presented. It is remarkable to see how a professor of physics with 40 years of experience in the lab draws very cautious conclusions, compared to the easy-going interpretations of the non-expert.
Read the latest Photo Reponses, where you can see some test figures that try to evaluate Technical Pan against APX25 and the Copex film (AKA Gigabit). Only one picture is shown, no data are given, and the fundamental question whether this picture (read: lab setting) represents a valid test for this comparison is not discussed at all. But it gives the impression of a comparison and thus will satisfy most readers, while all the important facts stay under cover.
Optics (2), image clarity and the digital scene
Jay wrote: I have always read that the size of the front element is a direct consequence of the maximum aperture. Then from the above quotation, would it not follow logically that the fastest lenses in a given focal length should be optically superior to the slower ones (in the same generation of course)?
Well, in theory this is partly true and it follows from the diameter of the Airy disk, which is in fact related to the aperture. The smallest diameter can be obtained with the largest aperture. But any large aperture lens is so loaded with geometrical aberrations that the size of the Airy disk is often less than a tenth of the size of the unsharpness blur that we get when using a wide aperture lens. To be precise: for an f/1 lens the calculated Airy disk (smallest diameter of a point) is about 1 micron. But for the Noctilux f/1 50mm lens the actual spot size is on the order of 20 to 30 microns. The professor in his book was not referring to wide apertures but to wide angles including wide apertures, as in this case the biggest part of the diffraction pattern can be captured.
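The figure quoted above follows from the classical Airy relation, d ≈ 2.44·λ·N for the diameter out to the first dark ring. A quick check (λ = 550 nm assumed):

    wavelength_mm = 0.00055   # green light, 550 nm

    def airy_diameter_mm(f_number):
        # diameter of the Airy disk out to the first dark ring
        return 2.44 * wavelength_mm * f_number

    print(round(airy_diameter_mm(1.0) * 1000, 2))   # ~1.3 micron at f/1
    print(round(airy_diameter_mm(8.0) * 1000, 2))   # ~10.7 micron at f/8

So the diffraction-limited spot at f/1 is indeed around a micron, a factor of 20 or more below the real Noctilux spot size that is dominated by geometrical aberrations.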
The size of the front element is a good but not perfect indication of the actual entrance pupil, which governs the true light gathering capability of the lens. As an example: the Elmarit-M 2.8/28 and the Summicron-M 2/28 have the same physical front lens diameter, but the latter has a much larger entrance pupil. The entrance pupil is important, not the physical diameter. But the entrance pupil is a much more difficult topic to understand and is avoided by most. Here again we see the limitations of what we are accustomed to read in most magazines, books or internet discussion groups.
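The entrance pupil diameter itself is simply focal length divided by f-number, which shows how two 28mm lenses with identical front glass can differ:

    def entrance_pupil_mm(focal_length_mm, f_number):
        return focal_length_mm / f_number

    print(entrance_pupil_mm(28, 2.8))   # Elmarit-M 2.8/28 -> 10.0 mm
    print(entrance_pupil_mm(28, 2.0))   # Summicron-M 2/28 -> 14.0 mm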
Speaking of conventional themes: there is an overwhelmingly strong opinion that testing of lenses is done by taking pictures of meaningless test charts and/or by gathering numbers (MTF values) that have no relevance to real photography, and that the true appreciation of a lens has to be done by interpretation of the image content and the way the subject matter is represented. This view is as wrong as it is popular.
It relates to the old dichotomy that engineering is dull, numerical, cool and anti-human, and that art is exciting, full of feeling and immersed in the human condition, and that both worlds are opposed to each other. In fact no such naive dichotomy exists. Mathematics has been called art or even poetry by many a perceptive person. And much art is as mechanical as it is devoid of true life. We should try to abandon many a myth to get to the essence of what we are doing. In a recent issue of The Economist a particular poem was described as exhibiting the precision and clarity of a B&W photograph. Here we capture the essence: precision and clarity are attributes of a forceful statement, and it is photography that can accomplish this. But precision and clarity are also characteristics that are close to the faithful reproduction of a scene, supported by a lens that is as clear and precise as possible, in fact a Leica lens. Testing a lens for these attributes is done by looking at MTF graphs and residual aberrations. So in my view a thorough scientific test of a lens (a method ridiculed by many) will reveal properties of the lens that are instrumental in its role as a poetic imaging tool.
There is no fight between concepts here: to add a thing to something, that something should lack the property that is added, otherwise it makes no sense. Only persons who love to simplify and distort would claim that a scientific test is anathema to an artistic interpretation: the latter can benefit significantly from acknowledging the value of the former. I for one fail to see why a sincere test and analysis of a test pattern should be valued less than a cursory glance at a print of a landscape, only because the landscape suggests reality and a test pattern suggests artificiality.
The very strong point of a B&W photograph is its capture of the essence and extent of the variety of the dimensions of reality. A highly corrected lens will do this job with better results than a lesser lens. My description of older Leica lenses as reproducing the scene with less transparency (muddiness) and precision has been questioned as inaccurate and even ridiculous: still I stand by this description, as it is a true reflection of the state of lens design of yore, even if some cannot appreciate the significance of this analysis.
In my view then, poetry and clarity of vision and accurate reproduction are closely related, and so is the scientific analysis of a lens and its capability to record reality as accurately, and as meaningfully, as possible. A not so well-known book by Bryan, "Cameras in the Quest for Meaning", tries to explain this symbiosis. It is a pity that many a Leica user seems to be caught in the fallacy of either a scientific approach or an artistic interpretation. The meaningful mix of both is what constitutes the essence of Leica photography.
There is a strong indication that digital photography is gaining at the expense of analogue or silver-based photography. No denying here: the convenience and speed of the digital method fit into today's lifestyle, even if it is expensive and lacks the ultimate quality of the analogue method. Image quality falls victim to the adage that the good enough displaces the best. In fact the industry is killing analogue photography as fast as it can. In a recent interview the Kodak manager in France said that the profits from analogue are used fully to subsidize Kodak's loss-making efforts in the digital realm, and that Kodak has no intention of investing any more money in analogue products. But as we read in Chasseurs d'images: the industry killed analogue photography as a hobby or a lifestyle when it first introduced the camcorder in the seventies and now the single use camera, which has done more to kill the hobby of photography than the digital camera or the computer ever did.
Analogue photography is more related to poetry than to anything else, and we all know how many of us read poems. If we want to pursue the fascination of analogue photography we have to act, and act quickly. We may quiet our conscience as Leica users by assuming that Leica will someday introduce a digital M or R, which is technically and commercially possible. But what do we lose then?
The capacity and capability for a precise and clear expression of our inner feelings is the essence of B&W Leica photography.
Remarkably this capability is engineered into the image qualities of today's Leica lenses, but we prefer to stay in the past and worship mythical issues and state stereotypes as fundamental truths, afraid as we are to face the future.
Konica Hexar and register
I am currently using the Hexar RF for several reasons: to test the new Hexanon 2/35mm, to check Hexar body/Leica lens compatibility, and to get a feeling for the Hexar system. To start with the body: the specs are well known, so I can jump to the more philosophical topics. The body appears to be of very high engineering quality, has a very solid feel and is really easy to use. The electronic shutter/motordrive unit is a sealed box and cannot be separated; it is the same as used in the Contax G2 series. As an aside: if Leica were to use this unit, the manual advance lever would have to go. The viewfinder is slightly lower in contrast than the Leica's, and the Hexar rangefinder patch has a distinct yellow tint that lowers contrast and makes it more difficult to focus on objects at 10 meters or more. While the body has almost identical dimensions to the Leica, the look and feel are distinctly different. The rounded body contours of the Leica and the clean top cover make it look more elegant, compared to the squarish and somewhat boxy character of the Hexar. In use the Hexar is quite simple and its controls are well laid out and generally useful to the photographer. The exposure compensation feature is nice, but with the Leica a simple half click of the aperture ring will do the job as fast and as easily.
The biggest drawback of the Hexar is the small time delay between pressing the shutter release and the actual firing of the shutter. This delay, the instant of waiting and thus insecurity, is most annoying, and you cannot use the Leica technique of prefocusing and firing the moment the object is sharp in the finder patch.
When you close your eyes and pick up the Leica and the Hexar several times, the difference in feel and haptics emerges. When you hold the Leica, your thumb slides behind the advance lever and your finger lies on the shutter release button, which is as responsive as a trigger. This simple and intuitive act signals to the brain a state of alert attention, and you fall into the mood of a hunter or an active sportsperson anticipating the moves of the other players.
When holding the Hexar, both hands hold the body, and when your finger touches the release button there is no trigger effect. The finger just rests there and you do not get any feedback from the body. So you switch almost automatically into a more passive state of mind and allow the camera to work for you. That is easy to do, as the automatic functions of the camera (exposure, film transport, motorwinder) are so well executed that you start to rely on them and even transfer control to them. In fact you are starting to become an operator of the camera, adjusting the dials, and not the driver who forces the camera to act as he wants.
The transfer of controls to the camera and the mood of becoming more passive in the photographic act is in my view the fine distinction between the Hexar and the Leica. Photographing the same objects with a Leica and a Hexar in quick succession underscores this difference: with the Leica the work is harder (more to think about and act on), but your act blends in with the subject and you are part of it. With the Hexar your work is easier, but the remoteness of the controls acts as a filter between the object and yourself. Let me say that you become a bit lazier when using the Hexar, and that shows in the pictures.
Technically there is nothing wrong with the Hexar pictures: well exposed, sharply focused etc. The Hexar then is for photographers who avoid technicalities, want good imagery with a minimum of technical and manual control, and feel that the visual involvement with the object has to be separated, even detached, from the tool they use. In this sense the Hexar is close to the Contax G. The family resemblance goes a step further. My test of the Hexanon 2/35 indicates that Hexanon imagery is in character very close to the Zeiss philosophy of correction. The Hexanon is an 8 element lens (with the now familiar negatively curved front lens, pioneered by Leica and quickly adopted by Konica and Voigtlander). The Summicron has 7 elements, but one aspherical surface, and one such surface equals two spherical surfaces. At full aperture the lens exhibits a medium contrast (less than the Leica lens), with visible flare in the bright areas and in the rendition of small detail. The performance on axis up to an image height of 6mm (an image circle of 12mm diameter) is excellent, with a very good definition of very small detail. In the outer zones the image quality drops significantly and we see small detail with quite blurred edges. Astigmatism is very well controlled, but there is some curvature of field. The lateral chromatic error is quite large and may contribute to the bokeh. The corners are very weak. At 2.8 the flare is gone and the image crispens a bit; the central disk of excellent quality now extends to an image height of 8mm, with the corners still bad and the outer zones hardly improving. At 4 we find an overall improvement, but the chromatic error still softens the edges of very small and tiny detail. At this aperture the quality is comparable to the Leica, which shows better reduction of the chromatic error and thus a crisper and cleaner image. If resolution figures were relevant, I would have to note that the Konica has the edge here. But these are bench figures (large scale projection test) and in actual photography the small advantage would be lost. This sideline indicates that differences in resolution of 10 line pairs/mm are not indicative of superior image quality. Optimum aperture is 8, after which contrast and resolution drop due to diffraction effects. Close-up performance at 1 meter is identical to that at the test distance, which is 100 times the focal length.
The inevitable question of course is how this Hexar lens compares to the last non-aspherical Summicron. In my view the Hexanon is the better lens overall. But you cannot use the Hexanon lens on a Leica body: a collimator check showed that the Hexanon lens has a focus plane that differs from the Leica lens by 0.09mm. Is that important? The discussion on the Lug about Hexar body/Leica lens compatibility dismissed small differences of less than half a millimeter as irrelevant, because some users could not detect any difference when comparing different lens/body combinations. The truth is this: I did a test on the bench and focused carefully for maximum image quality. Then I used a micrometer to defocus by 0.03mm (which is quite small). In the image the loss of contrast was very evident, but resolution, at least at the lower frequencies (around 40 lp/mm), did not suffer. What did suffer was the edge sharpness. If you were to do your own testing and look at the negatives with an 8-times magnifier, you would not see any drop in resolution (it is beyond the detection capability of the eye at that magnification). But at a larger magnification you begin to see it quite clearly.
Now the continuing saga of the Hexar/Leica lens compatibility. First a few remarks. You cannot measure the actual distance from bayonet flange to pressure plate by using the pressure plate itself as a reference: the slightest unnoticed pressure from the instrument on the pressure plate will give errors, and the pressure plate itself is hardly ever a true plane, so additional errors. The only way to do it is to remove the pressure plate and insert a device that is calibrated to sit at the same distance where the pressure plate ideally has to be. To start from here: the distance from the bayonet flange to the pressure plate, or more accurately to the top of the outer film guide rails (the pressure plate rails), is 27.95mm in the Leica M. This distance is also (but wrongly) referred to as register. This measurement is used to check whether the guide rails and the bayonet flange are parallel to each other and have the correct distance. The second important measure is the distance from the film rails (the innermost film guide rails) to the bayonet flange. In the Leica this is 27.75mm. The film gate then has a depth of 0.2mm. In every Leica book I know of there is a reference to a film plane/flange distance of 27.80mm. What is this? Rogliatti, Roger Hicks, Collectors Checklist, Hasbrouck, you name them: all refer to a flange to film plane distance or flange to film register. Now in German the word is "Auflagemass". This can be correctly translated as "flange focal length" or "flange focal distance". But this measurement is done for the lens itself on a collimator, where the lens is adjusted such that the distance from the lens bayonet flange to the true optical focal plane (focal point) is indeed exactly 27.80mm. First lesson: NEVER believe what is written about Leica in books that are focussed on history or collecting: these persons are no engineers. In every other book: check, double check, triple check to make sure the person knows what he is talking about.
To sum up: we have an optical measurement, done on the lens, to adjust the flange focal distance, and that distance should be 27.80mm. We have a mechanical measurement on the Leica body, which is the distance from bayonet flange to the pressure plate rails, and which is 27.95mm. The film gate is 0.2mm deep. If we now use a film with a total thickness (emulsion plus base) of 0.13mm (APX25 as an example), the film will not fill the film gate. There is some play and therefore the film will curl and curve inwardly (away from the lens). By choosing a flange focal distance of 27.80mm, Leica ensures that the film, when bowed a little, is still correctly aligned in relation to the focal plane. It is intriguing to note that thick colour negative films of about 0.27mm will fill the film gate completely, and the pressure plate will press the film into a plane position instead of the curved position we get with thin emulsions. Theoretically a thick film would thus have better flatness than a thin film. Of course more research is needed, but these investigations do show that the information in the public domain is at best scanty and at worst misleading.
Now for the Konica Hexar. Here I have only one official fact: the distance from bayonet flange to pressure plate rails of 28.00mm. But I do not have official info about the distance from flange to film rails (or the film gate depth), nor about the lens flange focal length. My own measurements on one Hexar body and lens showed a film gate depth of 0.24mm and a lens flange focal distance of 27.71mm. On the basis of these measurements the flange to film rail distance is 27.76mm. These results are however not reliable enough to draw firm conclusions. What I do know from discussions with Konica people is that their tolerances are wider than Leica's and are chosen such that the best fit of Hexar body to Hexanon lenses is assured. The many inconclusive reports about problems, or the lack of problems, with fitting a Leica lens on a Hexar body are partly explained by these tolerances and partly by the unreliability of the reports themselves. The Konica people at the factory told me that the Hexar is designed for use with the Hexanon lenses and that all dimensions inside the Hexar are based on that fact. If a Hexar user fits a Leica lens and has problems, then they are caused by these different dimensions and/or by a chain of tolerances that adds up unfavorably. If he has no problems, then he is plain lucky: the tolerances happen to be close to what is expected for Leica bodies and/or his demands are below the visibility threshold for the mismatch to show up. This is not the end of the story. People expect quick solutions and fast answers and move on to the next topic. That is living in the fast and superficial lane of user group discussions. Serious research takes time and experience and dedication: scarce resources in a hasty world.
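To keep the various dimensions apart, here is a small sketch encoding the figures quoted above (the Konica values are the single-sample measurements given above, so treat them as indicative only):

    # All dimensions in mm, as discussed above.
    leica_m = {
        "flange_to_pressure_plate_rails": 27.95,  # mechanical body dimension
        "film_gate_depth": 0.20,
        "flange_focal_distance": 27.80,           # optical, set on a collimator
    }
    hexar_rf = {
        "flange_to_pressure_plate_rails": 28.00,  # official Konica figure
        "film_gate_depth": 0.24,                  # measured, one body
        "flange_focal_distance": 27.71,           # measured, one lens
    }

    for name, c in (("Leica M", leica_m), ("Hexar RF", hexar_rf)):
        inner_rail = c["flange_to_pressure_plate_rails"] - c["film_gate_depth"]
        # The emulsion must lie between the inner rails and the pressure
        # plate; the lens puts its focal plane at the flange focal distance.
        print(name, "film between", round(inner_rail, 2), "and",
              c["flange_to_pressure_plate_rails"],
              "- focal plane at", c["flange_focal_distance"])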
Image evaluation
Some of you asked for more information about the measurement of performance or the evaluation of image quality. Let us first start with some basics that are very important. The ideal lens will form a perfect point image from a point object. A true point object would be a distant star, as here the diameter of the star is extremely small compared to the distance between the lens and the object. We all know that even an ideal lens will form an Airy disk from any point.
This is a patch of light with a central core of high illumination (where most of the rays are focused) and a series of concentric alternating bands of low (dark) and high (bright) intensity, which quickly fades. This is the diffraction pattern of the point source. The ideal of a lens designer is to create as much blackness as possible around the central core: in that case we have maximum contrast and the point will be recorded clearly. The Airy disk is a three-dimensional object with length, width and height. Length and width represent the size or extent of the disk, and the height represents the intensity (amount) of the energy concentrated in the core and the bands surrounding it. We have all seen the shape of this figure: it resembles the Gaussian distribution from statistics, a steep hill with a sharp peak surrounded by a much lower wavelike pattern. If two of these disks are separated by the radius of the first dark ring in the pattern, the intensity midway between the two peaks drops to 0.74 of the maximum intensity. We then say that the two points are resolvable, and there are several equations to calculate the theoretical maximum resolution. This value however has no relevance to the actual optical performance of a system.
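For reference, that 0.74 figure is the Rayleigh criterion, and the theoretical maximum resolution it implies is easy to compute (a sketch; λ = 550 nm assumed):

    wavelength_mm = 0.00055   # 550 nm

    def rayleigh_separation_mm(f_number):
        # two points count as "just resolved" when separated by the
        # radius of the first dark ring of the Airy pattern
        return 1.22 * wavelength_mm * f_number

    def theoretical_lp_per_mm(f_number):
        return 1 / rayleigh_separation_mm(f_number)

    print(round(theoretical_lp_per_mm(2)))    # ~745 lp/mm at f/2
    print(round(theoretical_lp_per_mm(16)))   # ~93 lp/mm at f/16

Numbers like 745 lp/mm at f/2 show immediately why this theoretical ceiling says nothing about a real, aberration-loaded lens on real film.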
In actual optical design there are several methods of analysing optical quality. One is the ray tracing itself: every ray is traced through every lens element and its path indicates the quality of the lens. Another is the spot diagram: here we look at the pattern formed by the lens from a bundle of rays; spot size and spot shape give valuable information about the aberrations still present. A still better method, but one very difficult to understand, is the analysis of the optical path difference, where the spatial and temporal components of the ray tracing are analysed. A ray not only has a location on the image plane but also a speed, and if rays, which are wavelike, are out of phase, they will not arrive on the film plane at the exact same moment. This difference is the phase transfer function, a part of the optical transfer function. The other part of the OTF is the well-known MTF. Also used is the encircled energy, which is a measure of how much of the total energy of a point is concentrated in a spot of a certain diameter. This is obviously related to the idea that you need to focus all rays, and thus all energy, in one small spot.
The MTF is probably the most comprehensive of all methods, while the methods above are more specialized and give partial info about the state of the system. The test target is a periodic object that varies sinusoidally in its intensity. It is a waveform such as we often see when discussing radio waves. The wave has peaks and valleys, and if we have a perfect lens the wave is represented exactly on the film. If not, the peaks and valleys will be changed in form and height. The difference between a peak and a valley is the contrast (modulation) of the signal, and when this contrast is lowered by the lens we express it as the ratio of the modulation of the image to the modulation of the target. The physical target of the MTF method is a very small slit of light that is imaged by the lens.
The distribution of the light energy in the image of the slit has the same characteristics as the pattern seen in the Airy disk. That is why it makes no difference whether we study a point or a line: Fourier transforms help us to go from one to the other.
The slit can be oriented in horizontal and vertical directions to represent the aberrations in tangential and sagittal directions. As the reproduction of the edge of the slit and its energy distribution is influenced by all aberrations left in the system, and the sagittal and tangential directions represent the extreme positions of the angles of the rays going through the lens, we have in one diagram all the info about the contrast drop of the spatial patterns being recorded over the angle of view of the lens. The MTF diagram is easy to look at, but extremely difficult to interpret correctly. The recent example in the Viewfinder magazine from the LHSA is a case in point. Here, by casual inspection of only one aspect of the diagrams (the highest contrast value at several locations and apertures), the conclusion is proposed that the older 7-element Summicron has better performance than the current Summicron. In fact, to get a good view of the performance of a lens based on MTF diagrams, you have to look at the overall shapes of the curves at all frequencies at once and at the changes of the curve pattern at all apertures. That is not easy: a high contrast value on the sagittal curve may be nullified completely by a low contrast value on the tangential line, and the dips and meanderings of these curves reveal coma and chromatic aberrations when you compare them wide open and stopped down. Again, a simple high contrast value in the center of the image may be degraded in real life when chromatic error is present, which can also be detected in the curve shapes, but only when you are trained in reading these diagrams.
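The modulation bookkeeping itself is simple enough to write out (a toy example of the definitions above, not a measurement protocol):

    def modulation(i_max, i_min):
        # contrast of a sinusoidal signal
        return (i_max - i_min) / (i_max + i_min)

    target = modulation(1.00, 0.00)   # full swing: modulation 1.0
    image = modulation(0.80, 0.20)    # peaks rounded, valleys filled: 0.6

    mtf = image / target
    print(mtf)                        # 0.6 -> "60% contrast transfer"

The difficulty lies not here but, as argued above, in judging whole families of such numbers across frequencies, apertures and image heights at once.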
It is understandable that photographers or users of lenses want a single simple figure for comparing lenses, like the old resolution figures. It is comforting to know that a lens with a resolution of 60 lines is better than one with 50 lines. Much has been written in the attempt to get rid of this criterion. We are not advancing, however, if we now change to a simple contrast value and state that a lens with a contrast transfer of 89% (at the 20 lines frequency and in the center) is better than one with 80%. The evaluation of optical performance is not that simple: I wish it were. All methods mentioned so far are not available to the normal user, with the exception of published MTF diagrams, which are not easy to interpret and are now as much mistreated as the previous method of resolution determination. Still, the latter is not obsolete when used with some common sense. The classical USAF target (1951) can bring valuable info when you have a careful setup and a good microscope. A very simple analogue is the white picket fence with alternating white and dark lines. Photographed at a very large distance it gives a single spatial frequency pattern that can be enlarged and analysed (edges, contrast, sharpness etc.).
The Siemens star test (radial pattern) is also a good method for testing the performance of a lens. Keep in mind that you need a whole bunch of these charts to cover the whole image area and that, most importantly, they should be scrupulously aligned parallel to the film plane. That is obviously a big problem, and the crude "testing" recently reported on the Lug, with the even simpler target of a page of small print, indicates that testing done in such a simple way is very dangerous and in fact highly uninformative: the only thing tested here is the competence of the person who conducts the test.
Generally then I would say that testing, in whatever method or format, should be left to the experts with the right equipment. The photographer should restrict him/herself to taking pictures and evaluating the results, relative to the level of demands that is relevant for the type of photography s/he engages in.
Here I must insert a warning. I have seen many pictures by Leica photographers that were presented to me with the comment that here we have fine examples of Leica quality, or that here we have pinsharp, accurate representations of reality. Quite often, and I am really saddened to say this, these pictures are way below what we can extract from our equipment. I would remark that some Leica photographers live in a kind of pseudo-reality, where the wish to relate the quality of the imagery to the reputedly high quality of the equipment may blur the senses into seeing what is not there. This is a delicate topic, and I am well aware of it: I would not dare to discuss it on any public forum like the Lug or Leg, as I would be shot down in an instant. Still, my goal in testing Leica lenses and the factual degradation of the image through the imaging chain (film, exposure, etc.) is to define a true anchor point for optimum image quality. You are free to depart from that, or not to be interested in striving for this level of optical performance. But I do find it difficult to accept that some photographers demand the best equipment, discuss possible image defects (like dust specks in a lens, a filter in front of the lens, or the accuracy of the rangefinder within a hundredth of a millimeter) at extreme length, and then create pictures that could be improved upon by anyone with a high-class point-and-shoot camera.
Of course: technical image quality is not the only characteristic of the Leica camera or any other top-class camera system. It has been argued, correctly, that a 4x5 inch negative is always superior to any 35mm negative, and that in many instances a print from a Leica negative is indistinguishable from that of other systems. True again. Still, we buy Leica equipment for the real advantages in performance or differences in fingerprint. To try to achieve these advantages and differences is a noble goal, in my view, and needs an open-minded discussion where topics can be discussed and facts analysed with a modicum of rationality. Objective testing by proven methods and diligent interpretation of the results is a minor but important contribution to that goal.
The general misinterpretation of MTF data to 'prove' a point is an example with wider scope. As long as we do not want to delve below the surface of common sense and cherished myths, we will never see the truth and start to enjoy (Leica) photography as a fine (mechanical) expression of our emotions and intentions. But to extract a high level of image quality from our equipment takes time, experience, dedication and technical expertise or craftsmanship, and all of these requirements may be in scarce supply.
Those mysterious digits on M lenses
There is a lot of discussion about the meaning of the double digit figures on the mounts of Leica M lenses. But before explaining the facts and ideas behind them, I have to make an observation that may upset some of you. The sources of info about Leica are large and varied and comprise published books, articles and a vast amount of discussion from presumably knowledgeable but anonymous sources, plus an even larger amount of free-floating text on the internet (newsgroups, websites) that is very uneven in quality and authority.
In scientific research the situation is quite simple. Facts and theories are original if the writer/author is the first to present the facts (experimental research) or to create a theory: a new interpretation of known facts. Frauds excluded (and there are many in the scientific field), any serious researcher will acknowledge his sources by referring to previous texts or to his original published research data. So anyone can trace the history of the facts or verify their origin and authority. In Leica lore this is not the case: in most cases you will find a report or a discussion or an explanation without any reference at all. And without being able to verify what is being stated, anything goes. A recent example is the analysis of the distance from the bayonet flange to the film plane, which may be 27.8mm or 27.95mm, depending on sources and interpretations or translations. The topic of the double digit figures on the M mount is in the same vein. Explanations and figures are numerous, but is there anyone who will acknowledge his source and so allow for identification of the original data or of the individuals who wrote about this? In fact most discussions and explanations are based on a very few sources, Rogliatti being the foremost one. It would really help if the sources were mentioned, as it is clear that most discussions are a mix of previous reports and articles. By not mentioning the sources, one simply perpetuates the myth and evades the possibility of being wrong.
Focal length groups: the dimensions. It is well known, and this info can be found among many writers (for example Rogliatti: Leica and Leicaflex Lenses, 2nd edition), that the tolerances in the manufacturing process of lens elements (distances between elements, small differences in curvature of surfaces, different refractive indices per charge of glass melting etc.) will generate some differences in the actual focal length of the lens. But we first have to establish the true calculated optical focal length of a lens. For the Summicron 2/50 (second generation) this is 52.02mm and for the Summarit 1.5/50 it is 52.16mm. In the past the production process was not as accurate as it is today and a wider range of measured focal lengths could be found. The older Elmar 3.5/50, as an example, has been recorded as ranging from 48.6 to 51.9mm in steps of roughly 0.3mm (info from the book "25 years Leica Historica" and the magazine of the Leica Historical Society UK). The newer Elmar 2.8/50 had only three groups: 51.6, 51.9 and 52.2mm. The older Summicron has the same groups: 51.6, 51.9 and 52.2mm, a spacing of 0.3mm. The Summilux has these groups: 51.0 (indicated as 10); 51.15 (11); 51.3 (13); 51.45 (14); 51.6 (16); 51.75 (17); 51.9 (19); 52.05 (20) and 52.2 (22). The current Noctilux has only 50.00 (00); the 1.2 version has 51.75 (17) and 51.9 (19). The 75, 90 and 135mm lenses have even more designations. Sources: 25 Years Leica Historica and my book, the Leica Lens Compendium. So one should be careful to differentiate between lenses: the same figures do not indicate the same differences or focal lengths.
Focal length groups: why. It seems to have escaped most observers that Leica R lenses do not have these numbers on the lens. Still we may assume that the same tolerances and manufacturing processes are being used. So why M and not R? The explanation is quite simple. The true focal length of a lens is a characteristic of the optical cell: every lens element has its own focal length (negative or positive), and together the individual focal lengths determine the system focal length, the focal length of the lens. The variation in focal length differs per lens type. When assembling a lens, one can try to compensate these tolerances and combine lens elements with plus and minus deviations to stay within a specified range of focal lengths. Having established the actual focal length of a lens, one has to mount it, first in its proper focusing arrangement, and secondly matched to the rangefinder curve that is calculated and machined for a specific focal length. This is the essential point. A change in actual focal length has no influence on the length of the mount itself, but only on the steepness of the RF curve of the lens. That is why the R lenses do not have these numbers: there you focus on the groundglass. But with M lenses you focus by matching the alignment of the rangefinder patch with the extension of the lens, governed by the cam of the lens. The focal length groups then indicate the true focal length and the fact that the correct cam has been fitted to the mount to ensure correct focusing. The RF roller movement "assumes" a true focal length of 50mm. The engineering complexities of the M body and its lenses are fascinating, as are the solutions.
Depth of Focus
Now on another topic. We all know the idea of depth of field: the extent in three-dimensional space that is interpreted as acceptably sharp when recorded on film and enlarged/projected. In the film plane we have the companion concept of depth of focus: the displacement of the true focus that we can allow before we detect unsharpness, also called defocus shift. This has particular relevance to the ideas of film plane and focal plane and to the amount of defocus caused by film curvature, inaccuracy of visual focusing, or engineering tolerances. A simplified equation says that the depth of focus in micrometers equals the square of the f-number. So when using an f/4 lens we can defocus by 16 microns before any unsharpness blur will be detected; an f/2 lens has a defocus margin of 4 microns. All of this under critical demands. For normal photography the following equation can be used: depth of focus = 2·C·N, where C is the circle of confusion and N the aperture. For a 2/50mm lens this gives about 2 × 1/30 × 2 ≈ 0.13mm, or roughly 0.07mm (some 70 microns) on either side of the ideal plane. There is also an equation that relates depth of focus to resolution:
depth of focus = 4N/R, with R the resolution. If we need a resolution of 100 lines at f/2, we have a defocus limit of 4 × 2/100 = 0.08mm, or 80 microns. Contrary to popular opinion, the depth of focus INCREASES when we focus at closer distances. So all the tests done to prove that the Hexar can be used with Leica lenses, performed at close focus settings, actually mask any focus errors instead of exposing them.
If we do close-focus photography at f/2 but accept a resolution of 40 lines (a resolution most people would be very satisfied with), the depth of focus might be close to 0.5mm!
Formulae in Ray's "Applied Photographic Optics".
Depth of focus is defined as the distance the film plane or the focal plane may move axially without disturbing the impression of sharpness. Lenses are habitually checked in the design stage for their tolerance in through-focus MTF values: how does the contrast suffer when the ideal plane of focus is shifted by a certain amount? Lenses can even be tuned to be rather insensitive to changes in depth of focus by optimizing the through-focus behavior. But in general: if we do not know anything about depth of focus, in addition to the elements of possible mismatch noted above, the entire discussion and the "proofs" of (in)compatibility are vapourware, more misleading than enlightening.
Goldberg uses a crude rule of thumb that establishes the depth of focus as the product of blur diameter and aperture. If the blur diameter is taken as 1/30mm (0.03mm) and the aperture as f/2, we have a depth of focus of 0.06mm. So if our combination of tolerances were off by 0.06mm or 60 microns, a drop in contrast and image quality would result. As the drop in quality is more noticeable as a drop of contrast than as a drop of resolution (see my Hexanon 2/35mm test), it might go unnoticed by the many who check the sharpness of the image by looking at a single negative, trying to check the plane of best sharpness visually.
A more sophisticated equation relates the depth of focus to the object distance, and here we see the opposite of what we would assume: depth of focus increases at shorter distances and so will cover any imaging defects caused by a possible mismatch.
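The rules of thumb quoted above are easy to put side by side (note that, as given, they mix one-sided and total tolerances, so read the outputs as order-of-magnitude figures):

    def dof_diffraction_um(n):
        # critical-demands rule: depth of focus in micrometers = N squared
        return n**2

    def dof_blur_mm(c_mm, n):
        # depth of focus = 2 * C * N, with C the circle of confusion
        return 2 * c_mm * n

    def dof_resolution_mm(n, lp_per_mm):
        # depth of focus = 4N / R
        return 4 * n / lp_per_mm

    def dof_goldberg_mm(blur_mm, n):
        # Goldberg's crude rule: blur diameter times aperture
        return blur_mm * n

    print(dof_diffraction_um(2))             # 4 microns at f/2, critical
    print(round(dof_blur_mm(1/30, 2), 2))    # ~0.13 mm with C = 1/30 mm
    print(dof_resolution_mm(2, 100))         # 0.08 mm at 100 lp/mm
    print(dof_goldberg_mm(0.03, 2))          # 0.06 mm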
RF mechanism, accuracy and Depth of Focus
The rangefinder mechanism, the attainable accuracy and the required precision of engineering and assembly make for fascinating reading and study. Let us first review the simple mechanism of the SLR to get an idea of what is involved.
Generally we have two separate actions when focusing the lens on the object: 1. setting the distance by physically moving the lens relative to the film plane; 2. checking the setting by visual inspection to ensure the movement gives accurate focus.
Both acts are indeed separate and do not need to be connected. When we use a fixed-focus box camera, we skip both and concentrate on framing. When we set the distance ring of the lens to some estimated hyperfocal distance, we perform action 1, but not action 2. In the SLR mechanism both actions are combined in a direct way. That is: moving the lens and visually inspecting the sharpness are one act and use the same mechanism.
In the SLR the lens transmits the rays, which are focused via a mirror onto a groundglass. We assume correct focus when the image on the groundglass is sharp, that is, when contrast is highest (coarse screen) or details can be seen clearly (fine screen). This principle is identical to the one used in a technical camera. As we can always move the focusing ring of the lens backward or forward over a large distance, we can always find a point of sharp focus. We do not have to know the focal length of the lens or bother about deviations from the nominal or true focal length, or even about the true distance setting of the lens. All we need to know is the simple fact: is the object in sharp focus? And this we can establish by looking at the focusing screen.
There are no mechanisms to transmit any mechanical information from lens to body, and none are needed. That is why you will not find focal length groups engraved on an R-lens. The manufacturer will of course stay within the tolerances of the focal length, but for the focus mechanism it is irrelevant whether the actual focal length is 51.3mm or 49.9mm. Both lenses will give accurate focus at their specified focal plane when the projected image on the groundglass is found, by visual inspection, to be sharply focused. As long as we can ascertain that the distance from the mirror to the groundglass (reflected ray, upward) and the distance from the mirror to the film plane (transmitted ray, horizontal) are the same, the focal plane image will be located exactly where the film plane is.
Technically, the Leica R body is partly assembled, including the chassis and the lens flange, and then, at the end of the assembly line, the film guide rails are machined to the required specifications and automatically aligned parallel to the lens bayonet flange. The mirror box is machined and adjusted such that the two distances (mirror to groundglass and mirror to film plane) are within specifications.
Separately, every lens has its own distance from bayonet flange to focal plane, and this distance is the same irrespective of the actual focal length: a 19mm lens should focus the rays at the same physical location as does a 600mm lens. In the R series this is given as 47mm (see the Osterloh books, where he wrongly names it the "distance to film plane").
What are the required specifications? The film plane is not, contrary to what you will read in most books and hear in the public domain, a fixed location. Film support, remarks Goldberg in his famous book Camera Technology, is very complex. The sensitized surface must be located at the desired distance from the lens (our flange to focal plane definition) and lie in a plane coinciding with the image plane. Between exposures the film is transported and may not be scratched or subjected to mechanical pressure. Film surface location is the most difficult part and makes "camera manufacturing an art as well as a science" (Goldberg).
Film is put through the camera body inside a film channel (invented by Zeiss and not used in the TM Leica bodies), comprising outer guide rails and inner rails. The film is held in place by the outer rails and the pressure plate that rests on the outer rails. The inner rails should hold the outer edges of the film in a flat position. BUT: the distance between both rails is 0.2mm. Exact distances vary as manufacturers have different tolerances. But on average a film has a thickness of 0.13 to 0.18mm. Thus the film has a clearance of 0.02 to 0.07mm and this is enough to jeopardize ideal film flatness. Films tend to bulge forward.
So the designer has some options when he has to locate the exact focal plane for his lens. Use the outer rails (lens flange to pressure plate distance) and be sure that you will miss the emulsion of the film where the image should be located. Use the inner rails and the flatness of the film is a problem. So here we have the real problem of locating the image plane. Every manufacturer has its own ideas and in the case of Leica they have decided that a distance of 27.8mm from the bayonet flange will locate the image plane inside the film gate dimensions and take care of the curving of the film itself. The well-known dimension of 27.8mm is often described as the flange to film plane distance. Correctly described we would have to say: distance from flange to focal plane, which might be identical to the film plane under a certain set of assumptions.
Let us look at actual figures. In the Leica M-series the distance from body flange to outer film rail is 27.95mm. The depth of the film channel is 0.2mm (I measured 0.21mm). So the area where the film might be located ranges from 27.95 to 27.75mm. With a lens flange to focal plane distance of 27.80mm the lens, when correctly adjusted, would focus onto the film emulsion layer. For the Konica I measured a film channel depth of 0.24mm and a distance from body flange to outer film rail of 27.95mm. This last dimension is identical to the Leica figure but outside the official Konica specs of 28.00 +/- 0.03mm. More measurements would be required, but this example indicates that the quality control criteria at Konica are somewhat more relaxed than the factory specs indicate, OR I had a version which had already been adjusted.
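To make the geometry explicit, here is a small sketch using the Leica M figures just quoted; the variable names are my own.

```python
# Where can the film surface lie? All figures in mm, measured from the
# bayonet flange, using the Leica M values quoted above.

flange_to_outer_rail  = 27.95  # pressure plate side of the channel
channel_depth         = 0.20   # outer rail to inner rail
flange_to_focal_plane = 27.80  # standard for Leica lenses

inner_boundary = flange_to_outer_rail - channel_depth  # 27.75
print(f"film channel runs from {inner_boundary:.2f} to {flange_to_outer_rail:.2f} mm")
print(f"focal plane sits {flange_to_outer_rail - flange_to_focal_plane:.2f} mm "
      f"in front of the pressure plate")
```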
The M-camera RF system.
If we now focus on the M-camera, we see that the rangefinding act of the SLR cannot be duplicated. We have an indirect relation between the two actions. We focus manually by moving the lens mount, but we check the distance setting with a separate act and mechanism: the rangefinder. There are a number of engineering elements identical between the M and the R: the film channel, the distance from pressure plate to body flange, and the distance from lens flange to focal plane (27.80mm with the M).
The new element is the coupling between the lateral rangefinder movement (the distance that the rangefinder patch moves) and the physical axial movement of the lens mount. This is done through a very complicated engineering trick: the roller cam and arm on the RF side and the steepness and angle of the distance curve on the mount. Disregarding here the issue of the focal length groups (where you match the pitch of the curve to the exact focal length), we need a mechanism to translate the axial displacement of the lens mount into the lateral displacement of the RF patch. The roller arm and cam do the job and this humble instrument is the Achilles heel of the RF system. The roller cam and the curve on the lens mount should match exactly, and any play in the cam or any failure of the cam to follow the curve exactly (tolerances, non-parallel surfaces) is a source of trouble.
Leica-Konica compatibility.
It is clear from the facts that the dimensions and tolerances between Leica and Konica differ, even if the M-bayonet and the KM-bayonet fit. The mismatch that has been reported may be caused by any of the factors involved: differences in flange to pressure plate distance; differences in film channel thickness; differences in cam/curve engagement; differences in lens flange to focal plane distance; differences in tolerances; differences in engineering solutions (the Konica roller arm and cam are more sensitive to changes and tolerances than the Leica version); differences in film flatness between several film types. It is quite rash to identify one of these aspects as being the sole source of the reported mismatches between Leica lenses and Konica bodies.
Verification in the "field".
Several individuals have checked that their Leica lenses on a Konica body do not deliver results that differ from what they expected to get from the combination Leica lens - Leica body. This is a minefield, really. The checks as reported used short focal length lenses or close focus distances or any combination that was available.
First of all: these checks, without a proper and methodologically sound lab situation with controlled comparisons and predictable results, are quite subjective and without merit, I am afraid to say. If one does not know what the results have to be from a set of quantified parameters, how can one reliably claim that the problem does not exist? Checking a Konica body and a Leica lens and noting that the combination gives correct results at a certain distance, because the tester by visual inspection gets a sharp picture, is not a proof. The conclusion depends on what the observer accepts as "sharp" and without an objectified definition of that most elusive concept, we are lost in the desert.
Film flatness
Now the film flatness issue.
In the previous post I noted that the only fixed dimensions are the focal plane of the lens (relative to the flange) and the film guide rail distances from the flange (the film channel). The focal plane, that much is obvious, is located inside the film channel (in the Leica case 0.05mm behind the inner rail plane, or 0.15mm in front of the pressure plate).
Ideally the film emulsion should be at that same position. In the classical Rolleiflex you could insert a glass plate in front of the film emulsion that held the film flat and in a location such that the front of the film emulsion would coincide with the exact focal plane of the lens. In a 35mm camera you cannot do this. So the film lies somewhat loose inside the channel, the back pressed on by the pressure plate, and the perforation sides limited in their forward extension (towards the lens) by the guide rails. The natural tendency of the film is to curl away from the lens, but all studies will tell you that in practice a film emulsion at the film gate will bow outwards (towards the lens).
The center of the film area will be closer to the lens than the edges. So if I use a Techpan with a total thickness of 103 micron (0.103mm; base 100 micron, emulsion 3 micron), the pressure plate ensures that the front of the emulsion is at least 103 micron towards the focal plane, which is in this case located at a distance of 150 micron from the pressure plate (assuming zero tolerance for simplicity). Some outward bulging then will bring the emulsion to the location of the focal plane. A thicker film with a thicker emulsion layer will have the focal plane in the middle of the emulsion, but these differences do not matter at all. Now what are the measurements that try to capture this bulging of the film towards the lens? Kyocera, when introducing the RTS III and its vacuum back, stated that they had found the following figures: a true flat film (with their vacuum mechanism) would still deviate at most 10 micron from the ideal position, and films without the vacuum plate would deviate 20 to 30 micron from the plane position.
Adding the 30 micron that Kyocera found to the 103 micron of the TP gives 133 micron, which is very close to the ideal location of the focal plane. The APX25 has a total thickness of 123 micron and with bulge it would be at 153 micron, so almost exactly where the focal plane should be.
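The same arithmetic in a short sketch (values in micron, taken from the text; the bulge figure is Kyocera's worst case without a vacuum plate):

```python
# Emulsion position measured from the pressure plate, in micron.
FOCAL_PLANE_FROM_PLATE = 150  # 27.95 mm - 27.80 mm, from the earlier measurements

def emulsion_position(film_thickness: float, bulge: float) -> float:
    return film_thickness + bulge

print(emulsion_position(103, 30))  # Techpan: 133, close to the focal plane
print(emulsion_position(123, 30))  # APX25: 153, almost exactly on it
```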
The focus depth we discussed earlier for a 1.4 lens is 47 micron (in both directions). So this depth would cover the small deviations in film bulging.
The Kyocera figures are not the only ones. Zeiss did their own analysis and noted that film could deviate by 80 micron, which in the Planar 1,4/50 brought the contrast down from 60% to 20%!! Goldberg studied a large number of SLRs and found that the difference between focal plane and film plane (including curvature of the film surface) averaged 70 micron, with extremes to 170 micron. In such a case the focus depth would not cover the errors. But even with the average figure of 70 micron you could not get exact sharpness with a 1.4 lens, as it exceeds the focus depth.
Lack of film flatness is mostly caused by the film cassette and the way the film is used. But the reported cases of differences in the plane of sharpness might be related to this phenomenon of film flatness.
More to come. Heavy stuff, but I am afraid you need it to know what is going on.
BW films of classical character
The monster test of the BW films is underway. I had some old rolls of Panatomic-X (20 years old), the film that introduced high resolution acutance photography to 35mm users. I also used the Maco UP25, 64 and 100, which are all versions of the classical Adox high acutance series of KB films. And an Ortho 25, APX25 and APX100; previous tests included PanF, TM100 and D100. To keep it manageable I used one developer (the famous CG512) and tried to develop to the same CI value. You need to do this as otherwise the steeper curve of the APX25 may lead you to think this film is sharper than, for example, a D100, while in fact both are equally sharp (seen as recording the same information from the object); the 25 simply has higher contrast, so the pictures have more punch, which could be seen as more sharpness. All pictures were enlarged 14x, which in my view is the minimum to differentiate meaningfully between films. The shots were of a model in an old deserted factory, giving ample fine detail, tonal scale and resolution possibilities.

The Pan-X showed outstanding sharpness and acutance, but its grain pattern was a bit rough, though very tight. It resembled the grain pattern of the APX100, which is a bit finer, and indeed the two films are close. The finest details however were suppressed by the grain pattern. The tonal scale showed quite subtle grey values, again up to the threshold of the granularity noise. The whole atmosphere is an image of very pleasing tonality, gritty sharpness and details painted with broad strokes. The UP100 (Adox KB21) has surprisingly fine grain, but on inspection the grain is clumpier and the edge sharpness is low, so the fineness is bought at the expense of definition. Overall quality is still commendable and while not up to today's standards, in its day it certainly was a winner. The Pan-X and KB21 images indicate the progress realized in 20 years of emulsion technology. In themselves of amazing quality, these films lag in all significant areas when compared to today's super stars.
But the differences are, on the other hand, more evolutionary than revolutionary. The APX100 gives images that suit the reportage style of location photography very well. These images have a fine realistic imprint: somewhat gritty, but with a smooth tonality and sufficient fine detail to make the scene interesting. The APX25 has a higher inherent contrast and so small details are recorded somewhat more forcefully. Grain is absent, which adds a creamy tonality to the scene, but on close inspection the recording capabilities have just a small edge compared to the APX100 or Pan-X. The finer grain does record the faintest shades of grey values, which adds to the 3D impression of the scene. The UP25 (KB14) is very close to the APX25. Grain is slightly more pronounced, but much less so than Pan-X or APX100. The tonal scale is identical to the APX25. The intriguing characteristic of this older thin-layered, thickly silvered emulsion is the edgy grain clumps, which, being very fine, also roughen up the image structure. It makes the picture very lively and especially for model photography and architectural photography adds an effect that can be described as underscoring the main story. Compared to the PanF for example, the KB14 is definitely less smooth and its finer details lack the stark micro contrast of the PanF, but all said, this film is a worthy emulsion that deserves a try. At a normal viewing distance, the main subjects literally jump from the picture. The Ortho25 is a real find: I had some rolls and asked myself: why not? In the same setting, the prints proved excellent.
The skin of the model came out very realistically and I did not notice any strange grey values. Of course there was no red in the scene, so all other grey values are more or less 'natural'. Sharpness is excellent and grain very fine. The film has a clear base and so looks very contrasty, even if the values are close to normal. Not a film for every topic, but I am inclined to use it more often; with some filters one can even add some additional tonal scale. Definitely a film to try and to use for portraits, glamour etc. Take care with red of course. But more versatile than mostly thought.
As a preliminary conclusion I have to say that the UP25 and Ortho 25 are very potent films with a potential for intriguing results that need to be explored.
They are not as good as current top performers, but the distance from a TP, for example, is smaller than often imagined. So it is as easy to claim that there has hardly been any progress in BW emulsions in the last decades as to state that we have advanced a big stride towards superior results.
If you habitually use enlargements below 10x, the differences are even smaller. The lesson: try more films than you use now: it will add to your toolkit and visual awareness.
Limits of digital capture
Many of you asked for some reliable information about digital capture, digital techniques and the possibilities and (dis)advantages of a digital M or R. I have to admit that the discussion on the Lug is not the best source to come to grips with this exciting technology and its basics. Let me try to shed some light on the matter, as closely related to Leica products as I can. First some starters. I draw heavily on Schneider, Zeiss and Rodenstock information (published and unpublished).

We all know by now what a sensor array is: a grid of pixel elements of a certain size. While we can now create pixel elements with a size as small as 1/4 micrometer, this is not the size we can expect in digital photography. Here the minimum size would be close to 3 micrometer (a micrometer being one thousandth of a millimeter). Current 3 to 5 million pixel chips have pixel sizes ranging from 6 to 12 micrometer. Let us assume for easy calculations that we settle for a pixel size of 10 micrometer (which is still smaller than we have at the moment in production). With such a size we have 100 pixels in a row over a space of 1mm. The classical measure for resolution is the bar test, which has alternating black and white lines of diminishing width. If there are 3 black and 3 white lines in a mm, we say that the spatial frequency is 3 linepairs/mm. With 50 black and 50 white lines, we have a spatial frequency of 50 lp/mm, or a resolution of 100 lines per mm. Consider our 100 pixels in a space of 1mm. With such an arrangement we could reproduce the spatial frequency of 50 lp/mm in a one-to-one fashion in the pixel row. The first pixel reproduces the black line, the second one the white space and the third one the black again, etc. Very easy and no problems. With a chip of 2000 by 3000 pixels on an area of 24 by 36mm we have a 6 million pixel CCD, where every pixel has a size of about 12 micrometer. Our assumption of 10 micrometer then is quite realistic.
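The sampling arithmetic can be captured in a few lines; this is only an illustration of the reasoning above, not a description of any particular chip.

```python
def nyquist_lp_per_mm(pixel_pitch_mm: float) -> float:
    # two pixels are needed per linepair (one black, one white),
    # so a pixel pitch p gives at most 1/(2p) linepairs per mm
    return 1.0 / (2.0 * pixel_pitch_mm)

print(nyquist_lp_per_mm(0.010))  # 50.0 lp/mm for the 10 micrometer pixel
print(nyquist_lp_per_mm(0.012))  # ~41.7 lp/mm for a 12 micrometer pixel
```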
It has been established for many years that the eye can resolve at best 6 linepairs/mm and on average 3 to 4 linepairs/mm. On the assumption that a digital picture will be reproduced at A4 size, we can calculate that a 7 to 10 times enlargement is needed. Working backwards we see that a pixel area that can capture 40 lp/mm is all we need or can use. Let us settle for the corresponding 80 pixels per mm as a base. It is clear that we cannot use MORE resolution than the 40 lp/mm from a lens. If a lens were to capture 80 lp/mm, there would be one black/white pattern that we would need to squeeze into one pixel. If we had a lens with 60 lp/mm, one black and half a white bar would have to be captured by one pixel. That will not work. But what is more important: if we have a lens with a higher resolution than the sensor array can capture, we introduce false information, a phenomenon called aliasing.
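Working backwards from the eye, as above, looks like this (figures from the text; the function is only an illustration):

```python
def required_lp_on_sensor(eye_lp_per_mm: float, enlargement: float) -> float:
    # what the sensor must resolve so the print just satisfies the eye
    return eye_lp_per_mm * enlargement

lp = required_lp_on_sensor(4, 10)  # average eye, 10x enlargement to A4
print(lp, "lp/mm, i.e.", 2 * lp, "pixels per mm")  # 40 lp/mm -> 80 pixels/mm
```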
What to do? And now we are at the bottom of the matter, which is fully neglected in all discussions on the internet, not only at the Lug. In front of the sensor array an anti-alias filter is placed, which cuts off the higher spatial frequencies that the lens can deliver but the sensor array cannot handle. As Jim does not stop telling us (rightly) that we need 4 pixels to capture colour information, we need a second filter in front of the sensor array: the colour filter array with its well-known pattern of red, green and blue; the missing colour values at every pixel are interpolated and that introduces noise. So a third filter is placed in front of the array: a low pass filter to introduce some softening of the image points to help the software calculate the information. Then we need an infrared blocking filter and a noise filter etc. In sum there are at least six filter layers in front of the array, with a total thickness of 6mm!!
One consequence of the limiting resolution of 40 lp/mm is the fact, recognized by Schneider, Zeiss and others, that lenses for digital capture have to be designed specifically for the resolution limit of the sensor array, typically 40 lp/mm. This resolution needs to be complemented with high MTF values and be as good as possible over the whole sensor array. Schneider states that they can design lenses with a true cut-off at the 40 lp/mm point, after which the resolution drops rapidly to zero, to evade noise and aliasing. This is not true: the MTF graphs show a much more gradual fall-off, and that is why the anti-alias filter is needed: to cut off the unwanted resolution.
It is true that normal 35mm lenses, designed for the analogue system, are not good for digital capture; specifically wide angle lenses, whose oblique rays in the outer zones might not be captured by the array. Here we must add the filter layers. A plastic stack of filters of 6mm thickness is a lens element in itself. And when oblique rays fall on this plane element of 6mm thickness, heavy astigmatism and coma are introduced. Furthermore (and now we are in optical theory again) a wide angle lens has its exit pupil close to the film plane. But now we have that filter in between. And the oblique rays from the exit pupil strike that plane at extreme skew angles, with all the attendant optical errors. So we may safely say that lenses designed for analogue photography are not good for digital capture.
If you look closely at the specs of true digital lenses, for digicamcorders for example, you see that the designers try to create lenses with limited resolution at the cut-off point AND (VERY IMPORTANT) lenses that send the rays from the exit pupil as parallel as possible to the sensor array. A special technique is needed, and remember the word as it will become more important than 'boke': telecentric optical systems. A telecentric lens is specifically designed such that the exit pupil is located at infinity, so that all rays from this exit pupil are roughly parallel. This design is not a new concept: it is quite old, but now needed for digital systems. Telecentric lenses tend to be physically big and a new bag of tricks is needed to reduce the size. The second technique is to design a lens which uses the filter plane as an integrated element of the lens.
To sum up this part: normal lenses for 35mm photography (and here I concentrate on Leica lenses, but it holds for all others as well) have too much resolution, a different type of aberration correction and an exit pupil at the wrong place to make them good for digital photography.
When we look at the interviews given by Mr Cohn, where he notes that a new line of lenses and presumably a new bayonet are needed for future use dedicated to digital cameras, he is right. The idea that it is enough to put a 6 million pixel array/chip in an M or R and we have a digital camera is naive in the extreme and neglects all the important topics that differentiate lenses for digital and for analogue photography. Were we to use current Leica lenses in front of a digital array, we would lose all that makes Leica lenses in the current state of the art so special: we lose the resolution from 40 to 120 lp/mm, we add errors to the image because of the obstruction that the filter stack presents to the rays from the exit pupil, we lose the edge sharpness of the lower frequencies (the outlines of subjects) because of the anti-alias filter, and we add blur to the finer details because of the low pass filter.
On the Lug there is an equally naive discussion that equates pixel count with true resolution. But again there is more behind the scenes. When we read that a camera or scanner can capture 3000 pixels per inch, it is easy to calculate that this amounts to 118 lines per mm or 60 lp/mm. And we simply assume that this is the same as 60 lp/mm captured by a lens on analogue film. Far from it! The 3000 pixels per inch are nothing more than the sampling frequency, and WHETHER this sampling will result in a true 60 lp/mm depends on all kinds of factors, some of which are discussed above. So it is not unusual to see a 50% drop in true resolution compared to calculated resolution. A simple test is reported below.
I scanned a test chart at 600dpi and printed the result. The original chart has a resolution pattern of 1 to 4 linepairs/mm. The scan was 1:1 and so was the print. A direct comparison showed that the scan/print combo gave a final result of 4 linepairs/mm. So I reduced the chart by a factor of 2 and repeated the procedure. Now (under a loupe) I could detect with some difficulty 5 linepairs/mm. This was done with a very good 1440 dpi printer. So where the simple calculations say that I should have a resolution of 1440 pixels per inch or 57 pixels per mm, I got at best 10 lines per mm, a far cry from the 57 lines/mm that simple theory 'predicted'. When I used the sharpness filter to enhance the image, I indeed got results that gave higher contrast at the edges by reducing the grey values, but I also saw severe colour fringing at the edges that might indicate optical flaws in the scan array or interpolation faults in the software.
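The conversion used in this test, as a sketch (nominal figures only; the whole point of the exercise is that real resolution may be some 50% lower or worse):

```python
MM_PER_INCH = 25.4

def ppi_to_lp_per_mm(ppi: float) -> float:
    # pixels per inch -> pixels per mm -> linepairs per mm (2 pixels per pair)
    return ppi / MM_PER_INCH / 2.0

print(ppi_to_lp_per_mm(3000))  # ~59 lp/mm, the "60 lp/mm" quoted above
print(ppi_to_lp_per_mm(1440))  # ~28 lp/mm (~57 lines/mm) nominal for the printer
```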
Why can Canon and Nikon lenses be used on their digital cameras? First of all, the small size of the chips changes the effective focal length: a 50mm will act like a 75mm etc. This longer effective focal length cuts off the troublesome rays in the outer zones, minimising the oblique angle problem. Secondly: the size of a pixel (in a 3 Meg CCD) is about 0.01mm. And by applying the Nyquist rule (ask the experts what this is -:) ) we can calculate that the limiting or cut-off frequency is about 50 to 60 lp/mm. For general photography and high quality digital prints for magazines etc. this is more than enough. So C and N are happy to limit their lenses to this frequency if they are able to reach it. And the N and C engineers, knowing the quality of their lenses, can adjust the algorithms such that blooming, aliasing etc. are reduced or compensated.
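The crop effect mentioned here in one line (the 1.5 factor is my own assumption, typical for chips of this class):

```python
def effective_focal_length(focal_mm: float, crop_factor: float = 1.5) -> float:
    # a smaller chip narrows the field of view, acting like a longer lens
    return focal_mm * crop_factor

print(effective_focal_length(50))  # 75.0, the 50mm-acts-like-75mm example above
```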
Medium format cameras get their final quality by using much larger chip sizes (30 x 30mm and even 70 x 100mm!). A 30x30mm chip with a resolution of 2000 by 2000 pixels gives a 4 Meg size, but the pixel size is still 0.015mm. And medium format and large format lenses have typical resolution figures of 20 to 40 lp/mm. Newly calculated Rodenstock lenses for these formats now reach 60 lp/mm at the optimum aperture and 40 lp/mm at other apertures.
So whatever size of format you use, the current pixel size of 0.01 to 0.015mm is still valid when doing calculations. The pixel size of 7.5 micron in digicam cameras is offset by a still smaller chip size.
It is clear then that all high end digital cameras operate at optimum/maximum resolution levels of 50 to 60 lp/mm. That is fine and in the same league as when scanning a 35mm slide with a high end scanner.
The specific characteristics of Leica lenses are partly achieved by the resolution/contrast level beyond 50 lp/mm. Hence the difficulty of putting current Leica lenses in front of a CCD sensor array.
As all these calculations (50 lp/mm cut-off frequency) are based on the assumption that the image will be printed at DIN A4 size with a 60 raster, here we see both the rationale and the limits.
Image perception.
The appreciation of final image quality (as observed in an analogue print, a digital print, a projected slide or a reproduction in a book) is not easy. We all know that. We look at a picture with our eyes, and this basic fact has enormous consequences. Our visual system is not objective, and we perceive a picture with a large dose of psychology involved. Vision science (yes, this exists too!) tells us that we have a three-step process: the retinal interpretation (the image as recorded by the retinal cells) is transformed by the neural response of the cortical pathways (the visual streams in the cortex), and lastly the perceptual interpretation adds perceptual properties (such as shape and colour) to this image, which is again interpreted by our knowledge of the external world. We know we live in a three-dimensional world, and the interpretation of the image will be done such that visual clues are interpreted as happening in a 3-D world. And of course these interpretations are also embedded in our cultural and emotional heritage. In order to get to the basics of an image, we have to be aware of all of these layers of interpretation. The simple concept of 'sharpness' is a most difficult topic to analyse. We have to find and isolate aspects of an image that most people will identify with 'sharpness' and we have to find ways to reliably reproduce image elements that are associated with 'sharpness'. Now 'sharpness' is not a quantifiable aspect of an image, as sharpness clues are contextual and dependent on psychological mechanisms. I have seen pictures described as being 'tack sharp' that in my view were fully unsharp compared to other pictures that I would consider sharp. If I define sharpness (as an example) as a measure of the total amount of visible information in a picture, I am telling you that for me the sum of gradation (the differences in tonal value that can be detected) and resolving power/contrast (the smallness of details that can be identified) is of paramount importance.
BUT: gradation and resolving power are dependent on magnification and a lot of other factors. For the sake of simplicity, let us assume only magnification is important. Then in order to compare two pictures, we need to enlarge them to exactly the same magnification and they must have the same content. If they have a different content, our layers of interpretation could get different visual clues from the picture and make a different assessment.
Here we have the first problem when individuals tell you that picture A is much sharper than picture B and by inference that lens A is not as good as lens B. First: people forget about the imaging chain: the same aperture, distance, film, speed, development, enlarger etc. are needed. Then we need the same content in order to compare. This is normally not done. The many comments about lens performance you read on the net (newsgroups) and in articles hardly ever rest on a comparison, but on an isolated assessment, or on a comparison of pictures taken in very different circumstances and with different subjects. How it is possible for someone to take a 400ISO picture at 3 meters from a person with lens A and a 100ISO picture at infinity from a landscape with lens B and then give a verdict about lens performance escapes me. Unfortunately this is the general case. Even if you use the same film and aperture and speed and distance but different subjects, there is a danger that the visual processing done by our mind will favour the picture that is more pleasing and thus distort the assessment.
If you look at the care that is needed to do really meaningful tests of the working of our visual system, you may wonder why photographers are so quick to express their opinion that this picture is sharp or better than another one.
Remember that there is no agreed standard of image quality and a large latitude of interpretation is possible. Attempts in the past to find a statistical average by showing hundreds of persons a set of pictures and asking them to select the ones they like best or find best in terms of 'sharpness' or 'colour fidelity' or whatever you want to analyse, have not failed but are hardly conclusive. Given these aspects (image chain, comparison, visual processing) I am very hesitant to make an assessment of one or a few pictures and derive conclusions from this about the lens quality (just one of many parameters, if a very important one). And when I read that some photographer remarks that on the basis of a few rolls of pictures he does not like a lens and therefore the lens is bad, I wonder what he is trying to do. It is human nature to want to rank items: this restaurant is better than that one, this book is better than another one, I like this ice-cream more than the one I had a week ago. This car is a bad one, and I want another one. This type of discussion can be found in most social discourse and there it is fine and appropriate.
Personal views are always acceptable and in a certain way the truth. Every individual is his own final arbiter. Like something and you will be pleased. It is as simple as that. If you like the results you get with a 400ISO film and a Summarit lens at 1.5, the world is OK. You are pleased and that is all there is.
Replace lens by film, analogue by digital, wet print by inkjet print, and the same statement holds. No discussion is needed. And if we would stop here, we could all benefit from an exchange of opinions and be interested in why someone is pleased. But now a remarkable process occurs. The statement that "I am pleased" is stretched to cover the statement "the products I use are better than the ones somebody else uses to get results I am not pleased with".
This is not allowed. There is no logic that allows you to infer from "I am pleased" that "the products I use are better". If you are ranking products you need a standard and a measurement method. If you are ranking lenses you need a standard to compare different lenses. And you should be willing to accept that your personal opinion is not the best yardstick. Now we enter the realm of science: most people assumed on the basis of common sense that a feather would fall at a slower rate than a brick. But what happened: under identical lab conditions (in a vacuum), a feather falls as fast as a brick.
The same holds for the assessment of a lens based on an appreciation of a picture. The picture is the result of a large chain of acts and factors that do influence the appearance of a picture and the content of the picture evokes an emotional chain of reaction in the brain that may influence the appreciation.
So it is simply impossible to extend the personal impression of the image quality of a picture beyond the individual statement. You as a person may express your individual feelings or impressions, but that is it. A personal opinion may be very valuable and give insight into the topic, but that is quite far removed from stating that someone's personal preference can be seen as a normative statement. To go beyond the personal opinion we have to do a group test, just as is done in art.
Up till now no one I know of has ever proposed a meaningful definition of image quality as perceived by photographers or others. Being inherently subjective, such a definition is not feasible. So it is natural that lens designers and theorists of optical performance need objective criteria. The goal of a lens is to reproduce the object in front of the lens as faithfully as possible. So the easy definition of image quality (as seen by optical designers) is the closest approximation to the ideal of perfect reproduction. We know that the deviation from the ideal is caused by aberrations and mechanical faults (tilt, decentring, wrong spacing of elements, bad glass surfaces). Neglecting the latter for a moment, we have as objective criterion the amount of aberrations in a lens. In a previous newsletter I have told you about the several methods to measure the aberrations in a lens (MTF, Strehl ratio, etc). From this position it is easier to define a lens as good, better or best. Even here we have design philosophies which make it impossible to rank all lenses in one row. Zeiss has a different approach to design than Leica and that makes it difficult to say that either one makes the best lenses. But at least we have criteria which can be measured, discussed, compared and ranked in importance.
The proponents of the subjective assessment school will argue that you cannot use objective measurements because they are done on a flat subject, and what is being measured is not related to the 3-D world and the practicalities of photography. This argument is extremely weak. Aberrations are not confined to a single plane of focus. They are at work too in the depth of a subject. And any computer program can do what is called through-focus analysis, meaning that you move the plane of focus over a certain distance to see how the difference in focus affects the aberrations. This through-focus movement is identical to depth of field and so to picturing an object with depth.
The real topic is to find a way to translate the optical quality of a lens into some sort of picture quality, taking into account the imaging chain. You have to take real pictures to do that. But not at random: controlled and comparable.
If we want to test a lens or make meaningful statements about a lens, we need to create a situation where only the characteristics of a lens make a difference to the result we study. In other words: keep all parameters the same, except the lens. That is the only way to make valid comments about a lens performance.
That is why I always use the same objects when studying the behaviour of a lens. I photograph my cat, at a specific distance (which is different for a 90mm than for a 21mm), with flash and the same film in the same environment. And a landscape in identical situations: sometimes the weather does not help and I have to postpone my test. And a model and several other representative subjects, but always at the same apertures, magnifications, light, film, developer etc. I take care to change only one variable: the lens. This is hard work and often boring. From film and developer tests I know which film/developer combo allows for the maximum performance.
Then I have an impression of how a lens performs in practical situations. But I know that there are many limitations at work here. To back this up I do a bench test to measure infinity setting, contrast, resolution, decentring, vignetting, and the changes in optical performance from center to corner, at all apertures and distances. Again hard work and boring. But I know at least what to expect from a lens, and when my practical results are in accordance I am happy. When not, I have to refine my practical testing to get the optical performance onto the negative.
Now there is that age-old argument that you cannot make meaningful statements from one sample, as one sample is statistically irrelevant. In theory this is true: you cannot generalise from one sample. In practice we all do this. We use one lens and say that we like it or not. Read any test magazine about cars, computers, scanners, motorbikes, airplanes, what have you: EVERYWHERE only one sample is used and conclusions are drawn. No reader of a car magazine has ever questioned the validity of the test because it was based on one sample. It is typical of the photographic world, and in fact only of the Leica world, that this argument pops up. Why? Because someone 50 years ago made the remark and it has been repeated over and over, without anyone questioning whether it is still correct now or even was in the past.
The justification of the one-sample test is twofold. One: current production standards are such that it is unlikely that a sample way off specs will emerge from the assembly line. In the past the quality tolerances were larger and then it might have been the case that you tested a particularly good or bad example. Two: if a responsible tester gets a lens that seems to be out of line given the specs of the manufacturer, the MTF graphs or other data, he will try to find out what is happening. In my case: I first check all equipment to see if it is still calibrated as it should be. If so, I make new pictures with additional checks of all parameters in the imaging chain. If the lens is still below or way above what is expected (given my experience of having tested hundreds of lenses), you contact the factory.
Any tester knows that in a one-sample case there are some question marks as to how representative the results are. These have to be faced and checked. This can be done by cross-checking, cross-comparisons, etc. How large should a sample be to be really meaningful in a statistical way? Sampling theory tells you that the size of the sample is not related to the size of the population, but depends only on the required confidence level and the variance involved. In fact at least 20 items have to be included in a sample for true statistically relevant results with 99% reliability. So whatever number of items below 20 you take, it will not improve the reliability. Whether I use one or three or five is irrelevant: twenty I need. So I do not increase the reliability of my results for the whole lens population if I use one lens or even 10. That being the case, I can safely state that testing one lens is as good (bad?) as testing five.
It would be nice if these lessons of statistical theory were diffused in the Leica community, to get rid of yet another myth: that you cannot get relevant and valid results from testing only one lens. If we followed the strict rule that you need to test 20 items before the results are statistically valid, we could stop all discussions about Leica products: who has tested 20 items of every lens or camera he talks about?
It is my strong impression that for some reason Leica discourse has been stuck in the same groove for the last 60 years. Would it not be time to embrace modern ideas and concepts? The recent discussion on the Lug about glass for filters is another example of the myth-creating power of being out of synch with current technology and science.
Digital versus analogue. No doubt printing technology is improving to a level where it may be difficult to see the difference between a digital print and a wet print. But that is not the issue. Any digital print is a rasterised version of the digitally captured image, just as when you take a photograph to the printer and ask him to produce a book. And we all know that the craft of the printer is such that at higher raster values the pictures in a book look convincingly close to the original, at least at A4 size.

There have been some studies about the relationship between sharpness and graininess. The conclusion is that image quality is not a linear combination of sharpness and graininess. The overall quality tends to be determined by the lower of the two aspects. If graininess is high then the image looks poor even if sharpness is high. If sharpness is low then the print has low quality regardless of the grain level. Electronic images are still low in sharpness and low in graininess, and so will be perceived as lower in quality than analogue prints that have good edge effects or acutance and so high sharpness. Of course you can enhance the sharpness of the digital print with the several filters available for sharpness manipulation. But another study noted that the sharpen filters introduce unwanted artifacts and are not so easy to apply with good effect. Specifically this report noted that the sharpen filters in Photoshop were quite weak (if not bad) and that to really sharpen a digital image with convincing results a person had to work for many hours to get the required results. Then it is still easier for me to jump into the wet darkroom and start with a negative that is superior and a process that is simpler and takes less time. Amazing, is it not? As a famous Dutch football player once said: every advantage always comes with a disadvantage.
Leica and medium format
The issue of image quality and how to use the Leica equipment (and optimize the imaging chain) is an important one. When Mr Stein and his team designed the M3, they had in mind the creation of the ultimate photographer's tool, not a collector's piece. Leica equipment is made for taking photographs and enjoying the act and the result. We are not getting any closer to that goal as long as we stay within easy dichotomies like analogue versus digital or medium format versus 35mm. While these topics bear a relation to the concept of image quality, it is best if we try to define a somewhat manageable yardstick to see if we are approaching the standard of image quality. Everyone is familiar with the idea of resolution coupled to contrast. Resolution is measured in spatial frequencies as a string of alternating black-white bars of ever smaller width (per mm), and contrast is defined as the relation between the black and the white as a percentage, where black is seen as 0% intensity. The Leica goal of optical design is to have 40 linepairs/mm at a contrast above 50% from center to corner of the image, and most current M lenses reach that goal. What we need is a different matter. And here opinions range high and wide. Some would say that at least 100 lp/mm can be captured on the negative and have to be for superior results. Let us take a realistic approach and ask ourselves if we have some visual awareness of these numbers.
When I give a course, I always project a slide of some landscape scene which is by all standards sharp and full of detail. Then I ask anyone to identify a part of the scene where (s)he can see 5 or 10 or 50 lp/mm. It is remarkable that no one can!! We all discuss the concept of resolution and are quick to throw some numbers into the discussion, but we are unable to visualize them. One example: a landscape mostly consists of shapes of larger or smaller dimensions, and within the smaller shapes we hardly see still smaller sharply defined details, but shades of colour or grey values. Does it occur to you that the outline of a small or large structure can be equated with just one linepair? And you may have trouble finding structures that can be related to 3 or 4 linepairs. Indeed it was established long ago that 2-3 linepairs per mm in the print is adequate for exhibition quality landscape pictures, and 1-2 linepairs for high quality portraits.
The medium format advantage. Here we have the origin of the superiority of the medium format camera. When we have a large print of 40x50cm, the 6x6 negative is enlarged about 8 times. Two linepairs in the print imply 2 x 8 lp on the negative. That is a meagre 16 linepairs/mm on the negative. Now most lenses, even the simple one in the Seagull, can resolve 16 lp/mm with fair contrast, and the real advantage of the larger format can be played out. If we picture the same scene on a 35mm negative and on the medium format negative, it is obvious that every part of the subject will be covered by a much larger area with the medium format. This larger area does not translate into higher sharpness (as both systems resolve the required 2 lp/mm in the print), but the medium format reproduces the gradation in the tonal value differences more faithfully, as there is more silver halide available to reproduce these subtle grey scale differences. However good the Leica lenses, they will lose this game as there is (as the word goes in the automobile world) no substitute for cubic inches, in this case square inches of negative area. My own comparisons between Leica and Hasselblad show a virtual equality up to 30x40cm (which is already a big tribute to Leica and the lens/film combinations used) and a gradual loss after that, as the grain of the 35mm negative starts to break up the smooth tonality, NOT the sharpness or the reproduction of fine details.
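The enlargement arithmetic as a sketch; the 14x figure for 35mm is my own assumption for a 40x50cm print from a 24x36mm negative.

```python
def lp_needed_on_negative(print_lp_per_mm: float, enlargement: float) -> float:
    # resolution needed on the negative for a given print resolution
    return print_lp_per_mm * enlargement

print(lp_needed_on_negative(2, 8))   # 16 lp/mm: 40x50cm print from 6x6
print(lp_needed_on_negative(2, 14))  # 28 lp/mm: the same print from 35mm
```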
Resolution and contrast in practice. Our eye has a visual acuity of 2 minutes of arc per linepair, or in normal parlance: we can see a letter E with a size of 9mm at a distance of 6 meters. That is, we can distinguish separately at that distance the three horizontal bars of the letter E. Some people dismiss the testing of a lens with a bar line pattern as nonsense. Still, when doing so you get a very good idea of what the dimensions are. When I did this type of testing I had a bar line pattern of one linepair per mm and my simple assumption was this: 100 linepairs is a piece of cake for a modern Leica lens, so I need to take a picture at a reduction of 1:100 to get on the negative a structure of 100 lp/mm. So I took the pictures at a distance that is 100 times the focal length. For the 280mm Apo-Telyt that is 28 meters. At that distance I could not even distinguish the target and had to introduce a deep black line on the chart to help accurate focusing. After developing I was not able to see any structure: the whole test chart area was one grey patch. Only under the microscope at 40x could I see that mythical 100 linepairs. And at first I did not capture any details at all: every line was unsharp through movement of the camera body. Only after mirror lock-up, a very heavy tripod and additional weights on the lens and body to damp any vibration of the lens travelling to the body did I get the pattern recorded at all.
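The test distance follows directly from the required reduction; a minimal sketch, using the 1:100 setup described above (distance of roughly 100 focal lengths, ignoring the small exact-conjugate correction):

```python
def target_distance_m(focal_length_mm: float, reduction: float = 100.0) -> float:
    # a 1 lp/mm chart lands on film at 100 lp/mm when reduced 1:100,
    # i.e. at a distance of roughly 'reduction' focal lengths
    return focal_length_mm * reduction / 1000.0

print(target_distance_m(280))  # 28.0 m for the 280mm Apo-Telyt
print(target_distance_m(50))   # 5.0 m for a 50mm lens
```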
Who says 100 lp/mm is easy and can be done even handheld? I am surprised that in the current Leica Fotografie International there is an article which claims that outstandingly sharp pictures can be made with this 280 lens handheld or on a monopod. It is quite clear that this photographer has a very different interpretation of what constitutes a sharp picture, or he is referring to the quality definition used above (2-3 lp/mm in the print). And then you underutilise the Apo-Telyt 280 significantly.
Even the limit of 40 lp/mm is not so easy to reach. Handheld photography at normal shutter speeds hardly ever brings that level of resolution onto the negative. Sad but true. After doing all these tests I am now acutely aware that these three-digit resolution figures are more elusive and harder to capture than most assume.
The comparison 35mm versus 120 format. If I use a very fine grained film in 35mm format like Techpan, I am able to compete with the micro gradation (subtle changes of gradation in small areas of the negative) and tonal smoothness of the medium format. And in fact up to 30x40 or a bit larger (40x50) there would hardly be a visible difference. The comparison between formats is always a system choice. If I compare Techpan 35 and Techpan 120 at the same final size of 40x50, there is a difference, however small. But given the wider apertures of the 35mm format, the most often used comparison is Techpan 35 and, let us say, Tmax100 in 120. Then both systems are equal. BUT: there is more. The microcontrast and overall contrast (sharpness at subject outlines) of a Leica lens (MTF values) is higher than that of a comparable Hasselblad/Zeiss lens. (Incidentally: Mike Johnston's article about MTF and contrast is not correct when he tries to explain that there is no overall contrast in a lens when we are talking MTF.) The superior MTF values of the Leica lens cannot easily be transferred onto the paper. If I use a 15 times enlargement in my enlarger, I have by nature a larger amount of light scatter and a general loss of contrast. At 5 times enlargement I can use grade 2, but the identical negative at 15x needs grade 2.5 or even 3 to compensate for light scatter and the need to illuminate a larger area with the same lamp, which must bring a drop in contrast. In fact at enlargements above 10x there are all kinds of disturbances that degrade the final image: light scatter, lens quality and vibration. These effects should not be dismissed.
BW developers
While on this topic: the best and most satisfying paper size for Leica pictures is 30x40cm. Why? With this size you can bring within visual range all the detail that the lens/film combo can capture, and you can be close enough to the print to see the overall picture and the small details. When using 20x25cm, the enlargement factor is too low to bring out all the details of the negative, and at 40x50cm you have to stand back to see the picture and then you are so far from the print that the resolving power of the eye does not enable you to see the richness of detail.
It is quite clear to me that BW photography with prints at 30x40 is the ultimate standard by which to judge Leica quality. If there is progress in the mature industry of analogue processing, then the area of print papers holds the most promise. The tonal scale and gradation and print contrast (deep black and paper white) can be significantly improved. Normally a paper print gives you a Dmax of D=1.9 to 2.1. But with special toners you can reach D=2.3 to D=2.5, which is a spectacular improvement that digital prints cannot even approach. With careful exposure and development you can create a negative that holds its shadow details such that this deep black improvement retains the shadow details. For the negative I see hardly any improvement coming, since Kodak and Agfa have shelved their activities to improve photon efficiency. Agfa BTW will retain its analogue business and even try to expand it.

On the topic of developers I have a simple but iconoclastic view. Most of the developers now on the market deliver identical results, given the film emulsions we have on the market. I know that many photographers hold on to their preferred solution (developer and dilution) and claim all kinds of special advantages. None of these can stand a careful comparison test. And the well-known handbook of Anchell and Troop has many claims, but not one is documented and/or proven by comparison tests. I do believe that the whole discussion around the special secrets of BW processing is a relic of the past, when alchemy was thought of as a real science. It is also part of the profile that a photographer wishes to have: having access to hard-won personal experience that adds an edge to his photos. Of course there are some clear differences between classes of developers, like Xtol and Rodinal. But after having tested and compared about 50 different developers with all kinds of film, I have to state that the differences are in 90% of the cases marginal if not trivial. One may be able to measure a longer toe in some film/developer combo when doing densitometric tests, but I wonder how much of that aspect can be brought to paper, as the characteristic curve of the grade of paper used may not support, or may diminish, that effect. As the print is the final result/arbiter, I am not so much interested in studying one step in the chain as in finding a total solution that gives me a fine print.
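Since density is the base-10 logarithm of the light attenuation, the Dmax figures above translate into contrast ratios as follows (purely illustrative):

```python
def contrast_ratio(density: float) -> float:
    # D = log10(ratio), so ratio = 10 ** D
    return 10 ** density

print(round(contrast_ratio(2.1)))  # ~126:1, an ordinary print
print(round(contrast_ratio(2.5)))  # ~316:1, with special toners
```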
I have done all kinds of tests and used prints, the microscope and the densitometer to find reliable and reproducible differences between combinations of film and developer and paper. But the range of parameters is so great, the incidental user interference so unpredictable, the match between the components of the chain so subtle and the effects so badly understood, that we are facing a process with a wide latitude of unknown influences, the impact of which we cannot predict. As an example: many stick to a proven process of agitation (a 30 seconds rhythm or whatever). Developer manufacturers will show you results indicating that agitation has only a minor influence on the result (at least in 35mm format), and chemical theory will tell you that the movement of molecules and the chemical chain of interaction between the developer substances and the silver halide is hardly influenced by the agitation sequence. If someone tells you that he has two negatives that are indeed different, he will hardly be able to single out the effect of one process step.
It would be best for your conscience and for the improvement of your photographic technique to stick to one of a few possibilities: PanF+, Delta100 or Tmax100 (possibly Acros, but my comparisons did not show any clear advantages of that film over the rest) in Xtol (whatever dilution, as long as it gives you negatives that cover a six stop contrast range in the object with a density range of D=0.1 to D=1.3). Rodinal is also a good choice, but expect some grain and noise that reduces the micro gradation. It depends on your choice of subject. But test your film/developer choice consistently at 30x40, or a selection of the negative at an enlargement of 12-14x. If you stay at much lower enlargements, any possible differences and effects may be lost beyond the visual threshold.
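As a hedged check of that aim: six stops is 6 x 0.3 = 1.8 in log exposure, so the implied average gradient is about 0.67.

```python
# Average gradient implied by the negative aim stated above.
log_exposure_range = 6 * 0.3   # six stops, 0.3 log units each
density_range = 1.3 - 0.1      # D=0.1 to D=1.3
print(round(density_range / log_exposure_range, 2))  # ~0.67
```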
Why do East European films have a clear base, compared to Ilford, Agfa and Kodak with their grey coloured base? First, as usual, some theory. When light hits the emulsion layer, three separate processes take place. The easiest one is where rays pass straight through the emulsion and hit the halide grains. Then we have the situation where rays are deviated from their course because of the refractive index of the gelatin and because of scattering at other grain clumps, and form what is called "irradiation": the scattering of rays within the emulsion, producing a softening of the edges of contour lines and diminishing the effect of acutance. Thirdly: some rays pass through the emulsion and are reflected back by the pressure plate or the backing of the film: these rays produce the well-known halo effect or halation. Irradiation can be minimised by thin layer films and by a grey colour of the film base. Halation can be minimised by the use of a grey base, a special anti-halation backing, or an additional layer of 1 micron thickness that prevents rays from reflecting back into the emulsion. Many 120 format films use an anti-curl backing and Kodak and Ilford add to this layer an anti-halation backing. So several 120 format films are clear based after development and fixation. The original Adox films used a three-pronged attack on irradiation and halation: a very thin emulsion layer; a grey colour for the film base; and an additional anti-halation layer of a few micron that is washed away during fixation. This layer however is very sensitive to scratches when the film is long, so it is suitable for 120 format and not for 35mm. A grey film base reduces the sensitivity of the film and reduces overall contrast. The East European films use a clear base and an additional anti-halation backing. So they reduce halation, but not irradiation, which is countered by their very thin emulsion layers. In my testing all Efke and Maco films exhibit a much higher overall contrast than comparable Ilford, Agfa and Kodak films. A clear base is more cost effective; a grey coloured base is more complicated but can reduce irradiation more effectively. The visual impact of the Efke/Maco films as having a brittle edge contrast may be explained by this clear base. A topic for more study.
MTF and flare
MTF graphs. In the book there are MTF graphs of all current M lenses and their predecessors. As I have noted that interpreting an MTF graph is not an easy act, here are some guidelines. We are all well aware that the image quality of a lens is better when the lines of the MTF graphs are straighter and located higher in the diagram. But here are some pitfalls. First, the actual contrast transfer in percentage points is not easy to assess. As an example: for the 5 lp/mm graph (which defines overall contrast) a difference of ONE percentage point is significant. So a lens with a 94% contrast transfer at 5 or 10 lp/mm will show a drop in overall contrast when compared to a lens which has a 95% contrast transfer. On the graphs that is not easy to see. But for the graph that covers the contrast at 40 lp/mm, a difference of 10 percentage points is not very important. So two lenses, one with a transfer of 50% and another of 40%, may be very close in performance. Secondly, to get a good grasp of the general optical performance of a lens it makes no sense to look at just two graphs (wide open and at aperture 5.6). The performance needs to be studied at every aperture and the behaviour of the curves at every aperture needs to be compared. As no one gives you these data, you are probing in the dark. That is why I persuaded Leica to publish graphs at the aperture wide open and at optimum performance, and not at the usual aperture of f/5.6 (Zeiss, Canon) or f/8.0 (Photodo). Most recently designed Leica lenses are already at their best around f/4 or wider, so a comparison at 5.6 is not the best way to go. Thirdly, there are several image planes which the designer may want to focus his lens upon. Remember that a point is not focused as a point but as a patch of light.
The rays that enter the lens are focused not to a point but to a caustic, a kind of paraboloid shaped like an hourglass turned on its side. The image plane can therefore intersect the waist of the hourglass at several locations, giving a different balance of contrast and resolution. You can go for the finest resolution but lose contrast, or the other way around. If the person behind the MTF gear does not know where the designer wishes the focal plane to be, he can seriously go wrong. I asked the man from Hasselblad who does the Photodo measurements what focal plane he chooses. He refused to answer, and that makes the Photodo results very suspect. Fourthly, and most seriously: MTF measurements (calculated, or really measured as at PopPhoto) are generated without taking into account that most devious aspect, flare. MTF graphs are normalized to zero and disregard flare. So a theoretical high MTF value of, let us say, 92% (at 5 lp/mm) could in practice drop to 80% as flare reduces the contrast (micro and macro). This is never discussed (not by Photodo, nor Zeiss, nor even the many interpreters of MTF graphs). You will find it discussed in the M brochure of course. Flare is the most significant of all image degraders.
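To make the flare arithmetic concrete, here is a hedged sketch; the flare model (uniform stray light added to both the light and dark parts of the target) and the 15% level are illustrative assumptions chosen to reproduce the 92-to-80 drop just mentioned, not measured values:

```python
# A hedged model of how flare drags down a theoretical MTF value:
# uniform stray light raises both the bright and the dark parts of the
# target, so the modulation (Imax - Imin) / (Imax + Imin) shrinks by
# the factor 1 / (1 + v), where v is the veiling flare expressed as a
# fraction of the mean image luminance.

def contrast_with_flare(m, v):
    # m: flare-free modulation (0..1); v: veiling flare fraction.
    return m / (1.0 + v)

m0 = 0.92    # e.g. a theoretical 92% contrast transfer at 5 lp/mm
for v in (0.0, 0.05, 0.15):
    print(f"flare {v:4.0%}: effective contrast {contrast_with_flare(m0, v):.0%}")
# 15% veiling flare takes the theoretical 92% down to 80%.
```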
Digital trends and the role of a brand name
The topics this time are a bit heavy and wide ranging, but still relevant for Leica photography.
The decline of the PC as a big force in the industry is imminent, as the merger of HP and Compaq indicates. The PC is now a simple commodity in the world of consumer electronics, and a low priced one, with a thin profit margin. Its role as an information machine is losing out to modern gadgets like mobiles, handhelds etc. And this is a normal development. It does show that all consumer electronics will follow the same path:
Innovation, mass production and acceptance, overtaken by another innovation and becoming a low priced commodity.
The question is what will be the fate of the digital camera, which, like it or not, is for most people an extension to the PC as is a scanner and a printer.
Just as PC owners see a limit to upgrading to ever more speed in the processor, so digital cameras will settle for a 5 to 6 million pixel size, as this is all the average and even ardent amateur will need or is willing to pay.
The industry is now very keen on selling digicams because of the very attractive profit margins. Note that the average digital camera is based on the same camera chassis and production technology as the analogue version, which sells for a third of the price or even less. Is it the chip that is expensive, or the software? Neither: people pay a high price because the product is still in the early innovator stage of the product cycle.
But once largely adopted (as the PC is) and the product life cycle is stretching to that of the video recorder, the price will drop, the margins go down and the electronics industry has a new challenge.
Do not believe for a moment that the digital camera companies have photography in their mind when devising and selling the digicams.
In my view the fundamental change is the shift of industry from photographic to consumer electronic.
The classical photographic industry had a vested interest in the art and culture of photography as the tools they made and created were dedicated to photography.
The current electronic cameras can migrate to any type of capturing device, stillframe or moving. The consumer electronics do not foster a culture: video recorders and camcorders are in the same class as refrigerators.
Of course many will argue that it is possible to use a digital camera as a tool for photography: we can seek the decisive moment as well with any capturing device.
It is predicted that the Xmas season this year will see an avalanche of digital cameras in the lower price region and in the high-tech class, and the overtaking of analogue camera sales may be an economic indicator that the last fortress of analogue recording is crumbling.
In a sense photography will continue as an art form. A well composed picture with interesting content and fine colors and details, or a daring composition with new insights into the subject, can and will be made with the digital camera.
But what is fading is the craft of photography as a knowledge base and the experience to exploit the finer points of the medium. Walk around for a while in a range of newsgroups and you will see the frightfully thin knowledge base on which most people lean for information and opinion.
It is fully true that Cartier-Bresson was not a technician; he even boasted that he did not know a word of photo technique. Still he knew intimately what the emulsion he used could do for him. And he exposed the film as an aid to the message he wanted to convey. This intimate knowledge of and rapport with the medium is part of the fascination of true photography.
The medium is still (part of) the message. And the Leica camera is at the core of the craft of photography. We all know that most Leicas are owned and/or used by amateurs, and if Leica had to live on sales to professional users, the cameras would be five times more expensive.
Leica has made a wise step in combining forces with Matsushita and putting its lens design expertise in one basket with a consumer electronics company, one that also manufactures the Minilux for them.
But will the analogue line survive? Yes, if the craft of photography thrives in a world where image making increasingly means image manipulation. No if the dominant force will be the computer assisted creation of a digital image.
CAI (computer-assisted image manipulation) is a fine tool, but as with all these terms (CAD, CAM etc.) the fallacy is to assume that the computer can compensate for the deficiencies of the human operator. No lens design software can design a decent lens without heavy input from the user. So he still needs to have intimate knowledge of the theory of aberrations, even if the computer can optimise the lens without user control.
My point is that without intimate knowledge of the craft, no computer can do anything meaningful. Lose the basics of the photographic craft and no Photoshop can compensate.
The Leica is designed around the ideas and the tools of analogue photography. It is one of the finest instruments in this medium. If the medium dies, so will the Leica.
Leica is a brand name and the connotations are with a certain product. McDonald's is a fast food supplier. A car made by McDonald's would not sell, as this is not their area of expertise. Leica cars would not sell, nor Porsche cameras. A brand, as has been noted recently, is a necessary element in today's overcrowded product arena. A brand is and was based on one feature: a guarantee of reliability and quality. People return to the product if these aspects are guaranteed. A Leica badge on a product that does not live up to these standards will fail in the market. Consumers are not loyal to the brand name in itself. They are loyal to what the brand provides: quality, reliability, and the ease of choice.
Sigma lenses
Lens tests.
As I said in the previous letter, I am now testing a series of Sigma lenses to see where they stand. Sigma is one of the better third-party suppliers and certainly a daring one, given the exotic specs of some of their designs. The comments I will make should be read in context, and I hope you will respect this, as I am unhappy when statements are picked out to prove a point rather than read as part of the whole story.
The test is done in two stages. First the results from the optical bench and the conclusions that can be drawn from these results. The next part is an assessment of how real life pictures look in practice.
I have to say at the start that some usual opinions will have to be relegated to the dustbin.
It is not true that this type of lens is always of inferior quality compared to first-rank optics.
It is true that mechanical engineering is more important than optical design as such, as the potential of the design can only be put in practice with a mechanical design and construction that matches the optics.
It is true that quality control and the tolerance bandwidth have a significant influence on the price. As an example: all lenses had dust and specks above the level we are used to from Leica lenses.
All lenses are fitted with pressed aspherical elements, but this is not translated into small size or superior performance in itself.
The zoom 3.5-4.5/15-30mm.
A very big lens, with a deep blue coated front lens. Why it has this coating is not known, but it makes it very difficult to look into the lens. Its performance is excellent:
At 15mm and full aperture quite crisp definition at the center part of the image with the usual steep drop in quality at the outer zones. Amazingly low distortion and visible vignetting of normal proportions.
At positions 21, 24 and 28/30 the quality stays in the same league or improves a bit. There is at the wider apertures a veiling glare and a visible amount of spherical aberration that softens the edges and brings some blur circles around the star images. There is overall a quite visible amount of chromatic aberration, which shows in color fringes at the edges of points and lines. You would however see this only at larger enlargements.
Tilt and decentring of elements is also noticeable, but again quite low for a lens of this complexity.
All in all: a very fine design.
The zoom 2.8/28-70.
At positions 28 and 35 quite good quality in the center part of the image, with the same veiling glare and chromatic aberrations. Distortion is more visible at the long side of the negative, but not at the short side. At focal lengths 50 and 70 however the lens shows a pronounced distortion. There is also much wobbling in the lens groups that make up the design. This is only visible at some focal settings, as in others the tilt will cancel out. Here we see a phenomenon that often occurs with these zoom lenses: the AF movement needs to be very smooth and fast, and so the zoom groups (variator and compensator) are mounted with less secure mounts. Now you can understand why Leica's 2.8/35-70 was so heavy and difficult to assemble.
The 1.8/24mm macro.
A big lens, as usual with these SLR designs. This lens was a real surprise. Very good image quality at full aperture over most of the negative area. Minimal distortion, only a trace of veiling glare and wide-open aberrations. In fact this lens at 1.8 is as good as the Elmarit-R 2.8/24 at 2.8. Admittedly this Minolta-originated design is 15 years old, but we can see where current optical design is heading. The mechanics of the Sigma lens felt a bit sloppy, but that is usual with many AF designs and in this case did not affect the performance. Some tilt and decentring reduced contrast at the outer zones.
Stopped down it improved with a jump as then the aberrations are reduced and the scattering of light is almost eliminated.
The 1.8/28mm macro.
This lens is a big one too. Optical performance was way below the Summicron 2/28, and below its 24mm companion, but still in itself of average to good quality. Many photographers would be happy with this quality. Again the fingerprint or family trait of veiling glare and chromatic aberrations, and in this case a higher level of decentring and tilt. The center of the image, where most people would focus their attention, showed fine detail with the soft edges of the color fringing and the spherical aberration. The outer zones showed a big drop in quality that improved when stopping down. Distortion was low but of a complicated pattern of compensation with two twists of direction.
Overall these lenses do indicate where the cost/performance equation lies:
Wide open performance is degraded by aberrations, due to a less stringent optical correction. But stopped down there is less to complain about. The mounts and mechanics and assembly are the parts where the cost containment does occur. At least we know why we pay the premium for Leica lenses.
But generally the leveling of performance to a high standard, with some downward glitches, is the trend of the last 10 years. A good design is now easy thanks to powerful design programs and many examples of how to design for good performance. A cost-effective assembly is a strong point of these manufacturers and they can allow themselves the equipment to manufacture complex synthetic parts that ease assembly and whose high cost can be distributed over many tens of thousands of lenses. The other side of the coin is the wider tolerance band and the weak wide-open performance. But there are some remarkable designs and the 15-30 may be designated as a truly good lens.
The next part is a report on the pictures we took in practice with real film and real subjects.
Then we will give an overall assessment. These preliminary results are just a midpoint evaluation. And some finer points may change.
Best lens??
The statement in Pop Photo that the Planar 1.4/50 is the best 50mm lens ever and that the new 3.5/50 Elmar in the '0' series is the second best one, is remarkable and rash at the same time.
First some general remarks. It is not often done to lump all 50mm lenses into one basket and select the best one. It is well known that with larger apertures, the number and severity of aberrations increase with squared and cubed magnitudes. This increased amount cannot be corrected to the same level as can be done with lenses of less wide aperture. You can of course add more lens elements to offset the aberrations, but this equation has its own limits. In the past 50 years designers have found the best compromise with a six-element f/2 design in the 50mm focal length. And all lenses that offered a wider aperture (1.4 or 1.2 or even 1.0) had a performance drop compared to the best f/2 designs. This almost iron law worked in the thirties, when the first wave of high-speed designs was created, and it works today with all major manufacturers. Nikon, Canon, Leitz and Leica never designed a 1.4/50 lens that equalled or surpassed their own best f/2 design at the widest opening. There are crossovers and generation changes of course, but as a general rule this holds. I at least do not know (from practical tests or from the literature, where thousands of designs have been examined) of any 1.4 lens that at its widest opening (1.4) is better than the best f/2 lens at f/2.
The PopPhoto description "best" is not clear in its meaning, but I presume that they are talking about optical properties at all apertures and comparing the performance per aperture with other 50mm lenses. At best, a 1.4 design stopped down is as good as the 2.0 design at comparable apertures.
Now the Planar lens: for strict comparison of optical properties, the MTF is the best method as it represents the overall correction of all aberrations or in other words the residual aberrations.
PopPhoto uses its own derivative of this method with the SQF measurements.
As it is difficult to translate these curves into a useful interpretation of what you can expect in everyday photography with real film, these curves must be complemented by practical shooting. Given the nature of the photographic process and the many uncontrollable influences that every individual photographer inserts into his/her own imaging chain, the practical end result may or may not conform to the potential image quality. So it is all right for any photographer to interpret the end result as sufficient for his/her needs/desires, but it is not all right to promote this result into a statement that it represents the ultimate or feasible image quality.
If we now look at the MTF values of the Planar, we see a good, but not spectacular result. The 10 linepairs figure (which represents the overall image contrast or bite of the image) is quite low with 82% in the center and about 75% at image height 15mm. The 40 linepairs figure (representing the sparkle of the lens and the crispness of definition of very fine detail) is low with 40% in the center and 20% at 15mm image height. For f/5.6 the figures are 94%/94% and 65%/73%, which is very good for the 40 lp, but also for the 10 lp, given a 50mm focal length and angle. There is also a rapid rise of the contrast from center to the zonal area, indicating a strong presence of spherical aberration and focus shift.
I do not know what measurements and/or pictures PopPhoto has used for their interpretative findings, but I can state on the basis of the MTF graphs (provided by Zeiss, and these are widely regarded as honest) that the PopPhoto statement is questionable and even wrong. There are better figures for f/2 lenses in this focal length.
The Summicron 2/50 for M has at aperture 2 and at 10 lppmm figures of 90%/85% and at 40 lppmm 50%/40%. These values are better than the ones for the Planar at 1.4.
Compare these data with the ones for the Elmar 3.5/50 new and we see the following pattern:
At 5.6 the 10 lppmm have a contrast of 95%/95% and the 40 lppmm of 70%/70%, with a straight line. At 5.6 the Elmar is better by a margin than the Planar at 5.6. You cannot compare the widest aperture, as this is 3.5 and that is close to the performance of the 5.6 aperture.
This analysis does show that a comprehensive discussion is needed in order to paint the performance profile of a lens, that global statements like "this is the best lens" are dangerously low in real content, and that it is rash to compare lenses of the same focal length and widely different maximum apertures.
What aperture is good for Leica photography?
There seems to be a need to make short and easy statements to define or defend a position. The new Leica lens 3.5-4/21-35 (as seen on the Japanese website) has been quickly condemned by some as un-Leicalike in its aperture and so useless for the real photographer.
Now I do not know why there is such a strong feeling in the Leica community that you need to use the wider apertures of at least f/2 to qualify as a true Leica picture. This approach is even seen as a law: use a Tri-Elmar and you can no longer be taken seriously as a Leica photographer.
In my view there are many valid uses for Leica lenses at all apertures. And a picture with the fabulous 2.8/180 or 4/280 does in my view count as a legitimate Leica picture, even at aperture 4 or 5.6 or 8. And I often use my Apo 2/90 at aperture 5.6 or even, my god, at 8. In some circles that counts as blasphemy, but I see lenses as instruments to deliver the performance that I need and not as icons or symbols of Leica lore.
The R8 has a wide range of excellent lenses in the 2.8 to 4 and slower category. And for the intended use of the camera an f/4 design fits in quite well. And I prefer an outstanding f/4 aperture in the 35mm focal length over a not-so-well-executed 2/35mm design that I cannot use at the wider openings with the same confidence.
I sincerely wish that the Leica community would adopt a more tolerant stance. See lenses for what they are: optical instruments that are designed with a certain intention and goal and within a wide range of parameters, like size, weight, cost, performance etc.
Scanner basics?
Today I finished testing a number of recent slide scanners for 35mm film (in the 2900 and 4000 dpi class).
I will not give you all details as this would require several pages.
Highlights are these:
The software is quite sophisticated and the scratch and dust removal programs are amazingly effective.
The density range: the oft-noted range of D3.6 to D3.8 and more is a purely calculated value based on a bit depth of 8, 12, 14 or 16 bits.
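A small sketch shows where such figures come from, on the assumption that the quoted Dmax is simply log10 of the number of levels a given bit depth can represent:

```python
# A quick check of where the advertised density figures come from. The
# only assumption: quoted Dmax = log10 of the number of levels that a
# given bit depth can represent.
import math

for bits in (8, 12, 14, 16):
    dmax = math.log10(2 ** bits)
    print(f"{bits:2d} bit -> calculated Dmax {dmax:.1f}")
# 12 bit gives D 3.6 and 14 bit D 4.2: exactly the advertised range,
# and saying nothing about what the scanner optics actually resolve.
```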
But what is the real measured value? I used a Kodak step wedge and measured the strips with my densitometer. The wedge ranged from D=0.1 to D=3.0.
I scanned it with scanners in both ranges and I found to my big surprise that every scanner could only register a maximum D of 2.1! Every step darker than this was not recorded by the scanners. They even refused to autofocus on that part of the wedge as they could not see any density change. The difference between the 2900 and 4000 scanners is minimal. There is also hardly a significant change in value when using different bit depths.
Whatever the claims: I could not verify them. I also scanned real-life slides from many film types and with different contrast ranges. The chosen slides were provided by photographers: real in-the-field persons who earn their living with photos, not stupid testers who do not have a clue of what is important. The slides were selected based on the deep blackness of the shadows as seen with a loupe on a light table.
All scanners of both types could register the full range of tonal scale and here there were some differences between marques. Most of them could indeed capture all the shadow detail, but not more than could be seen with the naked eye.
The solution is this: I measured the black parts of all slides with the densitometer and the meter registered values of 1.97 to 2.07! So current slide film is not able to reproduce deep black beyond a Dmax of about 2.0, and this any current scanner can handle. If you live under the assumption that the black of a slide is around Dmax 3.0, you might be impressed by the dynamic range of the scanners. Truth is that both are around D=2.0, which does not even exceed a good black print.
Now another hot item: resolution. Again: the resolution figures quoted are calculated ones. I used a special slide with a very fine resolution pattern. The result: the best 2900 scanners could capture around 35 linepairs/mm and the 4000 scanners managed to get slightly above 40 linepairs. But this value is again lost in a 300dpi digital print.
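For comparison, a sketch of the theoretical ceiling implied by these dpi figures, assuming the ideal Nyquist limit of two samples per line pair:

```python
# A sketch of the theoretical ceiling behind the quoted dpi figures,
# assuming the ideal Nyquist limit of two samples per line pair.
MM_PER_INCH = 25.4

def nyquist_lp_per_mm(dpi):
    return dpi / MM_PER_INCH / 2.0

for dpi in (2900, 4000, 300):
    print(f"{dpi:4d} dpi -> {nyquist_lp_per_mm(dpi):4.1f} lp/mm theoretical")
# 2900 dpi allows ~57 lp/mm and 4000 dpi ~79 lp/mm in theory; the
# measured 35 and 40+ lp/mm show what the real optical chain costs.
# A 300 dpi print tops out near 6 lp/mm, where the difference is lost.
```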
Generally the differences between 2900 and 4000 scanners are becoming very small and most people would be smart to buy the much cheaper 2900 ones. Loss in image quality is hardly visible.
On all measurable aspects (resolution, depth, contrast range, shadow details etc.) the scanners were very close, with no significant winner. Good software will help.
The resolution slide also showed optical defects in the scanner lenses.
I once noted that in general handheld photography a resolution limit of 20 lp/mm would be quite good, and I was ridiculed by the experts. Having established that the best scanners are limited to 40 lp/mm and using slides made by reputable professional photographers, I could establish that the level of detail in the slide could be captured by the scanner. Thus the limit of resolution in the slide is below 40 lp/mm and on average quite below this value. Still the slides were regarded as quite good and critically sharp by the photographers.
Can a lens be corrected for BW only?
When researching the history of the Elmar 50mm, I noted several times that some authors remarked that the earlier Elmar was not designed to be fully colour corrected, while others stated it was. Elsewhere I heard that the Noctilux was designed for BW photography and not for colour.
The idea that lenses can be corrected for BW or colour photography is often found in the literature.
Can this be the case?
The visible light ranges from 400 to 700 nanometers. And optical glass always transmits the full spectrum. Colour correction of a design is a two-part act. The most important one is the selection of the glass. That is the familiar story of the APO correction, which is done with the selection of glass (not APO glass, but glasses with certain characteristics that have to match to get the apo correction).
So whatever the design of the lens, the glass will transmit the full range. A good colour correction implies that you want all colours (at least the blue to red part, 450 to 650nm) to focus on one point on the axis (longitudinal colour) and also that the discs of confusion before and after the plane of focus are equally large (lateral colour). Now here comes the correction part. If we take a white point source and project it through the lens to the plane of focus, you will see a scattering of light rays of all colours. Sometimes the green rays are spread out most, sometimes the blue and sometimes the red, depending on correction and design. A designer now has a choice: he could neglect the scattering of the red rays, knowing the lens is used with orthochromatic film. But then the blue rays are not well focused either. And when he corrects the blue rays, the red ones will improve too. In simple cases the designer just calculates the lens with green-yellow light, assuming that the other colours will behave nicely.
Of course when a lens is designed it is impossible to correct every part of the spectrum to the optimum. As an example: the deep blue light (430nm) is often neglected. And here we see a subtle difference between Mandler/Canadian lenses and Kolsch/Solms lenses. The Mandler designs are not as well corrected for the deep blue part as the Solms designs. Do we see this? Yes! When comparing a large-scale projection of the Summicron 50mm, I can see a very small colour fringe of deep blue on small details, which is absent when using the Apo-Summicron 2/90. A minor point to be sure, but sometimes of value to know.
To return to the original question, I would say that you cannot design for a certain part of the spectrum. But you can be somewhat less stringent when correcting the lens and give some priority to one segment of the spectrum versus another.
Berek in his Leica Brevier notes that the Elmar is fully colour corrected, as BW emulsions are sensitive to the full spectrum or would soon be. And in his handbook of optical design (1930) he presents the Elmar as a design example and shows it to be colour corrected, as he calculates with the full spectrum.
It is true on the other hand that if a lens (by accident or purpose) is corrected not so well for the red part, then the use of this lens with ortho film will give slightly better results than with pan film.
Only for lenses used at extremely high magnifications, such as long telephoto lenses with very high inherent chromatic errors, do we note in the older design books that they are corrected for a specific part of the spectrum. But only for scientific use.
To conclude: the Elmar has been colour corrected from the start, and a correction philosophy for BW emulsions does not exist. You can optimise for (or neglect) a certain spectral band for scientific purposes. But I have never seen an example in any of my books (old or new) of a lens that is specifically designed for a spectral band when being used for general photographic purposes.
The secret of the Leica magnifier 1.25.
There has been some discussion about the loss of viewfinder transmission when using the magnifier.
Generally there is always a loss in light transmission when there is a magnification without compensation for the Entrance Pupil Diameter. Look through an 8x42 binocular and a 10x42 and you may be able to notice a loss in light transmission.
This also holds for the several viewfinder magnifications in the M bodies. The 0.58 has the finder with the highest clarity and the lowest flare factor. The 0.85 finder is a bit dimmer and has a somewhat higher propensity to RF-patch flare. With a higher magnification the Exit Pupil Diameter becomes smaller. The Exit Pupil of the RF patch has a diameter that is close to the average diameter of the eye's pupil. But when you focus in the dark, the pupil of the eye is enlarged and becomes wider than the RF Exit Pupil, which causes a slight loss in the transmission values. So in general when you focus in dim (non-)available light with the Leica RF, you will experience a loss of brightness, as outlined above. This loss is most likely compensated for by the eye's sensitivity, as no one has ever complained about it. The loss is greater with the 0.85 than with the 0.58.
Now the magnifier. As is the case with every magnifier (remember the Apo-Extenders?) there will be a loss of brightness with the magnification. With the 1.25 magnifier it is 1.25^2 (1.25 squared), that is 1.5625 (calculated with the HP 41, the Leica of calculators). This is about 0.6 stop, closer to a half stop than to the full stop mentioned in recent discussions. Adding A plus B, we may note that the 0.58 finder with a 1.25 magnifier has the same brightness as the unaided 0.72 finder, and the 0.72 with magnifier is equal to the unaided 0.85 finder.
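The arithmetic in a minimal sketch, assuming only that finder luminance drops with the square of the added magnification:

```python
# The magnifier arithmetic as a sketch. The only assumption: finder
# luminance drops with the square of the added magnification, so the
# loss in stops is the base-2 logarithm of that factor.
import math

magnification = 1.25
light_loss_factor = magnification ** 2          # 1.5625
loss_in_stops = math.log2(light_loss_factor)    # ~0.64

print(f"factor {light_loss_factor:.4f} = {loss_in_stops:.2f} stop")
# About 0.64 stop: well short of the full stop claimed in discussions.
```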
I hope this clarifies the issues involved. A loss of this size, by the way, is not noticed at all when we study and analyse the natural vignetting of a lens.
Perspective, focal length, depth of field and digital sensors
There seems to be a lack of insight into the relationship between perspective and focal length. The Digital Back for the R8/9 series is now on sale and its sensor area creates an apparent shift in focal length by the factor 1.37. The sensor area has the size of the APS film format. A 90mm lens, when used with the Digital Back, operates as a lens with a focal length of 123.3mm. But is it a 123mm lens in perspective and viewing angle?
Perspective and viewing field
The situation is not new. Rolleiflex users had the possibility to change from roll film to miniature format long ago. The focal length of a lens depends on the size of the image and the field of view. We could dispense with some confusion if we worked with the field of view as the parameter, but the focal length is now the usual figure. The size of the image detector (or image size) limits the field of view of the system. When the size of the detector becomes smaller, so will the field of view of the system, and this is independent of the lens in front of the sensor. The reduction of the field of view is identical to enlarging only the centre portion of your miniature negative.
The visual perspective is not in any way related to the focal length of a lens. Perspective is governed only by the standpoint of the viewer (the person who looks at a scene). It is well known that two objects of the same size, but at different distances from the observer, will look larger or smaller depending only on the distance difference. This is the normal perspective, technically known as the central perspective. The relative size of the objects as reproduced on the negative or sensor area is decided fully by the choice of standpoint. The only influence of the focal length is on the size of the image, not on the relative sizes of the different parts of the image on the negative. Whether a lens covers a wide or small field determines how much of the scene we can capture, but has NO effect on the perspective. We can easily verify this by taking two pictures from the same standpoint, one with a 21mm lens and one with a 135mm lens. When we now enlarge the negative taken with the 21mm lens to the same size as what we have with the 135mm lens, we notice that the relative sizes of the objects are not changed by the use of a different focal length.
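A small sketch makes this verification explicit; the object sizes and distances are arbitrary illustrative values, not measurements:

```python
# A small demonstration of the claim above. The on-film size of an
# object is focal_length x object_size / distance, so the RATIO of two
# object sizes depends only on their distances, never on the lens.

def image_size_mm(focal_mm, object_size_m, distance_m):
    return focal_mm * object_size_m / distance_m

near = (1.8, 3.0)     # a 1.8 m tall figure at 3 m
far = (1.8, 12.0)     # an identical figure at 12 m
for focal in (21, 50, 135):
    ratio = image_size_mm(focal, *near) / image_size_mm(focal, *far)
    print(f"{focal:3d}mm lens: near figure / far figure = {ratio:.1f}")
# 4.0 for every focal length: only the standpoint sets the perspective.
```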
The second type of perspective is the telecentric perspective. Now objects of the same size appear on the detector with equal size, independent of their distances from the observer position. We can achieve this telecentric perspective by a virtual relocation of the normal central perspective point to an infinity position: the rays that in the normal case converge to a point are now refocused as parallel rays. The entocentric perspective is the same as the central perspective, but with the size relations reversed.
Let us briefly return to the topic of the 1.37 factor of the digital back. When using a 90mm lens with miniature film or with the digital back from the same position, there will be no change in perspective. The only visible effect is the larger size of the image when using the digital back. If you want the identical size as with miniature film, you need to step back some distance to get the same object size.
Depth of field
Depth of field (DoF) is not caused by optical aberrations or diffraction phenomena. DoF is related only to the limited ability of the human eye to resolve small details. The DoF equations depend on the size of the circle of confusion (CoC), and this size is defined conventionally and with some arbitrariness. The size of the CoC depends on the size of the sensor or film area. For 35mm film (24x36mm) the CoC has the value of 0.033mm. For 18x24mm it is 0.025mm. If you look at a picture from a distance equal to its diagonal, the CoC subtends an angle at the eye that corresponds to the angular resolution of the eye. If you enlarge the picture, the viewing distance will have to be proportionally larger.
It is customary to define a lens with a focal length that equals the length of the diagonal as the normal or standard lens. When the size of the sensor area decreases, the DoF increases. If we compare the DoF of a Summicron 50mm on a negative area of 24x36mm with the lens of the Digilux-2 at the equivalent focal length of 50mm, corresponding to a physical focal length of 12.5mm (the diagonal of the sensor area is four times smaller), we will discover that the DoF of the Digilux lens is four times larger at the same apertures. Or: we need to stop down the Summicron by four stops to get the same DoF.
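A hedged check of this factor of four, using the common approximate formula DoF = 2 x N x c x u^2 / f^2 (valid when the subject distance u is well below the hyperfocal distance); the distance and aperture are illustrative assumptions:

```python
# A hedged check of the factor-of-four claim above, using the common
# approximate formula DoF = 2 x N x c x u^2 / f^2, which holds when the
# subject distance u is well below the hyperfocal distance. Distance
# and aperture are illustrative assumptions, not values from the text.

def dof_mm(f_mm, N, coc_mm, u_mm):
    # Approximate total depth of field in millimetres.
    return 2.0 * N * coc_mm * u_mm ** 2 / f_mm ** 2

u_mm = 5000.0                     # subject at 5 m
N = 4.0                           # same f-number on both cameras
dof_summicron = dof_mm(50.0, N, 0.033, u_mm)      # 24x36, CoC 0.033mm
dof_digilux = dof_mm(12.5, N, 0.033 / 4, u_mm)    # sensor 4x smaller

print(f"Summicron 50mm at f/4, 5 m:   DoF ~ {dof_summicron/1000:.1f} m")
print(f"Digilux-2 12.5mm at f/4, 5 m: DoF ~ {dof_digilux/1000:.1f} m")
print(f"ratio: {dof_digilux/dof_summicron:.0f}x")  # four times larger
```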
In this case the diagonal of the sensor area and the physical focal length and the CoC size are connected. When we enlarge the picture from the 35mm negative and the small Digilux sensor to A4 format, we need a bigger enlargement factor for the Digilux picture, which means that the advantage of the large DoF is a bit less. Still when comparing pictures made with the Digilux-2 and the M7, we see more DoF in the Digilux case. We cannot explain it when referring to the usual DoF equations, but then we may have to adjust these equations.
The Digital Back for the R8/9, and the announced M-digital camera.
For both systems the sensor area is smaller than the full format 35mm negative. When you use lenses designed for the 35mm format on these cameras, there will be a magnification factor of the image of 1.37 or somewhat smaller. See above for the implications for perspective. The focal length of the Leica lens itself does not change. Because of the smaller sensor size, we simply cover a smaller part of the scene. The magnification factor does not change when we look at the subject on the sensor area itself. A smaller sensor, when everything else is the same, acts as a mask set in front of the 35mm negative area.

Imagine this: we have the R8/9 with film inside and the usual 24x36mm area, defined by the film gate. If we were able to reduce the film gate to a smaller area, we would have a smaller negative area, but not a larger image of the object. Look at the normal 35mm negative and tape off all four sides with tape that is 5mm wide. The resulting negative area will be 14x26mm, close to the sensor format of the Digital Back. But, and this is really important, the size of the scene on the negative has not changed in either situation. All we have done is reduce the size of the negative area. From this perspective, it is wrong to speak of a magnification factor of 1.37. We do not magnify the objects in the scene at the capturing stage: they are still the same size, but projected onto a smaller capture area.

What does happen is an apparent magnification, because we will enlarge the smaller negative/sensor area by a bigger enlargement factor compared to the 35mm negative. If we want an A4-size print (length of diagonal is 36cm) from both negative/sensor sizes, we need to enlarge the 35mm negative by a factor of 36/4.3 = 8.4 and the digital sensor by 36/3.13 = 11.5. The size of the objects in the scene is bigger because of the higher magnification at the printing stage, not at the capture stage!

How does this affect the DoF? Theoretically there should be no change: same focal length, same distance, same CoC, only a smaller sensor area. But experiments show that a smaller sensor area is accompanied by a larger DoF. On the other hand, a larger magnification ratio implies an increase in the size of the CoC. And last but not least, we have to put into the equation the eye as its own image processor.
To be specific: let us put a 50mm lens on the R9 with film inside. The focal length will be 50mm and the DoF as specified in the DoF tables, depending on distance, aperture and focal length. Let us say the distance is 5 meters and the aperture 5.6. Now switch to the Digital Back. The only change is the smaller negative/sensor area. Assume we make a picture of a person's face that will occupy a height of 10mm on the negative in the normal position. If the face is centered in the middle of the negative area, we have a blank space of 7mm above and 7mm below the face. When capturing the same scene with the sensor, the face is still 10mm in size, but now the blank space above and below the face is only 3mm on each side.

Enlarging both pictures to A4 gives a face size of 84mm in the case of the negative area (35mm film) and of 115mm in the case of the sensor. The result is a larger face when we compare both A4 prints, and so an apparent magnification effect. Indeed, there is a magnification effect, but only because of the bigger magnification at the printing stage. DoF would be smaller in the case of the bigger enlargement, because of the increased size of the CoC, but the size of the CoC in the case of the sensor capture is smaller. In the end, the DoF would be the same. We will have to wait for real-life comparisons to see the effects of this equation, where the eye and the CoC are the most important (subjective) ingredients.
We use the 1.4/50 ASPH on the announced M digital with a reduction factor of an assumed 1.37 for the sensor diagonal (same as the DMR). The lens is still a 50mm with the same magnification as in the case with film. But we have to enlarge the capture area by a factor of 1.37 more to fill the same print area. The size of the objects on the print will then be 1.37 times larger, and the observer will assume that we used a lens with a focal length of 69mm.
The upshot is this:
When we use our normal Leica lenses (that is, lenses designed for the 35mm (24x36mm) negative format) on cameras with a capture area (sensor or film) that is smaller by a factor of 1.37 or 1.6, we will have to enlarge the captured image by that factor of 1.37 or 1.6, compared to the full-frame negative area, to fill the same print size. This additional enlargement at the printing stage produces a larger size of the subjects on the print and gives the illusion that we have used a lens with a longer focal length.
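In numbers, a minimal sketch; the 1.37 crop factor is the assumed value used throughout this piece:

```python
# The upshot in numbers. A minimal sketch: the 1.37 crop is the assumed
# DMR factor used throughout this piece; the focal length itself never
# changes, but the extra print-stage enlargement mimics a longer lens.

CROP = 1.37

for focal_mm in (21, 50, 90):
    apparent = focal_mm * CROP
    print(f"{focal_mm:2d}mm lens on the smaller sensor "
          f"prints like a {apparent:.1f}mm lens")
# 50mm -> 68.5mm (about 69) and 90mm -> 123.3mm, as discussed above.
```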
Zeiss and resolution and fairy tales (October 15, 2004)
In the Zeiss magazine 'Camera Lens News' issue 20 we found this amazing statement: "ZM-Objektive bilden auf Gigabitfilm Strukturen ab mit feinen Details von 400 Linienpaaren pro Millimeter!" Or in translation: "ZM lenses are able to reproduce on Gigabit film structural detail with a resolution of 400 linepairs per millimetre." ZM refers to the new line of lenses for the Cosina based Zeiss-Ikon rangefinder camera, introduced at Photokina 2004. This is a remarkable statement and most certainly untrue. Let us see why this claim belongs in the realm of the fairy tale.
The Gigabit film has been discussed widely on the internet. It is a repackaging of the Agfa Copex document film. Agfa itself has extensive documentation about the film. Agfa states that the film is capable of resolving about 600 lines/mm. Agfa tells you that this resolution can only be achieved when developing the film to a contrast index of 3.0, which in effect reduces the film to a line-copy film, suitable for the reproduction of black-on-white lettering (as is intended when making microfiches). This high contrast and the steep black/white gradient reduce the flare effect around image points and can effectively optimize resolution. Agfa also states that for continuous-tone reproduction the resolution figures are much lower, as can be expected.

There is some confusion whether this value of 600 refers to lines (as separate bars) or to optical lines (a set of one black and one white line, or a line pair). If we calculate with line pairs, then we get a linewidth of 0.0008mm! The Airy disc diameter of 0.001mm needed to match this resolution requires an f/2 lens and a wavelength of 0.412 micrometre, in the far blue region. To be on the safe side, it would be better to interpret the Agfa value as 600 separate lines or 300 line pairs (or cycles/mm).

The Gigabit marketing documentation states that a special developer formulation can bring 700+ lines with a low-contrast negative suitable for continuous-tone pictorial photography. On the assumption that we are talking cycles/mm here, we get a linewidth of 0.0007mm. This would be a physical miracle: to add resolution by chemical means to the inherent resolution that the film emulsion is capable of would be a revolution in photographic chemistry. And now Zeiss even claims a resolution of 400 linepairs/mm or 800 lines/mm on film(!). Zeiss and Gigabit both claim that these resolution figures are realistic and have been documented with real photographs. I asked both companies for proof, but never got it.

These claims cannot be realistic! If we look at the smallness of details on the film, we are talking about sizes of 0.001mm. The circle of confusion on film is normally taken as 0.03mm. The Copex film plus Zeiss lens would be able to record details 30 times smaller.
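The linewidth arithmetic used above, as a two-line sketch: at f line pairs per mm, one cycle is 1/f mm wide and a single bar is half of that:

```python
# The linewidth arithmetic from the paragraph above: at a spatial
# frequency of f line pairs per mm, one cycle is 1/f mm wide and a
# single (black or white) line is half of that.

def linewidth_mm(lp_per_mm):
    return 1.0 / lp_per_mm / 2.0

for lp in (600, 700):
    print(f"{lp} lp/mm -> linewidth {linewidth_mm(lp):.4f} mm")
# 600 lp/mm -> 0.0008 mm and 700 lp/mm -> 0.0007 mm, the figures
# used in the text.
```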
The confusion about resolution
There are a number of concepts that should be clearly understood. In discussions and literature you may encounter lines per millimetre, line pairs per millimetre, cycles per mm and abbreviations like l/mm, lp/mm, cy/mm, lppmm. A line is a white or a black bar of some width and height. The well-known bar line test chart of the USAF (1951) is a good example. Here we have a pattern of three black and two white lines. A bar plus adjacent space is an "optical line" or a "line pair" or a "spatial cycle". Optical designers mix these terms freely as they understand that a line always means a bar plus adjacent space (the adjacent space has the same width as the bar, so the total width is that of a white plus a black bar). When they say that a lens can resolve 50 lines, line pairs or cycles they refer to the same phenomenon. The factual width of any line (black or white) is in this case 0.01mm. Fifty cycles/mm implies 100 separate lines and every line is 0.01mm.
Resolution is expressed as spatial frequency in cycles per millimetre and is the measure for resolving power.
This is easy. But in a number of cases we are not sure that we are talking about the same thing. In television cameras the notion of a line is either a dark or a white line. Resolution here is twice as high as in the photographic world. And it is also common practice to refer to the white bar as a line and to the black bar as a space. If we see the word 'line' referred to, we are not sure if we talk about the white bar or about the optical line.
The diffraction-limited lens
The classical measure for resolving power was the ability to separate closely spaced objects (stars). In an aberration-free lens the image of a point object (a star) is a bright central zone with alternating dark and light annular rings of varying width. This pattern is called the Airy disc. The Rayleigh criterion tells you that for a two-point resolution problem (two closely spaced stars) the limit is reached when the centre of one bright spot falls on the first dark ring of the other. The radius R of the first dark ring, measured from the centre of the bright spot, is
R = 1.22 x lambda x N.
The radius is not the diameter, which is calculated by
D = 2.44 x lambda x N.
The radius is the distance from the centre of the bright spot of one object to the first dark ring, where the centre of the bright spot of the second object is located. The resolving power of the lens is the reciprocal of R, or
S = 1/R in cy/mm = linepairs/mm
The Rayleigh criterion is only valid for one specified wavelength and assumes an unobscured pupil (no vignetting). To be on the safe side, it is often better to calculate with the D-equation, which halves the value of the resolving power.
Let us assume that the claims are indeed true. Is this physically possible? We know that the theoretical limit of resolution is defined by the diffraction limit. A perfect lens, without any aberration, will reproduce a point source as a patch of light with a certain dimension, as defined by the Airy disc. I am sure that Zeiss is capable of producing excellent lenses, but I really question their ability to produce aberration-free lenses for the Zeiss-Ikon camera. Again, let us assume Zeiss can do it. What, then, are the theoretical limits? The size of the image spot, as limited by diffraction effects, depends on wavelength and aperture, nothing else. The wavelength is measured in micrometres and designated as lambda. The aperture (the f-number) is denoted as N, and the formula then becomes:
R = 1.22 x lambda x N.
The visible spectrum runs from blue to red (0.45 micrometer to 0.70 micrometer with green in the middle at 0.55 micrometer). For easy calculation we use the green light in our equation. The apertures of the new ZM lenses run from f/2 to f/2.8. The results are seen in this table.
Aperture | MTF cutoff (cy/mm) | Diffraction-limited radius (mm) at 546nm | Resolution (cy/mm) |
2 | 916 | 0.0013 | 769 |
2.8 | 654 | 0.0019 | 526 |
4 | 458 | 0.0027 | 370 |
5.6 | 327 | 0.0037 | 270 |
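The table can be reproduced with a few lines of calculation; note that the resolution column is the reciprocal of the rounded radius, which is how the 769 cy/mm figure at f/2 arises:

```python
# A sketch reproducing the table above for green light of 546nm:
# MTF cutoff = 1/(lambda x N) and Rayleigh radius R = 1.22 x lambda x N.
# The resolution column is the reciprocal of the ROUNDED radius, which
# is how the 769 cy/mm figure at f/2 arises.

LAMBDA_MM = 546e-6    # 546 nanometres expressed in millimetres

print("N      cutoff cy/mm   R (mm)   1/R cy/mm")
for N in (2.0, 2.8, 4.0, 5.6):
    cutoff = 1.0 / (LAMBDA_MM * N)
    R = round(1.22 * LAMBDA_MM * N, 4)
    print(f"{N:3.1f}    {cutoff:7.0f}       {R:.4f}   {1.0/R:7.0f}")
# f/2 gives 916 cy/mm cutoff and 769 cy/mm resolution, row for row
# identical to the table.
```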
These figures are in the same league as the claims of both Gigabit and Zeiss. But we assume that the lens is perfect and that there is no loss of resolution in the imaging system from lens to film. Both these assumptions are wrong, as we all know too well!
Based on the purely theoretical calculations for aberration-free lenses the Zeiss/Gigabit claims seem to be right. But they both transfer these theoretical claims into practical photography without any reduction in values.
The D-equation would be far more realistic for the wider apertures and the wide angle of view of the ZM lenses and then the Zeiss/Gigabit claims are too high, even in this theoretical exercise.
MTF based resolution
For extended objects (as is the case with photographic motifs) the Rayleigh criterion based on the radius is not the best approach. Extended objects have spots of varying size and varying brightness. In this case the light distribution in an extended image is derived mathematically by the two-dimensional convolution of the object with the Point Spread Function. Here we calculate the Fourier Transform to get the OTF, from which we derive the MTF. We have to distinguish two cases:
Diffraction limited MTF (without aberrations) and Geometrical MTF (with aberrations). Values from the first type of MTF are almost identical to what you get when using the R-equation. The second type of values is much lower and is probably the best representation for real-world lenses in practical use.
D-MTF and G-MTF values can be found with optical design programs. As an example, for an f/4 lens I got the following results: the limiting value (D-MTF) is about 400 cy/mm and the G-MTF for the same conditions is 170 cy/mm. I selected f/4 as at this aperture we might expect the best quality, based on the lowest amount of residual aberrations.
Remember also that we are discussing the limiting frequency at an MTF value of zero. To see details on film, we need an MTF of at least 15%, and this will shift the limiting frequency to much lower values. In both cases the practical values are lower. With values of 150 to 200 cy/mm for outstandingly good lenses we talk sense. If we now combine these lens values with film MTF values of 100 to 200 cy/mm, we get the results I have referred to in my lens reports, where the best film/lens combos deliver resolving powers between 80 cy/mm and 130 cy/mm.
When we try to be realistic and base our conclusions on practical assumptions:
no lens with the wide angles of view of the ZM lenses is really aberration-free and diffraction-limited at f/2, 2.8 or even 4
the Gigabit film can resolve around 200 linepairs/mm in continuous tone reproduction and
there will be a strong drop in resolution in the whole chain of reproduction from lens to film to enlarger
we have to conclude that Zeiss and Gigabit want us to believe in fairy tales.
Introduction
The topics presented here are often discussed among Leica users, eye-to-eye and in discussion groups all over the internet world. You will find many sites where lists of topics and their answers are maintained. Most often these lists are a cut-and-paste act where answers and explanations from many persons are put together indiscriminately. It is difficult, if not impossible, to check the value and the trustworthiness of the content. The internet has generated an enormous amount of information floating freely around the world. In the academic and scientific world, there is a tradition (not always adhered to) of backing up statements by facts and research and by peer review. At the limit there is always a theory that checks the wildest fantasies, unless you design your own theory of course. In the Leica world, there is no such thing. Real knowledge and valuable experience are mixed indiscriminately with myths and explanations that do not have even a remote connection to facts and/or accumulated time-honored knowledge.
Leica users quite often have strong opinions and views about the product, the technique behind it (mechanical, optical and photographic) and the results that can be achieved (the pictures). Photography is for the most part workmanship and for the rest it is art, even when both are grounded on a scientific base. It is natural that workmanship is oriented to tradition and traditional values and knowledge. Good workmanship has evolved out of a trial-and-error method and the secrets of the trade and the really important facts are closely guarded. Photographic workmanship follows the same rules. A photographer who has success with a certain approach and technique will not be easily convinced to share his secrets with anybody. It is part of his trade success and money-earning power.
This attitude, fine in the world of workmanship, has migrated into the Leica world where so many 'secrets' abound. The famous Leica 'glow' (as an example) has never been convincingly demonstrated, but is part of the Leica myth. You must become a believer to see it. And to protect this elusive characteristic, which for some might be the motivation to buy and use or admire a Leica, all kinds of defensive acts are played: if you cannot measure or even compare this characteristic, you either cannot test Leica lenses in the usual way (with an optical bench or an MTF analysis), or you are not qualified, or you lack Leica experience, and so on. All of this is very common, but if you want to search for the true nature of the Leica and to research the best combination of tools to elaborate on that nature, that attitude is killing. When I recently noticed that the Tri-Elmar lens at the 50mm position delivered better images than the Summicron 50mm at aperture f/4.0, some individuals immediately tried to discredit me with every trick of the trade. The obvious course of action would have been to do a test and see if my statement was indeed correct. But in the Leica world, facts are not always appreciated. Authority, self-imposed or not, on the other hand is valued more highly. Facts have a nasty habit: when they are found to be true, you are invited to change your mind and opinions. Authority is much more comfortable: you are not bothered by facts, and whatever you say can be repeated forever, as long as you wish. And you are not forced to change your mind, which for some is very pleasant.
The Leica world is a difficult one. It is partly populated by collectors, whose fascination with the instrument generates a different kind of appreciation and knowledge than that of a user whose approach is to exploit the mechanical and optical capabilities of that same instrument. And there are numerous users who admire the deployability of that instrument to make better photographs, or at least more inspiring pictures, because that instrument is the best tool for the job of a photographer. The Leica is a most fascinating and very potent instrument for taking and making photographs. The art and nature of this high-precision engineered product have not yet been fully explored: not in its basic properties (mechanical, optical) and not in its capabilities (the pictures).
This FAQ is dedicated to the ongoing research into these aspects of the Leica camera and its lenses and related techniques. Every topic is based on the best of my knowledge at the moment of writing. But knowledge grows and changes, and therefore some answers may evolve over time. If you do not want to learn and occasionally change your mind, there are many opportunities on the internet and elsewhere where the status quo is preserved and defended. (TOP)
What are the most important sources of image degradation?
Movement of the camera should be number one. Using a camera without support, that is handholding, is the most important cause, especially when combined with slow shutter speeds. A speed of 1/250 with the camera pressed against your forehead will generate more degradation than a full second on a stable tripod. The validity of the classical rule that a safe shutter speed is the reciprocal of the focal length (1/50 for a 50mm lens, 1/250 for a 200mm lens) has never been demonstrated. My testing indicates that the minimum is 1/250 for focal lengths from 15 to 50mm and at least 1/250 for 75 and 90mm. Longer focal lengths, and of course the vario lenses, demand 1/500 to 1/1000 for best performance (see the small sketch at the end of this answer). This is defined as a high quality print at 12 times enlargement or a slide projection one meter wide. For R lenses a mirror lock-up and a very stable support that fixes body and lens with one mechanism are imperative.
Inaccurate focusing is a second major cause: a slight defocus is already visible as a drop in contrast.
Wrong exposure (over- and under-) is a third major source. Overexposure is the worse of the two. For best optical performance a half stop of underexposure is beneficial, but this has to be balanced against shadow detail definition. When using colour negative film this rule does not hold: it is best to overexpose colour negative film by one stop for best performance (the same goes for chromogenic films like Ilford XP2 Super).
A small aperture is also a potential cause of image degradation. For current Leica lenses the range of f/2,8 to f/4 will be best for optimum results. Beyond f/5,6 or f/8 image quality will degrade, as can be seen in a drop of overall contrast and a loss of definition in small details and textures.
High speed film and colour negative film in general will not support the best optical performance. Use slow speed B&W film and slide film for best results.
Filters, when not accurately plane, are also a source of problems, but less so than the above sources.
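As promised above, here is a minimal sketch in Python that simply encodes my rule of thumb for minimum handheld shutter speeds. The function and its thresholds restate the guidance in this answer; they are not an official formula, and the example focal lengths are arbitrary.

def min_handheld_speed(focal_length_mm, is_vario=False):
    # Encodes the guidance above: at least 1/250 for 15-90mm primes,
    # 1/500 (up to 1/1000) for longer lenses and for vario (zoom) lenses.
    if is_vario or focal_length_mm > 90:
        return 500
    return 250

for f in (35, 50, 90, 135, 280):
    print(f"{f}mm: 1/{min_handheld_speed(f)} or faster")

(TOP)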
What is a diffraction limited lens?
Wow! Well, you asked for it. Light has wave-like characteristics, and normally light rays travel along the straight paths indicated by ray optics. But edges of aperture blades, edges of lenses and other optical obstacles cause the rays to divert from their path. Thus light will spread into the shadow of an object or away from a point image. So even the tiniest point source of light will be recorded as a very small spot of high intensity surrounded by a series of rings of diminishing luminance. A point of light thus always has a certain fuzziness, which cannot be improved and which sets a limit to what can be observed. The Airy disc pattern and Rayleigh's criterion quantify the smallest point that can be seen or recorded. So a tiny spot of light (or luminous energy) will be recorded as a patch of light that is always surrounded by a fuzzy edge. When two such spots/patches are very close in space, their fuzzy edges start to overlap and make it impossible to separate the two original spots. We have reached the limit of resolution.
In a normal lens the optical aberrations also deform the spot of light to a patch of various shapes. In most cases the aberrations produce a much larger patch than the diffraction effects. When an optical designer designs an optical system where the optical aberrations are so well corrected that the shape and area of the patch are so small that the diffraction effects are visible, we call such a lens a diffraction limited lens. Leica has several such lenses in the program. One of them is the Apo-Telyt-R 1:4/280mm.
Such a lens approaches the theoretical limit of resolution and is aberration free.
The diffraction effects are most visible when the obstruction of the light path is small in relation to the wavelength of the light and the light intensity is high. Stopping down to f/11 will produce diffraction effects for monochromatic light of the longer wavelengths. Quite small of course, but the edges of very fine detail will get blurred and so will mash together and produce noise.
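To put some numbers on this, here is a minimal sketch of the standard Airy disc and diffraction cutoff formulas, assuming green light of 550 nm; the f-numbers in the loop are just examples.

WAVELENGTH_MM = 550e-6  # assumed: green light, 550 nm, expressed in mm

def airy_disc_diameter(f_number):
    # Diameter to the first dark ring of the Airy pattern: 2.44 * lambda * N
    return 2.44 * WAVELENGTH_MM * f_number

def cutoff_frequency(f_number):
    # Spatial frequency (lp/mm) at which a perfect lens's MTF drops to zero
    return 1.0 / (WAVELENGTH_MM * f_number)

for n in (2.8, 5.6, 11):
    print(f"f/{n}: Airy disc {airy_disc_diameter(n) * 1000:.1f} um, "
          f"cutoff {cutoff_frequency(n):.0f} lp/mm")

At f/11 the Airy disc is already about 15 um wide, which illustrates why fine detail starts to mash together there.

(TOP)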
Should I use filters?
A hotly debated topic, this one. Any filter in front of the lens will add one additional airspace and two additional surfaces. So by definition image quality should be degraded. How visible will this be? One obvious case: when strong light sources are shining directly or obliquely into the lens + filter, severe flare and secondary (ghost) images will be detected. Even when we are taking pictures in situations where contrast between dark and light areas is very strong, some degradation can be expected.
These effects will also be stronger when we are using the wider apertures. Stopped down, the flare will be less noticeable, but the ghost images will still be visible. Whether this is objectionable to you depends on subject matter and your own criteria.
A filter will be useful for protecting the lens surface. Leica front lenses are hard-coated, but not invulnerable to dust and chemical reactions. So I prefer to use a filter when I am sure image quality is not degraded by its use. In sensitive cases I just remove the filter. In low-contrast situations (landscapes, reportage etc.), wherever the use of a filter is acceptable from an image quality view and helps to keep the front lens clean and protected, I use a filter. When using B&W, some filters must be used to get the correct tonal reproduction (with TechPan for instance).
You should realize that the degrading effect of a filter is, in most situations, much smaller than that of using a shutter speed of 1/15 sec. Many Leica users feel no inhibition about using slow shutter speeds, but are afraid to use a filter because of its impact on image degradation.
Stopping down to f/11, using a speed of 1/30 or an inaccuracy in rangefinding produce more disaster than a filter (except in the cases first mentioned, of course). Let us keep things in perspective and first attack the big causes of image degradation before going after the smaller evils.
Should I then forget about tests and do my own testing?
Most Leica photographers are very sensitive about the performance of their lenses. Arguments about lens performance and the relative merits of one lens as compared to others constitute a large part of the discussion whenever Leica photographers gather.
These discussions are for the most part futile from a technical view, although from a social standpoint they might be very desirable.
First of all, every person has his own personal set of test criteria, and the testing itself is never done in a consistent way, that is, keeping all parameters under control and comparable. If you 'test' lens A with a 400 ISO colour negative film and lens B with a 160 ISO colour negative film with a different characteristic curve, any valid conclusion is out of the question. Take pictures without a tripod and again you lose every possibility of a meaningful test. Use your 90mm for a portrait and the 24mm for a landscape and again no comparison is possible.
Of course you can argue that you are not interested in testdata, but want to know how a certain lens performs in the typical deployment situations for this lens. Well, then you are doing an enquiry to see if this lens satisfies your needs. That is a very good and valid way of looking at a lens. A lens that satisfies your needs or lives up to your expectations, is always a good buy. These criteria, valid as they are, are not to be confused with the procedure of testing a lens. If a user tells you that he is very happy with a certain lens, you can look at his photographs to see if his pictures match your demands. If this user tells you that he finds this lens the best there is, he is overstretching his credibility.
Testing and comparing lenses is quite different from enjoying a lens and feeling pleased with its performance.
Hardly any user of Leica lenses is able to extract the full image quality and recording capabilities of the better Leica lenses. If we give the optical performance potential of a lens a value of 100, we are on safe ground when stating that most users extract at most 60 to 70% of that potential out of the lens. Many would use only 40 to 50% of the potential. That is a pity, as getting the best out of Leica lenses is a very rewarding activity.
Now assume lens A has a performance value of 80 on a scale from 30 to 100. A second lens B has a value of 95. Assume user X gets 50% performance out of lens B, that is about 48. This same user also has lens A, and here he gets 55% performance out of the lens, that is 44. Based on these results he might be inclined to say that both lenses have roughly equal performance.
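The arithmetic of this example in a trivial sketch (the percentages are the assumed figures from the paragraph above, nothing more):

def observed(potential, extraction):
    # The performance a user actually gets out of a lens
    return potential * extraction

print(observed(80, 0.55))  # lens A as used: 44.0
print(observed(95, 0.50))  # lens B as used: 47.5
# 44 versus 47.5: close enough that the user calls the lenses "equal",
# although the true potential of lens B is far higher.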
To study the true maximum performance of a lens you need to have equipment, knowledge and procedures to extract the full potential out of a lens.
It is true that there are photographic magazines whose lens testing is on the same level as the user acceptability procedure outlined above.
The best advice
* Stop trying to test lenses and
* look for resources that are reliable in their conclusions when selecting lenses and
* enjoy taking pictures and
* improve on your technique
Someone told me that Leica argues against the use of the MTF?
In the eighties Leitz indeed published a few articles, stating that the use of MTF graphs might give wrong information about the performance of Leica lenses. In fact they noted that the publication of a few selected figures without background info would be partially misleading. Two arguments were used, one correct and one not so correct. The MTF data will be totally different, depending on the choice of focal plane and contrast threshold. See above, where I argue along the same lines.
The second argument is the basis for much discussion among Leica users. Leitz remarked that Leica lenses get lower figures in many tests than in photographic practice. MTF tests and the then often used resolution test charts are based on a flat (two-dimensional) test object. But Leica lenses are designed for recording solid objects with depth, and when recording three-dimensional objects some aberrations like astigmatism and curvature of field will not be noticed or can even enhance the quality of the image. These aberrations are easily detected on a flat test object or in an MTF measurement. So Leica lenses get low marks because of aberrations that in photographic practice are not detectable.
There is a grain of truth here. When a designer finds he has to accept a certain amount of residual aberrations, a choice has to be made which ones to correct and which ones to accept or to balance to a certain degree.
But the argument of Leitz in those days that MTF data are not representative of the optical performance in the field, because of the difference between a real-life three-dimensional object and a flat test chart, does not sound convincing.
How is the MTF measured or generated?
Many magazines (Popular Photography, Chasseur d'Images, Photodo etc.) use the Ealing optical bench to generate MTF data. The Ealing projects a small slit of, for example, 0.01 mm through the lens to be tested, and the lens produces an image of that slit on a screen or a CCD capturing instrument. The brightness difference from the center of the slit to the edges is measured. The steeper the slope and the higher the difference from the maximum to the minimum reading, the higher the MTF value. By varying the width of the slit you can generate data for several spatial frequencies.
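The underlying mathematics can be sketched in a few lines of Python: the MTF is the normalized magnitude of the Fourier transform of the line spread function, which is the brightness profile of the imaged slit. The Gaussian profile and its width below are assumed values, purely for illustration, not data from any real lens.

import numpy as np

dx_mm = 0.001                                # assumed sampling step: 1 um
x = np.arange(-256, 256) * dx_mm
lsf = np.exp(-0.5 * (x / 0.004) ** 2)        # assumed Gaussian LSF, sigma 4 um

mtf = np.abs(np.fft.rfft(lsf))
mtf /= mtf[0]                                # normalize so that MTF(0) = 1
freqs = np.fft.rfftfreq(x.size, d=dx_mm)     # spatial frequencies in lp/mm

for target in (5, 10, 20, 40):               # the frequencies Leica publishes
    i = int(np.argmin(np.abs(freqs - target)))
    print(f"{target} lp/mm: MTF = {mtf[i]:.2f}")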
It is not possible to compare MTF data from different sources, unless you know they are made under identical parameter settings. Some of the most important, but never disclosed, parameters are the spectral composition and weighting of the white light that is used and the spatial frequency that is used for focusing.
Most published data do not give the raw figures, but use an average to get a single merit figure that is supposedly easy to interpret. The averaging method can be very strange, as is the case with the Photodo figures. They use the MTF data from two apertures (f/4 and f/8), which are weighted, and center performance again gets a strong weighting. The choice of f/4 and f/8 is made to be able to compare the many zoom lenses that have modest apertures. But for modern Leica lenses, where optimum performance quite often starts at f/2 and is at its height around f/2,8, this choice is bad, as the best part of the data is not put into the equation.
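A hypothetical sketch of such a weighted merit figure follows. The actual Photodo weights are not published; the MTF readings and weights below are invented purely to illustrate the mechanism, and to show that the wide apertures never enter the number at all.

mtf = {  # assumed MTF readings (0..1) per (aperture, field position)
    ("f/4", "center"): 0.90, ("f/4", "edge"): 0.75,
    ("f/8", "center"): 0.85, ("f/8", "edge"): 0.80,
}
weights = {  # hypothetical weighting: center counts double
    ("f/4", "center"): 2.0, ("f/4", "edge"): 1.0,
    ("f/8", "center"): 2.0, ("f/8", "edge"): 1.0,
}
merit = sum(mtf[k] * weights[k] for k in mtf) / sum(weights.values())
print(f"merit figure: {merit:.2f}")  # note: f/1.4 to f/2.8 never enters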
Chasseur d'Images also uses the Ealing equipment, but the generated figures are fed into the merit figure indirectly, as this magazine has grouped the lenses into categories, uses a reference lens per category, and also uses several more test targets and methods. All these are mixed together and the resulting figure is an expert assessment of the lens. How 'expert' the judgement is cannot be known, as we do not know what balancing act the testers have performed. Again the corners of the image figure quite heavily in the judgment, which might distort the conclusions.
All these methods then filter the data to get a more or less convenient merit figure for easy comparison. But comparison is not possible and the filtering is quite peculiar, so the conclusions from the data are a bit thinly supported. The link from these merit figures to performance in the field is not always strong.
As Leica users we are lucky. Leica publishes the MTF data for all recently introduced lenses and these data are very closely linked to the real performance the user can expect.
It is also possible to generate MTF data from the design data computed by the program employed to design a lens. These data are at least as accurate as the experimentally derived values, provided the subsequent engineering and quality control at the production stage are good enough.
(TOP)
Why are MTF graphs important? How to interpret them?
Unless a lens is fully diffraction limited, there is a certain amount of residual aberrations present in the optical system that will degrade the image away from a perfect image. The MTF graph relates the spatial frequency (a measure of resolution) to the contrast of the lens at all apertures and over the whole image field. It presents the most comprehensive information about the optical performance of the lens and correlates very well with perceived image quality. But MTF graphs are difficult to interpret, and when generated by different methods they are not comparable.
Of course the MTF does not tell you a word about distortion, colour rendition and flare. So it is neither perfect nor comprehensive. A well-trained individual can interpret the curves and may infer many properties of the lens in question. Just looking at the curves and comparing them to other curves is a bit dangerous.
A very well corrected lens will always produce a good MTF. But it is also possible to design a lens that will generate a good MTF while not being well corrected optically. To get the real picture of the performance of a lens, you would have to analyse a whole family of MTF graphs, which are however not published.
Still, the MTF graph is at this moment the best 'picture' one can get of the optical performance of a lens. Generally the 5 and 10 linepairs/mm curves give an indication of the overall contrast and the contrast of subject outlines. Preferably the contrast figure here should be above 95%. If the figure drops below 90% we have a low contrast lens that gives a soft image.
The 20 and 40 linepairs/mm curves define the clarity and crisp rendition of very fine detail. This level of fine detail is so small that it takes some photographic craft to get these details on film. Here we should look for contrast figures above 60%, but if the 40 lp/mm figure is around 40% a very good image can still be expected. The variation of the percentage is quite high in the 40 lp/mm graph; a plus or minus of 5% will hardly make any difference.
The solid lines are the most important and these should be as straight as possible. Do not be overly alarmed if the edges of the image area are curving downward steeply: the corners are mostly out of the image field when enlarging or projecting. If the solid lines and the dotted lines are close together, the image quality is excellent, as many aberrations (the chromatic ones and astigmatism) are very well corrected. If the solid and dotted lines differ widely, more aberrations are left in the system. Again, do not be alarmed. It depends on the designer what the real image quality will be.
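For readers who like things explicit, here is a small sketch that simply encodes the rules of thumb above. The thresholds are the ones stated in this answer, not an official standard.

def interpret(contrast_5_10, contrast_20_40):
    # contrast values in percent, at 5/10 and at 20/40 lp/mm respectively
    if contrast_5_10 >= 95:
        overall = "high overall contrast"
    elif contrast_5_10 < 90:
        overall = "low contrast, soft image"
    else:
        overall = "moderate overall contrast"
    if contrast_20_40 >= 60:
        detail = "crisp rendition of very fine detail"
    elif contrast_20_40 >= 40:
        detail = "still a very good image"
    else:
        detail = "weak fine-detail rendition"
    return overall + "; " + detail

print(interpret(96, 45))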
(TOP)
How meaningful are resolution figures for films? Kodachrome versus Velvia
A comment has been made that Velvia is better than Kodachrome because the resolution figure of Velvia is higher than that of Kodachrome (±160 versus ±125). No doubt the cited figures are in the data sheets. But have they any relevance?
No. I will as usual give a solid explanation why not.
Film manufacturers produce data sheets with information about resolution. They give resolution figures for low (1:80) and high (1:1000) contrast targets. And they give an MTF graph. For optical analysis the resolution data are completely obsolete, and for the same reasons they should also be buried for film emulsions. I do not have to recall these reasons, as they are amply documented.

Now look at the high contrast figure. What does it mean? The test pattern is the well known and much abused bar-line target: black and white lines of diminishing width produce a test pattern of ever increasing spatial frequency (more lines per mm). This target is illuminated in such a way that the luminance difference between adjacent black and white lines is 1:1000, or 10 stops of contrast. This type of contrast you might encounter when taking a silhouette against a blue sky. But then we have a low resolution target (only the silhouette outline has that contrast). It is nigh impossible to find in high resolution targets. Look at any picture you took in the last decade, find a detail with very fine structures in it and ask yourself: do I see two adjacent very small object details that differ in luminance by more than 10 stops? You will not find any such detail! So the high contrast figure is meaningless. If you are in need of a figure, go for the low contrast value, and now we see that Velvia and Kodachrome are identical. No advantage for either one.

Take a look at the MTF graph and now you see a big difference. The Kodachrome graph tells you that from 1 to 20 lp/mm the MTF value is far above 100%! The same values for Velvia are lower. So in the critical areas where sharpness is all there is, Kodachrome wins. Why the threshold of 20 lp/mm? That is almost the value Leica lenses are calibrated for!

Why then is Kodachrome for many purposes the better film? It is grain based, where Velvia is dye-cloud based. Recall that a dye cloud image is generated by artificially restraining the growth of clumps of grain and replacing them by dye clouds of about the same dimension at about the same location. Note the vagueness here? A grain image is an exact replica of the optical image falling onto the emulsion. The dye cloud image is a chemical interpretation of this image. The capture of fine detail is better preserved with the grain based image and its 'hard' edges than with the finer (smaller) dye cloud based image with its soft edges.
This is the same reason why fine grain developers in fact kill fine detail, while acutance developers enhance fine detail up to the limit of grain noise. Recall the Rodinal discussion?
As in optical evaluation, we must become accustomed to the fact that resolution figures are out and MTF graphs are in. That's reality.
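As a quick sanity check of the contrast figures quoted above: one photographic stop doubles the luminance, so a 1:1000 ratio is indeed about 10 stops.

import math

print(math.log2(1000))  # ~9.97, i.e. about 10 stops
print(math.log2(80))    # the 1:80 low contrast target: ~6.3 stops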
(TOP)
What about the 'edge spread function'?
Recently a very interesting discussion has been launched about the difference in characteristics between current and earlier generations of Leica lenses. Some of these differences are undoubtedly real. Someone valued the characteristics of the older lenses for many reasons, and I would not in the least question his opinion or choice. We have to be very careful around this topic: to like certain characteristics is not the same as stating that these characteristics are virtues, or implying that they are desirable in themselves. The discussion now focuses on a perceived change in Leica design philosophy as exemplified by statements from Mr. Osterloh and Mr. Laney. The core argument seems to revolve around Laney's statement that "They started from the proposition that the subjects we photograph very rarely consist of grids of black and white lines on flat sheets of cards." The corollary is that one can design lenses that do very well on standard test subjects and lenses that are designed for real life subjects, and that these latter designs invariably (by design and subject matter) perform poorly on standard test subjects.
First of all: this argument is one big fallacy. Every lens designer and every lens ever designed is designed for real life subjects. The only exception might be special reproduction lenses, where flatness of field is of utmost importance. All lens designers assume in their design that we have in front of the lens an object in three-dimensional space. As this object has to be recorded on a flat plane (the emulsion), and because light rays behave a bit strangely when eased through an optical system of several glass elements, we encounter aberrations. These aberrations have to be corrected and balanced. This is the task of the designer, and he can do this with more or less creativity and expertise. There do not even exist any design rules or computer programs that are tuned to two-dimensional objects such as black and white bar lines. Furthermore: Zeiss engineer Mr. Hanson introduced in 1943(!) the notion of contrast (what later became MTF) and its exact correlation between optical quality and human sharpness perception. And Zeiss have always followed that lead. I would not be surprised if the 'defensive line' of Leitz/Leica in the past had a lot to do with the image quality of Zeiss lenses.

We have two types of testing equipment. One group checks for production tolerances and is used in factories to assure prescribed tolerances. The second group is designed to test lenses for their capabilities in recording real life objects. Be it a bar-line test, an MTF graph or a beautiful girl, each one tries to find out the characteristics of the lens in question for recording 3-D objects. It is, by the way, remarkable that the Elmarit-M 2.8/28 (fourth generation) has been referred to as the last lens of the great old generation, while it in fact is one of the first of the NEW generation. Perception is a very quirky business. Also the notion that modern Leica lenses move too far in the direction of high resolution and therefore lose many of their former unique characteristics is not substantiated in practice. Yes, modern Leica lenses have a superb clarity of very fine image detail, a stunning suppression of veiling glare and a very high correction of spherical aberration and many lateral chromatic aberrations, all characteristics that give 3-D objects a very faithful rendering in a flat plane. As a matter of fact, when comparing real life pictures (not bar lines) taken with the old and new Summilux-M 1,4/35 (asph and non-asph), every viewer, not just me, commented on the sparkling lifelike representation of the asph version. Let us not try to create false dichotomies where none exist. Specimens of older Leica lens generations are very good and sometimes surprisingly competent lenses, and one should admire the perseverance and competence of their designers. Newer generations incorporate more research and more knowledge about the way an object in space should be recorded as an image on film.

The Noctilux does not perform poorly on any test object (girls or bar lines or MTF graphs). It performs quite well on all test objects. If a tester would note that the Noctilux has less contrast, and whatever else he would like to mention, than a Summicron, he is probably correct. If he would conclude that it is therefore a bad lens, he only proves his own incompetence. A lens with the specs of a Noctilux cannot ever produce the image quality of the Summicron. No tester can use one yardstick and evaluate all lenses in a one-dimensional way. Laney refers to a so-called 'edge spread width criterion', and some persons now infer that this is a criterion that favours older lenses, while asph versions can cope better with normal test targets. Let me be quick and merciful: this 'edge spread width criterion' does not exist. We have the point spread function, the line spread function and the acutance measurement. What Laney does is copy the contents of a research paper that is just that: a research paper. An 'Ansatz', as the Germans would say, that never was followed up and went the way many research papers do: they evaporate.
Are working photographers better judges of image quality than professional testers?
My brother Hubert is a conservative and a pragmatist. He does not believe that progress is real and he cannot imagine that anything written since 1970 has any value. He also refuses to consider facts or logical reasoning that contradict what he believes is true. But he is deeply involved in modern gadgetry (that is the pragmatism). He uses these instruments as he pleases, whatever the intention of the designer might have been. He correctly justifies his approach with two arguments: it is my money and I like what I like. He constantly urges me to stop trying to update conventional wisdom in photographic lore to the level required by the actual state of the art of photographic and optical science.

Follow the mainstream, he pleads, then your life is easy. Repeat sales reps, marketing brochures and every snippet of Leica lore that is floating around in this info-soaked world. He made these remarks after my recent visit to Solms and Oberkochen, where I was inundated with MTF graphs, spot diagrams, ray fans and a plethora of optical aberrations that need to be corrected in a very subtle and artistic way in order for us mortals to start raving about Leica lenses. It took me three full years of testing and thousands of pictures (yes, real pictures of real life subjects) to come to grips with the M-line of Leica optics.
Now the daunting and very exciting task is to engage myself in the R-line. Another three years? Hubert tells me to use my spare time for more pleasant activities, such as studying the theoretical base of Star Trek physics. I think we are at an intellectual and moral low point in history. Is there a chance that the classical Greek ideals of beauty, rational discourse and lust for truth will survive? At least in Star Trek, I presume. I am very pleased that Data has a cat (or the other way around: the cat has Data). If ever a creature had the basic characteristics of a Leica M camera, it is the cat. This species survives in even the harshest and most unfriendly environments, it never loses its character, it does not compromise, it is the most effective small predator in evolution and it is a thing of beauty. And above all: you can study it for years and still not know anything about it. It remains a mystery, but a very nice one.

These thoughts have been inspired by the recent discussion around that most elusive of topics: image quality. This topic pops up quite often and then fades out without any real progress. Still, the topic is of utmost importance. So let me try to make some observations. Sharpness does not exist in any objective way. It is a subjective impression of the eye/brain mechanism and cannot be measured or defined. Note that I never use the word 'sharpness' in my reports. I use the optically correct terms 'contrast of fine detail' and 'edge contrast' to describe image characteristics. Generally we may subsume both terms under the umbrella word 'clarity'.
When a designer creates an optical system, he or she always has a clear purpose in mind about what the lens has to accomplish imagewise. As any lens is always a compromise between many demands and variables, the resulting imaging characteristics can be related to the ideal lens: that is, a lens without any defect, which would reproduce the object in front of the lens with 100% faithfulness in three dimensions. Let us make very clear that Leica designers NEVER assume that their lenses should or could be optimized for flat, two-dimensional test targets of whatever configuration.

It is an undeniable fact that any real lens has only one, I repeat, only one plane of accurate focus. That is by definition a plane of very thin depth. Thus evaluating a lens in the optical sense of the word means studying the characteristics of image points as they are projected onto this plane of focus. It so happens that MTF graphs are a very good analytical tool for just this: studying the point characteristics in the image plane. It is far beyond the truth to imply that a study of the flat image plane (which is what designers and serious testers do) disregards the three-dimensionality of real life objects. To imply, as some postings do, that a test of a flat object is irrelevant to practical photography is to misunderstand the fundamental laws of optics. There are many methods, some very complex, some very secret, to make sure that the image characteristics defined in the plane of accurate focus can be extended to a three-dimensional area, part of which is constrained by the angle of view of the lens and part of which is constrained by the depth of field of the aperture used.

We have to make some distinctions. Some tests that are proposed in common practice (photographing a newspaper page, photographing one of the many test charts with bar lines in many configurations) are not very meaningful. This is not the fault of their inherent two-dimensionality, but because the target as such is not very well correlated to the actual imaging characteristics of a lens. In the case of standardized test patterns, we have the problem that most if not all users lack the necessary background to interpret what they see or think they see. Again, we should not ridicule a methodology because it is falsely used.
An expert designer can predict with a high level of accuracy how a lens will perform, just by looking at all the figures churned out by the computer. We should really bring ourselves to a higher level of awareness: the confrontation of the two-dimensional test target versus three-dimensional reality is not a meaningful distinction. We have bad tests and good tests, and bad testers and good testers.
Now the issue of evaluation. Many postings assume that testing a lens in any objective way (that is, using tests that produce contrast or resolution data) has no relevance to the demands and requirements of working photographers. First of all, I hope to have made clear that measurement of image performance is MUCH more than looking at 'sharpness' (which is a non-issue, as it does not exist). A responsible designer and his companion the tester will choose very carefully those measurable characteristics that are very relevant to the image as required by working photographers. To assume that testers are other-worldly and/or insensitive to the needs of real photographers is doing them an injustice. Again we have good and bad.....
There does not exist any evaluation method that can capture in one merit figure all the many characteristics of a lens. And here indeed the acceptance of the working photographer has the last word. The role of the tester is to bring in figures, based on carefully conducted tests, to support or inform the validity of the user's final decision. Or to help him make a meaningful choice.
There is no us-versus-them situation. A tester might be a very good photographer him/herself, and a working photographer can be a lousy tester. It is the role of a tester (at least that is the way I see and practice it) to inform a working photographer of the potential image qualities of a lens based on scientific testing of, yes, two AND three dimensional objects. Whether a working photographer likes or needs or is interested in these characteristics is purely his own non-debatable choice.
The image qualities I try to investigate are very relevant to practical photography. Again: some of my test objects are indeed the bar-line patterns on a flat plane. If you really know how to translate these results into the demands of photographing cats and girls and snow storms and emotions, then there is nothing wrong with this method.
Sufficient or maximum quality.
There is a nice but untrue story about a salesman of Rolls Royce, answering a customer's question about the exact amount of horsepower of the engine with the words: "enough, Sir"!
It might be argued, and some persons do indicate this, that all Leica lenses have 'enough' potential image quality for all but the most exacting needs, or for the far-fetched demands of some out-of-synch-with-reality testers.
In a sense I do agree. Many pictures of most everyday scenes, including night shots, have acceptable or pleasing quality for most viewers.
There seems to be a certain consensus that modern Leica lenses have in fact many of the characteristics of older Leica generations, just with a bit more contrast. This is a simplification and a misguided one. The overall character of Leica lenses has changed significantly since the designers at Solms attacked the more vicious aberrations in a more effective way, at larger apertures and over a larger part of the image area. Just looking at resolution or overall contrast misses the point. I have elaborated on these traits in many earlier posts and will not repeat them here.

Being involved at the moment in an extensive test of many modern transparency films in the ISO 100 to 200 class reminded me anew of this contrast between sufficient and maximum image quality. None of these films produces a bad image, all are fine grained (some more, some less), none gives correct colors (as measured with a computer-assisted colorimeter for CIELab values of the MacBeth color checker); that is, most of them produce sufficient imagery for most needs. Still, some stand out for producing maximum quality, that is, matching the best image quality of the Leica lenses. A good test is to take a close-up of an object with extremely fine textures and shadow details, three dimensional in nature, with strong specular highlights and some very sharply outlined color patches. If you do not find one such object, try to find several that combine these characteristics. Then step back one meter and take a picture of the object again. Repeat this several steps back, every step one meter.
The trick is to look at the various magnifications and to assess what happens first: loss of image quality by the lens or the film. Then look at the character of the image gradation. Then you get a feeling for the potential of modern Leica lenses.
Lens comparisons between brands
This topic is a classical one in every photo magazine and gathering of photographers. It is also a futile one, IMHO.
Generally speaking, the big four (C, L, N, Z) really do know how to design quality optics. The design of an optical system in essence means balancing aberrations that cannot be eliminated or canceled out. These aberrations are known to a Leica designer as well as to a Nikon designer. How a designer does this balancing act is partly art and partly specific know-how of the effect of residual aberrations on the image quality. The definition of image quality differs per company/design staff. Again, part of it can be quantified (the merit function and its resultants MTF/OTF), but part is art (a spot diagram is easily generated, but analyzing this form with respect to potential image quality is difficult). Leica is a master of this art and that is why we all like their products.
An optical system is too complex for a one-dimensional evaluation scale such as sharpness or contrast or whatever. So any discussion of whether Nikon is sharper or not vis-à-vis Leica is void. Leica differs from Canon and Nikon and Zeiss, as all the others do from each other, because every designer has a different set of priorities and weighting of the residual aberrations.
Instead of discussing top class lenses in simple terms of better or worse, it would be more instructive and enlightening to evaluate lenses as having certain optical characteristics that can support your type of imagery.
I really wish we would stop this simple comparison game. If we could set up a complete list of lens characteristics we might end up with a list of 50 or more personality traits. Let us not be so naive as to assume that any lens from whatever manufacturer can score on all characteristics.
Are older lenses better for artistic pictures?
The 2/35 ASPH improves visibly on its predecessor, the 2.8/24 is simply stunning in all kinds of hand-held shooting, the 2/50 Summicron and the 2,8/50 are of sparkling clarity, the 2,8/90 brings almost any film over the edge, and the new APO 90 is of superb quality, as is the 3,4/135. Every transparency shot with any of these lenses stands head and shoulders above the crowd. If you do not see it, you simply do not want to see it.
It is true that the utmost of textural detail will be seen only when a few films of high capability are used. But the improved contrast, the much greater clarity and flare suppression, the fine shades of light in specular highlights, the flatness of field etc. can be clearly observed when using any 100 ISO transparency film or 100 or 400 ISO B&W film. These current lenses are not just theoretically improved (or by a few percentage points); they are on a different level.
Would pictures by HCB be better if made with modern lenses? That is not a relevant question. Would Matisse have painted better pictures if he had used different quality paints or a different quality canvas? Artistic or aesthetic aspects are not at stake here. Why not ask if Salgado's pictures would be worse if made with older equipment?
The question has been asked whether a modern Leica lens is more capable of rendering beauty than an older one. Or: is a modern Leica lens capable of rendering more beauty than an older one?
That is difficult but interesting to answer. Beauty is part emotion, part impression, but primarily it is always a feeling. Some socio-biologists will claim that the appreciation of beauty (of women) is universally imprinted in our DNA. But even accepting this, it will hardly correlate with optical quality. Beauty can be captured and represented with every possible artistic means and instrument (poems, paintings, movies etc.). And so also with Leica lenses. The goal of optical designers is to produce optical systems with a very small amount of residual aberrations, in order to be able to represent reality as faithfully as possible. Leica lenses add their special flavor, as these guys and girls really know which aberrations are important for photographic purposes. So if and when beauty can be objectively captured in reality, modern Leica lenses will do the job more truthfully than older lenses. As long as beauty is ephemeral, any lens will do. Remember David Hamilton? Lartigue? Atget?
Older lenses have their peculiar fingerprints and some people love these characteristics. An old Harley motorcycle also evokes sympathy and admiration. Here one should walk softly. I myself have never seen characteristics in an older lens that have not been improved in newer versions. Even that most elusive aspect of three-dimensionality is better represented in modern lenses. Bokeh might be different between old and current, but that is another story.
Again, older lenses can and should be admired (at least some of them), but I would like to ask the persons who favor the qualities of the older lenses to take comparative pictures, with an older and a newer lens, of their favorite scene that really evokes the qualities of the older lenses not present in the current ones, and then point out the differences, if any.
I have made these comparative shots again and again and never saw anything in the representation of the older lenses that has not been improved in the newer versions, the unsharpness area included. But I admit that the rendition of shapes and outlines in the unsharpness area is a matter of taste.
Are older Leica lenses as good as current ones?
Sometimes it has been proposed that Leica users should stop searching for the best in image quality, as the example of HCB 'proves' that great masterpieces can be made with the older equipment. Again this argument is not valid. The value of HCB's pictures lies in their representation of the human condition in its geometrical forms. HCB was never interested in any special optical qualities of Leica lenses. His priorities were quite simply of a different order. The argument that what was good enough for HCB should be good enough for every Leica user is a very thin one. Why should HCB's imagery be the norm for everybody? Again it boils down to the position that only a certain class of photographers is allowed to define what is the proper use of a Leica, as they claim to use the instrument in its proper way. This argument is circular, of course. HCB simply used the equipment available to him in his time. As did Eisenstaedt. It would be a bit rash to claim that some masters of the Leica (artistically speaking) should be used as an example to limit the quest for the ultimate image quality.
At least the Leica optical designers still think that the potential for improvements in optical quality is very real.
In this same category we find the often quite forcefully stated claim that, photographically, old and new lenses perform on the same level. And, by implication, that the quest for improved image quality is futile or at least not necessary. Or it is said that the improvements are not worth the trouble. In any case we note a Luddite attitude here. Modern Leica lenses have a generally much higher level of aberration correction than earlier versions, much smaller blur circles and a very different balance of residual aberrations, including the secondary spectrum. You can see this in every conceivable image characteristic. It is a bit disappointing to note that some observers dismiss the improvements as irrelevant for contemporary photography.
Of course, if you shoot with the high sun at your back, use apertures of 1:5,6 and smaller and print small-scale color prints, the differences will be small. Still, the knowledgeable observer will note a higher overall contrast and a much crisper rendition of textural detail with the new lenses. And not every lens shows these improvements to the same degree. As an example, the second generation of the Summicron 50mm (from 1969) exhibits more aberrations than the third (current) generation. In ordinary practical shooting the chance that you will note these 'image defects' is quite small. But if you happen to take pictures of objects with many very fine, obliquely oriented textural details under high flare conditions, you will see the difference. And that is the point of the current improvements: to get optimum results whatever the level of subject detail or flare or contrast.
Sometimes these differences will only become visible under controlled and comparative test sessions. And this brings us to the next story.
Lens testing should not be representative of the demands of real life photographers in real life photo shooting sessions.
I am not sure where this myth comes from, as its supporters never explain what exactly they mean. It seems that the traditional test pattern (the two-dimensional, black and white bar-line pattern as used in the USAF charts) is the scapegoat. Now no serious tester will base his conclusions on such a test pattern unless suitably educated in its interpretation. Used as a rough form of MTF-related information it still has merits. Used as a simple resolution chart it is not of great use. That you may not extrapolate from a two-dimensional test pattern to a three-dimensional reality is refuted by all optical handbooks and all optical design programs.
Every design program I know of is based on the sharpness plane, which is infinitesimally 'thin', that is, two-dimensional. One of the best test patterns is still a large black piece of paper with very small holes punched into it. Lit from behind, the shapes and color fringes around the holes give you much information about the aberrations left in the design. Aberrations that will show up in every picture, whether taken of a flat or of a three-dimensional object. The impression of three-dimensionality may be based on the residual aberrations left in the optical system, but there is not one single theoretical or practical argument why sinusoidal test patterns or point spread functions could not represent the real world faithfully. Any reliable test procedure (at least mine does) will take into account all these theoretical topics and practical inferences to ensure that the results are useful for any user of Leica equipment who wishes to select lenses on the basis of real image qualities.

The discussion should include the unsharpness area or depth of field. It is often stated that the elusive Leica glow is part of the answer, as this characteristic is visible to anyone but not testable by normal methods. Well, I am not agent Mulder, so I am unable to comment on this 'glow'. What I do know is that the characteristics of the unsharpness area are not specifically designed into the system, but are simply the result of the optimization of the aberrations as defined for the focal plane. Nothing mysterious here.
The so-called 'light box' test might be preferred as a tool to quickly convince editors to select images on subjective and impressionistic impulses and to prove that the basic image quality (sharpness and color rendition) is good enough. To study the finer points of image quality and the real differences between several Leica lenses, a much more elaborate suite of equipment is necessary.
Do we want or need to study these differences?
Some would say no, others are inclined to say yes. Persons with an engineering bias will enjoy discussing these finer points and in doing so try to find out which are the best Leica lenses available today. There is nothing wrong with such an attitude. The measurement and comparative assessment of lens characteristics is a fascinating part of the Leica world. Anyone who does not wish to indulge in this activity can choose to leave it and study the language of the great Leica photographers. The language of art and the language of optics and engineering may be different, but both can be mastered at the same time. That is the real challenge in my view. And Leica users are fortunate that the Leica products support both views and even integrate them in a fascinating manner.
The engineering standards to which Leicas are designed and constructed are very high indeed, and I think it a legitimate pleasure to enjoy and study these standards. I also think that sensing the high level of precision engineering while taking pictures greatly enhances the pleasure of using a Leica.
Of course these pictures may not be great art, but I think it a bit narrow-minded to assume that Leicas may only be used in the proper way by 'vision-people'. I am a great admirer of HCB, but I could not make one picture in his style and quality. I still can admire craftsmanship in an instrument and use it accordingly.
The upshot of this long story is quite simple: Leica M systems have been designed and constructed as precision engineering instruments dedicated to taking pictures in the style of the artless art of the snapshot. You can admire and enjoy both aspects of this fine instrument, or you can choose to address just one of them. In any case it is up to every user to make up his/her mind. There is not one proper way to use a Leica, nor a canonized way to take pictures with a Leica.
There is only the pleasure of owning and using and studying this very remarkable instrument of photographic technique.
Bokeh is mentioned often as a lens performance qualifier. What is the bottom line here?
Bokeh is a very elusive concept. It is related to the shape of out-of-focus object details and the light-energy distribution within the unsharpness patches. It might be measured scientifically, but no one knows how, and thus subjective interpretations abound.
Bokeh is basically a function of spherical aberration and the number of diaphragm blades. Clearly the out-of-focus areas in front of and behind the sharpness plane differ depending on the overall aberration correction, which involves much more than just the correction of spherical aberration. Bokeh is not (and here I differ from almost everyone) a conscious design decision. Lens designers focus all their creativity on the plane of best focus and try to get an image quality that is consistent with their goals. As a general statement I would say that the ideal is the clear rendition of extremely fine detail with high contrast and excellent shape preservation over the whole image area and over all distances and apertures. This is not easy to accomplish, and so compromises have to be made. A certain 'residue' of aberrations will be present in every lens. What this residue is composed of depends on the compromise made. Now it is easy to understand that the way the plane of sharpness is defined has a bearing on the unsharpness areas in front of and beyond this plane. So the unsharpness rendition is a direct function of the degree of correction of the sharpness plane.
Current Leica lenses have a very rigorously corrected sharpness plane, by a secret design formula which differs quite a bit from the older design philosophy of Leitz. But even the older Leitz lenses were not designed with any idea of bokeh in mind.
So bokeh might be detectable in older Leitz lenses, but this is not a design decision, just the result of the overall correction.
Modern lenses indeed have less bokeh as I understand the idea, because they are corrected to a much higher degree than older Leica lenses.
When comparing older and recent Leica lenses to study bokeh I noticed these aspects:
Modern lenses have a steeper slope from sharp to unsharp, and older lenses, while not as good in the sharpness plane as current ones, seem to hold on better in the unsharpness area. This is, however, a deception of the eye. If you look at those distances where the degree of unsharpness is equivalent, the modern lenses have the better shape preservation. But again, this is not bokeh.
Are the characteristics of modern Leica lenses the result of cost reduction?
Some will tell you that it is easy to make a lens with high contrast, but that it is allegedly difficult and costly to produce a lens that gives roundness and resolution at the same time. Reference is made to cheap and contrasty Japanese lenses and expensive, but softer and rounder, German lenses of older vintage. And it is stated (without any evidence, as is often the case) that current Leica lenses could benefit from improved contrast because of a reduction in production cost. Or the other way around: because production costs have to be lowered, Leica designers have to accept the easy design of a high contrast lens and do not have the time or resources to do an elaborate redesign for a lower contrast!
A lens that exhibits high contrast and high resolution at the same time is very sensitive to the smallest deviation from the correct design values. Production cost must be higher as the machining of parts, the polishing and coating of lenses, the centering and positioning of lens elements, the quality checks during assembly are all much more demanding. Some current Leica lenses demand a hundred hours of work from start to finish.
Classical European lenses give a unique flavor? True or false?
Some will tell you that European lenses (Zeiss and the older Leitz lenses) produce a distinct imagery that is sometimes described as Leica glow or in more descriptive terms as roundness, soft skin tones, balanced color richness and sense of 3-dimension.
None of these terms can be quantified and measured and belong to the vocabulary of the art critic. As said in another question, the analysis of a photographic image as art is different from an analysis of a photographic image as a function of optical performance. It is to be regretted that so many persons mix up these areas.
If these concepts cannot be measured, could they be true descriptions of the photographic image and point to different ways of rendering reality onto the emulsion? No. Roundness or the sense of 3-dimensionality or plasticity is related to our world of three dimensions with depth in space and solid objects that extend in three dimensions. Our binocular vision allows for the depth clues. Close one eye and all of the depth in space is gone. A picture (which is flat) will evoke this impression of reality when the representation of the reality is as close to the original as possible. Modern Leica lenses give a very high fidelity representation of reality and are able to record the finest details and shades of tones, much more so than older lenses. So their recording capability of the real world is higher and so the impression of reality is also better.
Give an example!
Take a picture, very classical, of a girl with bare shoulders in a three-quarter pose, and position the light in such a way that the shoulder gets some highlights to bring in depth and the impression of roundness. A modern lens will record all the tiny details in the skin and will also render with high fidelity all the fine shades of white in the highlight area, and, when expertly exposed, also in the specular highlights. This gives a true impression of the roundness of the shoulder. An older Leica lens will wash out the finer shades of white, and the outlines of the shoulder will be softer, blending more into the background. So the visual clues for a realistic appearance are less well recorded.
But won't a higher contrast lens crush the fine shades of tone and the finer details? As said before, a higher contrast lens (everything else being equal, of course) will record finer details than a lower contrast lens. Highlight and shadow detail is also rendered to a higher degree, as the higher contrast lens will give a bit more contrast to tonal areas that are very close in luminance, and thus makes them just noticeable where the lower contrast lens leaves these shades below the threshold of vision.
Canadian lenses are better than Wetzlar or Solms or Portuguese lenses?
A notorious topic on every list. Canadian lenses, as a generic term, are not inherently better or worse than lenses made elsewhere. Nor is a lens manufactured in Portugal by origin less capable than one from Wetzlar. The quality of a lens is its design in combination with the manufacture and the quality control. A sloppy design from Canada and lax quality control there will not give you the optimum quality you expect. And meticulous manufacture and a very creative design in Portugal will give you the best of Leica quality. The parameters here are creativity and art in optical design, an excellent choice of materials and glass, accurate production processes and dedicated human labour. That can be found in Portugal and in Solms and in Canada. But no single geographical location has an inherent advantage that is not to be found elsewhere.
But high aperture lenses are specifically designed for high contrast and lower resolution, aren't they?
One of those floating myths again. One of the aberrations that is most difficult to correct in a high aperture lens (1:2.0, 1:1.4 or wider) is spherical aberration. The effect of this aberration is a fuzzy image over the whole image field. The wider the lens, the more spherical aberration, and thus the stronger the tendency to soften the details and lower the contrast. In this particular case the tradeoff between the plane of best contrast and the plane of best resolution is the most marked. The designer will almost always choose the plane of best contrast to focus on, as this will help detail rendition in low contrast situations, typically the ones such a lens is used in. But the contrast of such a wide aperture lens (1:1.4 as an example), while relatively speaking quite good, is always lower than the contrast of a well-corrected 1:2.0 lens. With the balance between plane of contrast and plane of resolution now shifted far in the contrast direction, resolution will suffer. Again, not a design goal, but a best offer in a worst case situation.
The tradeoff here has a quite wide margin. So in this case we see many constructions, ranging from higher resolution, lower contrast ones to higher contrast, lower resolution designs. Note however that all this higher-lower is relative within a design.
As soon as designers could break out of the grip of spherical aberration, a new generation of high contrast, high resolution wide aperture lenses could emerge. Witness the Summilux-M 1:1,4/35 asph as the first of a new generation.
There is a persistent discussion that the Noctilux lenses (the 1.2 and 1.0 versions) are designed only for wide open performance and will deliver worse imagery when stopped down. This is absolute nonsense! Wide open, both lenses perform admirably, given the aperture and the respective design constraints. But they do improve when stopping down.
What are the sources of the high and low contrast in a lens, in terms of optical aberrations? Is there something more going on than simple flare in the lens? Yes, there is more to it (as usual). One obvious source of low contrast is the unwanted reflection of light rays from every glass surface of a lens element. The more lens elements, the more reflections (non image forming illumination). Another source is mechanical reflection, due to bad internal construction.
A most important source, hardly ever mentioned, is more difficult to explain. In general we have several different planes of sharpness in any lens system: the plane where we see the highest contrast and the plane where we see the highest resolution. The designer will generally choose a compromise that gives high contrast and useful resolution. Note that I do not say 'low' resolution. In the plane of highest resolution we will find a very small core of concentrated light energy, representing the point sources of the object, but surrounded by a large and fuzzy circle of diminishing light energy. Here the light rays are randomly distributed and produce a kind of soft halo around the core. The resulting effect is a low contrast image.
The plane of highest contrast will give spots of a larger diameter, but with a steeper edge gradient and less fuzzy haze around the spot. The contrast is much higher, but the core is a bit larger, so the very smallest detail is not recorded. In the other case (high resolution) this same small detail might be recorded, but will never be visible because the contrast difference is below the detection ability of the eye. Anyway, the small shift away from maximum resolution will give a much higher useful resolution, that will be visible with great clarity. Current Leica thinking in lens design is to opt for a high contrast and a high resolution, and many of their lenses show clearly the advantages of this approach.
Older lenses had a lower contrast and thus a lower resolution, not because of particular design goals, but because the state of the art at those decades did not allow for better imagery.
Contrast and resolution are two different and even conflicting phenomena, most people will say. True or false?
If ever a myth should qualify as the number one in photographic technique, this would be a strong candidate.
According to popular opinion (which has been based on the discussions in many books about the Leica) high contrast implies low resolution and low contrast is often accompanied by high resolution. Many Leica users will have read about the distinction between lenses with low contrast and high resolution and lenses with high contrast and low resolution. It is commonly stated that the older (that is pre-1980) Leica lenses exhibit the lc/hr character, while all Japanese lenses (old and new) are of the hc/lr character. Some observers currently state that Leica lenses are as good as the competition in the resolution area, but are of the high contrast type. Others will note that modern Leica lenses are of Japanese type, that is of still higher hc/lr character.
In fact high contrast is always correlated to a high resolution. Resolution refers to the ability to distinguish between closely spaced lines or points as individual entities. This is also called the spatial frequency, where the numbers relate to the number of lines (black and white) that are squeezed into one millimeter of length of space. A spatial frequency of 10 tells you that 10 separate lines of alternating black and white tone will be packed in a millimeter. Any line then has a width of 0.1mm and the distance between two lines is also 0.1mm. These 10 lines are referred to as 5 linepairs/mm.
Contrast refers to the relative luminance differences between the blackest and whitest area in an object or negative or print or transparency. The whitest surface in nature reflects about 99% of the incident light and the blackest surface (black velvet) about 1%. That is a relation of 0.99. Theoretically we can reach a contrast of 1.0. If contrast drops to 0.7, the black area will reflect more and the white area will reflect less light. If contrast is zero the black and white areas are equally gray and we are not able to distinguish the two areas or lines or points. It is clear that the higher the contrast the easier we can detect the separation between closely spaced lines or points.
Therefore high contrast and high resolution are closely related. It is not possible to have a low contrast and a high resolution. The eye would not be able to separate the two closely spaced lines of almost equal brightness.
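To make this concrete, here is a minimal sketch in Python (my illustration, not part of the original argument), assuming the common Michelson definition of modulation, M = (Lmax - Lmin)/(Lmax + Lmin):

def modulation(l_max, l_min):
    # Michelson modulation of a grating with the given luminances
    return (l_max - l_min) / (l_max + l_min)

# The whitest natural surface (~99% reflectance) against black velvet (~1%):
print(modulation(0.99, 0.01))  # 0.98, nearly the theoretical maximum of 1.0

# A low-contrast recording of the same grating, blurring toward gray:
print(modulation(0.55, 0.45))  # 0.10, close to the threshold of visibility

# At zero modulation the lines are indistinguishable, whatever their spacing:
print(modulation(0.50, 0.50))  # 0.0: resolution without contrast is useless

Whatever exact contrast definition one prefers, the point stands: as the modulation approaches zero, the finest grating in the world becomes invisible.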
High resolution: how important is this as a performance parameter?
The interesting topic of the alleged differences between human perception and physical test parameters has been extensively discussed by many engineers since the introduction of modern high contrast lenses in the mid sixties.
These new lenses were the result of a fresh look at those physical parameters within lens design that really could have direct relevance for the human perception of optical qualities.
It has been noted that contrast around 10 lp/mm has the most impact on the perception of sharp subject outlines and adds to the general impression of sharpness in a picture. It was also noted that a very high resolution in fact detracted from the perception of image clarity and the clear rendition of fine details, including subtle gradations in small object areas. Again, the fine delineation of subject matter in strong highlights and deep shadows adds to the impact of a picture. But these aspects cannot be captured by notions such as resolution, bokeh or whatever.
High resolution is an ambiguous reference. In the past some lenses have been analyzed and recorded above 300 lines/mm (150 linepairs/mm). This figure for some unknown reason has been fixed in the minds of many Leica users as a benchmark figure.
In reality it has been established that 40 lp/mm (80 lines/mm) exceeds the image recording capacity of the film-lens-camera system. Of greater importance is the contrast with which these lines are recorded. Many lenses are capable of recording 150 lp/mm, but the contrast is so low that we do not see the white-black grating with any clarity, just a mushy gray noise.
We should forget about resolution figures when not related to information about contrast.
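To illustrate the point with numbers, here is a small Python sketch (my addition, using the standard diffraction-limited MTF formula for incoherent light, with an assumed wavelength of 550 nm). Even a perfect, aberration-free lens renders fine gratings at ever lower contrast:

import math

WAVELENGTH_MM = 550e-6  # 550 nm, mid-spectrum green, in millimetres

def diffraction_mtf(freq_lpmm, f_number):
    # MTF of an aberration-free lens at the given spatial frequency and aperture
    cutoff = 1.0 / (WAVELENGTH_MM * f_number)  # cutoff frequency in lp/mm
    s = freq_lpmm / cutoff
    if s >= 1.0:
        return 0.0
    return (2 / math.pi) * (math.acos(s) - s * math.sqrt(1 - s * s))

for freq in (10, 40, 150):
    print(freq, round(diffraction_mtf(freq, 11), 3))
# 10 lp/mm: 0.923, 40 lp/mm: 0.695, 150 lp/mm: 0.033
# The 150 lp/mm grating is 'resolved', but at 3% contrast: a mushy gray noise.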
Are Leica lenses expensive?
The most obvious explanation for Leica's high cost of producing lenses is quite simple: volume, and not necessarily better QC.
The cost of designing a lens (given the ubiquitous computer programs) and going through an elaborate testing program is the same for Leica as for other main marques. The Leica design process adds an additional step: the matching of the assembly and production tolerances to the requirements of the optical designer.
If a certain optical parameter can not be guaranteed by the subsequent assembly or production line the design has to be changed. This matching and finetuning takes money. When a lens has a fixed design and the production can start (QC in place etc.), then we have the famous economies of scale.
Small volumes and small production runs are invariably more expensive (and not in themselves better) than larger ones. So all costs of a lens (that is design, production, documentation, PR, overhead etc.) have to be spread over a small volume.
It is quite clear that the cost of glass is not the factor that fully determines the cost of a lens. That is again one of many myths around Leica lenses.
Cheap glass at the moment is $80/unit and expensive glass is $800/unit, but many lenses can be made out of one unit. So even extremely expensive glass would add no more than $400 to the cost of a lens.
Of course expensive machine tooling must be amortized over a lesser production volume, many QC-checks are manual and add to the cost etc.
So my view of the high cost of Leica lenses is simply this. A low volume necessitates a high price (see also the more exotic Nikon or Canon or Zeiss lenses). If the cost price is by necessity high because of small production runs, and therefore the selling price must be high also, then superior optical quality and a stringent and elaborate QC program may be the only way of economic survival.
Another factor comes into play. Older Leica lens generations lasted for at least a decade, and some for more than two decades. Costs could be spread over a longer production run. Now a Leica lens generation lasts less than ten years.
Leica needs to invest in ever more elaborate production machinery and also needs to educate the workforce to a higher degree than elsewhere.
So while QC is certainly part of the answer for the high cost, there is more to know about the production process. Leica lenses are designed and engineered to very small tolerances and the selection and subsequent processing of the glass (grinding, polishing and coating and centering) are as important as the elaborate QC stages to ensure that the tolerances are as required.
Sales of bodies versus lenses
There is a persistent story that Leica mostly sells lenses and the bodies are just vehicles to put the lenses onto. A 1 to 3 relation is often quoted (three lenses for every body).
Economic results tell another story. In the three years from 1993 to 1996, 9322, 11208 and 10171 M bodies were sold, and respectively 18009, 19170 and 21186 lenses. The average body price was 3400DM and for the lenses 1600DM. Remember that the Minilux and the Digilux are included in the serial figures of the bodies. Roughly there are now 2.700.000 bodies produced and close to 4.000.000 lenses.
Still many people will insist on a strong relation between the artist's interpretation and the image quality of a lens and/or film?
Any object in front of the lens (or the eye) is just a random pattern of patches of varying brightness (and color), shapes and areas. That is what reaches the retina or the emulsion. The prime directive of the lens designer is to ensure that this random pattern is recorded as faithfully as possible. No more, no less.
BTW: this pattern is also the basis for exposure metering and much of the discussion on reflective/incident metering would benefit from this perspective.
For the eye this pattern is the starting point. The pattern recognition mechanism of the mind will interpret the random patches as a cat, or a Baywatch girl, or the interior of the Louvre. The next step is another cognitive one: we attach emotions to what we interpret. We dislike the girl, we like the cat. This is part of our cultural training and our sense of symbolism.
This cultural interpretation is the subject of a vast literature of scholarly works. It has basically nothing to do with the designer's prime directive.
In the course of history the lens designers have tried to fulfill this directive more or less successfully and in different ways. But bottom line no designer would use a different optical formula or even a different paradigm.
Fact is that the measure of the degree of faithfulness can be objectively ascertained. This, again, has nothing to do with cultural influence or personal opinion. You can like or dislike the way this faithful recording has been accomplished. Witness the discussion between admirers of the Sonnar way or the Summar way.
This discussion then is limited to the measurable part of how closely the prime directive has been fulfilled. Some choices are necessary. But in essence no emotions are involved. The perfect lens not having been invented, there is a certain bandwidth of choices and balances (Leica versus Canon versus Zeiss).
This approach to lens design is valuable and objective. It has not yet any relation to the way a picture can be interpreted culturally.
Now the prime directive of a photographer is to create pictures with meaning and purpose within the cultural context in which the pictures are likely to be viewed. The second part of the directive is to develop a visual language and vocabulary in order to express oneself more eloquently. Here we are in the realm of language and symbolism. Ever heard Cartier-Bresson saying something about the quality of Leica lenses?
Of course a photographer, in following the directive, can opt for a lens system with certain optical characteristics, but still we can clearly distinguish between the optical and expressive part.
Now it is a matter of debate whether lenses with certain optical characteristics may add to or detract from the clarity of the visual statement a photographer is trying to make. This, I presume, is what Alf is referring to when he speaks of certain lenses as being better suited to his way of photography.
I do not feel qualified to add anything substantial to this kind of reasoning. I do however see the need for this discussion.
I feel more at home discussing the optical designer's prime directive without the additional topics of visual language or interpretations of these statements.
The discussion can be complicated, but will be purely random unless we learn to separate the several equally interesting topics.
If the physical parameters support one side, and the emotional view supports another side, isn't the test missing something essential which is required for human perception?
Human perception is a very complicated topic, involving psychology, neurology and brain sciences. A number of facts about the way the eye-brain tandem works when processing visual stimuli have been recorded. As an example, we may note that the eye, when detecting a difference in luminance (as when crossing a white-black border), scans this border many times in rapid succession, so that the nerves registering the white line are more and more stimulated, thus enhancing the impression of a high contrast edge. In black and white photography the well-known neighbor effect is based on this phenomenon. We can explain many of the visual phenomena by referring to science. Some areas, the cultural and esthetic ones in particular, however, are not so easy to study in scientific or experimental terms. The question refers to the art of seeing and interpreting photographs. The pictures of Henri Cartier-Bresson are masterpieces of vision and invoke emotions far beyond the technicalities of image quality. The world of seeing and appreciating the artistic and human dimension in a photograph is a totally different one from the world of analyzing the image quality as an optical parameter. Both worlds are valid, but should not be confused.
Many people refer to the HCB and Eisenstaedt pictures and note that they used Leica lenses of the older generations. This is a truism, as they could not use the newer ones: these were not available. Claiming the optical superiority of the older generations by referring to their use by HCB or AE is a fallacy. In this case you would be confusing artistry with optical engineering. While optical engineering is more of an art than most people know, the art of seeing and the language of images is a cultural domain. The content of a picture is of course related to the technique of photography. The representation of the world in front of the camera is mostly mechanical, as the optical and chemical processes of lens and film work independently from the artist's intentions. Neither HCB nor AE was interested in the technical part of photography, and certainly not in trying to find the limits of the optical performance of the lenses they used.
Optical testing versus appreciation of lens quality
This is one of the most hotly debated topics. As the glass in front of the camera determines to a large degree the quality of the image, the performance of the lens is most important.
As noted above, there is a big difference between the perception of a physical property and the measurement of that property. The relation between the two might be very complex. The perception of 'sharpness' and the measurement of 'sharpness' do not necessarily correlate as the concepts may be different. In this particular example we say that sharpness is a subjective impression and the physical correlate is edge contrast or acutance.
We should be very careful here to be clear and to define any concept we use. Otherwise a confusion of tongues will result and in fact does exist.
Optical testing when done properly gives the most reliable results for the assessment of a lens.
There is a story that optical testing is not suited for real photography, as we take pictures of three dimensional objects and optical tests are based on flat (two-dimensional) test charts. This is a myth and totally unfounded. Any optical system produces one and only one plane of focus, which should coincide with the film plane. There we have the best optical performance. Anything before and after this plane of focus will become progressively out of focus and blurred. The out of focus areas are nowadays studied for their perceived 'bokeh', but that is not important here. So when photographing an object extended in space, only one vertical slice of that object is in the plane of focus. Presumably it is that part of the object that we have focused upon with our M rangefinder or R SLR screen. Every other part of the object is more or less blurred.
Indeed. When testing a lens on an optical bench, we look at the plane of focus to assess the image quality. But it is very easy to defocus slightly before and after the plane of focus. The tester then can simulate the out of focus areas quite well by looking at the image when defocusing in small increments. Any optical design program can accomplish this too and this method is an important instrument for tolerance analysis. So a serious test method will do a through-focus assessment to study the out-of-focus behavior.
A well conducted test, based on modern optical evaluation methods and image quality criteria will produce objective results about the optical performance.
Optical performance is not to be confused with the perception of an image. A lens that is optically superior to another one might give imagery that is perceived as less pleasing or less good than that of the other one.
When talking about image perception we walk into a totally different realm of lens evaluation. Here personal opinions abound and every opinion is as good as any other. Of course someone will claim a higher status and a more valuable perception based on his experience, his prolonged use of a certain lens or whatever argument suits his position.
Be on red alert here. While some persons do have valuable insights to share, many talk without any substance.
Some persons will tell you that they have conducted real life tests, which are of course supposed to be much preferable to any laboratory test. As the variables are never strictly controlled, such a 'test' can tell you nothing that is meaningful.
Measurement methods and subjective evaluations
Personal experience is by definition subjective and relative. It spans the whole range from incidental impressions to hard won experience based on thousands of hours of photographic practice with Leica equipment. Obviously the practical experience of long time Leica users, whose photographs have to meet the expectations of demanding customers, is very interesting and valuable.
But photographers are by nature technically conservative (they dislike experiments that might jeopardize their assignments) and they try to find their personal style as artists or craftsmen.
This experience however is expressed in a vocabulary that is quite imprecise and difficult to evaluate.
Any person will perceive the tonal quality of a print or the contrast of a lens in a different (personal) way. If one speaks of a 'rich tonality' or a 'better highlight separation' or a lens with 'a high sharpness', it is the perception of this person not the physical property that is being discussed.
If some other person notes less sharpness or a 'smooth tonality', we have no objective evidence to compare these statements, nor to find out which one is correct.
If we could measure the tonality we would proceed as follows: we measure the highest and lowest luminance in a subject and find values of 0.5 c/ft^2 for the shadows and 500 c/ft^2 for the highlights. This is a contrast of 1:1000, or log 3. If we have a print of this subject and measure the reflectance, we find values like logD 2.16 for the shadows and logD 0.05 for the highlights.
This is a density range of a little above logD 2.0. So we note that the subject tonal range has been compressed. If another print has a range from 2.3 to 0.06, we clearly note less compression and if we would measure all tones in small groups (darkest shadows, deep shadows, dark gray areas etc.) we might make comparable statements about the tonality range.
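As a worked illustration, here is the same arithmetic in Python (my sketch; the luminances and densities are the example values above, not real measurements):

import math

# Subject luminances in c/ft^2, from the example above
shadow_lum, highlight_lum = 0.5, 500.0
subject_range = math.log10(highlight_lum / shadow_lum)
print(subject_range)  # 3.0: a contrast of 1:1000, or log 3

# Print density ranges (high density = dark); subtract highlight from shadow
print(2.16 - 0.05)  # 2.11: the first print compresses log 3 to about logD 2.1
print(2.30 - 0.06)  # 2.24: the second print shows noticeably less compression

The same subtraction, done per tonal group (darkest shadows, deep shadows, dark gray areas and so on), would quantify the 'tonality' that is otherwise discussed only in impressionistic terms.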
Of course the scientific and subjective evaluations should complement each other. But they should not be mixed up. Perceptions are very important for the assessment of the final result (the print or transparency). But a perception is no substitute for a measurement. Nor can it be equated with a measurement.
Elmar 3.5/50 red scale
There is some discussion whether the so-called Red scale Elmar 3.5/50mm is an improved version compared to previous versions.
All books note that around serial number 905000 a recomputed and improved Elmar was introduced. The alleged recomputation is based on a comparison of the curvature of the front element, which is slightly less strongly curved than the prewar type. Research also shows that some other versions had still different curvatures (not as flat as the Red scale, but flatter than the prewar or postwar black scale versions).
The original 1926 Elmar had glass types (front to back: SK7, F5, BK7, SK15).
The Red scale is supposed to have (SK14, F8, LF5, BaSF10). Data from article in LHS England #48.
First of all: small changes in curvature, certainly of the front element of the Elmar, are often of insignificant value for the design.
This element does not take the burden of correction of aberrations; the second lens element does. And lens bending, as it is called, is often done to accommodate other changes. If bending is needed for optical correction, it is for coma and spherical aberration, which are difficult to optimize in a 4 element triplet.
Secondly: a change in glass types is not always done to improve the performance. Sorry to destroy another myth.
Sometimes a glass type becomes extinct and a new glass has to be found and some recomputation is needed. But if the lens is already very good (or cannot be improved), a recomputation to accommodate the new glass does not imply better performance. Sometimes a glass type is changed for easier production or coating purposes, or better or smoother grinding. None of these in itself improves the quality measurably. The 4 element triplet is quite resistant to improvements when changing glass types. What you need is a high index crown glass with a matching negative flint glass.
SK7 has 1.61, BK7 has 1.52, and SK15 has 1.62 (rounded figures).
SK14 has 1.60, BaSF10 has 1.65, but is not a crown but a flint.
Kingslake in Lens Design Fundamentals uses several sets of glass types to correct a 4 element triplet, but notes that it does not make big changes unless all refractive indices are above 1.60. This is already the case with the 1926 version (except BK7).
So we cannot conclude on the basis of these data alone that there is an improvement of the design. Schott indicates in the glass catalogue two types of glass (preferred and not preferred). The newer glasses are mostly of the preferred type, and the previous ones of the not-preferred type.
I do not believe that the change gave much improved performance. And I am not ruling out a small drop in performance in the outer zones for the type with the preferred glass.
Now the serial number. I have these numbers in my database:
Elmar 3.5/50 from 904.001 to 910.000 and allocated in 1951 (note NOT produced in 1951).
The next is 941.001 to 950.000 and allocated in 1952.
It is highly unlikely that Leitz would change a design in the middle of a production run.
I can declare almost with certainty that the indicated serial number of 905000 for a change in design is wrong, or at least not based on any plausible facts or authoritative written sources.
Studying the available documents, there is indeed a change in glass types (which does not necessarily mean a change to an improved design), and I would locate that change at serial number 955001. (Might the original author of the #905001 claim have misread the German handwriting, which is indeed very hard to decipher?) It is a guess, but that could have occurred as a preparation for the bayonet version from #1140xxx.
Now the 4 element design has an interesting additional story. The 4 element lens is just capable of correcting the 7 basic aberrations. Read my book for details.
To give the expected performance there may not be any slack in production tolerances, as the design is extremely sensitive to assembly and manufacturing errors.
A 5 or 6 element design can give the same or even somewhat better performance with a much higher latitude in tolerances. Is this the explanation why the Japanese in the 1950s, when they introduced their cameras, used 5 or 6 element designs, which were not much better than the 4 element German ones but much easier to produce in large quantities?
Why it is hard to get high image quality
Today I shot 13 BW films in less than 2 hours. With a beautiful model, that is no problem at all. Without motordrive I even managed a film in half a minute. I have used APX25, Panatomic-X 64, Maco UP25 (replacement of the old and very famous Adox KB 14), Maco 64 (replacement of Adox KB17), Maco Ortho 25 (replacement of Agfaortho 25), APX100 and Maco 100 (replacement of Adox KB21).
In addition to my previous test of Agfa Copex, TechPan, D100 and TM100 and Fuji Acros 100, I am slowly covering and testing all of the slower speed BW films.
These films are needed when the Leica quality needs to be demonstrated. Of course the Leica characteristics can be spotted when using ISO 400 and above, but then you need a very trained eye to discern the significant differences.
As I am also using a slew of developers, I can comment on the final quality of the films used. My initial impression is that the developer is much less important than correct exposure, correct focusing and reduction of vibration. Still, a TP print does give you a visual edge when compared to D100. But I still claim that I can get to medium format quality with a 35mm negative when it is put behind a Leica lens.
I also "discovered" that a 30x40cm print is the minimum needed to show leica quality. But at this enlargement any personal failure of technique is remorselessly exposed.
When carefully studying my bigger prints, I noticed that it is very difficult to get on the negative/print a resolution of 30 to 40 lp/mm. It is indeed very difficult to jump from 20 lp/mm to the 80 lp/mm that is technically possible.
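One common rule-of-thumb, offered here as my own illustration and not as a precise model, is the reciprocal approximation 1/R_system = 1/R_lens + 1/R_film. It shows why the resolution on film lags so far behind the lens or the film alone:

def system_resolution(r_lens, r_film):
    # Approximate on-film resolution (lp/mm) from lens and film figures
    return 1.0 / (1.0 / r_lens + 1.0 / r_film)

# Illustrative numbers: a 100 lp/mm lens on a 125 lp/mm film...
print(system_resolution(100, 125))  # ~55.6 lp/mm, already near the ceiling

# ...and a slight focus or vibration error that halves the lens figure:
print(system_resolution(50, 125))   # ~35.7 lp/mm: the 30-40 lp/mm hurdle above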
Anyone who assumes that 100 lp/mm (let alone 200 lp/mm) is a piece of cake and easily possible with a Summicron DR should think twice before making such a statement. And if such claims can be heard occasionally, then from people who do not take pictures themselves. To be frank: it is simply impossible!! A lens that can capture 100 lp/mm on film is a highly unusual specimen! Most people are not aware of the effort and quality needed to produce such a lens. Of course there are lenses that are fully aberration free. Look at the lenses made for wafer chip reproduction. Zeiss makes them and they are fully and 100% aberration free. BUT: the lens weighs a ton, has 20 and more lens elements, and EVERY element is individually adjustable laterally and longitudinally to compensate for the inevitable manufacturing errors. If we reflect on this, we may assume that a 17 element zoom lens has better image quality than a 5 element fixed focal lens. Why? The 17 element lens can be corrected to a higher degree; this lens can spread the sensitivity of the aberration correction over 17 elements (compared to the 5 element lens), therefore production tolerances are less critical, and a cheaper production is possible. In a 5-element lens all of the aberrations have to be taken care of by a mere 5 elements, where the zoom can do it with 17 elements. That is the reason why many Cosina/Voigtlander lenses have so many elements: production tolerancing is eased, and quality can be held at an acceptable level, even if production has a wider bandwidth of errors.
To be fair: Voigtlander lenses are excellent value for money. But dismantle one and you will see the cost cutting: plastics, wide tolerancing etc. As they say: water boils at 100 degrees, in Germany and in Japan too!
I am now focussed on low speed BW film. I am now testing a new emulsion that is closely related to the famous ADOX KB14. This film has a very thin emulsion layer and a high silver content. The pictures are very convincing, but of course not up to TP quality. Still, it is a challenge to match certain Leica lenses to this film and see what happens in picture quality. My Japanese friends may be pleased to note that film characteristics and lens characteristics (beyond the bokeh issue) can be matched to deliver a unique personality to the print that is reminiscent of the drawings of the Japanese masters of previous centuries.
Being in close contact with the emulsion factory in the former Yugoslavia (efke) which produces the KB14, I can give additional info about their way of thinking and producing emulsions. More of that later. Some of you have been inquiring how I can take pictures with an ISO 25 film. With the sunny 16 rule an ISO 25 film will have an exposure of 1/30 at f/16 in a sunny environment. That is equal to 1/500 at f/4 in the sun, or 1/250 at f/2 in the darker shades. Quite good for handheld shooting. Even 1/60 at 1.4 is OK, and then you are in really dark surroundings.
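The arithmetic is easy to verify; a small Python sketch (my illustration of the stop counting, nothing more):

import math

def stops_between(f1, f2):
    # Full stops of light gained when opening up from f1 to f2 (f2 < f1)
    return 2 * math.log2(f1 / f2)

base_time = 1 / 30            # sunny 16 for ISO 25: 1/30 s at f/16
gain = stops_between(16, 4)   # opening from f/16 to f/4
print(gain)                   # 4.0 stops
print(1 / (base_time / 2**gain))  # 480: the shutter can run at ~1/500 at f/4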
Essential differences between R and M viewing
Quite often you read about people remarking that some Leica lens will obscure part of the rangefinder window, the Noctilux 50, Elmarit 24mm or Tri-Elmar being cited as examples. This is true, no discussion about it. But the interesting question is this: are we using the M camera as it is intended? Let us start with the viewfinder of the typical SLR. We see through the lens and view the object at a size that is in the correct relation to the focal length, and by nature we select the part of the object that is most interesting to us. With the SLR, then, we select, frame and compose through and with the viewfinder. In fact the SLR method is not different from using a view camera, where we throw a black cloth over our head and study the exact image on the ground glass. This is a static process: we detach from the world at large and selectively focus on one element that interests us.
With the Leica M camera we have a completely different way of looking at the world. Generally when using an M camera we are using a mental technique that some writers have called the stream of consciousness approach. We are in the centre of the scene we are attracted to, and our eyes/senses absorb everything. Then in a split second we are triggered emotionally by an event or a juxtaposition of objects (the decisive moment), and we raise our M to the eye and frame with the finder. While doing so, still fully immersed, we have mentally captured the whole scene. The framing is an act of selection, not of composition or of studying the subject, and so the fact that some parts of the scene are obscured by the lens is of no importance, as we already know what the full picture will be. We can mentally fill in the blocked out parts, because we are so immersed in the scene that we can see the whole scene. As soon as we use the framelines in the M as a precise selection mechanism for the scene to be framed and composed, we are using the M as a version of the SLR way of looking at a scene.
The M frame lines are a tool to slice spatially through the stream of consciousness dimension of time. They are not a substitute for SLR type viewing and selecting of topics. We need to abandon the SLR style of viewing when using the M. Then the technical shortcomings of the M (frame lines not exact, obscured by lenses etc.) can be disregarded. The M style of photography is not a better type of SLR photography (like a better mouse trap), but a distinct type of photography, more generic than comparable to the SLR. If you grasp this meaning, then M photography will become a new experience.
On the true difference between the R and M systems.
To start with a short history: in the '30s the system camera was born, with a large array of interchangeable lenses and all kinds of medical, scientific and copying accessories. The Leica and Contax systems expanded from the rangefinder body and had trouble providing accurate viewing and framing. They solved it with the Visoflex. The Exakta Varex started with a mirror box and reflex viewing, but had a problem with fast and dynamic photography. Basically we see here already the tension between the two base systems, one tuned for a dynamic style of photography, the other tuned for static photography. In the fifties the Japanese cameras, like the Pentax and specifically the Nikon F, tried to bridge both concepts and provided tools that were ergonomically well designed and had improved reflex systems with clear viewing, and so for a long period they became the universal photographic tool. Based on 35mm film, they had the added capabilities of motordrives and the quick shooting of a full 36 shots in a few seconds. This was their approach to dynamic photography. The big advantage of the SLR species was the ease of lens change combined with the correct view in the finder. And it could include zoom lenses too. The Hasselblad on the other hand became the preferred tool of the studio photographer and the art and nature photographer who needed careful and dedicated composing and demanded excellent print quality.
But a system of lenses was a burden and the limitations of the rollfilm (12 shots, dubious film flatness) restricted the usefulness. The Zeiss Contarex tried to combine the quality aspirations of medium format with the ease of use and expandability of the 35mm SLR. Zeiss engineers assumed that the 35mm photography had matured to a stage where careful composition, accurate framing to the edge of the image, exploiting as much of the small area of the 35mm negative as possible and excellent optical quality would be seen as advancing the art of 35mm photography.
The market was not ready for this philosophy and Zeiss stopped production and with the Contax RTS joined the mainstream of thinking. The next stage is the incorporation of the AF module, and now we can even take pictures without looking through the finder at all.
But the basic tension between a dynamic and mobile system and a static, tripod bound system did not disappear. The reflex system is indeed at its best when doing macro photography or studio work, or when using lenses with a very long focal length or very strange perspectives, like the 15mm or the PC control lenses. In short, everywhere the accurate match between what you see and what will be captured on film is needed.
The current R-system: if we study the features of the R8, we see that the incorporation of the flash automation, the several exposure systems, the range of high quality lenses, and the provision of extenders and Elpro lenses all indicate that the R is designed for the static type of photography, where the small format can be used to its advantage: relatively compact bodies and lenses, optical quality to diffraction limits, and zoom lenses without a loss in image quality. The studio flash option in the R8 is often overlooked, but to me this item is very important, as it indicates the direction of photography for which the R8 designers created the system (including the lenses). The superb 2/180 is not usable without a strong tripod, as is the zoom 70-180 or the 4/280, let alone the 2.8/400. Even the 2.8/100 macro will be used on a tripod to get the best imagery. And indeed, when you see the R8 in this perspective, the true value of the viewing system can be appreciated. The viewing screen isolates the photographer from his subject and the subject is seen frozen in the small confined space of the groundglass, without relation to the surroundings. You see the final print or slide as you want it or as it will be captured on film. And if you use this screen as intended, you compose, arrange and create the picture.
You need time and dedication to do this, but that is the choice of subject and your method of interpreting. The R-system then is Leica's answer to the need of the medium format photographer who wants to use the advantages of the 35mm format without compromising the ultimate print quality.
The current M-system: the rangefinder could never compete with the reflex screen in framing accuracy and in the possibility of careful composing and seeing exactly what you will capture on film, whatever the lens or accessory. But its small body and compact lenses (compromised in optical quality compared to the best R-lenses because of volume considerations) allow a different style of photography: to be involved in the scene and have your sensory system wide open to freeze a moment in time. You sense this from your emotion or relation to the subject or scene. You cannot compose carefully or create the picture: you have to wait and hunt and then act in a swift movement of fleeting aiming and shooting. With a rangefinder you aim and shoot in an instant. If you take the M and R at face value you are missing the basic difference.
Both are systems, both have a range of lenses that overlap in focal lengths, both have the same type of facilities (TTL, 35mm film etc). Both can be used for a wide range of photographic tasks: reportage, landscape, portraits, you name it. So if you look at the systems from this viewpoint, you become confused as both systems seem to compete in the same type of photography. If you follow my approach to distinguish between static and dynamic photography and composition versus involvement, you have two dimensions along which you can make a satisfactory selection.
Optical topics (1)
90% of the pictures I took have been made with the Tri-Elmar and Kodachrome 64 (amateur version). I am still convinced that the Tri-Elmar is a most underrated lens. Its optical qualities are outstanding and its ease of use is exemplary.
Most people are put off by the allegedly low maximum aperture, but here the stubborn Leica myth that you have to take pictures at apertures of at least f/2 to be considered a true Leica photo is interfering. Leica myths still abound, as the latest Viewfinder issue demonstrates. And it is true that talking esoterically about Leica is more fun than extracting hard-won facts from a universe that is not well understood. Optics is a most difficult area, be it discussing lens facts or myths, interpreting MTF graphs or even doing one's own tests. One of the books I have been reading was written by a professor in theoretical physics and had as its theme a layman's introduction to the nature of light. Supposed to be a simple book, it started on page 1 with an avalanche of formulae of double and treble integrals and differentiations and continued on such a high level that I needed days of reflection to understand what is going on. Luckier is the farmer who just sits on his tractor and wonders where to get the water for his crop. After 50 books on optics I am amazed that you can still learn new facts every time, and that it is indeed very rash to assume you know your stuff after digesting a few articles.
This professor tells the reader that all image formation is a diffraction pattern and can be condensed into one arcane equation. He insists that even in a lens that is limited by its geometrical aberrations, the diffraction effects distort the image severely. This is not a new fact. Abbe of Zeiss fame knew it in the late 19th century when he noted that physically small lenses did deform the higher spatial frequencies more than did large lenses. The basics? The image of an object is this diffraction pattern and by Fourier analysis the original object can be reconstructed from the image/diffraction pattern. BUT: in a small lens only a part of the full pattern can be captured and so the resulting image is only a fraction of the original. The reason that R-lenses can be optically better than the M-versions is now clear. An R-lens has the ability to capture a bigger portion of the diffraction pattern and so has a more faithful representation of the object.
Why are multi element optics inherently better? The basic idea of a lens is the bending of the light rays coming from a distant object. An object at 10 meters distance sends rays that are close to parallel to the optical axis, and these need to be bent quite sharply to focus on a plane at a mere 5 cm from the lens. It is well-known that rays that pass straight through a lens are aberration free. So if we can arrange a system of lens elements such that any element produces only a small deflection of the ray, we have small aberrations. If the total deflection of the ray has to be done by 5 elements, every step is a substantial deflection, and introduces aberrations. A lens for microchip production has 30 elements and every step is very small, resulting in an aberration-free system. But the alignment of 30 elements and the tolerances are extremely costly. So a 5 element system may be the better option. Again I am very impressed by the Leica designers who get such performance from M lenses, which have few elements and are physically small.
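A crude scaling sketch in Python may make this plausible (my illustration, assuming that the aberration contributed by one thin element grows roughly with the cube of the bending it performs; the real accounting is far subtler):

def relative_aberration(total_bend, k):
    # Sum of per-element contributions ~ (bend per element)^3 over k elements
    return k * (total_bend / k) ** 3

for k in (1, 5, 17, 30):
    print(k, relative_aberration(1.0, k))
# 1: 1.0, 5: 0.04, 17: ~0.0035, 30: ~0.0011
# Splitting the same total bending over k elements shrinks the sum as 1/k^2.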
All theory is against such a design. I do think that there is generally a lack of knowledge to really appreciate what it is to design and manufacture a high class lens.
On production cost.
Studies have shown that the reduction of the overall tolerances from, let us say, 0.03mm to 0.01mm trebles the cost of producing the lens, even if it may be difficult to see a much improved image. Here we see the Leica problem: users expect the best quality, which is equivalent to saying that production tolerances must be very small, which in turn boosts the costs. BUT many a Leica user will never see, or be able to get to the level of expertise to appreciate, the image quality that is possible. A 1000 dollar Cosina/Voigtlander lens may bring the same results as a 3000 dollar Leica lens, where the Leica lens is manufactured to three times tighter tolerance standards; but without command of the imaging chain or a suitable subject where the differences do show, this theoretical advantage may be lost.
My whole point is that to appreciate and exploit Leica imagery we should move beyond the obvious or traditional topics (is the Summicron DR the best 50mm lens) and try to master our subject of the whole imaging chain and choose carefully the several elements. For me it is of much more importance to find the optimum film (Provia 100F or Delta100) or exposure and to study subjects that do justice to the Leica lens philosophy.
But as long as the Leica scene is dominated by spin doctors who draw smoke screens over the really important topics, there is hardly a chance that we will ever move forward in the evolution of the photographic image. No wonder that digital image capture attracts such a large audience. It is an effortless undertaking, gives fast and pleasurable results, and it is image oriented where the classical Leica discussion is myth oriented. No wonder too that analogue photography is on the defensive. If this type of photography is killed in a few years, we know whom to blame. Chasseurs d'Images notes that in June 2001 60% of the sales of photographic apparatus was digital. Here we see another instance of careless interpretation. What is being compared statistically? Value or number of units? What is the comparison base: SLR sales or all sales of photography, including single use cameras? It is easy to get a false impression here. But numbers are rarely used to illuminate facts; rather, to manipulate opinion.
The question is then what level of quality we can appreciate or see in normal circumstances. While an MTF graph may give valuable information to the optical designer, in the hands of an untrained user the interpretation of the curves may be hopelessly inadequate and give rise to conclusions that are wholly off track. And the relation between an MTF graph and the resulting picture may not be obvious or clearly demonstrable. There is a fragile relationship between visual images and mathematical computations such as an MTF graph. Even when done on a bench there are many variables to look at. Why are the Photodo results different from the Leica results? Even a different width of the slit which is used to measure the edge gradient has a significant influence on the data that are presented. It is remarkable to see how a professor of physics with 40 years of experience in the lab makes very cautious conclusions compared to the easygoing interpretations of the non-expert.
Read the latest Photo Reponses, where you can see some test figures that try to evaluate Technical Pan against APX25 and the Copex film (AKA Gigabit). Only one picture is shown, no data are given, and the fundamental question whether this picture (read: lab setting) represents a valid test for this comparison is not discussed at all. But it gives the impression of a comparison and thus it will satisfy most readers, while all important facts stay under cover.
Optics (2), image clarity and the digital scene
Jay wrote: I have always read that the size of the front element is a direct consequence of the maximum aperture. Then from the above quotation, would it not follow logically that the fastest lenses in a given focal length should be optically superior to the slower ones (in the same generation of course)?
Well, in theory this is partly true and it follows from the diameter of the Airy disk, which is in fact related to the aperture. The smallest diameter can be obtained with the largest aperture. But any large aperture lens is so loaded with geometrical aberrations, that the size of the Airy disk is often less than a tenth of the size of the unsharpness blur that we get when using a wide aperture lens. To be precise: for a f/1 lens the calculated Airy disk (smallest diameter of a point) is about 1 micron. But for the Noctilux f/1-50mm lens the actual spot size is in the order of 20 - 30 micron. The professor in his book was not referring to wide apertures but to wide angles including wide apertures, as in this case the biggest part of the diffraction pattern can be captured.
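The figures are easy to reproduce with the standard Airy formula, d = 2.44 * lambda * N (my sketch; the wavelength of 550 nm is an assumption):

WAVELENGTH_MM = 550e-6  # 550 nm, mid-spectrum green, in millimetres

def airy_disk_diameter_mm(f_number):
    # Diameter of the Airy disk (out to the first dark ring) in mm
    return 2.44 * WAVELENGTH_MM * f_number

d = airy_disk_diameter_mm(1.0)
print(d * 1000)         # ~1.3 micron: the 'about 1 micron' quoted above
print(25 / (d * 1000))  # ~19: the 20-30 micron Noctilux spot is dominated
                        # by geometrical aberrations, not by diffraction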
The size of the front element is a good but not perfect indication of the actual entrance pupil, which does govern the true light gathering capability of the lens. As an example: the Elmarit-M 2.8/28 and the Summicron-M 2/28 have the same physical front lens diameter, but the latter has a much larger entrance pupil. The entrance pupil is important, not the physical diameter. But the EP is a much more difficult topic to understand and is avoided by most. Here again we see the limitations of what we are accustomed to read in most magazines, books or internet discussion groups.
Speaking of conventional themes. There is an overwhelmingly strong opinion that testing of lenses is done by taking pictures of meaningless testcharts and/or gathering of numbers (MTF values) that have no relevance to real photography. And the true appreciation of a lens has to be done by interpretation of the image content and way of representation of the subject matter. This view is as wrong as it is popular.
It relates to the old dichotomy that engineering is dull, number-bound, cool and anti-human, and that art is exciting, full of feeling and immersed in the human condition. And that both worlds are opposed to each other. In fact there does not exist such a naive dichotomy. Mathematics has been called by many a perceptive person art or even poetry. And much art is as mechanical as it is devoid of true life. We should try to abandon many a myth to get to the essence of what we are doing. In a recent issue of The Economist a particular poem was described as exhibiting the precision and clarity of a B&W photograph. Here we capture the essence: precision and clarity are attributes of a forceful statement, and it is photography that can accomplish this. But precision and clarity are also characteristics that are close to the faithful reproduction of a scene, supported by a lens that is as clear and precise as possible, in fact a Leica lens. Testing a lens for these attributes is done by looking at MTF graphs and residual aberrations. So in my view a thorough scientific test of a lens (a method ridiculed by many) will reveal properties of the lens that are instrumental in its role as a poetic imaging tool.
There is no fight between concepts here: to add a thing to something, that something should lack the property that is added, otherwise it makes no sense. Only persons who love to simplify and distort would claim that a scientific test is anathema to an artistic interpretation: the latter can benefit significantly by acknowledging the value of the former. I for one fail to see why a sincere test and analysis of a test pattern should be valued less than a cursory glance at a print of a landscape, only because the landscape suggests reality and a test pattern suggests artificiality.
The very strong point of a B&W photograph is its capturing of the essence and extent of the variety of the dimensions of reality. A highly corrected lens will do the job with better results than a lesser lens. My description of older Leica lenses as reproducing the scene with less transparency (muddiness) and precision has been questioned as inaccurate and even ridiculous: still I stand by this description, as it is a true reflection of the state of lens design of yore, even if someone cannot appreciate the significance of this analysis.
In my view then, poetry and clarity of vision and accurate reproduction are closely related, and so is the scientific analysis of a lens and its capability to record reality as accurately as possible. And as meaningfully as possible. A not so well-known book by Bryan, "Cameras in the Quest for Meaning", tries to explain this symbiosis. It is a pity that many a Leica user seems to be caught in the fallacy of either a scientific approach or an artistic interpretation. The meaningful mix of both is what constitutes the essence of Leica photography.
There is a strong indication that digital photography is gaining at the expense of analogue or silver-based photography. No denying here: the convenience and speed of the digital method fit into today's lifestyle, even if it is expensive and lacks the ultimate quality of the analogue method. Image quality is the victim of the adage that the good enough drives out the best. In fact the industry is killing analogue photography as fast as it can. In a recent interview the Kodak manager in France said that the profits of analogue are used fully to subsidize the loss-making efforts of Kodak in the digital realm, and that Kodak has no intention to invest any money in analogue products anymore. But as we read in Chasseurs d'Images: the industry killed analogue photography as a hobby or a lifestyle when it first introduced the camcorder in the seventies and now the single use camera, which has done more to kill the hobby of photography than the digital camera or the computer has.
Analogue photography is more related to poetry than to anything else, and we all know how many of us read poems. If we want to pursue the fascination of analogue photography we have to act, and act quickly. We may quiet our conscience as Leica users by assuming that Leica will someday introduce a digital M or R, which is technically and commercially possible. But what do we lose then?
The capacity and capability for a precise and clear expression of our inner feelings is the essence of B&W Leica photography.
Remarkably, this capability is engineered into the image qualities of today's Leica lenses, but we prefer to stay in the past and worship mythical issues and state stereotypes as fundamental truths, afraid as we are to face the future.
Konica Hexar and register
I am currently using the Hexar RF for several reasons: to test the new Hexanon 2/35mm, to check on Hexar body - Leica lens compatibility, and to get a feeling for the Hexar system. To start with the body: the specs are well known, so I can jump to the more philosophical topics. The body appears to be of very high engineering quality, has a very solid feel and is really easy to use. The electronic shutter-motordrive unit is a sealed box and cannot be separated; it is the same as used in the Contax G/2 series. As an aside: if Leica were to use this unit, the manual advance lever would have to go. The viewfinder is slightly lower in contrast than the Leica's, and the Hexar rangefinder patch has a distinct yellow tint, which lowers contrast and makes it more difficult to focus on objects at 10 meters or more distance. While the body has almost identical dimensions to the Leica, the look and feel are distinctly different. The rounded body contours of the Leica and the clean top cover make it look more elegant, compared to the squarish and somewhat boxy character of the Hexar. In use the Hexar is quite simple and its controls are well laid out and generally useful to the photographer. The exposure compensation feature is nice, but with the Leica a simple half click stop of the aperture ring will do the job as fast and easily.
The biggest drawback of the Hexar is the small time delay between pressing the shutter button and the actual firing of the shutter. This delay, and the instant of waiting and thus insecurity, is most annoying, and you cannot use the Leica technique of prefocusing and firing when the object is sharp in the finder patch.
When you close your eyes and pick up the Leica and the Hexar several times, the difference in feeling and haptics emerges. When you hold the Leica, your thumb slides behind the advance lever and your finger lays on the shutter release button, which is sharp as a trigger. This simple and intuitive act signifies to the brain a state of alert attention and you fall into the mood of a hunter or an active sportsperson anticipating the moves of the other players.
When holding the Hexar, both hands hold the body, and when your finger touches the release button there is no trigger effect. The finger just rests there and you do not get any feedback from the body. So you switch almost automatically into a more passive state of mind and allow the camera to work for you. That is easy to do, as the automatic functions of the camera (exposure, film transport, motorwinder) are so well executed that you start to rely on them and even transfer control to them. In fact you are starting to become an operator of the camera, adjusting the wheels, and not the driver who forces the camera to act as he wants it to.
The transfer of controls to the camera, and the mood of becoming more passive in the photographic act, is in my view the fine distinction between the Hexar and the Leica. Photographing the same objects with a Leica and a Hexar in quick succession underscores this difference: with the Leica the work is harder (more to think and act), but your act blends in with the subject and you are part of it. With the Hexar your work is easier, but the remoteness of the controls acts as a filter between the object and yourself. Let me say that you become a bit lazier when using the Hexar, and that shows in the pictures.
Technically there is nothing wrong with the Hexar pictures, well exposed, sharply focused etc. The Hexar then is for photographers who avoid technicalities and want good imagery with a minimum of technical and manual control and who feel that the visual involvement with the object has to be separated, even detached from the tool they use. In this sense the Hexar is close to the Contax G. The family resemblance goes a step farther. My test of the Hexanon 2/35 indicates that Hexanon imagery is in character very close to the Zeiss philosphy of correction. The Hexanon is an 8 element lens (with the now familiar negatively curved front lens, pioneered by Leica and quickly adopted by Konica and Voigtlander). The Summicron has 7 elements, but has one aspherical surface, and one such a surface equals two spherical surfaces). At full aperture the lens exhibits a medium contrast (less than the leica lens), has visible flare in the bright areas and small detail rendition. The performance on axis till an image height of 6mm (image circle of 12 mm diameter) is excellent with a very good definition of very small detail. In the outer zones the image quality drops significantly and now we see small detail with quite blurred edges. Astigmatism is very well controlled, but there is some curvature of field. The lateral chromatic error is quite large, and may add in the bokeh preservation. The corners are very weak. At 2.8 the flare is gone and the image crispens a bit, the central disk of excellent quality now extends to a image height of 8 mm, with the corners still bad and the outer zones hardly improving. At 4 we find an overall improvement, but the chromatic error still softens the edges of very small and tiny detail. At this aperture the quality is comparable to the Leica, that shows better reduction of the chromatic error and thus a crisper and cleaner image. If resolution figures were relevant, I had to note that the
Konica has the edge here. But these are benchmark figures (large scale projection test) and in actual photography the small advantage would be lost. This sideline indicates that differences in resolution of 10 line pairs/mm are not indicative of superior image quality. Optimum aperture is 8, after which contrast and resolution drop due to diffraction effects. Close-up performance at 1 meter is identical to that at the tested distance, which is 100 times the focal length.
The inevitable question of course is how this Hexar lens compares to the last non-aspherical Summicron. In my view the Hexanon is the better lens overall. But you cannot use the Hexanon lens on a Leica body: a collimator check showed that the Hexanon lens has a focus plane that differs from the Leica lens by 0.09mm. Is that important? The discussion on the Lug about the Hexar body/Leica lens compatibility dismissed small differences in the area of less than half a mm as irrelevant, because some users could not detect any difference when comparing different lens/body combinations. The truth is this: I did a test on the bench and focussed carefully on maximum image quality. Then I used a micrometer to defocus by 0.03mm (which is quite small). In the image the loss of contrast was very evident, but resolution, at least at the lower frequencies (around 40 lp/mm), did not suffer. What did suffer was the edge sharpness. If you were to do your own testing and look at the negatives with an 8-times magnifier, you would not see any drop in resolution (it is beyond the detection capability of the eye at that magnification). But at a larger magnification you begin to see it quite clearly.
Now the continuing saga of the Hexar/Leica lens compatibility. First a few remarks. You cannot measure the actual distance from bayonet flange to pressure plate by using the pressure plate itself as a reference. The slightest and unnoted pressure from the instrument itself on the pressure plate will give errors, and the pressure plate is hardly ever a true plane itself, so additional errors creep in. The only way to do it is to remove the pressure plate and insert a device that is calibrated to sit at the same distance where the pressure plate ideally has to be. To start from here: the distance from the bayonet flange to the pressure plate, or more accurately to the top of the outer film guide rails (pressure plate rails), is in the Leica M 27.95mm. This distance is also (but wrongly) referred to as the register. This measurement is used to check whether the guide rails and the bayonet flange are parallel to each other and at the correct distance. The second important measure is the distance from the film rails (the innermost film guide rails) to the bayonet flange. In the Leica this is 27.75mm. The film gate then has a depth of 0.2mm. In every Leica book I know of there is a reference to a film plane/flange distance of 27.80mm. What is this? Rogliatti, Roger Hicks, Collectors Checklist, Hasbrouck, you name them, all refer to a flange to film plane distance or flange to film register. Now in German the word is "Auflagemass". This can be correctly translated as "flange focal length" or "flange focal distance". But this measurement is done for the lens itself on a collimator, where the lens is adjusted such that the distance from the lens bayonet flange to the true optical focal plane (focal point) is indeed exactly 27.80mm. First lesson: NEVER believe what is written about Leica in books that are focussed on history or collecting: these persons are not engineers. With every other book, check, double check, triple check to make sure the person knows what he is talking about.
To sum up: we have an optical measurement done on the lens to adjust the flange focal distance, and that distance should be 27.80mm. We have a mechanical measurement on the Leica body, which is the distance from bayonet flange to the pressure plate rails, which is 27.95mm. The film gate is 0.2mm deep. If we now use a film with a total thickness (emulsion plus base) of 0.13mm (APX25 as example), the film will not fill the film gate. There is some play and therefore the film will curl and curve inwardly (away from the lens). By using a flange focal distance of 27.80mm, Leica ensures that the film, when bowed a little, will still be correctly aligned in relation to the focal plane. It is intriguing to note that thick colour negative films of about 0.27mm will fill the film gate completely and the pressure plate will press the film to a plane position, instead of the curved position of thin film emulsions. Theoretically a thick film would have better flatness than a thin film. Of course more research is needed, but these investigations do show that the information in the public domain is at best scanty and at worst misleading.
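As a small illustration, here is the film-gate arithmetic in a minimal sketch of my own, using only the dimensions quoted above:

```python
# A minimal sketch of the film-gate arithmetic above, using only the
# dimensions quoted in the text (all values in mm).

GATE_DEPTH_MM = 27.95 - 27.75   # outer rails minus inner rails = 0.20 mm

def film_play_mm(film_thickness_mm: float) -> float:
    """Clearance left for the film to curl inside the gate."""
    return GATE_DEPTH_MM - film_thickness_mm

print(round(film_play_mm(0.13), 2))  # 0.07: a thin film like APX25 has room to bow
print(round(film_play_mm(0.27), 2))  # -0.07: a thick colour neg film overfills the
                                     # gate and is pressed flat by the pressure plate
```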
Now for the Konica Hexar. Here I have only one official fact: the bayonet flange to pressure plate rails distance of 28.00mm. But I do not have official info about the flange distance to the film rails (or the film gate depth), nor about the lens flange focal length. My own measurements on one Hexar body and lens showed that the film gate had a depth of 0.24mm and the lens a flange focal distance of 27.71mm. On the basis of these measurements the flange to film rail distance is 27.76mm. These results are however not reliable enough to draw firm conclusions. What I do know from discussions with Konica people is that their tolerances are wider than with Leica and are chosen such that the best fit of Hexar body to Hexanon lenses is assured. The many inconclusive reports about problems, or the lack of problems, with fitting a Leica lens on a Hexar body are partly to be explained by these tolerances and partly by the unreliability of the reports themselves. The Konica people at the factory told me that the Hexar is designed for use with the Hexanon lenses and that all dimensions inside the Hexar are based on that fact. If a Hexar user fits a Leica lens and he has problems, then it is caused by these different dimensions and/or a chain of tolerances that adds up unfavorably. If he has no problems, then he is plain lucky: the tolerances happen to be close to what is expected for Leica bodies and/or his demands are below the visibility threshold for the mismatch to show up. This is not the end of the story. People expect quick solutions and fast answers and move on to the next topic. That is living in the fast and superficial lane of user group discussions. Serious research takes time and experience and dedication: scarce resources in a hasty world.
Image evaluation
Some of you asked for more information about the measurement of performance or the evaluation of image quality. Let us first start with some basics that are very important. The ideal lens will form a perfect point image from a point object. A true point object would be a distant star, as here the diameter of the star is extremely small compared to the distance between the lens and the object. We all know that even an ideal lens will form an Airy disk from any point.
This is a patch of light with a central core of high illumination (where most of the rays are focused) and a series of concentric alternating bands of low (dark) and high (white) intensity, which quickly fade. This is the diffraction pattern of the point source. The ideal of a lens designer is to create as much blackness as possible around the central core: in this case we have maximum contrast and the point will be recorded clearly. The Airy disk is a three-dimensional object with length, width and height. Length and width represent the size or extent of the disk, and the height represents the intensity (amount) of the energy concentrated in the core and the bands surrounding it. We have all seen the shape of this figure: it resembles the Gaussian distribution from statistics, a steep hill with a sharp peak surrounded by a much lower wavelike pattern. If two of these disks are separated by the radius of the first dark ring in the pattern, the intensity midway between the two peaks drops to 0.74 of the maximum intensity. We then say that the two points are resolvable, and there are several equations to calculate the theoretical maximum resolution. This value however has no relevance to the actual optical performance of a system.
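For reference, a small sketch of my own of this separation criterion (the Rayleigh criterion), using the standard textbook formulas rather than anything from a specific source:

```python
# A minimal sketch (standard diffraction formulas, not from the text above):
# the Rayleigh criterion says two points are just resolvable when separated
# by the radius of the first dark ring of the Airy pattern.

def airy_radius_mm(wavelength_mm: float, f_number: float) -> float:
    """Radius of the first dark ring of the Airy disk: r = 1.22 * lambda * N."""
    return 1.22 * wavelength_mm * f_number

def diffraction_limit_lp_mm(wavelength_mm: float, f_number: float) -> float:
    """Theoretical maximum resolution in line pairs/mm: 1 / r."""
    return 1.0 / airy_radius_mm(wavelength_mm, f_number)

# Green light (550 nm = 0.00055 mm) at f/2:
print(airy_radius_mm(0.00055, 2.0))           # ~0.0013 mm (1.3 micron)
print(diffraction_limit_lp_mm(0.00055, 2.0))  # ~745 lp/mm: as noted above, a
                                              # figure without practical relevance
```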
In actual optical design there are several methods for analysing the optical quality. One is ray tracing itself: every ray is traced through every lens element and the path indicates the quality of the lens. Another one is the spot diagram: here we look at the pattern formed by the lens from a bundle of rays; spot size and shape give valuable information about the aberrations still present. A still better method, but very difficult to understand, is the analysis of the optical path difference, where the spatial and temporal components of the ray tracing are analysed. A ray not only has a location on the image plane, but also a speed, and if rays, which are wavelike, are out of phase, they will not arrive at the exact same moment in time on the film plane. This difference is the phase transfer function, one part of the optical transfer function. The other part of the OTF is the well known MTF. Also used is the encircled energy, which is a measure of how much of the total energy of a point is concentrated in a spot of a certain diameter. This is obviously related to the idea that you need to focus all rays, and thus all energy, in one small spot.
The MTF is probably the most comprehensive of all methods, while the above methods are more specialized and give partial info about the state of the system. The test target is a periodic object that varies sinusoidally in its intensity. It is a waveform as we often see when discussing radio waves. The wave has peaks and valleys, and if we have a perfect lens, the wave is represented exactly on the film. If not, the peaks and valleys will be changed in form and height. The maximum distance between a peak and a valley is called the contrast of the signal, and when this contrast is changed (lowered) we express it as the ratio of the modulation of the image versus the modulation of the target. The target of the MTF method is a very small slit of light that is captured by the lens.
The distribution of the light energy over the slit has the same characteristics as the pattern seen in the Airy disk. That is why it makes no difference whether we study a point or a line. Fourier transforms help us to go from one to the other.
The slit can be oriented in horizontal and vertical directions to represent the aberrations in tangential and sagittal directions. As the reproduction of the edge of the slit and its energy distribution is influenced by all aberrations left in the system, and the sagittal and tangential directions represent the extreme positions of the angles of the rays going through the lens, we have in one diagram all the info about the contrast drop of the spatial patterns being recorded over the angle of view of the lens. The MTF diagram is easy to look at, but extremely difficult to interpret correctly. The recent example in the Viewfinder magazine from the LHSA is a case in point. Here, by casual inspection of only one aspect of the diagrams (the highest contrast value at several locations and apertures), the conclusion is proposed that the older 7-element Summicron has better performance than the current Summicron. In fact, to get a good view of the performance of a lens based on MTF diagrams, we need to do this: look at the overall shapes of the curves at all frequencies at once, and at the changes of the curve pattern at all apertures. That is not easy: a high contrast value on the sagittal curve may be nullified completely by a low contrast value on the tangential line, and the dips and meanderings of these curves reveal coma and chromatic aberrations when you compare them wide open and stopped down. Again, a simple high contrast value in the center of the image may be degraded in real life when chromatic error is present, which can also be detected in the curve shapes, but only when you are trained in reading these diagrams.
It is understandable that photographers or users of lenses want a simple figure for comparing lenses, like the old resolution figures. It is comforting to know that a lens with a resolution of 60 lines is better than one with 50 lines. Much has been written in the attempt to get rid of this criterion. We are not advancing, however, if we now change to a simple contrast value and state that a lens with a contrast transfer of 89% (at the 20 lines frequency and in the center) is better than one with 80%. The evaluation of optical performance is not that simple: I wish it were. All methods mentioned so far are not available to the normal user, with the exception of published MTF diagrams, which are not easy to interpret and are now as much mistreated as the previous method of resolution determination. Still, the last one is not obsolete when used with some common sense. The classical USAF target (1951) can bring valuable info when you have a careful setup and a good microscope. A very simple analogue is the white picket fence with alternating white and dark lines. Photographed at a very large distance it gives a single spatial frequency pattern that can be enlarged and analysed (edges, contrast, sharpness etc).
The Siemens star (radial pattern) is also a good method for testing the performance of a lens. Keep in mind that you need a whole bunch of these charts to cover the whole image area and, most important, they should be scrupulously aligned parallel to the film plane. That is obviously a big problem, and the crude "testing" reported on the Lug recently, using the even simpler target of a page of paper with small print, indicates that testing done in such a simple way is very dangerous and in fact highly uninformative, as the only thing you can test here is the competence of the person who conducts the test.
Generally then I would say that testing in whatever method or format should be left to the experts with the right equipment. The photographer should restrict him/herself to taking pictures and evaluating the results, related to the level of demands that are relevant for the type of photography s/he engages in.
Here I must insert a warning. I have seen many pictures by Leica photographers that were presented to me with the comment that here we have fine examples of Leica quality, or that here we have pinsharp, accurate representations of reality. Quite often, and I am really saddened to say this, these pictures are way below what we can extract from our equipment. I would remark that some Leica photographers live in a kind of pseudo-reality, where the wish to relate the quality of the imagery to the reputedly high quality of the equipment may blur the senses into seeing what is not there. This is a delicate topic, and I am well aware of it: I would not dare to discuss it on any public forum like the Lug or Leg, as I would be shot to death in an instant. Still, my goal in testing Leica lenses and the factual degradation of the image through the imaging chain (film, exposure, etc) is to define a true anchor point for optimum image quality. You are free to depart from that, or not to be interested in striving for this level of optical performance. But I do find it difficult to accept that some photographers demand the best equipment, discuss possible image defects (like dust specks in a lens, or a filter in front of the lens, or the accuracy of the rangefinder within a hundredth of a millimeter) at extreme length, and then create pictures that could be improved upon by anyone with a high class point-and-shoot camera.
Of course: technical image quality is not the only characteristic of the Leica camera or any other topclass camera system. It has been argued correctly that a 4x5 inch negative is always superior to whatever 35mm negative. And that in many instances a print from a Leica negative is indistinguishable from that of other systems. True again. Still we buy Leica equipment for the real advantages in performance or differences in fingerprint. To try to achieve these advantages and differences is a noble goal, in my view, and needs an open-minded discussion where topics can be discussed and facts analysed with a modicum of rationality. Objective testing by proven methods and diligent interpretation of the results is a minor but important contribution to that goal.
The general misinterpretation of MTF data to 'prove' a point is an example with a wider scope. As long as we do not want to delve below the surface of common sense and cherished myths, we will never see the truth and start to enjoy (Leica) photography as a fine (mechanical) expression of our emotions and intentions. But to extract a high level of image quality from our equipment takes time, experience, dedication and technical expertise or craftsmanship, and all of these requirements may be in scarce supply.
Those mysterious digits on M lenses
There is a lot of discussion about the meaning of the double digit figures on the mounts of Leica M lenses. But before explaining the facts and ideas behind them, I have to make an observation that may upset some of you. The sources of info about Leica are large and varied and comprise published books, articles, a vast amount of discussions from presumably knowledgeable but anonymous sources, and an even larger amount of free-floating texts on the internet (newsgroups, websites) that are very uneven in quality and authority.
In scientific research the situation is quite simple. Facts and theories are original if the writer/author is the first to present the facts (experimental research) or to create a theory: a new interpretation of known facts. Frauds excluded (and there are many in the scientific field), any serious researcher will acknowledge his sources by referring to previous texts or to his original published research data. So anyone can trace the history of the facts or verify their origin and authority. In Leica lore this is not the case: in most cases you will find a report or a discussion or an explanation without any reference at all. And without being able to verify what is being stated, anything goes. A recent example is the analysis of the distance from the bayonet flange to the film plane, which may be 27.8mm or 27.95mm, depending on sources and interpretations or translations. The topic of the double digit figures on the M mount is in the same vein. Explanations and figures are numerous, but is there anyone who will acknowledge his source and so allow for identification of the original data or the individuals who wrote about this? In fact most discussions and explanations are based on a very few sources, Rogliatti being the foremost one. It would really help if the sources were mentioned, as it is clear that most discussions are a mix of previous reports and articles. By not mentioning the sources, one simply perpetuates the myth and evades the possibility of being wrong.
Focal length groups: the dimensions. It is well known, and this info can be found among many writers (for example Rogliatti: Leica and Leicaflex Lenses, 2nd edition), that the tolerances in the manufacturing process of lens elements (distances between elements, small differences in curvature of surfaces, different refractive indices per batch of glass melt etc) will generate some differences in the actual focal length of the lens. But we have first to establish the true calculated optical focal length of a lens. For the Summicron 2/50 (second generation) this is 52.02mm and for the Summarit 1.5/50 it is 52.16mm. In the past the production process was not as accurate as it is today and a wider range of measured focal lengths could be found. The older Elmar 3.5/50 as example has been recorded as ranging from 48.6 to 51.9mm in steps of roughly 0.3mm (info from the book "25 Years Leica Historica" and the magazine of the Leica Historical Society UK). The newer Elmar 2.8/50 had only three groups: 51.6, 51.9 and 52.2mm. The older Summicron has the same groups: 51.6, 51.9 and 52.2, a difference in distance of 0.3mm. The Summilux has these groups: 51.0 (indicated as 10); 51.14 (11); 51.3 (13); 51.45 (14); 51.6 (16); 51.75 (17); 51.9 (19); 52.05 (20) and 52.2 (22). The current Noctilux has only 50.00 (00); the 1.2 version has 51.75 (17) and 51.9 (19). The 75, 90 and 135 have even more different designations. Sources: 25 Years Leica Historica and my book, the Leica Lens Compendium. So one should be careful to differentiate between lenses: the same figures do not indicate the same differences or focal lengths.

Focal length groups: why? It seems to have escaped most observers that Leica R lenses do not have these numbers on the lens. Still we may assume that the same tolerances and manufacturing processes are being used. So why M and not R? The explanation is quite simple. The true focal length of a lens is a characteristic of the optical cell of the lens: every lens element has its own focal length (negative or positive) and the combination of the focal lengths of these individual elements determines the system focal length, the focal length of the lens. The variation in focal length differs per lens type. When assembling a lens, one can try to compensate these tolerances and use lens elements with plus and minus deviations to stay within the specified range of focal lengths. Having established an actual focal length per lens, one has to mount the lens, first in its proper focusing arrangement, and secondly match it to the rangefinder curve that is calculated and machined for a specific focal length. This is the essential point. A change in actual focal length has no influence on the length of the mount itself, but only on the steepness of the RF curve of the lens. That is why the R lenses do not have these numbers: there you focus on the groundglass. But with M lenses you focus by matching the alignment of the rangefinder patch with the extension of the lens, governed by the cam of the lens. The focal length groups then indicate the true focal length and the fact that the correct cam has been fitted to the mount to ensure correct focussing. The RF roller movement "assumes" a true focal length of 50mm. The engineering complexities of the M body and its lenses are fascinating, as are the solutions.
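The figures quoted above suggest a simple pattern behind the engraved digits. The following sketch is my own inference from those numbers, not official Leica documentation, and it only fits the 50mm groups listed here:

```python
import math

def group_digits(actual_focal_length_mm: float) -> str:
    """Engraved group = the excess over 50mm in tenths of a mm, truncated.
    Inferred from the 50mm groups quoted above (hypothetical helper); the
    75, 90 and 135mm lenses use different designations."""
    tenths = math.floor((actual_focal_length_mm - 50.0) * 10.0 + 1e-6)
    return f"{tenths:02d}"

# The Summilux groups quoted above:
for f in (51.0, 51.14, 51.3, 51.45, 51.6, 51.75, 51.9, 52.05, 52.2):
    print(f, "->", group_digits(f))   # 10, 11, 13, 14, 16, 17, 19, 20, 22
```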
Depth of Focus

Now on to another topic. We all know the idea of depth of field: the extent in three-dimensional space that is interpreted as acceptably sharp when recorded on film and enlarged/projected. In the film plane we have the same concept, depth of focus: the displacement of the true focus before we detect an unsharpness, also called defocus shift. This has particular relevance to the idea of film plane and focal plane, and to the amount of defocus caused by film curvature, inaccuracy of visual focusing, or focus errors within engineering tolerances. A simplified equation tells you that the depth of focus in micrometers is equal to the square of the f-number. So when using an f/4 lens we can defocus by 16 microns before any unsharpness blur will be detected. An f/2 lens has a defocus margin of 4 microns. All of this under critical demands. For normal photography the following equation can be used: depth of focus = 2*C*N, where C = circle of confusion and N = aperture. For a 2/50mm lens the depth of focus is then of the order of 1/30 * 2 = 1/15mm, that is about 0.07mm or 70 microns. There is also an equation that relates depth of focus to resolution.
Depth of focus = 4N/R, with R the resolution. If we need a resolution of 100 lines at f/2, we have a defocus limit of 4*2/100 = 0.08mm, or 80 microns. Contrary to popular opinion, the depth of focus INCREASES when we focus at closer distances. So all tests done to prove that the Hexar can be used with Leica lenses while using close focus settings actually mask any focus errors instead of exposing them.
If we do close focus photography and use f/2 but accept a resolution of 40 lines (a resolution most people would be very satisfied with), the depth of focus might be close to 0.5mm!!
These formulas can be found in Ray's "Applied Photographic Optics".
Depth of focus is defined as the difference in distance that the film plane or the focal plane may move axially without disturbing the sharpness impression. Lenses are habitually checked in the design stage for the tolerance in through-focus MTF values: how does the contrast suffer when the ideal plane of focus is shifted by a certain amount? Lenses can even be tuned to be rather insensitive to changes in depth of focus by optimizing the through-focus behavior. But in general: if we do not know anything about depth of focus, in addition to what we noted as elements of possible mismatch, the entire discussion and the "proofs" of (in)compatibility are vapourware and more misleading than enlightening.
Goldberg uses a crude rule of thumb to establish the depth of focus as the product of blur diameter and aperture. If the blur diameter is taken as 1/30 or 0.03mm and the aperture as f/2, we have a depth of focus of 0.06mm. So if our combination of tolerances were off by 0.06mm or 60 microns, a drop in contrast and image quality would result. As the drop in quality is more noticeable as a drop of contrast than as a drop of resolution (see my Hexanon 2/35mm test), it might go unnoticed by many who check the sharpness of the image by looking at a single negative, trying to check the plane of best sharpness visually.
A more sophisticated equation relates the depth of focus to object distance, and here we see the opposite of what we would assume: depth of focus increases at the shorter distances and so will cover any imaging defects caused by a possible mismatch.
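To keep the various rules of thumb apart, here is a small sketch of my own. The unit conventions are my reading of the equations above; note that the 2*C*N rule counts the tolerance to both sides of the ideal plane, which is why it returns twice the single-sided value computed above:

```python
# A sketch of the depth-of-focus rules of thumb quoted above (my reading of
# the units: C and the results in mm, R in lines per mm). These are rough
# rules, not exact optics, and they differ by small constant factors.

def dof_critical_mm(N: float) -> float:
    """Critical rule: depth of focus in micrometres equals N squared."""
    return N * N / 1000.0            # converted from micrometres to mm

def dof_general_mm(C_mm: float, N: float) -> float:
    """General rule: t = 2 * C * N (C = circle of confusion)."""
    return 2.0 * C_mm * N

def dof_from_resolution_mm(N: float, R_lines_mm: float) -> float:
    """Resolution rule: t = 4 * N / R."""
    return 4.0 * N / R_lines_mm

def goldberg_mm(C_mm: float, N: float) -> float:
    """Goldberg's crude rule: blur diameter times aperture."""
    return C_mm * N

N = 2.0
print(dof_critical_mm(N))               # 0.004 mm = 4 microns
print(dof_general_mm(1.0 / 30.0, N))    # ~0.13 mm (both sides of the plane)
print(dof_from_resolution_mm(N, 100))   # 0.08 mm = 80 microns
print(goldberg_mm(0.03, N))             # 0.06 mm = 60 microns
```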
RF mechanism, accuracy and Depth of Focus
The rangefinder mechanism, the attainable accuracy and the required precision of engineering and assembly make for fascinating reading and study. Let us first review the simple mechanism in the SLR to get an idea of what is involved.
Generally we have two separate actions when focusing the lens on the object: 1. setting the distance by physically moving the lens relative to the film plane; 2. checking the setting by visual inspection to ensure the movement gives accurate focus.
Both acts are indeed separate and do not need to be connected. When we use a fixed-focus box camera, we skip both and concentrate on framing. When we set the distance ring of the lens to some estimated hyperfocal distance, we do action 1, but not action 2. With the SLR mechanism both actions are combined in a direct way. That is: moving the lens and visually inspecting the sharpness are one act and use the same mechanism.
In the SLR the lens transmits the rays which are focused via a mirror to a groundglass. We assume correct focus, when the image on the groundglass is sharp, that is when contrast is highest (coarse screen) or details can be seen clearly (fine screen). This principle is identical to the one used in a technical camera. As we can always move the focusing ring of the lens backward or forward over a large distance, we can always find a point of sharp focus. We do not have to know the focal length of the lens or bother about deviations from the nominal or true focal length. Or even about the true distance setting of the lens. All we need to know is the simple fact: is the object in sharp focus? And this we can establish by looking at the focusing screen.
There are no mechanisms to transmit any mechanical information from lens to body, and none are needed. That is why you will not find focusing groups engraved on an R lens. The manufacturer will of course stay within the tolerances of the focal length, but for the focus mechanism it is irrelevant whether the actual focal length is 51.3mm or 49.9mm. Both lenses will give accurate focus when the projected image on the ground glass is found to be sharply focussed by visual inspection. As long as we can ascertain that the distance from the mirror to the groundglass (reflected ray, upward) and the distance from the mirror to the film plane (transmitted ray, horizontal) are the same, we will locate the focal plane image at the same location as the film plane.
Technically the Leica R body is partly assembled, including the chassis and the lens flange, and then at the end of the assembly line, the film guide rails are machined to the required specifications and automatically aligned parallel to the lens bayonet flange. The mirror box is machined and adjusted such that the two distances (mirror-ground glass and mirror-film plane) are within specifications.
Separately, every lens has its own distance from bayonet flange to focal plane, and this distance is the same irrespective of the actual focal length, as a 19mm lens should focus the rays to the same physical location as does a 600mm lens. In the R series this is given as 47mm (see Osterloh's books, where it is wrongly named "distance to film plane").
What are the required specifications? The film plane is not, contrary to what you will read in most books and hear in the public domain, a fixed location. Film support, remarks Goldberg in his famous book Camera Technology, is very complex. The sensitized surface must be located at the desired distance from the lens (our flange to focal plane definition) and lie in a plane coinciding with the image plane. Between exposures the film is transported and may not be scratched or subjected to mechanical pressure. Film surface location is the most difficult part and makes "camera manufacturing an art as well as a science" (Goldberg).
Film is guided through the camera body inside a film channel (invented by Zeiss and not used in the TM Leica bodies), comprising outer guide rails and inner rails. The film is held in place by the outer rails and the pressure plate that rests on the outer rails. The inner rails should hold the outer edges of the film in a flat position. BUT: the distance between both rails is 0.2mm. Exact distances vary as manufacturers have different tolerances. On average a film has a thickness of 0.13 to 0.18mm. Thus the film has a clearance of 0.02 to 0.07mm, and this is enough to jeopardize ideal film flatness. Films tend to bulge forward.
So the designer has some options when he has to locate the exact focal plane for his lens. Use the outer rails (lens flange to pressure plate distance) and you can be sure that you will miss the emulsion of the film, where the image should be located. Use the inner rails and the flatness of the film is a problem. So here we have the real problem of locating the image plane. Every manufacturer has its own ideas, and in the case of Leica they have decided that a distance of 27.8mm from the bayonet flange will locate the image plane inside the film gate dimensions and take care of the curving of the film itself. The well-known dimension of 27.8mm is often described as the flange to film plane distance. Correctly described, we would have to say: distance from flange to focal plane, which might be identical to the film plane under a certain set of assumptions.
Let us look at actual figures. In the Leica M series the distance from body flange to outer film rail is 27.95mm. The depth of the film channel is 0.2mm (I measured 0.21mm). So the area where the film might be located ranges from 27.95 to 27.75mm. With a lens flange to focal plane distance of 27.80mm the lens, when correctly adjusted, will focus onto the film emulsion layer. For the Konica, I measured a film channel depth of 0.24mm and a distance from body flange to outer film rail of 27.95mm. This last dimension is identical to the Leica figure, but outside the official Konica spec of 28.00 +/- 0.03mm. More measurements would be required, but this example indicates that the quality control criteria at Konica are somewhat more relaxed than the factory specs indicate, OR I had a specimen that had already been adjusted.
The M-camera RF system.
If we now focus on the M camera, we see that the rangefinding act of the SLR cannot be duplicated. We have an indirect relation between the two actions. We focus manually by moving the lens mount, but we check the distance setting with a separate act and mechanism: the rangefinder. There are a number of identical engineering elements between the M and the R: the film channel, the distance from pressure plate to body flange, and the distance from lens flange to focal plane (27.80mm with the M).
The new element is the coupling between the lateral rangefinder movement (the distance that the rangefinder patch moves) and the physical axial movement of the lens mount. This is done through a very complicated engineering trick: the roller cam and arm on the RF side, and the steepness and angle of the distance curve on the mount. Disregarding here the issue of the focal length groups (where you match the pitch of the curve to the exact focal length), we need a mechanism to translate the axial displacement of the lens mount into the lateral displacement of the RF patch. The roller arm and cam do the job, and this humble instrument is the Achilles heel of the RF system. The roller cam and the curve on the lens mount should match exactly, and any play in the cam, or the cam not following the curve exactly (tolerances, non-parallel surfaces), is a source of trouble.
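To see why the cam pitch must be matched to the actual focal length, a simple thin-lens sketch is instructive. This is my own illustration, not Leica engineering data:

```python
# An illustrative thin-lens approximation (my own sketch): the axial
# extension a unit-focusing lens needs for a given subject distance depends
# on the true focal length, which is why the cam must be matched to the
# focal-length group.

def extension_mm(f_mm: float, subject_mm: float) -> float:
    """Extra extension beyond the infinity position: x = f^2 / (s - f)."""
    return f_mm ** 2 / (subject_mm - f_mm)

# Two lenses from adjacent 0.3mm groups, both focused at 1 metre:
print(round(extension_mm(51.6, 1000), 3))  # ~2.807 mm
print(round(extension_mm(51.9, 1000), 3))  # ~2.841 mm
# A difference of about 0.03mm in extension: the same order as the 0.03mm
# defocus that visibly reduced contrast in the bench test described earlier.
```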
Leica-Konica compatibility.
It is clear from the facts that the dimensions and tolerances of Leica and Konica differ, even if the M bayonet and the KM bayonet fit. The mismatch that has been reported may be caused by any of the factors involved: differences in flange to pressure plate distance; differences in film channel depth; differences in cam/curve engagement; differences in lens flange to focal plane distance; differences in tolerances; differences in engineering solutions (the Konica roller arm and cam are more sensitive to changes and tolerances than the Leica version); differences in film flatness between several film types. It is quite rash to identify one of these aspects as being the sole source of the reported mismatches between Leica lenses and Konica bodies.
Verification in the "field".
Several individuals have checked that their Leica lenses on a Konica body do not deliver results that differ from what they expected to get from the combination Leica lens - Leica body. This is a minefield, really. The checks as reported used short focal length lenses, or close focus distances, or whatever combination was available.
First of all: these checks, made without a proper and methodologically sound lab situation with controlled comparisons and predictable results, are quite subjective and without merit, I am afraid to say. If one does not know what the results have to be from a set of quantified parameters, how can one reliably remark that the problem does not exist? Checking a Konica body and a Leica lens and noting that the combination gives correct results at a certain distance, because the tester by visual inspection gets a sharp picture, is not a proof. The conclusion depends on what the observer accepts as "sharp", and without an objectified definition of that most elusive concept, we are lost in the desert.
Film flatness
Now the film flatness issue.
In the previous post I noted that the only fixed dimensions are the focal plane of the lens (relative to the flange) and the film guide rail distances from the flange (the film channel). The focal plane, that much is obvious, is located inside the film channel (in the Leica case 0.05mm inside the film channel, measured from the flange).
Ideally the film emulsion should be at that same position. In the classical Rolleiflex you could insert a glass plate in front of the film emulsion that held the film flat and in a location such that the front of the film emulsion would coincide with the exact focal plane of the lens. In the 35mm situation you cannot do this. So the film lies somewhat loose inside the channel, its back pressed on by the pressure plate and the perforation sides limited in their forward extension (towards the lens) by the guide rails. The natural tendency of the film is to curl away from the lens, but all studies will tell you that in practice a film emulsion at the film gate will bow outwards (towards the lens).
The center of the film area will be closer to the lens than the edges. So if I use a Techpan with a total thickness of 103 microns (0.103mm: base 100 microns, emulsion 3 microns), the pressure plate will ensure that the front of the emulsion is at least 103 microns towards the focal plane, which in this case is located at a distance of 150 microns from the pressure plate (assuming zero tolerances for simplicity). Some outward bulging then will guarantee that the emulsion will be at the location of the focal plane. A thicker film with a thicker emulsion layer will have the focal plane in the middle of the emulsion, but these differences do not matter at all. Now what are the measurements that try to capture this bulging of the film towards the lens? Kyocera, when introducing the RTS III and its vacuum back, stated that they had found the following figures: a truly flat film (with their vacuum mechanism) would still deviate at most 10 microns from the ideal position, and films without the vacuum plate would deviate 20 to 30 microns from the plane position.
Adding the 30 microns that Kyocera found to the 103 microns of the TP gives 133 microns, which is very close to the ideal location of the focal plane. The APX25 has a total thickness of 123 microns and with bulge it would be at 153, so exactly where the focal plane should be.
The focus depth we discussed earlier for a 1.4 lens is 47 microns (in both directions). So this depth would cover the small deviations in film bulging.
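For clarity, here is the arithmetic of the last few paragraphs in one small sketch of my own (distances in micrometres, measured from the pressure plate towards the lens):

```python
# A sketch of the film-position arithmetic above, using the figures quoted
# in the text. All distances in micrometres, measured from the pressure
# plate towards the lens.

FOCAL_PLANE_UM = 150   # 27.95mm (outer rails) minus 27.80mm (focal plane)

def emulsion_front_um(film_thickness_um: float, bulge_um: float) -> float:
    """Front of the emulsion: film thickness plus the outward bulge."""
    return film_thickness_um + bulge_um

for name, thickness in (("Tech Pan", 103), ("APX25", 123)):
    pos = emulsion_front_um(thickness, 30)   # 30 microns: Kyocera's bulge figure
    print(f"{name}: {pos} microns, miss = {FOCAL_PLANE_UM - pos} microns")
# Tech Pan: 133 microns, 17 microns short of the focal plane
# APX25:    153 microns, 3 microns beyond it
# Both well within the ~47 micron focus depth of an f/1.4 lens quoted above.
```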
The Kyocera figures are not alone. Zeiss did their own analysis and noted that film could deviate by 80 microns, which in the Planar 1.4/50 caused a contrast drop from 60% to 20%!! Goldberg studied a large number of SLRs and found that the difference between focal plane and film plane (including curvature of the film surface) averaged 70 microns, with extremes to 170 microns. In such a case the focus depth would not cover the errors. But even with the average figure of 70 microns you could not get exact sharpness with a 1.4 lens, as it exceeds the focus depth.
Lack of film flatness is mostly caused by the film cassette and by the way the film is used. The reported cases of differences in the plane of sharpness might be related to this phenomenon.
More to come. Heavy stuff, but I am afraid you need it to know what is going on.
BW films of classical character
The monster test of the BW films is underway. I had some old rolls of Panatomic-X (20 years old), the film that introduced high resolution acutance photography to 35mm users. I also used the Maco UP25, 64 and 100, which are all versions of the classical Adox high acutance series of KB films. And an Ortho 25, APX25 and APX100; previous tests included PanF, TM100 and D100. To keep it manageable I used one developer (the famous CG512) and tried to develop to the same CI value. You need to do this as otherwise the steeper curve of the APX25 may lead you to think this film is sharper than, as example, a D100, while in fact both are as sharp (seen as recording the same information from the object), but the 25 has higher contrast, so the pictures have more punch, which could be seen as more sharpness. All pictures were enlarged 14x, which in my view is the minimum to differentiate meaningfully between films. The shots were of a model in an old desolate factory, giving ample fine detail, tonal scale and resolution possibilities. The Pan-X showed outstanding sharpness and acutance, but its grain pattern was a bit rough, though very tight. It resembled the grain pattern of the APX100, which is a bit finer, and indeed the two films are close. The finest details however were suppressed by the grain pattern. The tonal scale showed quite subtle grey values, again up to the threshold of the granularity noise. The whole atmosphere is one of very pleasing tonality, gritty sharpness and details painted with broad strokes. The UP100 (Adox KB21) has surprisingly fine grain, but on inspection the grain is clumpier and the edge sharpness is low, so the fineness is bought at the expense of definition. Overall quality is still commendable, and while not up to today's standards, in its day it certainly was a winner. The Pan-X and KB21 images indicate the progress realized in 20 years of emulsion technology. In themselves of amazing quality, these films lag in all significant areas when compared to today's super stars.
But the differences are on the other hand more evolutionary than revolutionary. The APX100 gives images that suit the reportage style of location photography very well. These images have a fine realistic imprint: somewhat gritty, but with a smooth tonality and sufficient fine detail to make the scene interesting. The APX25 has a higher inherent contrast and so small details are recorded somewhat more forcefully. Grain is absent, which adds a creamy tonality to the scene, but on close inspection the recording capabilities have just a small edge compared to the APX100 or Pan-X. The finer grain does record the faintest shades of grey values, which adds to the 3D impression of the scene. The UP25 (KB14) is very
close to the APX25. Grain is slightly more pronounced, but much less so than Pan-X or APX100. The tonal scale is identical to the APX25. The intriguing characteristic of this older thin-layered, thick-silvered emulsion is the edgy grain clumps, which, being very fine, also roughen up the image structure. This makes the picture very lively, and especially for model photography and architectural photography it adds an effect that can be described as underscoring the main story. Compared to the PanF, as example, the KB14 is definitely less smooth and its finer details lack the stark micro contrast of the PanF, but all said, this film is a worthy emulsion that deserves a try. At a normal viewing distance, the main subjects literally jump from the picture. The Ortho25 is a trouvaille: I had some films and asked myself: why not? In the same setting, the prints proved excellent.
The skin of the model came out very realistic and I did not notice any strange grey values. Of course there was no red in the scene, so all other grey values are more or less 'natural'. Sharpness is excellent and grain very fine. The film has a clear base and so looks very contrasty, even if the values are close to normal. Not a film for every topic, but I am inclined to use it more often; with some filters it can even add some additional tonal scale. Definitely a film to try and to use for portraits, glamour etc. Take care with red, of course. But more versatile than generally thought.
As a preliminary conclusion I have to say that the UP25 and Ortho 25 are very potent films, with a potential for intriguing results that needs to be explored.
They are not as good as current top performers, but the distance from a TP, as example, is less than often imagined. So it is as easy to note that there has hardly been any progress in BW emulsions in the last decades as to state that we have advanced a big stride and deliver superior results.
If you habitually use enlargements below 10x, the differences are even smaller. The lesson: try more films than you use now: it will add to your toolkit and visual awareness.
Limits of digital capture
Many of you asked for some reliable information about digital capture, digital techniques and the possibilities and (dis)advantages of a digital M or R. I have to admit that the discussion on the Lug is not the best source to come to grips with this exciting technology and its basics. Let me try to shed some light on the matter, as closely related to Leica products as I can. First some starters. I draw heavily on Schneider, Zeiss and Rodenstock information (published and unpublished). We all know by now what a sensor array is: a grid of pixel elements of a certain size. While we can now create pixel elements with a size as small as 1/4 micrometer, this is not the size we can expect in digital photography. Here the minimum size would be close to 3 micrometers (a micrometer is one thousandth of a millimeter). Current 3 to 5 million pixel chips have pixel sizes ranging from 6 to 12 micrometers. Let us assume for easy calculations that we settle for a pixel size of 10 micrometers (which is still smaller than we have at the moment in production). With such a size we have 100 pixels in a row over a space of 1mm. The classical measure for resolution is the bar test, which has alternating black and white lines of diminishing width. If there are 3 black and 3 white lines in a mm, we say that the spatial frequency is 3 linepairs/mm. With 50 black and 50 white lines, we have a spatial frequency of 50 lp/mm, or a resolution of 100 lines per mm. Consider our 100 pixels in a space of 1mm. With such an arrangement we could reproduce a spatial frequency of 50 lp/mm in a one-to-one fashion on the pixel row. The first pixel reproduces the black line, the second one the white space and the third one the black again, etc. Very easy and no problems. With current chips of 30 by 30mm total area and 2000 by 3000 pixels on this area we have a 6 million pixel CCD, where every pixel has a size of about 12 micrometers. Our assumption of 10 micrometers then is quite realistic.
It has been established for many years that the eye can resolve at best 6 linepairs/mm and on average 3 to 4 linepairs/mm. On the assumption that a digital picture will be reproduced at A4 size, we can calculate that a 7 to 10 times enlargement is needed. Working backwards we see that a pixel area that can capture 40 lp/mm is all we need or can use. Let us settle for this, 80 pixels per mm, as a base. It is clear that we cannot use MORE resolution than the 40 lp/mm from a lens. If a lens were to deliver 80 lp/mm, there would be one black/white pattern that we would need to squeeze into one pixel. If we had a lens with 60 lp/mm, one black and half a white bar would have to be captured by one pixel. That will not work. But what is more important: if we have a lens with a higher resolution than the sensor array can capture, we introduce false information, a phenomenon called aliasing.
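The sampling arithmetic of the last two paragraphs fits in a few lines. This is a sketch of my own, using standard Nyquist reasoning with the parameter values assumed in the text:

```python
# A sketch of the sampling arithmetic above (standard Nyquist reasoning;
# the parameter values are the ones assumed in the text).

def nyquist_lp_mm(pixel_pitch_mm: float) -> float:
    """Highest spatial frequency a sensor can record: one line pair needs
    two pixels, so the limit is 1 / (2 * pitch)."""
    return 1.0 / (2.0 * pixel_pitch_mm)

def required_lp_mm(eye_lp_mm: float, enlargement: float) -> float:
    """Resolution needed on the sensor/film so the print just satisfies the eye."""
    return eye_lp_mm * enlargement

print(nyquist_lp_mm(0.010))   # 50 lp/mm for a 10 micrometer pitch
print(required_lp_mm(6, 7))   # 42 lp/mm for a critically viewed A4 print
# A lens that resolves beyond the Nyquist limit feeds the sensor detail it
# cannot represent: the result is aliasing, the problem discussed next.
```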
What to do? And now we are at the bottom of the matter, which is fully neglected in all discussions on the internet, not only at the Lug. In front of the sensor array an anti-alias filter is placed, which cuts off the higher spatial frequencies that the lens can capture but the sensor array cannot handle. As Jim does not stop telling us (rightly), we need 4 pixels to capture colour information, so we need a second filter in front of the sensor array: the colour filter with the well-known red/green/blue pattern. The colour values missing at each pixel are interpolated, and that introduces noise. So a third filter is placed in front of the array: a low pass filter to introduce some softening of the image points to help the software calculate the info. Then we need an infrared blocking filter and a noise filter etc. In sum there are at least six filter layers in front of the array, with a total thickness of 6mm!!
One consequence of the limiting resolution of 40 lp/mm is a fact recognized by Schneider and Zeiss, among others: lenses for digital capture have to be designed specifically for the resolution limit of the sensor array, typically 40 lp/mm. This resolution needs to be complemented with high MTF values and should be as good as possible over the whole sensor array. Schneider states that they can design lenses with a true cut-off at the 40 lp/mm point, after which the resolution drops rapidly to zero, to evade noise and aliasing. This is not quite true: the MTF graphs show a much more gradual fall-off, and that is why the anti-alias filter is needed: to cut off the unwanted resolution.
It is true that normal 35mm lenses, designed for the analogue system, are not good for digital capture, specifically wide angle lenses, whose oblique rays in the outer zones might not be captured by the array. To this we must add the filter layers. A stack of filters with 6mm thickness is a lens element in itself. And when oblique rays fall on this plane element of 6mm thickness, heavy astigmatism and coma are introduced. Furthermore (and now we are in optical theory again) a wide angle lens has its exit pupil close to the film plane. But now we have that filter in between. And the oblique rays from the exit pupil strike that plane at extreme skew angles, with all the attendant optical errors. So we may safely say that lenses designed for analogue photography are not good for digital capture.
If you look closely at the specs of true digital lenses, for digicamcorders as example, you see that the designers try to create lenses with limited resolution at the cut-off point AND (VERY IMPORTANT) lenses that have the rays from the exit pupil as parallel as possible towards the sensor array. A special technique is needed, and remember the word as it will be more important than 'bokeh': it is the telecentric optical system. A telecentric lens is specifically designed such that the exit pupil is located at infinity, and then all rays from this exit pupil are roughly parallel. This design is not a new concept: it is quite old, but now needed for digital systems. Telecentric lenses tend to be physically big and a new bag of tricks is needed to reduce the size. The second technique is to design a lens which uses the filter plane as an integrated element of the lens.
To sum up this part: normal lenses for 35mm photography (and now I concentrate on Leica lenses, but it goes for all others as well) have too much resolution, a different type of aberration correction and an exit pupil at the wrong place to make them good for digital photography.
When we look at the interviews given by Mr Cohn, where he notes that a new line of lenses and presumably a new bayonet are needed for future use dedicated to digital cameras, he is right. The idea that it is enough to put a 6 million pixel array/chip in an M or R and we have a digital camera is naive in the extreme and neglects all the important topics that differentiate lenses for digital and for analogue photography. Were we to use current Leica lenses in front of a digital array, we would lose all that makes Leica lenses in the current state of the art so special: we lose the resolution from 40 to 120 lp/mm, we add errors to the image because the filter stack obstructs the rays from the exit pupil, we lose the edge sharpness of the lower frequencies (the outlines of subjects) because of the anti-alias filter, and we add blur to the finer details because of the low pass filter.
On the Lug there is an equally naive discussion that equates pixel count with true resolution. But again there is more behind the scenes. When we read that a camera or scanner can capture 3000 pixels per inch, it is easy to calculate that this amounts to 118 lines per mm, or about 60 lp/mm. And we simply assume that this is the same as 60 lp/mm captured by a lens on analogue film. Far from it! The 3000 pixels per inch are nothing more than the sampling frequency, and WHETHER this sampling will result in a true 60 lp/mm depends on all kinds of factors, some of which are discussed above. So it is not unusual to see a 50% drop in true resolution, compared to calculated resolution. A simple test is reported below.
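A small helper of my own for the conversion used here and in the test below (25.4mm per inch, two pixels per line pair):

```python
# Pixels per inch to theoretical line pairs per mm.

def ppi_to_lp_mm(ppi: float) -> float:
    return ppi / 25.4 / 2.0

print(round(ppi_to_lp_mm(3000), 1))  # ~59 lp/mm: the theoretical ceiling of the scan
print(round(ppi_to_lp_mm(1440), 1))  # ~28 lp/mm for the printer, before real losses
```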
I scanned a test chart at 600dpi and printed the result. The original chart has a resolution pattern of 1 to 4 linepairs/mm. The scan was 1:1 and so was the print. A direct comparison showed that the scan/print combo gave a final result of 4 linepairs/mm. So I reduced the chart by a factor of 2 and repeated the procedure. Now (under a loupe) I could detect, with some difficulty, 5 linepairs/mm. This was done with a very good 1440dpi printer. So where the simple calculations say that I should have a resolution of 1440 pixels per inch, or 57 pixels per mm, I got at best 10 lines per mm, a far cry from the 57 lines/mm that simple theory 'predicted'. When I used the sharpening filter to enhance the image, I indeed got results with higher contrast at the edges by reduction of the grey values, but I also saw severe colour fringing at the edges, which might indicate optical flaws in the scan array or interpolation faults in the software.
Why can Canon and Nikon lenses be used on their digital cameras? First of all, the small size of the chips changes the effective focal length: a 50mm will act as a 75 etc. This longer focal length cuts off the troublesome rays in the outer zones, minimising the oblique angle problem. Secondly: the size of a pixel (in a 3 Meg CCD) is about 0.01mm, and by applying the Nyquist rule (ask the experts what this is -:) ) we can calculate that the limiting or cut-off frequency is about 50 to 60 lp/mm. For general photography and high quality digital prints for magazines etc this is more than enough. So C and N are happy to limit their lenses to this frequency, if they are able to reach it. And the N and C engineers, knowing the quality of their lenses, can adjust the algorithms such that blooming, aliasing etc are reduced or compensated.
Medium format cameras get their final quality by using much larger chip sizes (30 x 30mm and even 70 x 100mm!!). A 30x30mm chip with a resolution of 2000 by 2000 pixels gives a 4 Meg size, but the pixel size is still 0.015mm. And medium format and large format lenses have typical resolution figures of 20 to 40 lp/mm. Newly calculated Rodenstock lenses for these formats now reach 60 lp/mm at the optimum aperture and 40 lp/mm at other apertures.
So whatever size of format you use, the current pixel size of 0.01 to 0.015mm is still valid when doing calculations. The pixel size of 7.5 microns in digicam cameras is offset by a still smaller chip size.
It is clear then that all high end digital cameras operate at optimum/maximum resolution levels of 50 to 60 lp/mm. That is fine and in the same league as when scanning a 35mm slide with a high end scanner.
The specific characteristics of Leica lenses are partly achieved by the resolution/contrast level beyond 50 lp/mm. Hence the difficulties in putting current Leica lenses in front of a CCD sensor array.
As all calculations (the 50 lp/mm cut-off frequency) are based on the assumption that the image will be printed at DIN A4 size with a 60 raster, here we see both the rationale and the limits.
Image perception.
The appreciation of final image quality (as observed in an analogue print, a digital print, a projected slide or a reproduction in a book) is not easy. We all know that. We look at a picture with our eyes, and this basic fact has enormous consequences. Our visual system is not objective, and we perceive a picture with a large dose of psychology involved. Vision science (yes, this exists too!) tells us that we have a three-step process: the retinal interpretation (the image as recorded by the retinal cells) is transformed by the neural response of the cortical pathways (the visual streams in the cortex), and lastly the perceptual interpretation adds perceptual properties (such as shape and color) to this image, which is again interpreted by our knowledge of the external world. We know we live in a three-dimensional world, and the interpretation of the image will be done such that visual clues are interpreted as happening in a 3-D world. And of course these interpretations are also embedded in our cultural and emotional heritage. In order to get to the basics of an image, we have to be aware of all of these layers of interpretation. The simple concept of 'sharpness' is a most difficult topic to analyse. We have to find and isolate aspects of an image that most people will identify with 'sharpness', and we have to find ways to reliably reproduce image elements that are associated with 'sharpness'. Now 'sharpness' is not a quantifiable aspect of an image, as sharpness clues are contextual and dependent on psychological mechanisms. I have seen pictures described as being 'tack sharp' that in my view were fully unsharp, compared to other pictures that I would consider sharp. If I define sharpness (as example) as a measure of the total amount of visible information in a picture, I am telling you that for me the sum of gradation (the differences in tonal value that can be detected) and resolving power/contrast (the smallness of details that can be identified) is of paramount importance.
BUT: gradation and resolving power are dependent on magnification and a lot of other factors. For the sake of simplicity, let us assume only magnification is important. Then, in order to compare two pictures, we need to enlarge them to exactly the same magnification and they must have the same content. If they have a different content, our layers of interpretation could get different visual clues from the picture and make a different assessment.
Here we have the first problem when individuals tell you that picture A is much sharper than picture B and by inference that lens A is not as good as lens B. First, the imaging chain is forgotten: the same aperture, distance, film, speed, development, enlarger etc. are needed. Then we need the same content in order to compare. This is normally not done. The many comments about lens performance you read on the net (newsgroups) and in articles hardly ever rest on a comparison, but on an isolated assessment, or on a comparison of pictures taken in very different circumstances and with different subjects. How it is possible for someone to take a 400ISO picture at 3 meters from a person with lens A and a 100ISO picture at infinity from a landscape with lens B and then give a verdict about lens performance escapes me. Unfortunately this is the general case. Even if you use the same film and aperture and speed and distance but different subjects, there is a danger that the visual processing done by our mind will tend to favour the picture that is more pleasing and thus distort the assessment.
If you look at the care that is needed to do really meaningful tests of the working of our visual system, you may wonder why photographers are so quick to express their opinion that this picture is sharp or better than another one.
Remember that there is no agreed standard of image quality and a large latitude of interpretation is possible. Attempts in the past to find a statistical average by showing hundreds of persons a set of pictures and asking them to select the ones they like best or find best in terms of ‘sharpness’ or ‘color fidelity’ or whatever you want to analyse, have not exactly failed, but are hardly conclusive. Given these aspects (image chain, comparison, visual processing) I am very hesitant to make an assessment of one or a few pictures and derive conclusions from this about the lens quality (just one of many parameters, if a very important one). And when I read that some photographer remarks that on the basis of a few rolls of pictures he does not like a lens and therefore the lens is bad, I wonder what he is trying to do. It is human nature to want to rank items: this restaurant is better than that one, this book is better than another one, I like this ice-cream more than the one I had a week ago. This car is a bad one, and I want another one. This type of discussion can be found in most social discourse and there it is fine and appropriate.
Personal views are always acceptable and in a certain way the truth. Every individual is his own final arbiter. Like something and you will be pleased. It is as simple as that. If you like the results you get with a 400ISO film and a Summarit lens at 1.5, the world is OK. You are pleased and that is all there is.
Replace lens by film, analogue by digital, wet print by inkjet print, and the same statement holds. No discussion is needed. And if we stopped here, we could all benefit from an exchange of opinions and be interested in why someone is pleased. But now a remarkable process occurs. The statement that “I am pleased” is stretched to cover the statement “the products I use are better than the ones somebody else uses to get results I am not pleased with”.
This is not allowed. There is no logic that allows you to infer from “I am pleased” to “the products I use are better”. If you are ranking products you need a standard and a measurement method. If you are ranking lenses you need a standard to compare different lenses. And you should be willing to accept that your personal opinion is not the best yardstick. Now we enter the realm of science: most people assumed on the basis of common sense that a feather would fall at a slower rate than a brick. But what happened: under identical lab conditions (in a vacuum), a feather falls as fast as a brick.
The same holds for the assessment of a lens based on the appreciation of a picture. The picture is the result of a large chain of acts and factors that influence its appearance, and the content of the picture evokes an emotional chain of reactions in the brain that may influence the appreciation.
So it is simply impossible to extend the personal impression of the image quality of a picture beyond the individual statement. You as a person may express your individual feelings or impressions, but that is it. A personal opinion may be very valuable and give insight into the topic, but that is quite far removed from stating that someone’s personal preference can be seen as a normative statement. To go beyond the personal opinion we have to do a group test, just as is done in art.
Up till now no one I know of has ever proposed a meaningful definition of image quality as perceived by photographers or others. Being inherently subjective, such a definition is not feasible. So it is natural that lens designers and theorists of optical performance need objective criteria. The goal of a lens is to reproduce the object in front of the lens as faithfully as possible. So the easy definition of image quality (as seen by optical designers) is the closest approximation to the ideal of perfect reproduction. We know that the deviation from the ideal is caused by aberrations and mechanical faults (tilt, decentring, wrong spacing of elements, bad glass surfaces). Neglecting the latter for a moment, we have as objective criterion the amount of aberrations in a lens. In a previous newsletter I told you about the several methods to measure the aberrations in a lens (MTF, Strehl ratio, etc). From this position it is easier to define a lens as good, better or best. Even here we have design philosophies which make it impossible to rank all lenses in one row. Zeiss has a different approach to design than Leica, and that makes it difficult to say that either one makes the best lenses. But at least we have criteria which can be measured, discussed, compared and ranked in importance.
The proponents of the subjective assessment school will argue that you cannot use objective measurements because they are done on a flat subject, and what is being measured is not related to the 3-D world and the practicalities of photography. This argument is extremely weak. Aberrations are not confined to a single plane of focus; they are at work too in the depth of a subject. And any computer program can do what is called through-focus analysis, meaning that you move the plane of focus over a certain distance to see how the difference in focus affects the aberrations. This through-focus movement is identical to depth of field and so to picturing an object with depth.
The real topic is to find a way to translate the optical quality of a lens into some sort of picture quality, taking into account the imaging chain. You have to take real pictures to do that. But not at random: controlled and comparable.
If we want to test a lens or make meaningful statements about a lens, we need to create a situation where only the characteristics of a lens make a difference to the result we study. In other words: keep all parameters the same, except the lens. That is the only way to make valid comments about a lens performance.
That is why I always use the same objects when studying the behaviour of a lens. I photograph my cat, at a specific distance (which is different for a 90mm than for a 21mm), with flash and the same film in the same environment. And a landscape in identical situations: sometimes the weather does not help and I have to postpone my test. And a model and several other representative subjects, but always the same apertures, magnifications, light, film, developer etc. I take care to change only one variable: the lens. This is hard work and often boring. From film and developer tests I know which film/developer combo allows for the maximum performance.
Then I have an impression of how a lens performs in practical situations. But I know that there are many limitations at work here. To back this up I do a bench test to measure infinity setting, contrast, resolution, decentring, vignetting and the changes in optical performance from center to corner, at all apertures and distances. Again hard work and boring. But I know at least what to expect from a lens, and when my practical results are in accordance I am happy. When not, I have to refine my practical testing to get the optical performance onto the negative.
Now there is that age old argument that you cannot make meaningful statements from one sample, as one sample would be statistically irrelevant. In theory this is true: you cannot generalise from one sample. In practice we all do this. We use one lens and say that we like it or not. Read any test magazine about cars, computers, scanners, motorbikes, airplanes, what have you: EVERYWHERE only one sample is used and conclusions are drawn. No reader of a car magazine has ever questioned the validity of a test because it was based on one sample. It is typical of the photographic world, and in fact only of the Leica world, that this argument pops up. Why? Because someone 50 years ago made the remark and it has been repeated over and over, without anyone questioning whether it is still correct now or even was in the past.
The justification of the one sample test is twofold. One: current production standards are such that it is unlikely that a sample way off specs will emerge from the assembly line. In the past the quality tolerances were larger, and then it might be the case that you tested a particularly good or bad example. Two: if a responsible tester gets a lens that seems to be out of line given the specs of the manufacturer, the MTF graphs or other data, he will try to find out what is happening. In my case: I first check all equipment to see if it is still calibrated as it should be. If so, I make new pictures with additional checks of all parameters in the imaging chain. If the lens is still below or way above what is expected (given my experience of having tested hundreds of lenses), I contact the factory.
Any tester knows that in a one sample case there are some question marks as to how representative the results are. These have to be faced and checked. This can be done by cross checking, cross comparisons, etc. How large should a sample be to be really meaningful in a statistical way? Sampling theory tells you that the size of the sample is not related to the size of the population, but depends only on some internal calculations. In fact at least 20 items have to be included in a sample for truly statistically relevant results with 99% reliability. So whatever number of items below 20 you take, it will not improve the reliability. Whether I use one or three or five is irrelevant: twenty I need. I do not increase the reliability of my results for the whole lens population whether I use one lens or even 10. That being the case, I can safely state that testing one lens is as good (bad?) as testing five.
It would be nice if these lessons of statistical theory were diffused in the Leica community, to get rid of yet another myth: that you cannot get relevant and valid results from testing only one lens. If we followed the strict rule that you need to test 20 items before the results are statistically valid, we could stop all discussions about Leica products: who has tested 20 items of every lens or camera he talks about?
It is my strong impression that for some reason Leica discourse has been stuck in the same groove for the last 60 years. Would it not be time to embrace modern ideas and concepts? The recent discussion on the LUG about glass for filters is another example of the myth-creating power of being out of sync with current technology and science.
Digital versus analogue. No doubt printing technology is improving to a level where it may be difficult to see the difference between a digital print and a wet print. But that is not the issue. Any digital print is a rasterised version of the digitally captured image, just as when you take a photograph to the printer and ask him to produce a book. And we all know that the craft of the printer is such that at higher raster values the pictures in a book look convincingly close to the original, at least at A4 size. There have been some studies about the relationship between sharpness and graininess. The conclusion is that image quality is not a linear combination of sharpness and graininess. The overall quality tends to be determined by the lower of the two aspects. If graininess is high, the image looks poor even if sharpness is high. If sharpness is low, the print has low quality regardless of the grain level. Electronic images are still low in sharpness and low in graininess, and so will be perceived as lower in quality than analogue prints that have good edge effects or acutance and so high sharpness. Of course you can enhance the sharpness of the digital print with the several filters available for sharpness manipulation. But another study noted that the sharpen filters introduce unwanted artifacts and are not so easy to apply with good effect. Specifically this report noted that the sharpen filters in Photoshop were quite weak (if not bad) and that to really sharpen a digital image with convincing results a person had to work for many hours. Then it is still easier for me to jump into the wet darkroom and start with a negative that is superior and a process that is simpler and takes less time. Amazing, is it not? As a famous Dutch football player once said: every advantage is always accompanied by some disadvantage.
Leica and medium format
The issue of image quality and how to use Leica equipment (and optimize the imaging chain) is an important one. When Mr Stein and his team designed the M3, they had in mind the creation of the ultimate photographer’s tool, not a collector’s piece. Leica equipment is made for taking photographs and enjoying the act and the result. We are not getting any closer to that goal as long as we stay in easy dichotomies like analogue versus digital or medium format versus 35mm. While these topics bear a relation to the concept of image quality, it is best if we try to define a somewhat manageable yardstick to see if we are approaching the standard of image quality. Everyone is familiar with the idea of resolution coupled to contrast. Resolution is measured in spatial frequencies as a string of alternating black-white bars of ever smaller width (per mm), and contrast is defined as the relation between the black and the white as a percentage, where black is seen as 0% intensity. The Leica goal of optical design is to have 40 linepairs/mm at a contrast above 50% from center to corner of the image. And most current M lenses reach that goal. What we need is a different matter. And here opinions range high and wide. Some would say that at least 100 lp/mm can be captured on the negative and have to be for superior results. Let us take a realistic approach and ask ourselves if we have some visual awareness of these numbers.
When I give a course, I always project a slide of some landscape scene, which is by all standards sharp and full of detail. Then I ask anyone to identify a part of the scene where (s)he would see 5 or 10 or 50 lp/mm. It is remarkable that no one can!! We all discuss the concept of resolution and are quick to throw some numbers into the discussion, but we are unable to visualize them. One example: a landscape mostly consists of shapes of larger or smaller dimensions, and within the smaller shapes we hardly see still smaller sharply defined details, but shades of colour or grey values. Does it occur to you that the outlines of small and large structures can be equated with just one linepair? And you may have trouble finding structures that can be related to 3 or 4 linepairs. Indeed it was established long ago that 2-3 linepairs per mm are adequate for exhibition quality landscape pictures, and 1-2 linepairs for high quality portraits.
The medium format advantage. Here we have the origin of the superiority of the medium format camera. When we have a large print of 40x50cm, the negative is enlarged about 8 times. Two linepairs in the print imply 2 x 8 lp on the negative. That is a meagre 16 linepairs/mm on the negative. Now most lenses, even the simple one in the Seagull, can resolve 16 lp/mm with fair contrast, and so the real advantage of the larger format can be played out. If we picture the same scene on a 35mm negative and on a medium format negative, it is obvious that every part of the subject will be covered by a much larger area with the medium format. This larger area does not translate into higher sharpness (as both systems resolve the required 2 lp/mm), but the medium format reproduces the gradation in the tone value differences more faithfully, as there is more silver halide available to reproduce these subtle gray scale differences. However good the Leica lenses, they will lose this game as there is (as the word goes in the automobile world) no substitute for cubic inches, in this case square inches of negative area. My own comparisons between Leica and Hasselblad show a virtual equality till 30x40cm (which is already a big tribute to Leica and the lens/film combinations used) and a gradual loss after that, as the grain of the 35mm negative starts to break up the smooth tonality, NOT the sharpness or the reproduction of fine details.
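A minimal sketch of this arithmetic (Python; the 2 lp/mm print target is the exhibition-quality figure quoted above, long sides are compared and the mismatch in aspect ratios is ignored, so treat the numbers as rough):

    # Resolution the negative must deliver for 2 lp/mm in a 40x50cm print.
    PRINT_LP_MM = 2.0
    print_long_mm = 500
    for name, neg_long_mm in (("35mm", 36), ("6x6", 56)):
        enlargement = print_long_mm / neg_long_mm
        print(f"{name}: {enlargement:.0f}x -> {PRINT_LP_MM * enlargement:.0f} lp/mm on the negative")

Both results are trivially within reach of any decent lens, which is exactly the point: the larger format wins on tonal reproduction, not on resolving power.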
Resolution and contrast in practice. Our eye has a visual acuity of 2 minutes of arc per line pair, or in normal parlance: we can see the letter E with a size of 9mm at a distance of 6 meters. That is, we can distinguish separately at that distance the three horizontal bars of the letter E. Some people dismiss the testing of a lens with a barline pattern as nonsense. Still, when doing so, you get a very good idea of what the dimensions are. When I did this type of testing I had a barline pattern of one linepair per mm, and my simple assumption was this: 100 linepairs is a piece of cake for a modern Leica lens, so I need to take a picture at a magnification of 1:100 to get on the negative a structure that is 100 lp/mm. So I took the pictures at a distance that is 100 times the focal length. For the 280mm Apo-Telyt that is 28 meters. At that distance I could not even distinguish the target and had to introduce a deep black line on the chart to help accurate focusing. After developing I was not able to see any structure: the whole test chart area was one grey patch. Only under the microscope at 40x could I see those mythical 100 linepairs. And at first I did not capture details at all: every line was unsharp through movement of the camera body. Success came only after mirror lock-up, a very heavy tripod and additional weights on the lens and body to damp any vibration of the lens translating to the body.
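The distance arithmetic behind this test can be sketched as follows (Python, thin-lens approximation; the 1 lp/mm chart and the 1:100 reduction are as described above):

    # Shooting distance that maps a 1 lp/mm wall chart to 100 lp/mm on film.
    # Thin lens: subject distance ~ f * (1/m + 1) at magnification m.
    def distance_for_magnification(focal_mm, m):
        return focal_mm * (1.0 / m + 1.0)

    for f_mm in (50, 90, 280):
        d = distance_for_magnification(f_mm, 1 / 100)
        print(f"{f_mm}mm lens -> about {d / 1000:.1f} m")

For the 280mm Apo-Telyt this gives the 28 meters mentioned above.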
Who says 100 lp/mm are easy and can be done even handheld? I am surprised that in the current Leica Fotografie International there is an article which claims that outstandingly sharp pictures can be made with this 280 lens when using it handheld or on a monopod. It is quite clear that this photographer has a very different interpretation of what constitutes a sharp picture, or he is referring to the quality definition used above (2-3 lp/mm in the print). And then you underutilise the Apo-Telyt 280 significantly.
Even the limit of 40 lp/mm is not so easy to reach. Hand held photography at normal shutter speeds hardly ever brings that level of resolution onto the negative. Sad but true. After doing all these tests I am now acutely aware that these three digit figures of resolution are more elusive and hard to capture than most assume.
The comparison 35mm versus 120 format. If I use a very fine grained film in 35mm format like Techpan, I am able to compete with the micro gradation (subtle changes in gradation in small areas of the negative) and tonal smoothness of the medium format. And in fact till 30x40 or a bit larger (40x50) there would be hardly a visible difference. The comparison between formats is always a system choice. If I compare Techpan 35 and Techpan 120 at the same final size of 40x50, there is a difference, however small. But given the wider apertures of the 35mm format, the most often used comparison is Techpan in 35mm and, let us say, Tmax100 in 120. Then both systems are equal. BUT: there is more. The microcontrast and overall contrast (sharpness at subject outlines) of a Leica lens (MTF values) are higher than those of a comparable Hasselblad/Zeiss lens. (Incidentally: Mike Johnston's article about MTF and contrast is not correct when he tries to explain that there is no overall contrast in a lens when we are talking MTF.) The superior MTF values of the Leica lens cannot be transferred easily onto the paper. If I use a 15 times enlargement in my enlarger, I have by nature a larger amount of light scatter and a general loss of contrast. At 5 times enlargement I can use grade 2, but the identical negative at 15x needs grade 2.5 or even 3, to compensate for light scatter and for the need to illuminate a larger area with the same lamp, which must bring a drop in contrast. In fact at enlargements above 10x there are all kinds of disturbances that degrade the final image: light scatter, lens quality and vibration. These effects should not be dismissed.
BW developers
While on this topic: the best and most satisfying paper size for Leica pictures is 30x40cm. Why? With this size you can bring within visual range all the detail that the lens/film combo can capture, and you can be close enough to the print to see both the overall picture and the small details. When using 20x25cm, the enlargement factor is too low to bring to the fore all the details of the negative, and at 40x50cm you have to stand back to see the picture; then you are so far from the print that the resolving power of the eye does not enable you to see the richness of detail.
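As a sketch of why 30x40cm hits the sweet spot (Python; the 40 lp/mm film figure is the Leica design goal cited earlier, and the 2-3 lp/mm print range is the exhibition-quality figure from above):

    # What a 30x40cm print asks of a 35mm negative and offers the eye.
    neg_short_mm, print_short_mm = 24, 300
    film_lp_mm = 40                      # Leica design goal on film
    enlargement = print_short_mm / neg_short_mm
    print(f"{enlargement:.1f}x enlargement -> {film_lp_mm / enlargement:.1f} lp/mm in the print")

At 12.5x the full 40 lp/mm on film prints at about 3.2 lp/mm, just inside what the eye can resolve at close viewing distance; a bigger print pushes the detail below the eye's threshold at normal viewing distance, a smaller one leaves detail locked in the negative.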
It is quite clear to me that BW photography with prints at 30x40 is the ultimate standard to judge Leica quality. If there is progress in the mature industry of analogue processing, then the area of print papers holds the most promise. The tonal scale, the gradation and the print contrast (deep black and paper white) can be significantly improved. Normally a paper print gives you a Dmax of D=1.9 to 2.1. But with special toners you can reach D=2.3 to D=2.5, which is a spectacular improvement that digital prints cannot even approach. With careful exposure and development you can create a negative that holds its shadow details such that this deep black improvement retains the shadow details. For the negative I see hardly any improvement, since Kodak and Agfa have shelved the activities to improve photon efficiency. Agfa, by the way, will retain its analogue business and even try to expand it. On the topic of developers I have a simple but iconoclastic view. Most of the developers now on the market deliver identical results, given the film emulsions we have on the market. I know that many photographers hold on to their preferred solution (developer and dilution) and claim all kinds of special advantages. None of these can stand a careful comparison test. And the well-known handbook of Anchell and Troop has many claims, but not one is documented and/or proven by comparison tests. I do believe that the whole discussion around the special secrets of BW processing is a relic of the past, when alchemy was thought of as a real science. It is also part of the profile that a photographer wishes to have: access to hard-won personal experience that adds an edge to his photos. Of course there are some clear differences between classes of developers, like Xtol and Rodinal. But after having tested and compared about 50 different developers with all kinds of film, I have to state that the differences are in 90% of the cases marginal if not trivial. One may be able to measure a longer toe in some film/developer combo when doing densitometric tests, but I wonder how much of that aspect can be brought to paper, as the characteristic curve of the grade of paper used may not support, or may diminish, that effect. As the print is the final result/arbiter, I am not so much interested in studying one step in the chain as in finding a total solution that gives me a fine print.
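To put the Dmax figures in perspective: density is a base-10 logarithm of the light attenuation, so small gains in D are large gains in the black-to-white ratio (a sketch in Python):

    # Reflection density D is log10 of the attenuation, so the
    # black-to-white contrast ratio is 10 ** D.
    for d in (1.9, 2.1, 2.3, 2.5):
        print(f"Dmax {d} -> about {10 ** d:.0f}:1")

The step from D=2.1 to D=2.5 is thus a jump from roughly 125:1 to 315:1, which explains why toned prints look so dramatically deeper.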
I have done all kinds of tests and used prints, the microscope and the densitometer to find reliable and reproducible differences between combinations of film, developer and paper. But the range of parameters is so great, the incidental user interference so unpredictable, the match of components of the single chain too subtle and the effects so badly understood, that we are facing a process with a wide latitude of unknown influences, the impact of which we cannot predict. As an example: many stick to a proven process of agitation (a 30 seconds rhythm or whatever). Developer manufacturers will show you results indicating that agitation is of minor influence on the result (at least in 35mm format), and chemical theory tells us that the movement of molecules and the chemical chain of interaction between the developer substances and the silver halide are hardly influenced by the agitation sequence. If someone tells you that he has two negatives that are indeed different, he will hardly be able to single out the effect of one process step.
It would be best for your conscience and for the improvement of your photographic technique to stick to one of a few possibilities: PanF+, Delta100 or Tmax100 (possibly Acros, but my comparisons did not show any clear advantages of that film over the rest) in Xtol (whatever dilution, as long as it gives you negatives that cover a six stop contrast range in the object with a density range of D=0.1 to D=1.3). Rodinal is also a good choice, but expect some grain and noise that reduces the micro gradation. It depends on your choice of subject. But test your film/developer choice consistently at 30x40, or with a selection of the negative at an enlargement of 12-14x. If you stay at much lower enlargements, any possible differences and effects may be lost below the visual threshold.
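The density target quoted here implies a classic average gradient, which can be checked quickly (Python; six stops equals 6 x log10(2), about 1.8 in log exposure):

    import math

    # Average gradient implied by a 6-stop subject range mapped to
    # a density spread of D=0.1 ... D=1.3 on the negative.
    log_exposure_range = 6 * math.log10(2)   # ~1.81
    density_range = 1.3 - 0.1                # 1.2
    print(f"average gradient = {density_range / log_exposure_range:.2f}")

The result, about 0.66, is close to the classic 0.6 contrast index region, so the recommendation is internally consistent.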
Why do East European films have a clear base, compared to the grey coloured base of Ilford, Agfa and Kodak? First, as usual, some theory. When light hits the emulsion layer, three separate processes take place. The easiest one is where rays pass straight through the emulsion and hit the halide grains. Then we have the situation where rays are deviated from their course by the refractive index of the gelatin and by scattering at other grain clumps, forming what is called “irradiation”: the scattering of rays within the emulsion, producing a softening of the edges of contour lines and diminishing the effect of acutance. Thirdly, some rays pass through the emulsion and are reflected back by the pressure plate or the backing of the film: these rays produce the well-known halo effect or halation. Irradiation can be minimised by thin emulsion layers and by a grey colour of the film base. Halation can be minimised by the use of a grey base, a special anti-halation backing, or by an additional layer, 1 micron thick, that prevents rays from reflecting back into the emulsion. Many 120 format films use an anti-curl backing, and Kodak and Ilford add an anti-halation function to this layer. So several 120 format films are clear based after development and fixation. The original Adox films used a three-pronged attack on irradiation and halation: a very thin emulsion layer, a grey colour for the film base, and an additional anti-halation layer of a few microns that is washed away after fixation. This layer however is very sensitive to scratches when the film is long, so it is suitable for 120 format and not for 35mm. A grey film base reduces the sensitivity of the film and reduces overall contrast. The East European films use a clear base and an additional anti-halation backing. So they reduce halation, but not irradiation, which is countered by their very thin emulsion layers. In my testing all Efke and Maco films exhibit a much higher overall contrast than comparable Ilford, Agfa and Kodak films. A clear base is more cost effective; a grey coloured base is more complicated, but it can reduce irradiation more effectively. The visual impact of the Efke/Maco films as having a brittle edge contrast may be explained by this clear base. A topic for more study.
MTF and flare
MTF graphs. In the book there are MTF graphs of all current M lenses and their predecessors. As I have noted that interpreting an MTF graph is not an easy act, here are some guidelines. We are all well aware that the image quality of a lens is better when the lines of the MTF graphs are straighter and located higher in the diagram. But here are some pitfalls.
First: the actual contrast transfer in %points is not easy to assess. As an example: for the 5 lp/mm graph (which defines overall contrast) a difference of ONE %point is significant. So a lens with a 94% contrast transfer at 5 or 10 lp/mm will show a drop in overall contrast when compared to a lens which has a 95% contrast transfer. On the graphs that is not easy to see. But for the graph that covers the contrast at 40 lp/mm, a difference of 10 %points is not very important. So two lenses, one with a transfer of 50% and another of 40%, may be very close in performance.
Secondly: to get a good grasp of the general optical performance of a lens it makes no sense to look at just two graphs (wide open and at aperture 5.6). The performance needs to be studied at every aperture and the behaviour of the curves at every aperture needs to be compared. As no one gives you these data, you are probing in the dark. That is why I persuaded Leica to publish graphs at the wide open aperture and at optimum performance, and not at the usual aperture of f/5.6 (Zeiss, Canon) or f/8.0 (Photodo). Most recently designed Leica lenses are already at their best around f/4 or wider, so a comparison at 5.6 is not the best way to go.
Thirdly: there are several image planes on which the designer may want to focus his lens. Remember that a point is not focused as a point but as a patch of light. The rays that enter the lens are focused not to a point but to a caustic, a kind of hourglass shape turned on its side. The image plane can therefore intersect the waist of the hourglass at several locations, giving a different balance of contrast and resolution. You can go for the finest resolution but lose contrast, or the other way around. If the person behind the MTF gear does not know where the designer wishes the focal plane to be, he can seriously go wrong. I asked the man from Hasselblad who does the Photodo measurements what focal plane he chooses. He refused to answer, and that makes the Photodo results very suspect.
Fourthly and most seriously: MTF measurements (calculated, or really measured as at PopPhoto) are generated without taking into account that most devious aspect, flare. MTF graphs are normalized (100% at zero spatial frequency) and so disregard flare. So a theoretically high MTF value of, let us say, 92% (at 5 lp/mm) could in practice drop to 80% as flare reduces the contrast (micro and macro). This is never discussed (not by Photodo, nor Zeiss, nor even the many interpreters of MTF graphs). You will find it discussed in the M-brochure of course. Flare is the most significant of all image degraders.
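The flare effect can be made concrete with a small model (a sketch, not a measurement: it assumes veiling flare acts as a uniform offset added to both the light and dark parts of the pattern):

    # Modulation m = (Imax - Imin) / (Imax + Imin), with mean level 1:
    # Imax = 1 + m, Imin = 1 - m. A stray-light offset s (fraction of
    # the mean level) added to both reduces modulation to m / (1 + s).
    def with_flare(m, s):
        return m / (1 + s)

    print(f"0.92 with 15% veiling flare -> {with_flare(0.92, 0.15):.2f}")

With about 15% stray light the 92% figure drops to the 80% mentioned above; normalized MTF graphs simply cannot show this.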
Digital trends and the role of a brand name
The topics this time are a bit heavy and wide ranging, but still relevant for Leica photography.
The decline of the PC as a big force in the industry is imminent, as the merger of HP and Compaq indicates. The PC is now a simple commodity in the world of consumer electronics, and a low priced one, with a thin profit margin. Its role as an information machine is losing out to modern gadgets like mobiles, handhelds etc. And this is a normal development. It does show that consumer electronics all will follow the same path:
Innovation, mass production and acceptance, overtaken by another innovation and becoming a low priced commodity.
The question is what will be the fate of the digital camera, which, like it or not, is for most people an extension of the PC, as are a scanner and a printer.
Just as PC owners see a limit to upgrading to ever more processor speed, so digital cameras will settle at 5 to 6 million pixels, as this is all the average and even the ardent amateur will need or be willing to pay for.
The industry is now very keen on selling digicams because of the very enticing profit margins. Note that the average digital camera is based on the same camera chassis and production technology as the analogue version, which sells for a third of the price or even less. Is the chip so expensive, or the software? None of that: people pay a high price because the product is still in the early innovator stage of the product cycle.
But once it is largely adopted (as the PC is) and the product life cycle stretches to that of the video recorder, the price will drop, the margins will go down and the electronics industry will have a new challenge.
Do not believe for a moment that the digital camera companies have photography in mind when devising and selling the digicams.
In my view the fundamental change is the shift of industry from photographic to consumer electronic.
The classical photographic industry had a vested interest in the art and culture of photography as the tools they made and created were dedicated to photography.
The current electronic cameras can migrate to any type of capturing device, stillframe or moving. The consumer electronics do not foster a culture: video recorders and camcorders are in the same class as refrigerators.
Of course many will argue that it is possible to use a digital camera as a tool for photography: we can seek the decisive moment as well with any capturing device.
It is predicted that the Xmas season this year will see an avalanche of digital cameras in the lower price region and in the high tech class, and the overtaking of the sales of analogue cameras may be an economic indicator that the last fortress of analogue recording is crumbling.
In a sense photography will continue as an art form. A well composed picture with interesting content and fine colors and details, or a daring composition with new insights into the subject, can and will be made with the digital camera.
But what is fading is the craft of photography as a knowledge base and the experience to exploit the finer points of the medium. Walk around for a while in a range of newsgroups and you will see the frightfully thin knowledge base on which most people lean for information and opinion.
It is fully true that Cartier-Bresson was not a technician; he even boasted that he did not know a word of phototechnique. Still he knew intimately what the emulsion he used could do for him. And he exposed the film as an aid to the message he wanted to convey. This intimate knowledge of and rapport with the medium is part of the fascination of true photography.
The medium is still (part of) the message. And the Leica camera is at the core of the craft of photography. We all know that most Leicas are owned and/or used by amateurs, and if Leica had to live on sales to professional users, the cameras would be 5 times more expensive.
Leica has made a wise step in combining forces with Matsushita, putting their lens design expertise in one basket with a consumer electronics company, which also manufactures the Minilux for them.
But will the analogue line survive? Yes, if the craft of photography thrives in a world where image making increasingly means image manipulation. No, if the dominant force becomes the computer assisted creation of a digital image.
CAI (computer assisted image manipulation) is a fine tool, but as with all these terms (CAD, CAM etc.) the fallacy is to assume that the computer can make up for the deficiencies of the human operator. No lens design software can design a decent lens without heavy input from the user. So he still needs intimate knowledge of the theory of aberrations, even if the computer can optimise the lens without user control.
My point is that without intimate knowledge of the craft, no computer can do anything meaningful. Lose the basics of the photographic craft and no Photoshop can compensate.
The Leica is designed around the ideas and the tools of analogue photography. It is one of the finest instruments in this medium. If the medium dies, so will the Leica.
Leica is a brand name and the connotations are with a certain product. McDonald's is a fast food supplier. A car made by McDonald's would not sell, as this is not their area of expertise. Leica cars would not sell, nor Porsche cameras. A brand, as has been noted recently, is a necessary element in today's overcrowded product arena. A brand is and was based on one feature: a guarantee of reliability and quality. People return to the product if these aspects are guaranteed. A Leica badge on a product that does not live up to these standards will fail in the market. Consumers are not loyal to the brand name in itself. They are loyal to what the brand provides: quality, reliability and ease of choice.
Sigma lenses
Lens tests.
As I said in the previous letter, I am now testing a series of Sigma lenses to see where they stand. Sigma is one of the better third party suppliers and certainly a daring one, given the exotic specs of some of their designs. The comments I will make should be read in context, and I hope you will respect this, as I am unhappy with people selecting some statements to prove a point rather than seeing the whole story.
The test is done in two stages. First the results from the optical bench and the conclusions that can be drawn from these results. The next part is an assessment of how real life pictures look in practice.
I have to say at the start that some usual opinions will have to be relegated to the dustbin.
It is not true that this type of lens is always of inferior quality compared to first rank optics.
It is true that mechanical engineering is more important than optical design as such, as the potential of the design can only be put into practice with a mechanical design and construction that matches the optics.
It is true that quality control and the tolerance bandwidth are of significant influence on the price. As an example, all lenses had dust and specks above the level we are used to from Leica lenses.
All lenses are fitted with pressed aspherical elements, but this does not in itself translate into small size or superior performance.
The zoom 3.5-4.5/15-30mm.
A very big lens, with a deep blue coated front lens. Why this coating is used is not known, but it makes it very difficult to look into the lens. Its performance is excellent:
At 15mm and full aperture quite crisp definition at the center part of the image with the usual steep drop in quality at the outer zones. Amazingly low distortion and visible vignetting of normal proportions.
At positions 21, 24, 28/30 the quality stays in the same league or improves a bit. There is at the wider apertures a veiling glare and a visible amount of spherical aberration that softens the edges and brings some blur circles around the star images. There is overall a quite visible amount of chromatic aberrations, which shows in color fringes at the edges of points and lines. You would however see this only at larger enlargements.
Tilt and decentring of elements is also noticeable, but again quite low for a lens of this complexity.
All in all: a very fine design.
The zoom 2.8/28-70.
At positions 28 and 35 quite good quality in the center part of the image, with the same veiling glare and chromatic aberrations. Distortion is more visible at the long side of the negative, but not visible at the short side. At focal lengths 50 and 70 however the lens shows a pronounced distortion. There is also much wobbling in the lens groups that make up the design. This is only visible at some focal settings, as at other ones the tilt will cancel out. Here we see a phenomenon that often occurs with these zoom lenses: the AF movement needs to be very smooth and fast, and so the zoom groups (variator and compensator) are mounted with less secure mounts. Now you can understand why Leica's 2.8/35-70 was so heavy and difficult to assemble.
The 1.8/24mm macro.
A big lens, as usual with these SLR designs. This lens was a real surprise. Very good image quality at full aperture over most of the negative area. Minimal distortion, only a trace of veiling glare and of wide open aberrations. In fact this lens at 1.8 is as good as the Elmarit-R 2.8/24 at 2.8. Admittedly this Minolta originated design is 15 years old, but we can see where current optical design is heading. The mechanics of the Sigma lens felt a bit sloppy, but that is usual with many AF designs and in this case it did not affect the performance. There is some tilt and decentring that reduced contrast at the outer zones.
Stopped down it improved with a jump, as then the aberrations are reduced and the scattering of light is almost eliminated.
The 1.8/28mm macro.
This lens is a big one too. Optical performance was way below the Summicron 2/28 and below its 24mm companion, but still in itself of average to good quality. Many photographers would be happy with this quality. Again the fingerprint or family trait of veiling glare and chromatic aberrations, and in this case a higher level of decentring and tilt. The center of the image, where most people focus their attention, showed fine detail with the soft edges of colour fringing and spherical aberration. The outer zones showed a big drop in quality that improved when stopping down. Distortion was low, but with a complicated compensation pattern that twists in two directions.
Overall these lenses do indicate where the cost/performance equation lies:
Wide open performance is degraded by aberrations, due to a less stringent optical correction. But stopped down there is less to complain about. The mounts, mechanics and assembly are the parts where cost containment occurs. At least we know why we pay the premium for Leica lenses.
But generally the levelling of performance to a high standard, with some downward glitches, has been the trend of the last 10 years. A good design is now easy thanks to powerful design programs and many examples of how to design for good performance. Cost effective assembly is a strong point of these manufacturers, and they can afford the equipment to manufacture complex synthetic parts that ease assembly and whose high cost can be distributed over many tens of thousands of lenses. The other side of the coin is the wider tolerance band and the weak wide open performance. But there are some remarkable designs, and the 15-30 may be designated a truly good lens.
The next part is a report on the pictures we took in practice with real film and real objects.
Then we will give an overall assessment. These preliminary results are just a midpoint evaluation. And some finer points may change.
Best lens??
The statement in Pop Photo that the Planar 1.4/50 is the best 50mm lens ever and that the new 3.5/50 Elmar in the '0' series is the second best is remarkable and rash at the same time.
First some general remarks. It is not often done to lump all 50mm lenses into one basket and select the best one. It is well known that with larger apertures, the number and severity of aberrations increase with squared and cubed magnitudes. This increased amount cannot be corrected to the same level as can be done in lenses of less wide aperture. You can of course add more lens elements to offset the aberrations, but this equation has its own limits. In the past 50 years designers have found the best compromise with a six element f/2 design in the 50mm focal length. And all lenses that offered a wider aperture (1.4 or 1.2 or even 1.0) had a performance drop compared to the best f/2 designs. This almost iron law worked in the thirties, when the first wave of high-speed designs was created, and it works today with all major manufacturers. Nikon, Canon, Leitz and Leica never designed a 1.4/50 lens that equalled or surpassed their own best f/2 design at the widest opening. There are crossovers and generation changes of course, but as a general rule this works. I at least do not know (from practical tests or from the literature, where thousands of designs have been examined) of any 1.4 lens that at its widest opening (1.4) is better than the best f/2 lens at f/2.
The PopPhoto description "best" is not clear in its meaning, but I presume that they are talking about optical properties at all apertures and comparing the performance per aperture with other 50mm lenses. At best, a 1.4 design stopped down is as good as the 2.0 design at comparable apertures.
Now the Planar lens: for strict comparison of optical properties, the MTF is the best method as it represents the overall correction of all aberrations or in other words the residual aberrations.
PopPhoto uses its own derivative of this method with the SQF measurements.
As it is difficult to translate these curves into a useful interpretation of what you can expect in everyday photography with real film, these curves must be complemented by practical shootings. Given the nature of the photographic process and the many uncontrollable influences that every individual photographer inserts into his/her own imaging chain, the practical end result may or may not conform to the potential image quality. So it is all right for any photographer to interpret the end result as sufficient for his/her needs/desires, but it is not all right to promote this result into a statement that it is the final representation of the ultimate or feasible image quality.
If we now look at the MTF values of the Planar, we see a good, but not spectacular result. The 10 linepairs figure (which represents the overall image contrast or bite of the image) is quite low with 82% in the center and about 75% at image height 15mm. The 40 linepairs figure (representing the sparkle of the lens and the crispness of definition of very fine detail) is low with 40% in the center and 20% at 15mm image height. For f/5.6 the figures are 94%/94% and 65%/73%, which is very good for the 40 lp, but also for the 10 lp, given a 50mm focal length and angle. There is also a rapid rise of the contrast from center to the zonal area, indicating a strong presence of spherical aberration and focus shift.
I do not know what measurements and or pictures PopPhoto has used for their interpretative findings, but I can state on the basis of the MTF graphs (provided by Zeiss and these are widely regarded as honest), that the PopPhoto statement is questionable and even wrong. There are better figures for f/2 lenses in this focal length.
The Summicron 2/50 for M has at aperture 2 and at 10 lp/mm figures of 90%/85%, and at 40 lp/mm 50%/40%. These values are better than the ones for the Planar at 1.4.
Compare these data with the ones for the new Elmar 3.5/50 and we see the following pattern: at 5.6 the 10 lp/mm have a contrast of 95%/95% and the 40 lp/mm of 70%/70%, with a straight line. At 5.6 the Elmar is better by a margin than the Planar at 5.6. You cannot compare the widest apertures, as the Elmar's is 3.5 and its performance there is already close to that at 5.6.
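To keep the quoted figures straight, here they are collected in one place (a sketch; the pairs are simply the center / 15mm-image-height contrast percentages cited above, from the Zeiss and Leica graphs):

    # MTF contrast in % as (center, 15mm image height) at 10 and 40 lp/mm.
    mtf = {
        "Planar 1.4/50 at f/1.4":  {10: (82, 75), 40: (40, 20)},
        "Planar 1.4/50 at f/5.6":  {10: (94, 94), 40: (65, 73)},
        "Summicron 2/50 at f/2":   {10: (90, 85), 40: (50, 40)},
        "Elmar 3.5/50 at f/5.6":   {10: (95, 95), 40: (70, 70)},
    }
    for lens, figures in mtf.items():
        print(lens, figures)

Laid out like this, the Summicron at f/2 versus the Planar at f/1.4, and the Elmar versus the Planar at f/5.6, speak for themselves.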
This analysis does show that a comprehensive discussion is needed in order to paint the performance profile of a lens, that global statements like "this is the best lens" are dangerously low in real content, and that it is rash to compare lenses of the same focal length and widely different maximum apertures.
What aperture is good for Leica photography?
There seems to be a need to make short and easy statements to define or defend a position. The new Leica lens 3.5-4/21-35 (as seen on the Japanese website) has been quickly condemned by some as un-Leicalike in its aperture and so useless for the real photographer.
Now I do not know why there is such a strong feeling in the Leica community that you need to use the wider apertures of at least f/2 to qualify as a true Leica picture. This approach is even seen as a law: use a Tri-Elmar and you can no longer be taken seriously as a Leica photographer.
In my view there are many valid uses for Leica lenses at all apertures. And a picture with the fabulous 2.8/180 or 4/280 does in my view count as a legitimate Leica picture, even at aperture 4 or 5.6 or 8. And I often use my 2/90 AA at aperture 5.6 or even, my god, at 8. In some circles that counts as blasphemy, but I see lenses as instruments to deliver the performance that I need and not as icons or symbols of Leica lore.
The R8 has a wide range of excellent lenses in the 2.8 to 4 and slower category. And for the intended use of the camera an f/4 design fits in quite well. And I prefer an outstanding f/4 aperture in the 35mm focal length over a not so well executed 2/35mm design that I cannot use at the wider openings with the same confidence.
I sincerely wish that the Leica community would adopt a more tolerant stance. See lenses for what they are: optical instruments that are designed with a certain intention and goal and within a wide range of parameters, like size, weight, cost, performance etc.
Scanner basics?
Today I finished testing a number of recent slide scanners for 35mm film (in the 2900 and 4000dpi class).
I will not give you all details as this would require several pages.
Highlights are these:
The software is quite sophisticated and the scratch and dust removal programs are amazingly effective.
The density range: the often noted range of D3.6 to D3.8 and more is a purely calculated value, based on a bit depth of 8, 12, 14 or 16 bits.
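That calculated value is simple arithmetic: if every code value of the analogue-to-digital converter is counted as a distinguishable level, the theoretical density range is log10(2^bits). A sketch (Python):

    import math

    # Theoretical ('marketing') density range per bit depth.
    for bits in (8, 12, 14, 16):
        print(f"{bits} bits -> D = {bits * math.log10(2):.2f}")

A 12-bit converter gives the advertised D=3.6; it says nothing about what the CCD and its noise floor actually record.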
But what is the real measured value? I used a Kodak step wedge and measured the strips with my densitometer. The wedge ranged from 0.1 to 3.0.
I scanned it with scanners in both classes and found to my big surprise that every scanner could only register a maximum D of 2.1! Every step darker than this was not recorded by the scanners. They even refused to autofocus on that part of the wedge, as they could not see any density change. The difference between the 2900 and 4000 scanners is minimal. There is also hardly a significant change in value when using different bit depths.
Whatever the claims: I could not verify them. I also scanned real life slides from many film types and with different contrast ranges. The slides were chosen by photographers: real in-the-field persons who earn their living with photos, not stupid testers who do not have a clue of what is important. The slides were selected for the deep blackness of the shadows as seen with a loupe on a light table.
All scanners of both types could register the full tonal scale, and here there were some differences between marques. Most of them could indeed capture all the shadow detail, but not more than could be seen with the naked eye.
The solution is this: I measured the black parts of all slides with the densitometer, and the meter registered values of 1.97 to 2.07! So current slide film is not able to reproduce deep black beyond a Dmax of about 2.0, and this any current scanner can handle. If you live under the assumption that the black of a slide is around Dmax 3.0, you might be impressed by the dynamic range of the scanners. Truth is that both are around D=2.0, which does not even exceed a good black print.
Now another hot item: resolution. Again, the resolution figures quoted are calculated ones. I used a special slide with a very fine resolution pattern. The result: the best 2900 scanners could capture around 35 linepairs/mm and the 4000 scanners managed to get slightly above 40 linepairs. But this value is again lost in a 300dpi digital print.
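The nominal figures can be converted for comparison (a sketch; it assumes the ideal two samples per line pair and 25.4mm to the inch):

    # Nominal scanner ceiling: dpi -> lp/mm.
    for dpi in (2900, 4000):
        lp_mm = dpi / 25.4 / 2
        print(f"{dpi} dpi -> {lp_mm:.0f} lp/mm nominal")

So a 2900dpi scanner could in theory reach 57 lp/mm and a 4000dpi unit 79 lp/mm; the measured 35 and 40 lp/mm show how far practice lags behind the spec sheet.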
Generally the differences between 2900 and 4000 scanners are becoming very small, and most people would be smart to buy the much cheaper 2900 ones. The loss in image quality is hardly visible.
On all measurable aspects (resolution, bit depth, contrast range, shadow details etc.) the scanners were very close, with no significant winner. Good software will help.
The resolution slide also showed optical defects in the scanner lenses.
I once noted that for general handheld photography a resolution limit of 20 lp/mm would be quite good, and I was ridiculed by the experts. Having established that the best scanners are limited to 40 lp/mm and using slides made by reputable professional photographers, I could establish that the level of detail in the slides could be captured by the scanner. Thus the limit of resolution in the slide is below 40 lp/mm and on average quite below this value. Still the slides were regarded as quite good and critically sharp by the photographers.
Can a lens be corrected for BW only?
When researching the history of the Elmar 50mm, I noted several times that some authors remarked that the earlier Elmar was not designed to be fully colour corrected, while others stated it was. Elsewhere I heard that the Noctilux was designed for BW photography and not for colour.
The idea that lenses can be corrected for BW or colour photography is often found in the literature.
Can this be the case?
The visible light ranges from 400 to 700 nanometers. And optical glass always transmits the full spectrum. Colour correction of a design is a two part act. The most important one is the selection of the glass. That is the familiar story of the APO correction, which is done with the selection of glass (not APO glass, but glasses with certain characteristics that have to match to get the apo correction).
So whatever the design of the lens, the glass will transmit the full range. A good colour correction implies that you want all colours (at least the blue to red part, 450 to 650nm) to focus on one point on the axis (longitudinal colour) and also that the discs of confusion before and after the plane of focus are equally large (lateral colour). Now here comes the correction part. If we take a white point source and project it through the lens to the plane of focus, you will see a scattering of light rays of all colours. Sometimes the green rays are spread out most, sometimes the blue and sometimes the red, depending on correction and design. A designer now has a choice: he could neglect the scattering of the red rays, knowing the lens will be used with orthochromatic film. But then the blue rays are not well focused either. And when he corrects the blue rays, the red ones will improve too. In simple cases the designer just calculates the lens with green-yellow light, assuming that the other colours will behave nicely.
Of course when a lens is designed it is impossible to correct every part of the spectrum to the optimum. As an example, the deep blue light (430nm) is often neglected. And here we see a subtle difference between Mandler/Canadian lenses and Kolsch/Solms lenses. The Mandler designs are not so well corrected for the deep blue part as the Solms designs. Do we see this? Yes! When comparing a large scale projection of the Summicron 50mm, I can see a very small colour fringe of deep blue on small details, which is absent when using the Apo-Summicron 2/90. A minor point to be sure, but sometimes of value to know.
To return to the original question: I would say that you cannot design for a certain part of the spectrum. But you can be somewhat less stringent when correcting the lens and give some priority to one segment of the spectrum versus another.
Berek notes in his Leica Brevier that the Elmar is fully colour corrected, as BW emulsions are sensitive to the full spectrum or soon would be. And in his handbook of optical design (1930) he presents the Elmar as a design example and shows it to be colour corrected, as he calculates with the full spectrum.
It is true on the other hand that if a lens (by accident or purpose) is corrected not so well for the red part, then the use of this lens with ortho film will give slightly better results than with pan film.
Only with lenses of extremely high magnification, such as long telephoto lenses with very high inherent chromatic errors, do we find in the older design books that lenses were corrected for a specific part of the spectrum. But only for scientific use.
To conclude: the Elmar has been colour corrected from the start, and a correction philosophy for BW emulsions does not exist. You can optimise (or neglect) a certain spectral band for scientific purposes. But I have never seen an example in any of my books (old or new) of a lens that is specifically designed for a spectral band when being used for general photographic purposes.
The secret of the Leica magnifier 1.25.
There has been some discussion about the loss of viewfinder transmission when using the magnifier.
Generally there is always a loss in light transmission when there is magnification without compensation in the entrance pupil diameter. Look through an 8x42 binocular and a 10x42 and you may be able to notice a loss in light transmission.
This also works for the several viewfinder magnifications in the M bodies. The 0.58 has the finder with the highest clarity and the lowest flare factor. The 0.85 has a finder that is a bit dimmer and has a somewhat higher propensity to rf-patch flare. With a higher magnification the exit pupil diameter becomes smaller. The exit pupil of the RF patch has a diameter that is close to the average diameter of the eye's pupil. But when you focus in the dark, the pupil of the eye is enlarged and becomes wider than the RF exit pupil, which causes a slight loss of the transmission values. So in general, when you focus in dim (non-)available light with the Leica RF, you will experience a loss of brightness, as outlined above. This loss is most likely compensated by the eye's sensitivity, as no one has ever complained about it. The loss is greater with the 0.85 than with the 0.58.
Now the magnifier. As is the case with every magnifier (remember the Apo-Extenders?), there will be a loss of brightness with the magnification. With the 1.25 magnifier it is 1.25^2 (1.25 squared), that is 1.5625 (calculated with the HP 41, the Leica among calculators). That works out to about 0.64 stop, between a half and two-thirds, and certainly not the full stop mentioned in recent discussions. Adding A and B, we may note that the 0.58 finder with a 1.25 magnifier has the same brightness as the unaided 0.72 finder. And the 0.72 with magnifier is equal to the unaided 0.85 finder.
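The arithmetic in one sketch (Python; only the inverse-square light loss is modelled, nothing else about the finder):

    import math

    # Brightness loss of a finder magnifier: light per unit area falls
    # with the square of the magnification.
    m = 1.25
    factor = m ** 2                       # 1.5625x less light
    print(f"{factor}x loss = {math.log2(factor):.2f} stop")
    # Effective magnifications: 0.58 * 1.25 = 0.725 (close to 0.72),
    # 0.72 * 1.25 = 0.90 (in the region of the 0.85 finder).

The exact figure is 0.64 stop; in any case far from a full stop.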
I hope this clarifies the issues involved. A loss of this order, by the way, is not noticed at all when we study and analyse the natural vignetting of a lens.