Sensor and lens matching
The interaction between an imaging sensor and its lens is a key part of designing and implementing machine vision systems. Optimizing this relationship is often overlooked, yet it strongly affects overall system resolution. An incorrectly paired camera and lens wastes money and leaves performance on the table. Choosing the appropriate camera and lens combination has become increasingly complex as new sensors and lenses are developed to exploit manufacturing advances and improve performance. New sensors introduce challenges for optics and make correct pairings less obvious.
Nyquist-frequency imaging
Imaging at the Nyquist frequency may seem attractive; for a sensor with pixel pitch p, the Nyquist frequency is half the sampling frequency, i.e. 1/(2p), one line pair per two pixels. However, imaging there is generally inadvisable because it means the feature of interest falls on exactly one pixel. If the imaging system moves by half a pixel, that feature may fall between two pixels and become completely blurred. Assuming no subpixel interpolation, imaging at about half the Nyquist frequency is typically suggested instead, since that ensures a feature of interest always spans at least two pixels.
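The relationship between pixel pitch and Nyquist frequency can be sketched directly; the pixel pitches below are the ones used as examples later in this article:

```python
def nyquist_frequency_lp_mm(pixel_pitch_um: float) -> float:
    """Sensor Nyquist frequency in line pairs per millimetre.

    The sampling frequency is 1 / pixel_pitch; Nyquist is half of that,
    i.e. one line pair (one light and one dark line) per two pixels.
    """
    pixel_pitch_mm = pixel_pitch_um / 1000.0
    return 1.0 / (2.0 * pixel_pitch_mm)

for pitch in (2.2, 3.45, 7.4):
    print(f"{pitch} um pixels -> {nyquist_frequency_lp_mm(pitch):.1f} lp/mm")
```

Halving this frequency, i.e. allotting two pixels per feature, is the working guideline suggested above.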
Another common but incorrect assumption is that a lens is unsuitable for a given sensor unless the lens has a relatively high contrast (>20%) at the sensor's Nyquist frequency. As noted above, imaging at the Nyquist limit is unwise and can introduce several problems. The whole imaging chain must be examined to determine whether a lens is suitable for a particular sensor, and the correct choice often depends on the application. The next sections discuss what happens in imaging systems at or near the Nyquist frequency and how this affects overall system resolution.
How sensor and lens interact
As pixel sizes continue to shrink, a primary challenge is that smaller pixels do not always translate to higher real-world system resolution once the optical elements are considered. In an ideal world without diffraction or optical aberrations, resolution would be based solely on pixel size and object size: as pixel size decreases, resolution increases. However, this simplified model ignores noise and other parameters.
Lenses also have resolution specifications expressed as modulation transfer function (MTF), but the underlying principles are less concrete than pixel size. When light passes through an aperture, diffraction always occurs and reduces contrast. Optical aberrations, present to varying degrees in every lens, blur or displace image information depending on aberration type. For fast lenses (f/4 or faster), optical aberrations are often the reason a system departs from the diffraction-limited “ideal”; in most cases the lens will perform poorly at its theoretical cutoff frequency.
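The diffraction-limited "ideal" mentioned above can be made concrete. For incoherent illumination, the cutoff frequency beyond which a lens transmits no contrast is 1/(λ · f/#). The sketch below assumes green light at 0.520 μm, a common reference wavelength; that value is an assumption, not taken from this article:

```python
def diffraction_cutoff_lp_mm(f_number: float, wavelength_um: float = 0.520) -> float:
    """Incoherent diffraction-limited cutoff frequency in lp/mm: 1 / (lambda * f/#)."""
    wavelength_mm = wavelength_um / 1000.0
    return 1.0 / (wavelength_mm * f_number)

# An f/4 lens in green light cuts off near 481 lp/mm; real lenses perform
# well below this theoretical limit because of aberrations.
print(f"f/4 cutoff: {diffraction_cutoff_lp_mm(4.0):.0f} lp/mm")
```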
As pixel frequency increases (pixel size decreases), contrast seen at the sensor declines; every lens follows this trend. But this does not fully represent the lens's real hardware performance. Lens manufacturing tolerances and consistency also affect aberration content, so real-world performance can differ from nominal design data. Estimating real-world lens performance from nominal specifications alone can be misleading; laboratory testing helps determine whether a specific lens and sensor are compatible.
MTF, sensor MTF, and system MTF
One method to evaluate lens performance on a particular sensor is the USAF 1951 bar target. Bar targets are better than star targets for assessing lens/sensor compatibility because their features align well with square pixel arrays. The examples below show images taken with the same high-resolution 50 mm lens under identical illumination on three different sensors. Each image is compared with the lens's nominal on-axis MTF curve. Only the on-axis curve is used here because the regions of interest cover only a small portion of the sensor center.
For example, when pairing a 50 mm lens with a 1/2.5" sensor having 2.2 μm pixels (at 0.177x magnification), the sensor's Nyquist frequency implies a theoretical minimum resolvable feature size of 12.4 μm. However, these calculations do not include contrast values. In that case, imaging the target at the sensor's Nyquist frequency produced only 8.8% contrast, which is below the commonly recommended minimum contrast of 20% for reliable imaging. Doubling the feature size to 24.8 μm increased contrast nearly threefold. Practically, the system is much more reliable when imaging at about half the Nyquist frequency.
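The feature-size figures above follow from pixel pitch and magnification alone; a minimal sketch, using the values from the example in the text:

```python
def min_feature_object_um(pixel_pitch_um: float, magnification: float,
                          pixels_per_feature: int = 1) -> float:
    """Object-space feature size that maps onto `pixels_per_feature` sensor pixels."""
    return pixels_per_feature * pixel_pitch_um / magnification

# 2.2 um pixels at 0.177x magnification:
one_px = min_feature_object_um(2.2, 0.177)      # ~12.4 um (Nyquist-limited feature)
two_px = min_feature_object_um(2.2, 0.177, 2)   # ~24.9 um (two pixels per feature)
print(f"{one_px:.1f} um, {two_px:.1f} um")
```

The article quotes the doubled size as 24.8 μm, i.e. twice the rounded 12.4 μm figure.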
Thus a system may be mathematically capable of resolving a 12.4 μm feature according to resolution equations, but in practice it cannot reliably image that feature because the contrast is too low. This discrepancy highlights that first-order calculations and approximations are insufficient to determine if an imaging system can meet a target resolution. Nyquist-based calculations should be used only as a guideline to indicate limits, not as a definitive predictor of system resolution. An 8.8% contrast is too low to be considered robust because small variations in conditions can reduce contrast below a discernible level.
Other example sensors, such as a Sony ICX655 with 3.45 μm pixels and an On Semiconductor KAI-4021 with 7.4 μm pixels, produced images with contrast above 20% for the same lens and illumination. Although these sensors resolve larger minimum feature sizes than the 2.2 μm sensor, imaging at their respective Nyquist frequencies is still inadvisable, because small object motion can shift features between pixels and cause loss of resolution. Note also that, when comparing one pixel per feature against two pixels per feature, increasing the pixel size from 2.2 μm to 3.45 μm and then to 7.4 μm changes the contrast relatively little.
An important difference shown in the examples is the disparity between the lens's nominal MTF and the actual contrast observed in images. In one case, the lens nominally should provide about 24% contrast at a certain frequency, yet the captured image showed only 8.8%. The main causes of such discrepancies are the sensor MTF and lens tolerances. Most sensor manufacturers do not publish sensor MTF curves, but sensor MTFs have similar shapes to lens MTFs. Because system MTF is the product of the MTFs of all components, the lens and sensor MTFs must be multiplied to obtain a more accurate estimate of system-level resolution.
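The multiplication of component MTFs can be sketched numerically. Since sensor manufacturers rarely publish MTF curves, a common idealization (an assumption here, not manufacturer data) models a square pixel as a sampling aperture with a sinc-shaped MTF:

```python
import math

def sensor_mtf(freq_lp_mm: float, pixel_pitch_um: float) -> float:
    """Idealized square-pixel sensor MTF: |sinc| of (frequency * pixel pitch)."""
    x = math.pi * freq_lp_mm * (pixel_pitch_um / 1000.0)
    return 1.0 if x == 0.0 else abs(math.sin(x) / x)

def system_mtf(lens_contrast: float, freq_lp_mm: float, pixel_pitch_um: float) -> float:
    """System MTF as the product of lens and (modeled) sensor MTF."""
    return lens_contrast * sensor_mtf(freq_lp_mm, pixel_pitch_um)

# Nominal 24% lens contrast at the 2.2 um sensor's Nyquist frequency (~227 lp/mm):
nyq = 1.0 / (2.0 * 0.0022)
print(f"{system_mtf(0.24, nyq, 2.2):.3f}")  # ~0.153
```

Under this model the nominal 24% lens contrast falls to roughly 15% at the system level; the remaining gap to the measured 8.8% is consistent with the lens-tolerance effects discussed next.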
Lens tolerance MTF further shifts real performance away from nominal curves. All these factors together change the system's expected resolution, so a lens MTF curve by itself is not an accurate representation of system-level resolution.
Practical recommendations
Examples indicate that the best system-level contrast appears with larger pixels; contrast drops significantly as pixel size decreases. A useful guideline is to use 20% as a minimum contrast threshold for machine vision systems because contrast below this level is overly susceptible to noise fluctuations from temperature changes or lighting crosstalk. In the earlier example, the image captured with a 50 mm lens and 2.2 μm pixels had only 8.8% contrast, so the lens would likely be the limiting factor for features corresponding to that pixel size, and image data would be unreliable.
Very small pixels remain useful. The fact that the optics cannot fully resolve a single pixel does not make small pixels useless. For some algorithms, such as particle analysis or optical character recognition, what matters is how many pixels can be placed on a given feature, not whether the lens can resolve a single pixel. Smaller pixels can also reduce the need for subpixel interpolation, improving measurement precision. And when switching to a color camera with a Bayer filter, smaller pixels make the relative loss in resolution less severe.
If it is essential to observe features at single-pixel scale, it is generally better to double the optical magnification, which halves the field of view: each feature then covers twice as many pixels and contrast increases significantly, at the expense of reduced field coverage. From the image-sensor perspective, a complementary approach is to keep the same pixel size but double the sensor format: a larger-format sensor with the same 2.2 μm pixels, used at the doubled magnification, restores the original field of view while keeping the finer spatial sampling and higher theoretical contrast. However, increasing sensor size introduces cost and optical design challenges: lenses for larger formats require more and larger optical elements and tighter tolerances, often increasing lens cost substantially even if nominal pixel-limit specifications remain similar.
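The magnification/format trade-off reduces to first-order arithmetic. The sensor widths below are illustrative assumptions (5.7 mm is roughly a 1/2.5" format width; 11.4 mm stands in for a hypothetical doubled format):

```python
def fov_mm(sensor_width_mm: float, magnification: float) -> float:
    """Horizontal field of view at the object: sensor width / magnification."""
    return sensor_width_mm / magnification

def object_sampling_um(pixel_pitch_um: float, magnification: float) -> float:
    """Object-space size covered by one pixel."""
    return pixel_pitch_um / magnification

base = fov_mm(5.7, 0.177)        # original format at original magnification
zoomed = fov_mm(5.7, 0.354)      # doubled magnification: field of view halves
restored = fov_mm(11.4, 0.354)   # doubled format restores the field of view
print(f"{base:.1f} mm, {zoomed:.1f} mm, {restored:.1f} mm")
print(f"sampling: {object_sampling_um(2.2, 0.177):.1f} um -> "
      f"{object_sampling_um(2.2, 0.354):.1f} um per pixel")
```

The doubled-format option keeps the original coverage while sampling the object twice as finely, which is exactly the trade described above.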