
A Practical Guide to

Machine Vision Lighting


It is well understood that properly designed lighting is critical for implementing a robust and timely vision inspection system. A basic understanding of illumination types and techniques, geometry, filtering, sensor characteristics, and color, together with a thorough analysis of the inspection environment, including part presentation and object-light interactions, provides a solid foundation for designing an effective vision lighting solution. A rigorous lighting analysis yields a consistent and robust solution framework, thereby maximizing the use of time, effort, and resources.



Perhaps no other aspect of vision system design and implementation has consistently caused more delays, cost overruns, and general consternation than lighting. Historically, lighting was often the last aspect specified, developed, and/or funded, if at all. This approach was not entirely unwarranted: until recently there was no real vision-specific lighting on the market, and lighting devices were often consumer-level incandescent or fluorescent products.

The objective of this paper, rather than to dwell on theoretical treatments, is to present a “Standard Method for Developing Feature Appropriate Lighting”. We will accomplish this goal by detailing relevant aspects, in a practical framework, with examples, where applicable, from the following three areas:

  1. Familiarity with the following four Image Contrast Enhancement Concepts of vision illumination:
    • Geometry
    • Pattern, or Structure
    • Wavelength
    • Filters
  2. Detailed analysis of:
    • Immediate Inspection Environment – Physical constraints and requirements
    • Object – Light Interactions with respect to your unique parts, including ambient light
  3. Knowledge of:
    • Lighting types, and application advantages and disadvantages
    • Vision camera and sensor quantum efficiency and spectral range
    • Illumination Techniques and their application fields relative to surface flatness and surface reflectivity

When we accumulate and analyze information from these three areas, with respect to the specific part/feature and inspection requirements, we can achieve the primary goal of machine vision lighting analysis: to provide object- or feature-appropriate lighting that consistently meets two Acceptance Criteria:

  1. Maximize the contrast on those features of interest vs. their background
  2. Provide for a measure of robustness

As we are all aware, each inspection is different. It is therefore possible, for example, for a lighting solution that meets Acceptance Criterion 1 alone to be effective, provided there are no inconsistencies in part size, shape, orientation, placement, or environmental variables such as ambient light contribution (Fig. 1).


Fig 1
Cellophane wrapper on a pack of note cards. a – Meets both Acceptance Criteria, b – Meets only Criterion 1. In this circumstance, the “wrinkle” is not precluding a good barcode read, but what if the wrinkles were in a different place, or more severe, in the next pack on the line?
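Acceptance Criterion 1 can also be checked numerically. The sketch below is a minimal, illustrative approach (the patch values and the choice of Michelson contrast as the metric are our assumptions, not taken from any particular vision library) that scores the separation between a feature and its immediate background:

```python
import numpy as np

def michelson_contrast(feature: np.ndarray, background: np.ndarray) -> float:
    """Contrast between mean feature and mean background gray levels.

    Returns a value in [0, 1]; higher means the feature stands out more.
    """
    f, b = float(np.mean(feature)), float(np.mean(background))
    if f + b == 0:
        return 0.0
    return abs(f - b) / (f + b)

# Illustrative 8-bit gray-level patches (values are made up):
feature = np.array([200, 210, 205])      # bright barcode bars
background = np.array([40, 45, 50])      # dark substrate
print(round(michelson_contrast(feature, background), 2))  # -> 0.64
```

In practice the patches would be sampled from the acquired image, and Criterion 2 (robustness) would be assessed by tracking this score across many parts and presentations.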

Review of Light for Vision

For purposes of this discussion, light may be defined as photons propagating as an oscillating transverse electromagnetic energy wave, characterized by both magnetic and electric fields, vibrating at right angles to each other – and to the direction of wavefront propagation – (Fig. 2).


    Fig 2
    Oscillating transverse propagating wavefront with magnetic and electric fields.

    The electromagnetic spectrum encompasses a wide range of wavelengths, from Gamma Rays on the short end to Radio Waves on the long end, with the UV, Visible, and IR ranges in the middle. For purposes of this discussion, we will concentrate on the Near UV, Visible, and Near IR regions (Fig. 3).


    Light may be characterized and measured in several ways:

    1. Measured “Intensity”:
      • Radiometric: Unweighted measures of optical radiation power, irrespective of wavelength, in Watts (W) per unit area
      • Photometric: Perceived light power of radiometric measures – weighted to the human visual spectral response and confined to the “human visible” wavelengths – in lux (lx = lumens/m²)
    2. Frequency: Hz (waves/sec)
    3. Wavelength: Expressed in nanometers (nm) or microns (um)

    In machine vision applications, we tend to express light in wavelength units (nm) rather than frequency; therefore, we will stress the relationships among light wavelength, frequency, and photon energy. All three of these properties are related through the speed of light, as expressed by the following two equations:

    1. c = λƒ
    2. c = λE / h (Planck’s Equation), where:
      c = speed of light
      λ = wavelength
      ƒ = frequency, expressed in Hz
      E = photon energy
      h = Planck’s constant

    Combining the two formulas by canceling c and solving for E, we arrive at the relationship known as the Planck – Einstein equation:

    E = hƒ

    From these manipulations we see there are two important relationships that we can use to our advantage when applying different wavelengths to solving lighting applications:

    1. wavelength and frequency are inversely proportional ( λ ~ 1 / ƒ )
    2. wavelength and photon energy are inversely proportional ( λ ~ 1 / E )

    From a practical application standpoint, we can best apply these two relationships to assist in creating feature-specific image contrast, particularly when we analyze how light of different wavelengths interacts with surfaces.
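As a quick numeric illustration of these relationships, the following sketch (constants rounded; the function names are ours) computes frequency and photon energy for a few common LED wavelengths:

```python
# Physical constants (SI units, rounded)
C = 2.998e8        # speed of light, m/s
H = 6.626e-34      # Planck's constant, J*s

def frequency_hz(wavelength_nm: float) -> float:
    """f = c / wavelength  (Eq. 1 rearranged)."""
    return C / (wavelength_nm * 1e-9)

def photon_energy_j(wavelength_nm: float) -> float:
    """E = h * f  (the Planck-Einstein relation)."""
    return H * frequency_hz(wavelength_nm)

# Shorter wavelength -> higher frequency and higher photon energy:
for nm in (470, 660, 880):   # blue, red, and near-IR LEDs
    print(nm, f"{frequency_hz(nm):.3e} Hz", f"{photon_energy_j(nm):.3e} J")
```

Running this confirms both inverse relationships: the 470 nm blue photon carries roughly twice the energy of the 880 nm near-IR photon.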

    Additionally, it is important to note that the human visual system and CCD/CMOS or film-based cameras differ widely in two important respects: Photon sensitivity and wavelength range detection (See Figs. 4, 5).


    Fig. 4: Relative human daytime vs. night-adapted vision sensitivity.


    Fig 5
    Human visual data superimposed on a typical NIR enhanced CCD camera.

    Vision Illumination Sources

    The following lighting sources are now used in machine vision:

    • Fluorescent
    • Quartz Halogen – Fiber Optics
    • LED – Light Emitting Diode
    • Metal Halide (Mercury)
    • Xenon (Strobe)

    LED, fluorescent, quartz-halogen and Xenon (Figs. 6a-j) are by far the most widely used lighting types in machine vision, particularly for small to medium-scale inspection stations, whereas metal halide and Xenon are more often deployed in large-scale applications, or applications requiring a very bright source. Metal halide, also known as mercury, is often used in microscopy because it offers many discrete wavelength peaks, which complements the use of filters for fluorescence studies.


    Fig. 6:  a – LED lights, b – Fluorescent ring, c – Fluorescent tubes, d – Quartz Halogen bulb source, e – Quartz Halogen system with Fiber Optic ring.  Xenon Strobe Source firing sequence:  f – Off, g through i – Sequential power-up, j – Full power output.

    A Xenon source is useful for applications requiring a very bright, strobe light. Fig. 7 shows the advantages and disadvantages of Xenon, fluorescent, quartz halogen, and LED lighting sources, in accordance with relevant selection criteria, as applied to machine vision. For example, whereas LED lighting has a longer life expectancy, fluorescent lighting may be the most appropriate choice for a large-area inspection because of its lower cost per unit illumination area – depending on the relative merits of each to the application.


    Fig 7
    Comparison and contrast of common vision lighting sources.

    Historically, fluorescent and quartz halogen lighting sources were most often used for machine vision applications. However, over the last 15 years, LED technology has consistently improved in stability, intensity, efficiency, and cost-effectiveness to the extent that it is now accepted as the de facto standard for almost all mainstream applications. On the other hand, fiber-coupled quartz halogen sources are still a go-to solution for many microscopy/lab applications. While LED sources do not yet provide the same levels of intensity and output/price performance as Xenon strobe sources, high-speed image capture is still possible with LED-based strobe lighting, as evidenced by the images in Fig. 8, showing a pellet breaking a light bulb and cutting a playing card.


    Fig. 8:  Air rifle pellet through a light bulb and playing card, ca.  2009

    Unless otherwise indicated, the examples and results demonstrated in this document were generated using LED sources rather than the other aforementioned source types; however, many of these results could also be adequately replicated with other sources.

    Understanding Radiometric and Photometric Measurement

    As elaborated earlier, light “intensity” is expressed radiometrically or photometrically. The machine vision industry has often followed commercial lighting practice, specifying source-only power in Watts (radiometric) or lumens (photometric), whether for white light or monochromatic sources (red, green, blue). The two primary pitfalls when evaluating lights based on such specifications are:

    1. Source power only: No information about the amount of light actually cast on the object.
    2. Comparing, on paper, white light intensity vs. that of monochromatic light in photometric, rather than radiometric, values.

    From a practical viewpoint, a machine vision engineer or technician benefits most, conceptually, when they can compare light intensities on an object in the real world: at a known light working distance (WD) for front lighting, and at the emitting surface for back lighting, as opposed to a simple source-only power specification.

    The power specification of a lighting source (in W or lm), by definition, offers neither information about the light intensity at a distance nor light travel geometry – is it a spherical source, like the Sun, radiating light in a spherical front, or focused in a direction, like a flashlight? As we all know, how light is or is not focused from a source plays a major role in the intensity available on the surface of an object we might be inspecting – even if the source-only intensities are otherwise identical.

    It is advantageous, therefore, to use a light intensity specification that takes light travel geometry into account: Irradiance (W/m²) or Illuminance in lux (lx = lm/m²). For these reasons we will use the more machine-vision-appropriate term, “radiant power”, when referring to the “amount of light on a surface”.
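To see why light travel geometry matters, consider the idealized isotropic point source below. The inverse-square fall-off is textbook physics; real vision lights with lenses or diffusers will deviate from it, so treat this as a conceptual sketch only:

```python
import math

def irradiance_w_m2(source_power_w: float, distance_m: float) -> float:
    """Irradiance from an idealized isotropic point source.

    The power spreads over a sphere of area 4*pi*d^2, so irradiance
    falls off with the inverse square of the working distance.
    """
    return source_power_w / (4.0 * math.pi * distance_m ** 2)

# Same 1 W source: doubling the working distance quarters the irradiance.
e1 = irradiance_w_m2(1.0, 0.1)   # 100 mm WD
e2 = irradiance_w_m2(1.0, 0.2)   # 200 mm WD
print(round(e1 / e2, 1))         # -> 4.0
```

A focused source (a flashlight rather than the Sun) concentrates the same Watts into a narrower solid angle, which is precisely why a source-only power figure says nothing about radiant power on the part.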

    (Please see Appendix B – Extended Topic 1 – for a more technically detailed examination of lighting “intensity”, involving both source and light travel geometry concepts.)

    The key cause of confusion when understanding and applying radiometric vs. photometric radiant power specifications, especially when comparing white against monochromatic sources (or monochromatic sources against each other), is that photometric source output is weighted to the human eye’s response to color, as we saw in Fig. 4. Because humans do not see IR or UV light, for example, a photometric specification lists their intensities as “0” – not practical for comparison purposes.

    Consider the following comparison of radiometric vs. photometric light “intensity” in the table depicted in Fig 9:


    Fig. 9:
    Comparison of nominal 1W radiant power devices, listing radiometric vs. photometric measurements. Note that all three wavelengths offer the same source power (1W), but when weighted to the human visual response, the red and especially the blue appear to be low-power, compared with the yellow-green source, which corresponds to peak human eye daytime sensitivity.

    This information is not incorrect; it is simply easy to misinterpret without proper context and understanding.
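The weighting that produces numbers like those in Fig. 9 can be sketched as follows. The V(λ) values are approximate CIE photopic luminosity figures, and the 683 lm/W constant applies at the 555 nm sensitivity peak:

```python
# Approximate CIE photopic luminosity values V(lambda); peak = 1.0 at 555 nm
V = {470: 0.091, 555: 1.000, 660: 0.061}

def luminous_flux_lm(radiant_watts: float, wavelength_nm: int) -> float:
    """Photometric flux = 683 lm/W * V(lambda) * radiometric power."""
    return 683.0 * V[wavelength_nm] * radiant_watts

# Three nominal 1 W sources: equal radiometric power, very different
# "brightness" once weighted to the human eye's daytime response.
for nm in (470, 555, 660):
    print(nm, round(luminous_flux_lm(1.0, nm), 1), "lm")
```

The yellow-green source lands near 683 lm, while the blue and red sources come out well under 100 lm despite identical radiometric power, which is exactly the misinterpretation hazard described above.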

    We can see a real-world example of how easy it is to be misled when selecting the most appropriate wavelength for an application. Consider a vision tech who has been tasked with selecting the LED wavelength with the “brightest” output because the application is suspected of being light-starved. For example, the application could require very short exposure times to freeze motion at high object speeds, which forces an increase in light intensity to compensate for the shorter light collection time.

    The vision tech then locates the following graphic (Fig. 10) of LEDs specified photometrically (human-vision weighted). It is very easy to select the green 565 nm LED based on the listed relative intensities.


    Fig. 10:  Photometrically specified intensity of several monochromatic LEDs.  Based on the available information, the green 565 nm LED would appear to be the best choice to maximize radiant power on the intended target.

    However, if we view the same LED intensities specified in radiometric (unweighted) terms, we potentially have quite a different selection outcome (Fig. 11). From the standpoint of unweighted radiant power output, the IR LEDs might seem the obvious choice, but we must also consider the camera sensor’s Quantum Efficiency (QE) curve, illustrated by the green line: the camera is not particularly IR sensitive, so IR is not a viable solution.


    Fig. 11:  The same LEDs from Fig. 10, radiometrically specified. Note the different relative radiant powers of these LEDs, including the IR, compared with the photometric specification.

    Additionally, the data in Fig. 11 show the stark difference in actual radiant power of the shorter wavelengths (green 525 nm and blue 470 nm) when specified radiometrically vs. photometrically. The camera is also most sensitive near the blue 470 nm (with peak sensitivity slightly above that wavelength). Therefore, the obvious choice, offering the most radiant power that is also most efficiently collected, is the blue 470 nm light.

    For these reasons, there are a few rules-of-thumb to consider when comparing specified intensities:

    1. Compare monochromatic vs. all wavelengths, including white light, by irradiance (radiometric).
    2. Compare white vs. white light intensity by illuminance or irradiance.
    3. Compare intensities in the same units and measured at the same WD.
    4. Consider your camera sensor’s QE when making wavelength selections.

    The Table in Fig. 12 graphically summarizes the above information.


    Fig 12
    Graphical depiction of suitable Light Intensity Specification
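Rule 4 can be illustrated with a small sketch. Both tables below are hypothetical numbers standing in for values you would read off your LED and sensor datasheets; the logic mirrors the Fig. 10/11 example, where the IR LED wins on raw output but loses once sensor QE is applied:

```python
# Hypothetical radiometric outputs (mW) and sensor QE at each wavelength.
# Both tables are illustrative placeholders for datasheet values.
radiant_mw = {470: 30.0, 525: 18.0, 565: 6.0, 625: 25.0, 850: 45.0}
sensor_qe  = {470: 0.62, 525: 0.58, 565: 0.55, 625: 0.45, 850: 0.10}

def effective_signal(nm: int) -> float:
    """Radiant power actually converted to photoelectrons (relative)."""
    return radiant_mw[nm] * sensor_qe[nm]

best = max(radiant_mw, key=effective_signal)
print(best)  # the 850 nm IR LED loses despite the highest raw output
```

With these illustrative numbers, the blue 470 nm LED is selected, even though the IR LED has 50% more raw radiant power.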

    The Light Emitting Diode

    A light emitting diode (LED) may be defined as a semiconductor-based device that converts electrical current into photons: when current is applied, electrons crossing the material’s band gap between the conduction and valence bands release their energy as wavelength-specific photons, so the emitted wavelength (referred to as color in the visible range) is determined by the band-gap energy. The efficiency of this electron-to-photon conversion is reported as source luminous efficacy, often expressed in lm/W.
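The band-gap relationship can be sketched numerically via λ = hc/E. The 2.64 eV transition energy below is an illustrative value chosen to land near a typical blue LED wavelength, not a datasheet figure:

```python
H_EV_S = 4.1357e-15   # Planck's constant, eV*s
C_NM_S = 2.998e17     # speed of light, nm/s

def emission_wavelength_nm(transition_energy_ev: float) -> float:
    """lambda = h*c / E: photon wavelength from the band-gap energy."""
    return H_EV_S * C_NM_S / transition_energy_ev

# An effective transition energy of ~2.64 eV (illustrative) gives blue light:
print(round(emission_wavelength_nm(2.64)))  # -> 470 (nm)
```

Note the inverse relationship again: a smaller band gap yields a longer (redder) emission wavelength.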

    An important concept for specifying monochromatic LED performance, particularly in scientific disciplines, is spectral full-width, half-max (FWHM). Basically, this is a measure of the spectral curve’s width, specified at the 50% intensity point after subtracting the noise floor from the total spectral curve height (please see Appendix B – Extended Topic 2 for a more detailed discussion of spectral and image intensity FWHM).
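A minimal FWHM computation over a sampled spectrum might look like the following. The spectrum here is a synthetic Gaussian standing in for a real spectrometer measurement, and the noise-floor handling is the simple minimum-subtraction described above:

```python
import numpy as np

def spectral_fwhm_nm(wavelengths: np.ndarray, intensity: np.ndarray) -> float:
    """Full width at half maximum of a sampled spectral curve.

    Subtracts the noise floor (curve minimum) first, then finds the two
    50%-crossings by linear interpolation between neighboring samples.
    """
    y = intensity - intensity.min()          # remove noise floor
    half = y.max() / 2.0
    above = np.where(y >= half)[0]
    left, right = above[0], above[-1]

    def cross(i0, i1):
        # Interpolate the wavelength where y passes through `half`.
        return np.interp(half, [y[i0], y[i1]], [wavelengths[i0], wavelengths[i1]])

    lam_l = cross(left - 1, left) if left > 0 else wavelengths[0]
    lam_r = cross(right + 1, right) if right < len(y) - 1 else wavelengths[-1]
    return float(lam_r - lam_l)

# Synthetic Gaussian LED spectrum centered at 470 nm with sigma = 10 nm;
# a Gaussian's FWHM is 2.355 * sigma, so we expect about 23.5 nm.
lam = np.linspace(400, 540, 1401)
spec = np.exp(-0.5 * ((lam - 470) / 10.0) ** 2)
print(round(spectral_fwhm_nm(lam, spec), 1))  # -> 23.5
```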

    Available LED wavelengths, radiant power, and luminous efficacy have increased rapidly in the last 20 years, including white LED development spurred on by the commercial lighting industry. It should be noted that no LED die directly generates a visible white spectrum; rather, a blue LED provides the excitation wavelength that creates a secondary emission (fluorescence) from yellow phosphors under the LED lens. For this reason, the commercial introduction of “white” LEDs had to await the perfection of blue LED technology in the mid-1990s.

    Similarly, LED design has evolved over the years, particularly with respect to thermal management. Although LEDs are very efficient generators of light, they are not 100% efficient, and as radiant power has increased, so has the need for localized LED thermal management (see Fig. 13).


    Fig 13
    From L to R: Early T1¾ epoxy package, surface mount “chip” LED (both courtesy of Sun LED); high-current LED with modern ceramic substrate, older “Power” LED with metallic thermal base. (Courtesy of Cree and Philips, respectively).

    It is important to consider not only a source’s brightness, but also its spectral content (Fig. 14). Fluorescence microscopy applications, for example, often use a full spectrum metal-halide (mercury) source, particularly when imaging in color; however, specific wavelength monochromatic LED sources are also useful for narrow-wavelength output biomedical requirements using either a color or B&W camera.


    Fig. 14: Light Source Relative Intensity vs. Spectral Content. Bar at bottom denotes approximate human visible wavelength range.

    In applications requiring high light intensity, such as high-speed inspections, it may be useful to match the source’s spectral output with the spectral sensitivity of the proposed vision camera (Fig. 15). For example, CMOS sensor-based cameras are more IR sensitive than their CCD counterparts, imparting a significant sensitivity advantage in light-starved inspection settings when using IR LED or IR-rich Tungsten sources.

    Additionally, the information in Figs. 14-15 illustrates several other relevant points to consider when selecting a camera and light source.

    • Attempt to match your camera sensor’s peak spectral efficiency with your lighting source’s peak wavelength to take the fullest advantage of its output, especially in light-starved, high part-speed applications.
    • Narrow wavelength sources, such as monochromatic LEDs, or mercury are beneficial for passing strategic wavelengths when matched with pass filters. For example, a red 660nm band pass filter, when matched to a red 660nm LED light, is very effective at blocking ambient light on the plant floor from overhead fluorescent or mercury sources.
    • Ambient sunlight has the raw intensity and broadband spectral content to call into question any vision inspection result – use an opaque housing.
    • Even though our minds are very good at interpreting what our eyes see, the human visual system is woefully inadequate in terms of ultimate spectral sensitivity and dynamic range – let your eyes view the image as acquired with the vision camera.



    Fig. 15:  Camera sensor relative spectral response vs. wavelength (nm), compared to Human perception.  Dashed vertical lines are typical UV thru IR LED wavelengths for vision.


    LED Lifetime Specification

    As is clear from the discussion and graphics presented earlier, LEDs offer considerable advantages in output and performance stability over their long useful lifetimes. When LEDs were still manufactured primarily as small indicators, rather than as machine vision illuminators, manufacturers specified LED lifetime as a half-life. As the industry matured and high-brightness LEDs became commonplace in commercial and residential applications, manufacturers were pushed to specify a more practical and understandable measure of performance over lifetime, referred to as “lumen maintenance”. Please see Appendix B – Extended Topic 3 for a more detailed examination.

    White LED Correlated Color Temperature

    We are now familiar with commercial and residential white LED lights, where illuminator color temperature (expressed in degrees Kelvin, K) may be understood as the relative amount of red-orange vs. blue hue in the light content. This varies from warm (2000K – 4000K) through neutral (4000K – 5500K) to cool (5500K – 8000K and above). With respect to machine vision, the amount of blue or red content in a white LED illuminator can have a significant effect on the inspection result, particularly in color applications. Color inspections may require accurate image color representation for the purposes of reproduction, identification, object matching/selection, or quality control of registered colors. To be successful, it is necessary to understand two light measurement parameters: white light LED correlated color temperature (CCT) and color rendering index (CRI).
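The warm/neutral/cool buckets translate directly into a trivial helper; the boundary values below are taken from the ranges just quoted, with boundary cases assigned to the higher bucket as an arbitrary convention:

```python
def cct_category(kelvin: float) -> str:
    """Bucket a white LED's correlated color temperature (CCT)."""
    if kelvin < 4000:
        return "warm"      # more red-orange content
    if kelvin < 5500:
        return "neutral"
    return "cool"          # more blue content

for k in (2700, 5000, 6500):
    print(k, cct_category(k))  # warm, neutral, cool
```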

    For a more detailed examination of white light color temperature application in machine vision, please see Appendix B – Extended Topic 4.

    Photobiological Safety Considerations

    As LEDs have become more powerful and more prevalent on the manufacturing floor, occasionally placing them near human operators, eye safety has become a priority, particularly in strobing operations. Initially, LEDs were classified under the laser safety classes. Recently, however, the International Electrotechnical Commission (IEC) has reclassified LED light into its own set of categories, detailed in the IEC 62471 document. The commission subdivided the Near UV to Near IR wavelength ranges, including visible, into five definable hazard types (see the left-side vertical column of Fig. 16). A light “luminaire” is typically tested against a set of well-documented standards and is then assigned to one of four Risk Groups, ranging from Exempt Risk through High Risk (Groups 0-3, respectively). Additionally, the companion document IEC 62471-1 offers Guidance Control Measures for mitigation (see Fig. 16). All machine vision lighting vendors now offer Safety Risk Group designations for their lights.

    There is some overlap in the hazard areas because of the mix of possible wavelength ranges for some lights. Further, it is important to note the differences between dermal and eye contact hazards for IR. On the skin, IR light is simply absorbed and does not pose a hazard under normal exposure conditions. In the eye, however, IR light produces no visual response: unlike with visible light, the iris does not close down and no aversion response is triggered, so the reflexes that generally protect the retina do not engage. If the IR light is sufficiently strong, it can produce heat as it is absorbed at the back of the eyeball, which can damage the retina and perhaps the optic nerve. As of this writing, most near- and short-wavelength IR lights used in machine vision do not offer the radiant power to induce retinal damage and are therefore classified as either Exempt Risk or Risk Group 1. Always consult the lighting manufacturer if there is any question as to safety.


    Fig 16
    IEC 62471-1 Guidance Control Measures for each Safety Risk Group designation.

    The Standard Method in Machine Vision Lighting

    In the Introduction, we listed three relevant aspects necessary to develop a Standardized Lighting Method. They are:

    1. The four Image Contrast Enhancement Concepts (Cornerstones)
    2. Detailed inspection environment and light-object Interaction Analysis, including ambient light contributions
    3. Knowledge of Lighting Techniques/Types, and camera sensor QE

    These, along with the accumulated application knowledge, discovery-process findings, and testing results, when considered together, can lead to a lighting solution that produces feature-appropriate contrast consistently and robustly.

    The Four Cornerstones of Image Contrast

    These concepts were devised as a teaching tool for labelling and demonstrating four methods used to enhance, or even create, feature-appropriate image contrast of parts vs. their backgrounds. The goal is effective, consistent, and robust feature definition, best suited to a given inspection.

    The four Image Contrast Enhancement Concepts of vision illumination are:

    1. Geometry The spatial relationship among object, light, and camera:
    2. Structure, or Pattern – The shape of the light projected onto the object:
    3. Wavelength, or Color – How the light is differentially reflected or absorbed by the object and its immediate background:
    4. Filters – Differentially blocking and passing wavelengths and/or light directions:

    A common question raised about the four Image Contrast Enhancement Concepts is the priority of investigation. For example, is wavelength more important than geometry, and when should filtering be applied? There is no easy answer; the priority of investigation is highly dependent on the part and the expected application-specific results. Light Geometry and Structure are more important when dealing with specular surfaces, whereas Wavelength and Filtering are more crucial for color and transparency applications.

    Understanding how to manipulate and enhance the image contrast of a part, or part feature of interest, against its immediate background using the four Concepts is crucial for assessing the quality and robustness of the lighting system. It is not at all uncommon to utilize more than one Concept to solve an application, and in some cases all of them may be needed. In fact, the following descriptions and examples from each Concept category show considerable overlap for just this reason.

    Cornerstones 1 & 2 – Geometry and Structure, or Pattern

    Although the term Geometry is used generically, it is sometimes useful to differentiate System Geometry from Light Ray Geometry. System Geometry is defined as the spatial relationship among the camera, light head, and part or feature of interest (see Fig. 17). In general, there are two broadly defined System Geometries, Coaxial (on-axis) Lighting and Off-axis Front Lighting; we can consider back lighting a coaxial variant as well. Coaxial implies the light is centered about the camera’s optic axis, but carries no definition or expectation of how the camera is positioned with respect to the part surface.

    Effecting contrast changes via Geometry involves moving the relative positions of object, light, and/or camera in space until a suitable configuration is found. This freedom of movement is most available under partial bright field, directional lighting (see the later section, “Partial Bright Field Lighting”). Full bright field, diffuse techniques tend to require fixed, coaxial alignment between the light and camera/lens, restricting the degree of relative component movement.


    Fig 17
    a – Dome Coaxial lighting, b – Back Lighting showing collimation (left side) and standard back lighting,
    c – Off-axis Front lighting with Bright field and Dark Field modes

    Light Ray Geometry is related to System Geometry in the sense that certain System Geometries produce specific Light Ray Geometries; however, the same off-axis System Geometry may sometimes produce a different effect on the part or features of interest, particularly in the case of reflection geometry, as seen in Fig. 18. Specifically, the image produced by a coaxial camera and light in an off-axis orientation (Fig. 18a) will differ considerably from that produced with the camera and light in off-axis, non-coaxial positions (Fig. 18b).


    Fig. 18
    a – Off-axis, but coaxial lighting, minimizing specular reflection from the surface, b – Off-axis, but non-coaxial lighting attempting to gather the reflectivity from the surface features specifically.

    The Light Ray Geometry illustrated in the left graphic in Fig. 18 is designed to mitigate surface specular reflection so relevant surface details are visible in an image, whereas that depicted in the right graphic is designed to gather, rather than mitigate reflections – usually because the features of interest are differentially more reflective in contrast to the rest of the surface.

    As can be surmised, the type of inspection possible under the System and Light Ray Geometries depicted in Fig. 18 is generally limited to presence/absence, or perhaps general location, rather than measurement of sizes, shapes, or spatial relationships, owing to the off-axis perspective of the camera with respect to the surface. In this instance, we can alleviate surface glare by keeping the camera perpendicular to the inspection surface and moving the light off-axis to some degree (Fig. 19), accomplishing the same glare mitigation without the perspective shift in the image.


    Fig. 19
    a – Coaxial camera alignment with off-axis lighting to reduce glare; the light ray’s angle of reflection (dashed red arrow) equals its angle of incidence on the surface, b – Example of camera and light in coaxial alignment, illustrating glare reflection, c – Example of camera in coaxial alignment with the light off-axis, mitigating source glare reflection, d – Image of the System Geometry used to achieve the image in Fig. 19c.
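The angle-of-incidence = angle-of-reflection rule behind Fig. 19 can be turned into a quick glare check. The flat-surface assumption and the tolerance value below are illustrative simplifications, standing in for lens aperture and surface spread:

```python
def specular_glare(light_angle_deg: float, camera_angle_deg: float,
                   tolerance_deg: float = 5.0) -> bool:
    """Will the camera see the specular 'hot spot' from a flat surface?

    Angles are measured from the surface normal. The reflected ray leaves
    at the incidence angle on the opposite side of the normal, so glare
    occurs when the camera sits near that mirror angle.
    """
    reflected = -light_angle_deg          # mirror across the normal
    return abs(camera_angle_deg - reflected) <= tolerance_deg

# Camera mounted perpendicular to the surface (0 deg, as in Fig. 19):
print(specular_glare(0.0, 0.0))    # coaxial light: glare -> True
print(specular_glare(-30.0, 0.0))  # light moved 30 deg off-axis -> False
```

This mirrors the Fig. 19 recipe: keep the camera on the surface normal for an undistorted perspective, and move only the light until its mirror angle falls outside the camera's view.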

    Contrast changes via Structure, or the shape of the light projected on the part is generally light head, or lighting technique specific (See later section on Illumination Techniques). Contrast changes via Color lighting are related to differential color absorbance vs. reflectance (See Object – Light Interaction).

    Figure 20 illustrates another example of how crucial Geometry is to a consistent and robust inspection, in this case on a specular cylinder.


    Fig. 20
    Lighting Geometry – Reading ink print on an inline fuel filter. a – Standard coaxial geometry; note the hot-spot reflection directly over the print, b – Off-axis – Acceptance Criteria 1 & 2 met, but what happens when the print is rotated slightly up or down? c – Off-axis down the long axis of the cylinder – all Criteria met, d – Same as in image “c”, but from a longer working distance.

    The application of some techniques requires a specific light and geometry, or relative placement of the camera, object, and light; others do not. For example, a standard bright field bar light may also be used in a dark field orientation, whereas a diffuse dome light is used exclusively in a coaxial mount orientation.

    Most manufacturers of vision lighting products offer lights that can produce various combinations of lighting effects, and in the case of LED-based products, each effect may be individually addressable. This allows for greater flexibility and reduces potential costs when numerous inspections can be accomplished in a single station rather than two. If the application conditions and limitations of each lighting technique, as well as the intricacies of the inspection environment and object–light interactions, are well understood, it is possible to develop an effective lighting solution that meets the two Acceptance Criteria listed earlier.

    Illumination Techniques

    Illumination techniques comprise the following:

    • Back Lighting
    • Diffuse Lighting (also known as full bright field)
    • Bright Field (partial or directional)
    • Dark Field
    • Structured Lighting

    Back Lighting

    Back lighting generates instant image contrast by creating dark silhouettes against a bright background (Fig. 21). The most common uses are detecting presence/absence of holes and gaps, part placement or orientation, or gauging. It is often useful to use a monochrome light, such as red, green, or blue, with collimation film for more precise (subpixel) edge detection and high accuracy gauging. Back lighting is also beneficial for transmitting through transparent or semi-transparent parts, such as the glass bottle imaged in Fig. 21b using a red 660 nm light source.


    Fig 21
    a – Back Lighting function diagram, b – Amber bottle imaged with a red 660 nm back light; note the lot code is clearly highlighted, but the light does not penetrate the label (left side of image)

    A variant of the back light is designed specifically for linescan applications deploying a high-speed linescan camera, typically on fast moving webs (see more detail in a subsequent section on linescan lighting). These linear back lights are commonly long and narrow, and are designed for the extreme intensities necessary to handle the camera’s high line rates needed to freeze motion; they are most often deployed to penetrate thin web materials. A good example is perforation detection in plastic bag stock before forming into bags, or dislocations in the weave of a semi-transparent textile web. Constant-on operation, rather than strobing, is the rule in these cases.

    Partial Bright Field Lighting

    Partial (directional – see Fig. 25) bright field lighting is the most commonly used vision lighting technique, and is the most familiar lighting we use every day, including sunlight, lamps, flashlights, etc. It typically takes the form of a spot, ring, or bar light. This type of lighting is distinguished from full bright field in that it is directional, typically from a point source, and because of its directional nature, it is a good choice for generating contrast and enhancing topographic detail. It is much less effective, however, when used on-axis with specular surfaces, generating the familiar “hot spot” reflection (Fig. 25b).


    Fig. 25
    Directional Bright Field, a – Directional Bright Field Function Diagram, b – High-angle light reflecting from a
    specular surface, c – Off-axis lighting to improve the image for reading the 1-D bar code.

    Full Bright Field Lighting

    Diffuse, or full bright field lighting is commonly used on shiny, specular, or mixed reflectivity parts where even, but multi-directional / multi-angle light is needed. There are several implementations of diffuse lighting generally available, with three primary types – hemispherical dome / tunnel and on-axis (Figs. 30a-c) – being the most common.

    Diffuse dome lights are very effective at lighting curved, specular surfaces, commonly found in the automotive industry, for example. They work well because they project light from multiple directions (360 degrees looking down the optic axis) as well as multiple angles (from low to high), which tends to normalize differential surface reflections on parts with complex shapes.


    Fig 30
    a – Diffuse dome light function diagram, b – Bottom of a concave soda can illustrating even illumination across the surface, enabling a read of the printing, c – Glass rod, character reading.

    On-axis (Coaxial) lights work in a similar fashion to diffuse dome/tunnel lights for flat objects, and are particularly effective at enhancing differentially angled, textured, or topographic features on otherwise planar objects. A useful property of Coaxial diffuse lighting is that in this case, rather than mitigating or avoiding specular reflection from the source, we may rather take advantage of it – if it can be isolated specifically to uniquely define the feature(s) of interest required for a consistent and robust inspection (see Fig. 31).


    Fig 31
    a – On-axis (Coaxial) Diffuse function diagram, b – Blown pop bottle sealing surface under axial diffuse lighting – Clean and unblemished surface (white ring), c – Damaged surface – note the discontinuities in the reflectivity profile.

    Flat diffuse lighting may be considered a hybrid of dome and Coaxial diffuse lighting. From a lighting geometry standpoint, it produces more off-axis light rays than a coaxial light, but fewer than a dome light. Because the flat diffuse light is direct lighting, rather than internally reflected from within a dome light to the object, this light can be deployed over a much wider range of light working distances, especially longer WD not possible with a dome light.


    In the image sequence in Fig. 32a, we see a titration tray of wells – approximately 4” x 5” in size. The base of each 5 mm wide and tall well has a laser-etched 2-D code, whose data identifies the contents of that well. The inspection goal was to read the codes in each well base. Clearly, a higher magnification was necessary to resolve the small code details, and a 2×3 well area was imaged to unambiguously illustrate the wells’ response to different lighting geometries.

    High angle, direct lighting (Figs. 32b-c) clearly produces unacceptable results, failing to generate acceptable feature-specific part/background contrast – in this case, the codes against their immediate background. Low angle light (Fig. 32d) improves the code contrast, which may well be an acceptable solution. However, looking more closely at the upper right well, we do notice a “shadow” crescent. This is to be expected if we consider that the wells have walls that are not otherwise conspicuous in this largely top-down lighting geometry sequence. The crescent is formed because the walls are vignetting the light somewhat, but not to the extent of blocking the view of the 2-D code. Nonetheless, we do have to consider whether this low angle ring light solution is robust enough to be effective for all part presentation situations and circumstances. For example, had the codes been offset sufficiently from the center, they might have been vignetted, precluding adequate reading.

    The diffuse dome light, as advertised, delivers a very even image, but does not actually highlight the codes against their backgrounds (Fig. 32e). As we can see, the flat diffuse lighting offers the most effective and robust solution (Fig. 32f), and unlike the diffuse dome, its geometry does not require mounting the light very close to the parts.

    Clearly, this example is a classic illustration of why it’s often important to test a wide variety of geometries. The author was convinced before testing that the diffuse dome would be the best solution, which turned out not to be the case!


    Fig 32
    Flat Diffuse Lighting, a – Titration tray with wells (note each well bottom has a laser etched 2-D code), b – High-angle ring light, c – Coaxial light, d – Dark field ring light, e – Diffuse dome light, f – Flat diffuse light.

    Dark Field Lighting

    Dark field lighting (Fig. 26) is perhaps the least well understood of all the techniques, although we do use these techniques in everyday life. For example, automobile headlights rely on light incident at low angles on the road surface, reflecting from small surface imperfections and other objects.


    Dark field lighting can be subdivided into circular and linear (directional) types, the former requiring a specific light head geometry design. This type of lighting is characterized by low or medium angle of light incidence, typically requiring proximity, particularly for the circular light head varieties (Fig. 27b).

    Bright Field vs. Dark Field

    The following figures illustrate the differences in implementation and result of circular directional (partial bright field) and circular dark field lights, on a mirrored surface:


    Fig 27
    a – Bright field image of a mirror, b – Dark field image of a mirror; note visible scratch.

    Effective application of dark field lighting relies on the fact that much of the low angle (< 45 degrees) light incident on a mirrored surface – light that would otherwise flood the scene as hot spot glare – is reflected away from, rather than toward, the camera.

    The relatively small amount of light scattered back into the camera just happens to catch an edge of a small feature on the surface, satisfying the “angle of reflection equals the angle of incidence” equation (See Fig. 28 for another example).
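The "angle of reflection equals the angle of incidence" behavior can be sketched numerically: a small facet (a scratch edge) tilts the local surface normal, which steers the reflected ray by twice the tilt angle. The function name and angles below are illustrative assumptions, not from the text.

```python
# Simple sketch of the reflection rule behind dark field imaging:
# a tiny surface facet tilted by t degrees redirects a ray by 2t,
# possibly back toward the camera on the surface normal.

def reflected_angle(incidence_deg, facet_tilt_deg=0.0):
    """Angle of the reflected ray measured from the global surface
    normal. Tilting the local normal by t steers the ray by 2t."""
    return incidence_deg - 2.0 * facet_tilt_deg

# Flat mirror under 70-degree (low-angle) dark field light:
reflected_angle(70.0)        # 70.0 -> reflected away; field stays dark
# A scratch edge tilted ~35 degrees steers the ray to the camera axis:
reflected_angle(70.0, 35.0)  # 0.0 -> light returns toward the camera
```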


    Fig 28
    Peanut Brittle Bag, a – Under a bright field ring light, b – Under a dark field ring light; note the seam and underlying contents are very visible.

    The seemingly complex light geometry distinctions between bright and dark field lighting can best be explained in terms of the classic “W” concept (Fig. 29).


    Fig 29
    Bright field vs. dark field, a – BF and DF lighting geometry and light angles of incidence, b – Light function diagram showing how a scratch on an otherwise flat field stands out in a dark-field lighting geometry image. The scratch reflects the light at its local angle of incidence back to the camera in this case.

    To complicate matters, the standard, symmetric “W” pattern illustrating classic dark vs. bright field lighting can be considerably distorted as well. In the above illustrated classic “W” geometry, the camera is mounted such that its optic axis is perpendicular to the surface being imaged, which is of course typical to minimize surface image perspective shifts.

    However, imagine if the camera and integrated (built into the camera face around the lens mount) or attached ring light were mounted such that their optic axes were no longer perpendicular to a surface being imaged, but also not off-axis by 45 degrees or more, normally understood to be dark-field light angle of incidence – what would we expect to see in the resulting image?

    We could very well see an image of the dark-field dot-peen matrix (see Fig. 22b) even though the light and camera were in the normally classic bright-field application part of the “W” diagram.

    Another important aspect of dark field lighting is its flexibility. Many standard partial bright field lights can be used in a dark field geometry. This technique is also very good for detecting edges in topographic objects, and the directional variety can be used effectively if there is a known, standard or structured feature orientation in an object, not otherwise requiring 360 degrees of light direction to generate contrast. A good example is a continuous longitudinal scratch on sheet steel, caused by something on the conveyor belt. In this instance, a low angle directional light pointed across the web/conveyor will highlight the scratch very easily and consistently.

    Line Lighting


    Fig 33
    Linescan Lighting, a – Line light function diagram for high-angle and low-angle (dark field) application, b – High-output line light, c – Small-footprint, Fresnel lens line light.

    Line lights represent a niche, but growing application envelope in machine vision lighting. Most line lights employ a rod lens or Fresnel lens to produce a focused line (Fig. 33). They are primarily utilized in moving web applications where continuous inspection is required in conjunction with a linescan camera. The primary benefit of a focused line light is that it produces a much higher radiant power per unit area (irradiance) in a narrow line (see Fig. 34), which is often needed for the high web speeds associated with applications such as textile, steel surface defect, and printing label inspections. This compensates for the short exposure times and high line rates of linescan cameras, freezing motion blur.
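The line-rate arithmetic behind those intensity demands is simple: to scan the web with square pixels, the line rate must equal the web speed divided by the object-plane pixel size, and each line's exposure cannot exceed the line period. A minimal sketch, with illustrative numbers not taken from the text:

```python
# Hypothetical sketch of linescan line-rate and exposure arithmetic.
# All numeric values are example assumptions.

def required_line_rate(web_speed_mm_s, object_pixel_mm):
    """Line rate (lines/s) so each scanned line covers one
    object-pixel of web travel (square pixels, no gaps/overlap)."""
    return web_speed_mm_s / object_pixel_mm

web_speed = 2000.0   # mm/s (a 2 m/s web)
pixel_size = 0.1     # mm of web covered per pixel
rate = required_line_rate(web_speed, pixel_size)  # 20,000 lines/s
max_exposure_s = 1.0 / rate                       # 50 µs per line
```

Fifty microseconds of light collection per line is why focused, high-irradiance line lights (rather than ordinary area lights) are the norm on fast webs.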

    A much more thorough explanation of line lighting and linescan cameras is available from Vision Systems Design.

    Not all linescan applications require high-power output, however. A common application, dating from when early line lights used only relatively low power LEDs, is unwrapping, particularly of can labels. A can was typically rotated around its long axis under the light and linescan camera, creating a very large 2-D image that could then be “unwrapped” in 2-D to perform an inspection without label curvature.

    Illumination Techniques Application Fields

    Fig. 35 illustrates potential application fields for the different lighting techniques, based on the 2 most prevalent gross surface characteristics:

    1. Surface Flatness and Texture
    2. Surface Reflectivity

    This diagram plots surface reflectivity, divided into three categories (matte, mirror, and mixed), versus surface flatness and texture, or topography. As one moves right and downward on the diagram, more specialized lighting geometries and Structured Lighting types are necessary.

    As might be expected, the “Geometry Independent” section implies that relatively flat and diffuse surfaces do not necessarily require specific lighting, but rather any light technique may be effective, provided it meets all the other criteria necessary, such as working distance, access, brightness, and projected pattern, for example.


    Fig 35
    Lighting Technique Application Fields – surface shape vs. surface reflectivity detail. Note that any light technique is generally effective in the “Geometry Independent” portion of the diagram – if it generates the necessary feature-appropriate image contrast consistently.

    Cornerstone 3: Color / Wavelength

    Materials reflect and/or absorb various wavelengths of light differentially, an effect that is valid for both B&W and color imaging space. It is important to remember that we perceive an object as red, for example, because it preferentially reflects those wavelengths our minds interpret/perceive as red – the other colors in white light are absorbed, to a greater or lesser extent. As we all remember from grammar school, like colors reflect, and surfaces are brightened; conversely, opposing colors absorb, and surfaces are darkened.

    Fig 36
     Color Wheel

    Using a simple color wheel of Warm vs. Cool colors (Fig. 36), we can generate differential image contrast between a part and its background (Fig. 37), and even differentiate color parts, given a limited, known palette of colors, with a B&W camera (Fig. 38). Opposite colors on the wheel generate the most contrast differences, i.e. – green light suppresses red reflection more than blue or violet would. And this effect can be realized using actual red vs. green vs. blue colored light (sometimes referred to as narrow-band light), or via filters and a white light source (broad band source). What is critical to remember is that we are evaluating how a part or feature responds to a specific color of incident light – with respect to its background color and/or reflectivity profile. This point also begs the question: what about IR light for creating contrast? More on this in a later section.
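The "like colors brighten, opposing colors darken" rule can be reduced to a tiny lookup. This is a simplified sketch: the wheel pairings and function names below are illustrative assumptions, not a standard from the text.

```python
# Minimal sketch of the warm/cool color-wheel rule: like colors
# reflect (brighten), opposing colors absorb (darken). Simplified
# complement pairs for illustration only.

COLOR_WHEEL_OPPOSITE = {
    "red": "green",     # green light darkens red features most
    "green": "red",
    "blue": "orange",
    "orange": "blue",
    "yellow": "violet",
    "violet": "yellow",
}

def light_for_contrast(feature_color, want="darken"):
    """Pick a monochrome light color to darken or brighten a
    feature of the given color against its background."""
    if want == "darken":
        return COLOR_WHEEL_OPPOSITE[feature_color]
    return feature_color  # like colors reflect -> brighten

light_for_contrast("red", "darken")    # -> "green"
light_for_contrast("red", "brighten")  # -> "red"
```

In practice the choice also depends on the background's reflectivity profile, as the text stresses, so the lookup is only a starting point for testing.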



    Fig 37
     a – Red mail stamp imaged under Red light, b – Green light, c – Blue light, generating less contrast than green, d – White light, generating less contrast than either Blue or Green light. White light will contrast all colors, but it may be a contrast compromise.

    Object Properties – Absorption, Reflection, Transmission and Emission

    Object composition can greatly affect how light interacts with objects. Some plastics may transmit light only of certain wavelength ranges, and are otherwise opaque; some may not transmit, but rather internally diffuse the light; and some may absorb the light only to re-emit it at a different wavelength (fluorescence).

    Fig 38
    a – Candy pieces imaged under white light and a color CCD camera, b – White light and a B&W camera, c – Red light, lightening both the red & yellow and darkening the blue, d – Red & Green light, yielding yellow, lightening the yellow more than the red, e – Green light, lightening the green & blue and darkening the red, f – Blue light, lightening the blue and darkening the others.



    Fig 38
    Motor oil bottle, a – Illuminated with a red 660 nm ring light, b – Illuminated with a 365 nm UV fluorescent light, c – Structural fibers emitting in blue under a UV 365 nm source.

    One obstacle to overcome when deploying UV fluorescence concerns the emission light’s overall intensity. Secondary emissions consist of lower energy photons, so the relatively weak fluorescent yield can be easily contaminated and overwhelmed by the overall scene intensity, particularly when ambient light is involved. In addition to blocking ambient light in favor of the projected visible light with which we illuminate an object, band-pass filters on the camera lens also provide a critical ambient blocking function in fluorescence applications, but with added functionality: the filter can be selected to prefer the emission wavelength, rather than the source light wavelength projected on the part. In fact, it performs the dual function of blocking both the UV source and ambient contributions that combine to dilute the emission light signal (Fig. 39). This approach is effective because once the UV light has excited fluorescence in the part or features of interest, the reflected UV itself is simply ambient noise.


    Fig 39
    Nyloc Nuts. a – Imaged with a UV ring light, but flooded with red 660 nm “ambient” light. The goal is to determine nylon presence/absence. Given the large ambient contribution, it is difficult to get sufficient contrast from the relatively low-yield blue fluoresced light from the part, b – Graphical depiction of ambient diluting the emission wavelength, c – Same lighting, except a 510 nm short pass filter is applied, effectively blocking the red “ambient” light and reflected UV, allowing the blue 450 nm light to pass, d – Graphical depiction of the application of a pass filter, blocking ambient and the UV source. Figs. 39b,d graphics courtesy of Midwest Optical, Palatine, IL.
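The filter-selection logic for fluorescence work reduces to a transmission check: pass the emission wavelength while blocking the ambient light. This is a hedged sketch using idealized brick-wall filters; real filter curves roll off, and blocking the reflected UV source in practice also relies on the filter's real lower band edge, which this ideal short-pass model does not capture.

```python
# Idealized pass-filter transmission check (brick-wall model).
# Function names and the scenario wavelengths follow Fig. 39.

def passes(filter_type, cutoff_nm, wavelength_nm):
    """True if an ideal pass filter transmits the wavelength."""
    if filter_type == "short_pass":
        return wavelength_nm <= cutoff_nm
    if filter_type == "long_pass":
        return wavelength_nm >= cutoff_nm
    raise ValueError(filter_type)

# Scenario from Fig. 39: blue 450 nm emission, red 660 nm ambient,
# 510 nm short pass filter on the camera lens.
emission, ambient = 450, 660
assert passes("short_pass", 510, emission)      # emission transmitted
assert not passes("short_pass", 510, ambient)   # ambient blocked
```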

    The properties of IR light can be useful in vision inspection for a variety of reasons. First, IR light is effective at neutralizing contrast differences based on color, primarily because reflection of IR light is based more on object composition and/or texture, rather than visible color differences. This property can be used when less contrast, normally based on color reflectance from white light, is the desired effect (See Fig. 40).


    Unlike the image results depicted in Fig. 40, where the NIR light actually changes the reproduced content of the image, the following example diminishes color contrast differences on a line-up of crayons (Fig. 41) – while simultaneously increasing the contrast of the hard-to-read print on one crayon. The black print is difficult to distinguish from the colored background under white light. By replacing the white light with an 850 nm NIR source, we now provide consistent contrast to read the black print on any color of crayon paper, thus producing a robust lighting solution.


    Fig 41
    Color crayons, a – Under white light, b – Under NIR light. Note the evening out of contrast differences among the color papers, allowing the blue crayon’s print to be more easily read/verified. Images courtesy of Northeast Robotics, ca. 2009.

    NIR light may provide another advantage in that it is considerably more effective at penetrating polymer materials than the short wavelengths, such as UV or blue, and even red in some cases (See Fig. 42).


    Fig 42
    Populated PCB, a – Penetration of red 660 nm, b – IR 880 nm light. Notice the better penetration of IR despite the red blooming out from the hole in the top center of the board.

    Here is another example of how light transmission can be affected by material composition under back lighting. In contrast to the above example depicted in Fig. 42, the example images from Fig. 43 demonstrate how certain light wavelengths are also better at penetrating materials based more on their composition, irrespective of the light power. In this instance the goal was to create a lighting technique that would measure the liquid fill level in a bottle.


    Fig 43
    Bottle fill level inspection with back lighting, a – 660 nm red, b – 880 nm IR, c – 470 nm blue.

    What makes this example so instructive is that the bottle glass is a deep blue color. So, of course it would follow that blue glass transmits blue light preferentially, right?

    Wrong! It just so happens that the glass was blue, but it was the composition of the glass (and perhaps additives) that allowed the light to penetrate the bottle. There are two important concepts to take away from this example: 1) transmitted light does not tend to respond the same way as reflected light, and 2) longer wavelengths do not always penetrate materials preferentially, as illustrated in the example from Fig. 43.

    Conversely, it is this lack of penetration depth that makes blue light more useful for imaging shallow surface features of black rubber compounds or laser etchings, for instance (Fig. 44). The amount of surface scattering is proportional to the 4th power of the light’s frequency, and shorter wavelengths have higher frequencies, so blue light scatters considerably more strongly than red.
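The frequency-to-the-fourth-power relationship can be checked with one line of arithmetic, comparing the blue 470 nm and red 660 nm sources used in Fig. 44. The function name is illustrative.

```python
# Quick check of the f^4 scattering relationship (Rayleigh-like
# scattering): intensity scales as f^4, i.e. as 1/wavelength^4.

def relative_scattering(short_nm, long_nm):
    """Ratio of scattered intensity at the shorter wavelength
    relative to the longer one."""
    return (long_nm / short_nm) ** 4

ratio = relative_scattering(470, 660)  # ~3.9x more scatter at 470 nm
```

That near-4x difference is consistent with the strong etch response under blue light and the lack of response under red in Fig. 44.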


    Fig 44
    Gear face, laser etch – limited to high-angle of incidence, a – No response from the etch with red 660 nm light, b – Significant light scattering from the shorter wavelength blue 470 nm.


    Cornerstone 4: Filters – Additional Considerations

    Immediate Inspection Environment

    Fully understanding the immediate inspection area’s physical requirements and limitations in space is critical. Depending on the specific inspection requirements, the use of robotic pick & place machines, or pre-existing, but necessary support and/or conveyance structures may severely limit the choice of effective lighting solutions, by forcing a compromise in not only the type of lighting, but its geometry, working distance, intensity, and pattern – even the illuminator size/shape as well. For example, it may be determined that a diffuse light source best creates the feature-appropriate contrast, but cannot be implemented because of limited close-up, top-down access. Inspection on high-speed lines may require intense continuous or strobed light to freeze motion, and of course large objects present an altogether different challenge for lighting. Additionally, consistent part placement and presentation are also important, particularly depending on which features are being highlighted; however, lighting for inconsistencies in part placement and presentation can be developed, as a last resort, if both are fully elaborated and tested.


    Fig 21
     a – Light interaction on surfaces, b – Specular surface, angle of reflection = angle of incidence (Phi 1 = Phi 2), c – Diffuse (non-specular) surface reflection.


    Fig 22
    2-D dot peen matrix code, a – Illuminated by bright field ring light, b – Imaged with a low angle linear dark field
    light. A simple change in light pattern created a more effective and robust inspection.


    Fig 23
    Bottom of a soda can, a – Illuminated with a bright field ring light, but shows poor contrast, uneven lighting, and specular reflections, b – Imaged with diffuse light, creating an even background allowing the code to be read.


    Ambient Lighting Contamination and Mitigation

    How task-specific and ambient light interacts with a part’s surface is related to many factors, including the gross surface shape, geometry, and reflectivity, as well as its composition, topography, and color. Some combination of these factors will determine how much light, and in what manner, is reflected to the camera, and subsequently available for image acquisition, processing, and measurement/analysis. The incident light may reflect harshly or diffusely, be transmitted, absorbed, and/or be re-emitted as a secondary fluorescence, or behave with some combination of all the above (see Fig. 21). An important principle to remember is that light reflects from specular surfaces at an angle equal to the angle of incidence – a useful property to exploit in dark field lighting applications (see Fig. 22, right image, for example).

    image 065

    Fig 45
    Typical spectral transmission curves for pass filters, a – Long pass, b – Short pass, c – Band pass, d – Typical red 660 nm threaded band pass filter. Figs. 45a-c graphics courtesy of Midwest Optical, Palatine, IL.

    Additionally, a curved, specular surface, such as the bottom of a soda can (Fig. 23), will reflect a directional light source differently from a flat, diffuse surface, such as copy paper. Similarly, a topographic surface, such as a populated PCB, will reflect differently from a flat, but finely textured or dimpled (Fig. 22) surface depending on the light type and geometry.

    High-power strobing simply overwhelms and washes out the ambient contribution, but has disadvantages in ergonomics, cost, and implementation effort, and not all sources can be strobed, e.g. – fluorescent or quartz-halogen. If strobing cannot be employed, or if the application calls for using a color camera, full spectrum white light is necessary for accurate color reproduction and balance. In this circumstance a narrow wavelength pass filter is therefore ineffective, as it would block a major portion of the spectral white light contribution, and the only choice left is the use of an enclosure acting as a shield.

    Figure 46 illustrates an effective use of a pass filter to block ambient light and to effectively increase the image contrast of the feature of interest on a nyloc nut. In this instance the ambient contribution has washed out the relatively weak fluorescent emission light generated when the nylon ring was illuminated under UV light (as detailed in an earlier section).


    Fig 46
    Two nyloc nuts, a – One with nylon, one without, under UV light and a strong ambient source, b – Same parts with the simple addition of a pass filter on the camera lens to block the ambient contribution and enhance the blue emission light from the nylon ring. A useful example of creating Feature-Appropriate Image Contrast.

    Machine Vision Special Topics

    Strobing LEDs

    The term “strobing” has been variously understood in the commercial photography field as simply flashing a light in response to some external event. Machine vision has adopted that general definition, but with one important caveat: short duty cycle overdrive.

    When deploying an LED light head to solve a vision application, we tend to consider the maximum radiant power the light can output on the target while running in constant-on mode – in other words, 100% duty cycle. This constant-on maximum current value for the entire light head is determined in part via the LED manufacturer’s measured data, but most often based on the vision lighting manufacturer’s testing experience and trial-and-error experimentation. The limit is determined by the desire to optimize the output power, and yet maintain the light head’s long term stability and lifetime with respect to LED-destroying heat build-up. When LEDs can adequately dissipate junction heat, they can survive a wide variety of applied currents; however, the drive voltage must remain matched to the string wiring and forward voltage needs of the LEDs (as detailed in the previous section, “Powering and Controlling LED Lighting for Vision”).

    Strobe overdriving LED illuminators takes advantage of the fact that we can push more current through LEDs when their duty cycle, calculated as:

    duty cycle (%) = light on-time per flash × flash rate × 100

    is kept to a much smaller percentage (typically from < 0.1% to 2%); the LEDs can then dissipate the excess heat generated and continue to function normally. There are two illuminator operational parameters that contribute to the duty cycle: light on-time per flash (a.k.a. light output PW) and flash rate.
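The duty-cycle bookkeeping described above can be sketched in a few lines. The 2% ceiling is the typical limit quoted in the text; the function names and example numbers are illustrative assumptions, and your controller's own limit governs in practice.

```python
# Sketch of strobe duty-cycle arithmetic for LED overdrive.

def duty_cycle_pct(on_time_s, flash_rate_hz):
    """Duty cycle (%) = light on-time per flash x flash rate x 100."""
    return on_time_s * flash_rate_hz * 100.0

def strobe_is_safe(on_time_s, flash_rate_hz, max_pct=2.0):
    """True if the duty cycle stays under the overdrive ceiling."""
    return duty_cycle_pct(on_time_s, flash_rate_hz) <= max_pct

# 200 µs flash at 30 Hz: 0.0002 * 30 * 100 = 0.6% duty cycle
duty_cycle_pct(200e-6, 30)   # 0.6
strobe_is_safe(200e-6, 30)   # True
```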

    During strobing, light on-time (PW) can be set via a GUI – if available – in the current source controller, or more commonly by following an external trigger input pulse width. The hardware-only type of strobing controller (a.k.a. driver) without a GUI is typically actuated by external signals, meaning the light output PW follows that of the external trigger input PW, minus any timing latency in the electronics and LED ramp-up periods. These types of controllers will cap the overdrive pulse at some pre-determined time to safeguard the LEDs. GUI-based controllers often allow for more complicated strobing parameter control, but they can also be triggered by an external signal. They may allow the light output PW value to differ from the input trigger PW, or they may also allow pass-through. Flash rate is typically measured in Hz – the output pulse width multiplied by the flash rate totals the on-time for the cycle.


    Machine vision light output pulse widths range from as short as 1-2 uSec to constant-on, but the typical values to get the most overdrive capacity are from 50 to 500 uSec, assuming the flash rate keeps the duty cycle under the max limit of 2%. Some controllers automatically calculate this duty cycle, balancing their current output against their output pulse widths, and in some cases even allow optimizing one parameter vs. the other. Others are set via hardware limits or entries, usually requiring the operator to first calculate this value and set the limits in hardware or software – at their own risk.

    Advantages for strobing LED illuminators:

    • Freeze motion
    • Generate more intensity (with caveats)
    • Singulated moving or indexed parts
    • Minimize heat build-up
    • Maximize lamp life-time
    • Overwhelm ambient light contamination

    Chief among the advantages is creating a brighter strobe flash, usually in conjunction with moving parts inspections. The brighter flash is achieved by pushing much higher current at a vastly lowered duty cycle so as not to damage the LEDs, as elaborated above. To freeze motion sufficiently for an inspection, it often becomes necessary to also shorten the camera sensor exposure time to minimize blur.

    Please see Appendix B – Extended Topics 5 – for an example of how to calculate your camera’s exposure and corresponding, matched controller light output PW. There is also elaboration on calculating lens focal length – with caveats.

    Of course, this operation also correspondingly shortens the time during which the sensor can collect light, so to compensate for the shortfall, the light must become proportionally brighter. Strobe overdrive can create this circumstance, again assuming we can keep the light duty cycle low enough.
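The exposure-versus-blur trade described above has a simple rule of thumb: to keep motion blur under one pixel, the exposure (and the matching strobe PW) cannot exceed the time the part takes to travel one object-plane pixel. A hedged sketch; all numbers are illustrative assumptions, not values from the text.

```python
# Sketch of matching camera exposure (and strobe output PW) to part
# motion so blur stays under a chosen pixel budget.

def max_exposure_for_blur(part_speed_mm_s, object_pixel_mm,
                          max_blur_px=1.0):
    """Longest exposure (s) keeping motion blur under max_blur_px."""
    return (max_blur_px * object_pixel_mm) / part_speed_mm_s

speed = 500.0   # mm/s part travel on the conveyor
pixel = 0.05    # mm of part covered per camera pixel
t = max_exposure_for_blur(speed, pixel)  # 1e-4 s = 100 µs
# The strobe light output PW would then be set to cover this
# ~100 µs exposure window.
```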

    Disadvantages / limitations for strobing LED illuminators:

    • Complexity: The strobe flashes and camera must be sync’d for grabs
    • Cost: Include a high-performance strobing controller and trigger devices
    • Inspections must generally be discontinuous – lighting is not constant-on
    • Stringent duty-cycle limit imposed on the amount of light power possible

    The last 2 listed strobing limitations bear some elaboration. Strobe overdriving is best suited for high-speed inspections of singulated, moving parts. It can be applied to subsample sections of a continuous web, such as textiles, paper or steel, or even grabbing adjacent area scan images and digitally stitching them together to form a continuous image.

    Before we explore the potential intensity gains during strobe overdriving, it’s important to understand how LEDs behave under increased current input during low duty cycle strobing operations. This relationship is determined by testing the LEDs, typically by both the LED manufacturer, as well as the vision illuminator manufacturers. A typical response curve looks like the following (Fig. 61):


    Fig 61
    Strobed output example of HB LED light bars with different LEDs. Test current vs. resulting light radiant power output. Note how the light output intensity response is not linear with respect to current in, and the output tops out. Additional current in just creates more heat.

    It’s clear from the response curve that different LEDs are capable of different strobing profiles and ultimate radiant light power output, and it’s also important to understand where in that curve it’s best to not offer more current – while the LEDs may hold up for some time, the operation is just creating heat at the expense of LED longevity – without any visible intensity gains.

    Graphically, we can see how light is collected by flashing an illuminator (at constant-on current levels) vs. running a light in constant-on mode vs. strobe overdriving under a low duty cycle, increased current operation (Fig. 62):


    Fig 62
    a – Constant-on lighting, b – Flashing a light at constant-on current, c – Overdrive strobing. Note the difference in the actual amount of light collected (hatched area), and in the relative collected amounts of “signal” (blue) vs. “noise” (ambient – red), with each lighting mode.

    We can see that the amount of light output intensity from the light control scenarios depicted in Figs. 62a-b is the same (normalized to 1x), whereas that depicted in Fig. 62c shows 8x more intensity. The signal-to-noise ratio (source light vs. ambient) is also significantly higher in the overdrive strobe mode. We can equate the hatched area in each diagram to a well holding various volumes of water – the hatched area is equivalent to the total relative amount of light collected by a camera.

    There is an important point to consider when strobe overdriving, as illustrated in Fig. 62c, irrespective of which type of strobing controller is deployed.

    To summarize what was alluded to earlier, the exact amount of strobe-overdrive light intensity one can expect varies greatly depending on the following:

    • The LED type, manufacturer
    • The number of LEDs in an illuminator
    • Power available from a strobing controller
    • The illuminator strobe response curve to PW and current required
    • The expected Duty Cycle (PW x the number of flashes in Hz)

    And by extension, the above factors also govern which strobe controller needs to be selected – some devices are simple, low power strobing devices, while others have large power output and/ or very short PW capabilities. Therefore, every situation is unique and needs to be evaluated for the proper controller. There is no one controller capable of handling all performance / price points.

    Strobe Overdrive Example

    The machine vision applications group is developing a vision inspection routine to read 2-D QR codes on pill bottles at a rate of 10 Hz. They need to strobe overdrive their illuminator to compensate for the shorter camera exposure times that freeze the motion and minimize image blur. Through a combination of calculation and testing, the engineers determine the illuminator output strobe pulse width to be ~300 uSec per flash (see the exposure / output pulse width calculation example in Appendix B, Extended Topic 5). Recall that the duty cycle is calculated as:

    duty cycle (%) = on-time / (on-time + off-time) x 100

    Note: Flash rate is in Hz; assume the total cycle time (on-time + off-time) is 1 sec. A quick calculation shows that the duty cycle will be:

    (10 flashes/s x 0.000300 s) / (10 flashes/s x 0.000300 s + 0.997 s) x 100 = 0.003 / (0.003 + 0.997) x 100 = 0.3%
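The same arithmetic can be sketched in a few lines of Python (the function name is ours; the 1-second cycle assumption follows the note above):

```python
def duty_cycle_pct(pulse_width_s: float, flash_rate_hz: float) -> float:
    """Duty cycle over a 1-second cycle: total on-time per second, in percent.
    Equivalent to on-time / (on-time + off-time) x 100 when the cycle is 1 s."""
    on_time_s = pulse_width_s * flash_rate_hz  # total on-time per second
    return on_time_s / 1.0 * 100

dc = duty_cycle_pct(300e-6, 10)  # 10 flashes/s at 300 us each -> 0.3%
```

At 0.3%, this application sits comfortably under the 2% duty cycle ceiling discussed earlier.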

    When strobe overdrive is required, it is also beneficial if the illuminator has been tested and thoroughly characterized to generate strobe profiles similar to the following:
    Light Output PW vs. current (Fig. 63)
    Current vs. duty cycle (Fig. 64)


    Fig 63
    Characterized illuminator strobe profile showing the current available at specific PW.


    Fig 64
    Characterized illuminator strobe profile showing max current available at various duty cycles.

    From Fig. 63, we see that for this specifically characterized white bar light, we can push up to 12 A of current at the required PW and corresponding camera exposure time to freeze motion – but with the limitation that the controller must also operate at 38 volts DC output potential to deliver that 12 A! Not every controller can provide a voltage potential above the input line voltage, typically 24 volts DC, unless it has internal buck and boost circuits. Referring to the curve in Fig. 64, we also see that at 12 A we can push to about a 10% duty cycle (vs. the 0.3% calculated); therefore, in this application window there is plenty of thermal overhead to reach the desired output power at the necessary part speeds and feeds – without causing damage to the LEDs.

    A few words of caution are necessary:

    1. Unless specifically designed in, some strobe controllers have little or no ability to limit their power output to protect the illuminator, so testing is important under these circumstances – at the operator’s risk.
    2. Just because it appears that there might be sufficient current to push 10x or more than the constant-on current levels, we must remember from the power / current curve depicted in Fig. 61 that the output brightness as a function of input current is not linear, depending on the part of the curve envelope under which we are operating. Controller voltage output potential is a critical limiting factor in achieving full illuminator power output. For example, if the controller in the above example application is limited to 24 volts DC output potential, the current is necessarily limited to no more than 5 A max. If we next refer to the orange curve in Fig. 61, which is specific to this light head, we see that a controller capped at 24 volts DC output and the resulting 5 A current would indeed strobe overdrive the light, albeit at a radiant output closer to 4x of constant-on – far short of the 7-8x full potential. Fig. 63 indicates that 38 volts DC potential, allowing the full 12 A current, is necessary.
    3. To best take advantage of a brighter flash, the light output PW and the camera exposure times should be approximately the same duration. Often, as in the example of moving parts above, the camera exposure time is determined by the need to freeze motion, but then the input trigger PW (if pass thru type) or light output PW (if set in the GUI software) should still match, including compensating for any system latencies involved. Please refer to Appendix B, Extended Topic 5 for an example of calculating camera exposure times and corresponding light output PW times to freeze motion.
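Caution 3 hinges on first knowing the exposure time needed to freeze motion. The Appendix B method is not reproduced here, but a common first-order calculation (the function name and example values are our own) bounds the exposure by how far the part may travel during the grab:

```python
def max_exposure_us(part_speed_mm_s: float, mm_per_pixel: float,
                    max_blur_px: float = 1.0) -> float:
    """Longest exposure for which a moving part smears across no more than
    max_blur_px pixels: time = allowed travel distance / part speed."""
    max_travel_mm = max_blur_px * mm_per_pixel
    return max_travel_mm / part_speed_mm_s * 1e6  # seconds -> microseconds

# Hypothetical numbers: 500 mm/s part speed, 0.15 mm/pixel resolution,
# 1 pixel of blur allowed -> 300 us. The light output PW (plus any
# system latency compensation) should then match this duration.
exposure = max_exposure_us(500, 0.15)
```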

    The following images of a set of boxed pharmaceutical ampules illustrate the differences in acquired images using the lighting controller styles demonstrated earlier as depicted in Fig. 62:


    Fig 65
    a – Grab with ambient light only, 200 us exposure, b – Grab at constant-on power, 500 us exposure, c – Strobe overdrive 4x, 200 us exposure. Note the difference in the actual amount of light in the right image vs. the constant-on current level, even though the latter has a 2.5x longer camera exposure time.

    Additionally, there is discussion in the machine vision field about strobe-overdriving line lights at high frequencies, such as 80 kHz, for example to create a continuous, but strobe-overdriven, high-resolution image using a line-scan camera. Whereas it may be possible to flash a line light at these frequencies – if the controller supports the necessary frequencies and can be sync’d to the line-scan camera line rate – the same duty cycle limitations still apply, or the LEDs can be destroyed.

    Light Polarization and Collimation

    It is important to understand and differentiate between 2 important light property contrast enhancement techniques – polarization and collimation. Whereas both techniques typically utilize polymer film sheet stock, they produce entirely different effects and convey different information. Both are often applied in a backlighting geometry, although light polarization can be used in any front-light application. Prism film collimation is usually confined to back lighting applications, but lensed, optical collimation can be used in any geometry as well.


    Unlike microscopy applications, light polarization in machine vision has been employed primarily to block specular glare reflections from surfaces that would otherwise preclude successful feature identification and inspection. Normally, 2 pieces of linear polarizing film are applied as a pair, with one placed between the light and object (polarizer) and the other placed between the object and camera (analyzer – Fig. 49). It is common for the polarizer to be affixed to the source light and the analyzer to be mounted in a filter ring and affixed to the lens via screw threads, or a slip-fit mechanism if no threads are present, allowing the analyzer to be freely rotated.

    However, it’s first important to comprehend the nature of unpolarized light propagation through space and its behavior with respect to this polarizer/analyzer pair. As indicated earlier, light is a propagating transverse electromagnetic wave, meaning the electric field fluctuations, modeled and depicted as a sine wave, “oscillate” in random planes perpendicular to the light propagation direction – meaning it is unpolarized (Fig. 49). Further, the wave magnitude is related to the amount (or intensity) of light.


    Fig 49
    Relative optical path positions of the polarizer and analyzer in a front-lighting geometry, b – Light oscillation planes through a linear polarizer and resulting single wave oscillation.

    In the following graphics, for clarity of demonstration, we illustrate only 2 perpendicular oscillating light waves to demonstrate how they respond to polarization.

    Typical iodine-based linear polarization film sheets are composed of roughly parallel lines of long-chain polymers (Fig. 50). This structure allows us to define a Polarization (Transmission, for an analyzer) Axis and an Absorption Axis, oriented at right angles. Looking at the film on a molecular level with respect to the 2 perpendicular light wave fronts (Fig. 51), we see that it is these parallel strands of polymer chains that block (absorb) all but 1 plane of oscillation.

    image 13 1

    Fig 50
    Idealized polarizer demonstrating the transmission / polarization axis (blue), absorption axis (red) and partial transmission axis, oriented at 45 degrees to the polarizer (black).

    However, it is important to note that the film’s long-chain polymers are oriented perpendicular to the transmission axis (i.e., parallel to the absorption axis) – unlike the picket-fence analogy commonly depicted in the literature, which can be misleading if interpreted literally. The analogy holds only so long as we equate the “pickets” in the fence with the polarization or transmission axis, and not with a physical grate of chains oriented parallel to the wave amplitudes. What is most important to understand is that the long-chain molecules absorb the electric field oscillation component whose amplitude is parallel to the polymer chains, but pass the perpendicular component more readily.

    To understand how unpolarized and polarized light are affected when they pass through a succession of polarizing films, we look to Malus’ Law. Briefly stated – the intensity of plane polarized light that passes through an analyzer varies as the square of the cosine of the angle between the polarizer polarization axis and analyzer transmission axis. We can then infer that the plane polarized light may be fully or partially transmitted or blocked completely (Figs. 51a-b).


    Fig 51
    a – Unpolarized light vibrating in the horizontal plane (depicted in blue) passing through the polarizer (P) and blocked (absorbed) by analyzer A1, b – Light passing through the same initial path as that in diagram a, but through analyzers A1 and A2 (rotated @ 45 degrees), blocking some of the plane polarized light. Note the light radiant power drops considerably with each P or A pass-through (if not blocked).

    The mathematical relationship is described by the following equation:

    I = I0 cos² Θ, where:

    I0 = Original pre-analyzer light intensity

    I = Post-analyzer light intensity

    Θ = Angular difference between the polarizer polarization & analyzer transmission axes

    For example, a simple calculation applying basic trigonometry: if the polarizer and analyzer transmission axes are parallel (Θ = 0 degrees), cos(0) = 1, and the plane polarized light passes through 100%; whereas if Θ = 90 degrees, cos(90) = 0, and the plane polarized light is 0% transmitted. Finally, if Θ = 45 degrees, we would be correct in our supposition that 1⁄2 of the plane polarized light is transmitted.
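Malus’ Law is easy to verify numerically; a minimal sketch (the function name is ours):

```python
import math

def malus_transmission(theta_deg: float) -> float:
    """Fraction of plane-polarized light passed by an analyzer whose
    transmission axis is theta_deg from the polarization plane: cos^2(theta)."""
    return math.cos(math.radians(theta_deg)) ** 2

parallel = malus_transmission(0)   # parallel axes: 100% transmitted
crossed = malus_transmission(90)   # crossed axes: extinction (~0%)
partial = malus_transmission(45)   # 45 degrees: half transmitted
```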

    Another important aspect of plane polarized light is that its intensity is 1⁄2 that of the original unpolarized light incident on the first polarizer. This is of importance to vision users if the application is already starved for light – any use of a single or especially a pair or more of polarizers may produce a considerable loss of image intensity. We will describe and illustrate these points in the following sections.

    As stated earlier, machine vision has applied light polarization/analyzer pairs primarily to block reflective glare from parts – this glare reflection may be caused by the dedicated lighting used in the inspection and/or from ambient sources. These 2 cases may be treated differently:

    Nonmetallic and transparent surfaces tend to partially polarize ambient incident unpolarized light, preferentially polarizing it in the horizontal plane (or more accurately in the plane parallel to the incident surface and perpendicular to the incident light plane), and hence only an analyzer, whose transmission axis is oriented at 90 degrees is needed to block it. This process is known as reflection polarization. An example of this phenomenon is reflected glare from a road or other smooth surface, such as a lake.

    However, polarization by reflection isn’t always as complete as using film, because photons with other oscillation directions can also be reflected, rather than refracted, by the part surface. This phenomenon of partial polarization explains why, when rotating polarized sunglasses (or turning your head while wearing them), the scene can get a bit brighter or darker but does not go to extinction – it can’t all be dialed out with an analyzer (a vertically oriented transmission axis in this case). Metallic surfaces, on the other hand, typically reflect most, if not all, of the incident unpolarized light (no refraction into the material), so different strategies are often needed when it is not practical to polarize the ambient light before it is incident on a part’s surface.

    However, dedicated light applied to the inspection area usually can be polarized first, and the offending light reflecting off the parts into the camera can then be dialed out using the analyzer. The very effective use of light polarization demonstrated by the image pair in Figs. 52d-e does come with inherent compromises, however. Most notably, in this instance, the lens aperture had to be opened 2 1⁄2 f-stops to create the same scene intensity. Therefore, there is a lot less light to work with in those application situations requiring a considerable amount of light intensity, such as high-speed inspections.

    Whereas the application illustrated in Figs. 52d-e demonstrates very effective polarization, the images in Figs. 52a-c demonstrate only moderately effective glare reduction, but also point to an alternative to using polarization filters altogether, where possible.

    In Figs. 52a-b, we see that glare reflected from a curved surface, such as this personal care product bottle, can be controlled, but not eliminated (Fig.52b – center area). This is true because there are multiple reflection directions produced on the curved surface from a directional light source, and polarization filters cannot block all vibration directions simultaneously, thus always leaving some areas washed out. In this case, a more effective approach to glare control, given the flexibility to do so, is to reconsider the lighting geometry. By simply moving the light from a coaxial position around the lens to a relatively high angle, but off-axis position, we can eliminate all specular reflection created by our light source.


    Fig 52
    A change in “lighting – object – camera” geometry or type may be more effective than applying polarizers to stop glare, a – Coaxial Ring Light w/o Polarizers, b – Coaxial Ring Light w/ Polarizers (note: 2 1⁄2 f-stop opening), c – Coaxial Ring Light w/o Polarizers, d – Coaxial Ring Light w/ Polarizers (note some residual glare), e – Off-axis (light axis parallel to the object long axis)

    A more effective use of light polarization is illustrated in Fig. 53, namely for detecting stress-induced structural lattice damage in otherwise transparent, but birefringent, materials, typically plastics. Recall that plane polarized light has wave oscillations in only 1 plane, unlike unpolarized light. When plane polarized light is transmitted through a stress-induced birefringent material, it resolves into 2 principal stress directions, each with a different refractive index, and thus the 2 component waves are said to be out of phase. They then destructively and constructively interfere, creating the alternating dark and light bands, respectively, that we see illustrated in Fig. 53b.


    Fig 53
    Transparent plastic 6-pack can holder, a – With a red back light, b – Same, except for the addition of a polarizer pair, showing stress fields in the polymer.



    Whereas the physics of light collimation via film materials is not nearly so complex as that of light polarization or lensed collimation, it nonetheless can play an important role in helping Vision Engineers develop an accurate and robust inspection program. We will be demonstrating the use of film collimation applied primarily in a back lighting geometry, where it is most effective.

    Collimation film is essentially a polymer sheet with lines of parallel prisms (a grate) cut into its upper surface. Because we need collimation in X & Y, we must apply 2 pieces of film whose lines of prisms are oriented at right angles. We see this idealized shape in cross-sectional view in Fig. 54. Optically, the film passes and concentrates vertical rays via exit refraction, recycling some of the off-axis light that is initially internally reflected. It also collects, and may pass, the otherwise lost low-angle light that naturally escapes a randomly diffused back light surface, enhancing the light output intensity. Perhaps not coincidentally, this material is termed Brightness Enhancement Film by the manufacturer.


    Fig 54
    Simplified light ray tracing through collimation film via refraction and internal reflection. Light that does not get refracted out the surface is recycled internally until it meets the angular criteria to “escape”, oriented vertically. Prism pitch is 50 um with a 90 degree prism angle.

    A convenient side-effect of brightness enhancement film used for light collimation is that it improves the actual object edge location as represented in an image, owing to the vertical ray preference – see Fig. 55 – which makes it ideal for use in high accuracy gauging in a back lighting geometry application. This effect is best understood in the context of one of the fundamental properties of light and solid interactions – diffraction, or bending, around objects.


    Fig 55
    a – Idealized light ray tracing from the collimated left side of an object on a back light vs. the more random ray exitance angles of the non-collimated back light to the right, b – Curved part on a non-collimated back light, c – Same part on a film-collimated back light showing less light interference along the true maximum part projection.

    Shorter wavelength light – blue, for example – will diffract slightly less than longer wavelength light, such as red (Fig. 56). It should be noted that the actual amount is much less than the exaggerated depictions in Fig. 56, however. If we take this information a bit further, we can imagine how white light might behave, recalling that white light is composed of all visible wavelengths. We might expect each wavelength to diffract differently, and this is, in effect, how chromatic aberrations are created. These might be seen in a color image as halos or shadows around the edges, which may effectively increase the uncertainty in recreating an actual edge location in an image.


    One final point about applying collimation film that is important to know: The film is not a perfect collimator, unlike true lensed optical collimation. Consider an idealized scenario where we are viewing a live feed from a camera with a telecentric lens mounted above a lensed collimated back light: if the camera’s optic axis is perpendicular to the back light surface, under true optical collimation we would see full light intensity. However, if we move the camera just a few degrees off-axis, our view should now be dark – in other words our camera is seeing very little of the vertical rays that are emitted from the collimated back light.

    However, under typical vision inspection scenarios, with a non-telecentric lens and collimation film, we have a “window” of off-axis light, typically around +/- 25-30 degrees, that is still visible. In practical terms, this implies that we are not passing only the vertical rays, but some off-axis components as well; hence we cannot expect perfect optical collimation results.

    Powering and Controlling LED Lighting for Vision

    A summary of LED electrical specifications and illuminator circuit design, along with an understanding of the 2 types of control / drive options (voltage vs. current sourcing) and their inherent limitations, is crucial to knowing when and how each approach is best applied. We will start by summarizing how LEDs are powered.

    LEDs are solid state diode devices – to produce light they require direct current (DC) applied across their P-N junctions of a specific LED forward voltage (Vf) potential (also known as voltage drop) and forward current (If). Each LED type and wavelength has a specific Vf based on the die chemistry and exact P-N junction design. Vf and If values are related by the general formula, Ohm’s Law (V=IR).

    LED forward voltage and forward current are directly related; however, unlike an ohmic device, that relationship is non-linear (see Fig. 57). We can also surmise from this relationship that any change in the forward voltage of an LED will produce a disproportionate change in forward current. Further, because forward current and LED radiant power are also directly related, even a minor change in forward voltage can create a large difference in LED radiant power output. What, then, are the implications for the quality of machine vision lighting? We first must understand how multiple LEDs are wired into illuminators.


    Fig 57
    Forward Current vs. Forward Voltage of a high-power LED – courtesy of Cree, Durham, NC.


    To build a typical machine vision multi-LED illuminator, LEDs are wired in strings that maximize the applied line voltage potential supplied by DC power supplies, nominally 24 volts DC. As noted earlier, each LED has a specific forward voltage drop (Vf), so ideally the sum of these voltage drops per string totals exactly the line voltage applied to the illuminator – in this case, 24 volts. For example, a 6-LED string, each LED with a 4 volt drop, would total 24 volts. However, there are 2 factors that complicate this ideal wiring scheme:

    1. Each LED type and wavelength, as discussed earlier, may have a different Vf
    2. The range of Vf per LED of the same LED model and production lot, from the same
      manufacturer can vary considerably.

    If we review the manufacturer’s specified range of Vf values for the LED model shown in Fig. 57, we see that the voltage drop can vary from a typical 2.9 V to a max of 3.5 V per LED. This fact suggests that rather than assuming a standard value, we are forced to use an average value to calculate the exact string voltage, and thus the layout. If that total is less than the 24 volt nominal supply voltage, we have a potential overvoltage condition; even if not severe enough to damage the LEDs, it can cause some of the LEDs to be over-powered, and thus brighter, or, conversely, an undervoltage can leave some LEDs dimmer than their neighbors. Neither circumstance is ideal for output uniformity when we are considering multiple parallel / serial strings in a larger illuminator.

    This circumstance is commonly addressed by adding load-balancing resistors to handle overvoltage situations (Fig. 58) – commonly referred to as voltage sourcing, or voltage drive.


    Fig 58
    Example of a HB LED voltage source drive wiring circuit; line supply voltage is 24 volts DC and the current requirement per LED string is 350 mA.

    Closer examination of the example high-brightness LED wiring circuit in Fig. 58 illustrates parallel / serial strings, each with a 3.6 volt drop per LED, with a line supply of 24 volts DC @ 350 mA current per string. We can see that the total voltage drop over each string is approximately 21.6 V (6 x 3.6 V), hence 2.4 V less than the 24 volt line voltage. The excess voltage is taken up by an appropriately sized load-balancing resistor in each string, with the load dissipated as excess heat.
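The resistor sizing above is plain Ohm’s law; a small sketch using the figure’s values (the function name is ours):

```python
def ballast_resistor_ohms(supply_v: float, vf_per_led: float,
                          leds_per_string: int, string_current_a: float) -> float:
    """Size the per-string load-balancing resistor: R = excess voltage / current.
    The excess voltage is whatever the LED string does not drop itself."""
    excess_v = supply_v - vf_per_led * leds_per_string
    if excess_v < 0:
        raise ValueError("string voltage drop exceeds the supply voltage")
    return excess_v / string_current_a

# Fig. 58 values: 24 V supply, 6 LEDs x 3.6 V = 21.6 V string drop, 350 mA
r = ballast_resistor_ohms(24.0, 3.6, 6, 0.350)  # ~6.9 ohms
watts = (24.0 - 6 * 3.6) ** 2 / r               # ~0.84 W dissipated as heat
```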

    Based on the previous note regarding the forward voltage range of LEDs – and the Vf / If curve – even within the same manufacturing batch, it’s clear that the 3.6 volt value applied here is an average, or typical, value, and that this usage creates some uncertainty in the exact power each LED receives, and hence its radiant power output – and its long-term stability and lifetime.

    Conversely, simple current sourcing, or current drive, does not depend on load-balancing resistors, because a controller is involved that applies the exact voltage potential required for the designed string length and power requirements – see Fig. 59.


    Fig 59
    Example of a HB LED current source drive wiring circuit; line supply voltage is 24 volts DC and the current requirements per LED string is 350 mA, but no load-balancing resistors are needed.

    There are tangible advantages to current source drive of LEDs, not the least of which is the ability to control the performance of the light – specifically dimming, gating on/off without interrupting power, and strobe overdrive capabilities (see a later section on strobing). However, current source drive still does not address the previously noted shortcoming of having to use the manufacturer-supplied typical per-LED voltage drop value when setting a controller’s power output to a light head. Ideally, each LED would have its own manufacturer-supplied forward voltage value – and its own current source driver – but clearly this is not a viable solution, in terms of both complexity and cost. The good news is that LED manufacturers have recently moved to tighter radiant power “binning” for LEDs, which has decreased, but not eliminated, output differences among LEDs.

    From the above discussion, we must understand that all LED illuminators need some level of power protection, either as current-limiting resistors for voltage drive lights or the use of a current source controller that outputs the exact power requirements. Further, we must not confuse an AC to DC power supply as a current source controller – they are not the same, although some current source controllers may house a power supply as well.

    Controller vs. No Controller

    It is instructive to briefly summarize the styles and types of current-source controllers and then elaborate the rationale for what level of controller, if any, might be the most beneficial in any application.

    As in most Engineering applications, balancing performance and cost is critical to the success of any machine vision system development effort. From the previous lighting voltage vs. current control discussion, we have already seen the advantages and drawbacks of each. Clearly, those non-controller lighting applications represent the least deployment expense and complexity path, but they are also limited in control flexibility. Conversely, the more control options required, the more cost and complexity are generally incurred.

    Current Source Controllers are available in a variety of types, ranging from smaller units with fewer features and power output and control capabilities to those full-featured, high-power types. As we have also seen from the above discussion, required control features and controller performance generally drive cost. Controller types comprise the following (Fig. 60):

    Embedded: Also known as “in-head” or “on-board” control. Located inside the illuminator housing (Fig. 60a).

    Cable In-line: Permanently fixed or quick disconnects in the cable (Fig. 60b).

    External Box: Fully disconnectable, table-top or panel/DIN rail mount (Fig. 60c).


    Fig 60
    a – Embedded controller in the illuminator, b – Cable inline controller, c – External “box” controller. Note the controller size increase from left to right.

    Embedded controllers represent the smallest footprint and perhaps the easiest plug-and-play operation, primarily because they require only a simple cable – often a 4- or 5-pin M12 – that can handle power, return, trigger/gating, dimming, and/or strobe overdrive functions. This comes at the expense, however, of performance, in both available power and thermal dissipation. As we know from the discussion about LEDs, they create their own heat, and the heat generated by a controller and associated electronics can add to the dissipation burden, depending on the application and environment. The small footprint restricts the thermal dissipation routes, and it also constrains another important consideration: powering and strobing high-power lights requires more real estate for all the components, particularly strobe electronics, which generally require boost or buck converters and capacitors, as well as other specialty components, such as microprocessors. It’s also useful to note that embedded-controller light heads (whether the controllers are on daughter boards or mounted on the same single board housing the LEDs) may cost more to repair, in that the controller cannot easily be separated from the LED board.

    Conversely, external “box” controllers are generally preferred for high-power and performance applications because they have the footprint sizes to house the larger and more intricate components required, and they can also have their own thermal management strategies, as well as be located remotely. Of course, this all comes at the “expense” of complexity and cost.

    Cable in-line controllers are essentially a compromise between complexity and cost on one end and performance limitations on the other, but they have the advantage of removing the controller's heat contribution from the LED head and boards. Controller housing sizes vary from illuminator manufacturer to manufacturer, depending on the performance required and pricing goals.

    Lighting Ingress Protection (IP)

    As machine vision applications have expanded over the last 25 years, so has the need for an enhanced level of Ingress Protection for vision components that contain electronics and/or sensitive optical elements – and lighting is no exception. IP Code classifications and ratings are defined by IEC standard 60529 (European equivalent EN 60529) for general use across all applicable industries and disciplines; they are neither restricted to, nor specifically defined for, machine vision use. An IP code designation comprises 2 aspects – solid and liquid (typically water) intrusion protection – and is denoted IPxy, where “x” represents the solid and “y” the liquid intrusion level. The original standard defined 6 solid (1-6) and 8 liquid (1-8) levels, with each progressively higher number representing more complete protection than the levels below it (Fig. 47). A zero (0) entry indicates no protection as defined for the other levels, and a more recent addition to the standard, IP69K, is defined as complete IP protection under high-temperature, high-pressure (K) water jet spray.

    We can further define each “x” (solids) scale value by particulate size blocking level, with IP5y and IP6y specific to dust size protection. Similarly, the “y” scale levels are defined by the volume and velocity of impacting liquids, with the IPx7 and IPx8 levels specific to depth and time of immersion in water.

    Historically, standard IP ratings practice among machine vision component manufacturers has been to self-assign values based on interpretation of the above defined levels, but more recently, some manufacturers have engaged the services of accredited, independent testing laboratories; products tested and passed in this way may be assigned the “IPxy Certified” label, rather than the subjective “IPxy Rated” assignment. For some customers, formal certification is a necessity.
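As a minimal sketch, an IPxy designation can be decoded mechanically into its two protection levels. The level descriptions below are abbreviated paraphrases of the IEC 60529 scales, not the normative standard text, and the function name is our own:

```python
# Decode an IEC 60529 "IPxy" designation into its solid and liquid levels.
# Descriptions are abbreviated paraphrases, not the normative IEC wording.
SOLIDS = {
    0: "no protection", 1: ">= 50 mm objects", 2: ">= 12.5 mm objects",
    3: ">= 2.5 mm objects", 4: ">= 1 mm objects",
    5: "dust-protected", 6: "dust-tight",
}
LIQUIDS = {
    0: "no protection", 1: "vertical drips", 2: "drips, 15 deg tilt",
    3: "spraying water", 4: "splashing water", 5: "water jets",
    6: "powerful water jets", 7: "temporary immersion", 8: "continuous immersion",
}

def decode_ip(code: str) -> tuple[str, str]:
    """Return (solid, liquid) protection summaries for a code like 'IP67'.
    'IP69K' is handled as a special case, per the later addition to the standard."""
    code = code.upper().strip()
    if code == "IP69K":
        return ("dust-tight", "high-temperature, high-pressure jet spray")
    x, y = int(code[2]), int(code[3])
    return (SOLIDS[x], LIQUIDS[y])

print(decode_ip("IP67"))  # ('dust-tight', 'temporary immersion')
```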

    Food Contact and Aseptic Environment Considerations

    An additional variation on standard particulate and liquid ingress protected machine vision lighting is related to food and beverage and some biomedical applications – hygienic and aseptic offerings. In these types of applications, it is not necessarily enough to provide ingress protection alone, but rather there is a requirement for smooth, crevice-free surfaces and/or chemical inertness – the latter protecting against potentially damaging chemicals, either in gaseous or liquid form.

    Fig 47
    Ingress Protection (IP) Ratings Table based on IEC 60529 and later additions.

    Food and beverage applications typically rely on high-IP-rated lighting, but may also require all vision components to be chemically inert in the case of food contact, and/or able to withstand the harsh chemical solvents and caustic cleaning solutions needed to maintain a hygienic processing environment. An additional consideration for food contact applications is the need to prevent food or other particulates from collecting on vision components and subsequently dropping into the food processing stream – and to assist in efficient cleaning. The best approach in the latter case is to offer completely smooth lighting products that minimize any chance of particulate trapping (see Fig. 48). It is also worth noting that:

    1. Not all IP69K lights are designed to provide both chemical and hygienic performance
    2. All exposed parts must also comply fully. For example, if the housing “tub” is chemically
      inert and smooth, so must the diffuser/cover, the cable strain relief, the cable, and even
      the sealant around the cover.
    Fig 48
    IP69K rated, crevice free, corrosion-resistant vision lighting: Left – Back Lighting; Others – Front lighting.



    Sequence of Lighting Analysis and Development

    The following “Sequence of Lighting Analysis” assumes a working knowledge of lighting types, camera sensitivities, and optics, and familiarity with the Illumination Techniques and the 4 Image Contrast Enhancement Concepts of Vision Illumination. It can be used as a checklist; it is by no means comprehensive, but it provides a good working foundation for a standardized method that can be modified and/or expanded to suit the inspection’s requirements.

    1. Immediate Inspection Physical Environment
      • Physical Constraints
        • Access for camera, lens, and lighting in 3-D space (working volume)
        • The size and shape of the working volume
        • Min and max camera, lighting working distance and field-of-view
      • Part Characteristics
        • Part stationary, moving, or indexed?
        • If moving or indexed, speeds, feeds, and expected cycle time?
        • Strobing? Expected pulse rate, on-time, and duty cycle?
        • Are there any continuous or shock vibrations?
        • Is the part presented consistently in orientation and position?
        • Any potential for ambient light contamination?
      • Ergonomics and safety
        • Person-in-the-loop for operator interaction?
        • Safety related to strobing or intense lighting applications?
    2. Object – Light Interactions
      • Part Surface
        • Reflectivity – Diffuse, specular, or mixed?
        • Overall Geometry – Flat, curved, or mixed?
        • Texture – Smooth, polished, rough, irregular, multiple?
        • Topography – Flat, multiple elevations, angles?
        • Light Intensity needed?
      • Composition and Color
        • Metallic, non-metallic, mixed, polymer?
        • Part color vs. background color
        • Transparent, semi-transparent, or opaque – IR transmission?
        • UV dye, or fluorescent polymer?
      • Light Contamination
        • Ambient contribution from overhead or operator station lighting?
        • Light contamination from another inspection station?
        • Light contamination from the same inspection station?
    3. What are the features of interest?
      In other words, what specifically is the inspection goal, related to the features of interest?
    4. Applying the 4 Image Contrast Enhancement Concepts of Lighting
      • Light – Camera – Object Geometry issues
      • Light pattern issues
      • Color differences between parts and background
      • Filters for short, long, or band pass applications, polarization, collimation
        or extra diffusion
    5. Applying the Lighting Techniques and Types Knowledge, including Intensity
      1. Fluorescent vs. Quartz-Halogen vs. LED vs. others
      2. Bright field, dark field, diffuse, back lighting
      3. Vision camera and sensor quantum efficiency and spectral range
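The checklist above can be captured as a simple data structure so that unanswered items are tracked during a lighting feasibility study. This is purely illustrative; the section keys and abbreviated item names are our own:

```python
# Illustrative only: the analysis checklist above as a data structure,
# with abbreviated item names, so open questions can be tracked per section.
CHECKLIST = {
    "Environment": ["working volume", "working distance / FoV", "motion & cycle time",
                    "vibration", "part presentation", "ambient light", "safety"],
    "Object-light interactions": ["reflectivity", "geometry", "texture", "topography",
                                  "composition & color", "light contamination"],
    "Features of interest": ["inspection goal"],
    "Contrast concepts": ["geometry", "pattern/structure", "wavelength", "filters"],
    "Techniques & types": ["source type", "technique", "sensor QE & spectral range"],
}

def open_items(answers: dict[str, set[str]]) -> dict[str, list[str]]:
    """Return checklist items not yet addressed, grouped by section."""
    return {sec: [i for i in items if i not in answers.get(sec, set())]
            for sec, items in CHECKLIST.items()
            if any(i not in answers.get(sec, set()) for i in items)}

# Only the inspection goal has been answered so far:
remaining = open_items({"Features of interest": {"inspection goal"}})
print(sum(len(v) for v in remaining.values()))  # 20 items still open
```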



    It is important to understand that this level of in-depth analysis can, and often does, result in seemingly contradictory directions, and multiple levels of compromise are often the rule rather than the exception. For example, a detailed object – light interaction analysis might point to the dark field lighting technique, but the inspection environment analysis indicates that the light must be remote from the part. In that case, one or more intense linear bar lights oriented in a dark field configuration may create the desired image contrast, but perhaps require more image post-processing or other system changes to accommodate.

    Finally, no matter the level of analysis and understanding, there is quite often no substitute for actually testing the 2 or 3 candidate light types and techniques, first on the bench, then in actual production floor implementation whenever possible. And it is advantageous, if seemingly a bit counter-intuitive, when designing the vision inspection and parts handling / presentation from scratch, to get the lighting solution in place first, then design and build the remainder of the inspection and parts handling / presentation around the lighting requirements.

    The ultimate objective of this form of detailed analysis and application of what might be termed a “tool box” of lighting types, techniques, tips, and often acquired “tricks” is simply to arrive at an optimal lighting solution – one that takes into account and balances issues of ergonomics, cost, efficiency, and consistent application. This frees the integrator and developer to better direct their time, effort, and resources – items better used in other critical aspects of vision system design, testing, and implementation.


    Appendix A – Further Reading for Machine Vision

    A3 Power Point Class (2020):

    The-Fundamentals-of-Machine-Vision.pdf, by David Dechow


    Narrated Video Power Points by Microscan (2012):

    Introduction to Machine Vision – Part 1 of 3

    Why Use Machine Vision? – Part 2 of 3

    Key Parts of a Vision System – Part 3 of 3

    Scholarly Book (2012):

    Machine Vision Handbook, by Bruce G. Batchelor, Springer

    Appendix B – Select Topic Extended Examination

    1: Lighting “Intensity” and Power

    As applied to visible light, the term, “luminous intensity” has been formally defined as 1 of the 7 System International (SI) base units of measure. It is a photometric value, and in commercial and some scientific literature, is expressed as candela (cd – lm / sr). See Units/units.html for more detail about SI base and derived measures.


    Formally, then, luminous intensity describes light projected into a given solid angle with respect to its source (Fig. B1). In fact, there is some controversy over using the term intensity for anything other than this formal, yet narrow, SI definition and classification.


    Fig B1
    Light power and geometry expression. A sphere has 4π steradians (sr) of solid angle. SI base unit of light intensity is the candela (cd) or lm / sr. Modified from Ref:

    As alluded to in an earlier section, light “intensity” is conceptualized in 2 ways:

    1. Source Power (a.k.a. flux): Rate of energy flow from a source only – there is no provision for light travel geometry
    2. Geometric Units (directionality and light spreading implied):
      Luminous / Radiant Intensity
      – Amount of light projected into a solid angle (lm / sr or W / sr)
      Illuminance / Irradiance
      – Amount of light falling onto a surface, a.k.a. Flux Density (lm / m2 or W / m2), at a common light-to-part WD

    As stated in the earlier section, working with light illuminance and irradiance is the most practical and intuitive measure for comparing usable light on the object of interest, because it incorporates the illuminator and optics radiant power and beam spread, plus the WD, into one value.

    It is recommended to use the term power for a source-only specification and radiant power for “intensity” at a surface; continuing the practice of using “intensity” generically when simply referring to the “brightness” of a light head is acceptable.
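The relationship between the geometric units above can be made concrete with the point-source approximation: illuminance at the part falls off as the square of the working distance. This is a sketch, not a model of a real extended vision illuminator, which deviates from the inverse-square law at short WDs; the function name is our own:

```python
def illuminance_lux(intensity_cd: float, wd_m: float) -> float:
    """Point-source approximation: E [lm/m^2] = I [lm/sr] / d^2 [m^2].
    Real extended-source illuminators deviate from this at short WDs."""
    return intensity_cd / wd_m ** 2

# A 200 cd source at 0.5 m working distance:
print(illuminance_lux(200.0, 0.5))  # 800.0 lux
# Doubling the WD to 1.0 m cuts the illuminance by 4x:
print(illuminance_lux(200.0, 1.0))  # 200.0 lux
```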

    2: A Note About Full-Width Half Max (FWHM)

    As applied in many scientific endeavors, and also in machine vision lighting, it is useful to understand the “Full-Width Half-Max” (FWHM) specification. In general, there are 2 use cases for FWHM in vision lighting: spectral peaks and 2-D intensity maps (Fig. B2). Examining Figure 14 (in the body above), for example, we see that there are 3 different light source spectral curve “shapes”: wide and broad (Sun, Xenon); multiple peak (white LED, fluorescent); and singular, steep-sided peaks (mercury lamp and red LED).

    FWHM characterization is best suited to that 3rd category – tall, somewhat narrow peaks, such as the mercury peaks or monochromatic (non-white) LEDs. Figure B2a illustrates the concept and calculation for a blue LED spectrum. These characterizations are most useful for comparing various spectral outputs, mainly for overlap between 2 closely spaced spectral curves. Additionally, it is useful to have full spectral curves showing the starting and ending output wavelengths for some LEDs – for example, UV LEDs in applications that cannot tolerate visible (violet) light on a part, or white LEDs that may output a small amount of non-visible light in both the UV and IR.

    With respect to 2-D light projection intensity maps, FWHM characterization is more practical for vision applications (See Fig. B2b). It provides a consistent and standard way of specifying a projection width (@ 50% intensity) at specified working distances and is useful when comparing lighting types and families for pattern spread and intensities for suitability, especially when also tied to a measured intensity at the same working distance (Fig. B3).
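The FWHM calculation itself is straightforward for a single-peak curve: find the half-maximum level and measure the width between its two crossings. A minimal sketch, assuming one dominant peak in sampled data and linear interpolation between samples (the function and variable names are our own):

```python
# Estimate the FWHM of a single-peak curve (spectral or 1-D intensity profile)
# from sampled data, interpolating the two half-maximum crossings linearly.
def fwhm(wavelengths, values):
    peak = max(values)
    half = peak / 2.0

    def cross(i):
        # Linear interpolation of the half-max crossing between samples i and i+1
        x0, x1, y0, y1 = wavelengths[i], wavelengths[i + 1], values[i], values[i + 1]
        return x0 + (half - y0) * (x1 - x0) / (y1 - y0)

    rising = next(i for i in range(len(values) - 1)
                  if values[i] < half <= values[i + 1])
    falling = next(i for i in reversed(range(len(values) - 1))
                   if values[i] >= half > values[i + 1])
    return cross(falling) - cross(rising)

# Triangular toy "spectrum" peaking at 455 nm:
wl = [445, 450, 455, 460, 465]
iv = [0.0, 0.5, 1.0, 0.5, 0.0]
print(fwhm(wl, iv))  # 10.0 (nm)
```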


    Fig B2
    a – Spectral FWHM, blue 455nm peak LED, b – 2-D light intensity FWHM (middle green = 50%), high-power spot light @ 1800mm WD.


    Fig B3
    Measured intensity (300mm FWHM – middle green), high-power spot light @ 1800mm WD.

    3: LED Lifetime

    Another important characteristic of LEDs worth noting is their performance over their lifetime, specifically related to heat build-up and dissipation. Red and near-IR LEDs have very well characterized intensity degradation profiles over their lifetimes, primarily because they have been in service the longest. Following the development and introduction of red and NIR, we have witnessed progressive development toward shorter wavelengths, from yellow to green, blue, and of course white LEDs. UV LEDs, from longer to shorter wavelengths, have also matured in terms of lifetime, now measured in tens of thousands of hours, up from literally hundreds.

    Initially, LED lifetimes were specified as a half-life, t1/2: after each successive half-life period, half of the remaining intensity is lost. For example, a white LED might be specified as t1/2 = 50,000 h, so that after 50,000 hours of use its intensity is 50% of when it was first powered, and after another 50,000 hours of on-time its intensity would be half of that again. For commercial purposes this definition, while scientifically accurate, was not entirely clear to technicians and end users alike, so the LED industry has largely switched to the more practical specification, Lumen Maintenance Life (L). The same white LED may be spec’d at an L70 Lumen Maintenance Life of 50,000 h, which means that after that time the light is ~ 70% as bright as when new (see Fig. B4).
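The two specifications are not interchangeable, and a simple exponential-decay model illustrates why. Real LED degradation curves are not purely exponential (industry extrapolation follows IES TM-21, as in Fig. B4), so this is only an illustration of how t1/2 and L70 relate under an assumed model; the function name is our own:

```python
import math

def time_to_fraction(t_half_hours: float, fraction: float) -> float:
    """Under a simple exponential-decay model L(t) = 0.5**(t / t_half),
    return the hours until output falls to `fraction` of the initial value.
    Real LED degradation is not purely exponential (cf. IES TM-21), so this
    only illustrates the relationship between the half-life and L specs."""
    return t_half_hours * math.log(fraction) / math.log(0.5)

# If t1/2 = 50,000 h, the L70 point under this model arrives much sooner:
print(round(time_to_fraction(50_000, 0.70)))  # 25729 hours
```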


    Fig B4
    Luminous flux vs. time (log scale) of an LED – extrapolated after 10,000 hrs. Courtesy of Cree.

    Excessive heat, particularly at the LED / pad junction, is the primary destroyer of LEDs. It affects both the lifetime and performance. Fig. B5 from Cree illustrates a typical output intensity vs. junction temperature for several wavelengths of one of their visible range HB LED families. We also know that junction temperature is largely a function of the LED chemistry, current supplied, and ambient temperature additive effects. Please refer to the earlier strobing section to review LED radiant power performance as a function of current (see Figs. 61-65).


    Fig B5
    Luminous flux vs. junction temperature of visible LEDs – IR, Red, Orange and Yellow LEDs have different chemistries from green, blue, and of course white. Courtesy of Cree.

    4: White LED Light Color Temperature

    Light color temperature, now successfully applied to LED lighting as well, was initially defined by the International Commission on Illumination (CIE) in 1931, and is best illustrated by the tristimulus ternary Chromaticity diagram, represented by red, green, and blue values with known coordinates in X and Y (Fig. B6). The response is patterned after the 3 color receptors in the human eye. Some combination of these 3 colors and their respective intensities, intersecting, will produce white light, and this is the basis for color temperature, expressed in Kelvins (K). That expression is modeled after an ideal black body radiator that produces all light frequencies. High color temperature white is toward the blue end of the diagram, whereas low color temperature is represented toward the red end – along the curve.

    Fig B6
    a – CIE 1931 Tristimulus Chromaticity R,G,B diagram of color illumination, b – Combined effect of additive R,G,B colors reproducing white. Courtesy of Wikimedia Commons.

    It is important to clarify the difference between light Color Temperature and Correlated Color Temperature (CCT). As stated previously, the concept of Color Temperature, properly defined and expressed is based on an ideal black body radiator that emits a color based on its thermal temperature – in other words as a black body radiator heats up, the color it emits changes from red through yellow and white to blue. This definition lends itself well to the standard incandescent tungsten light bulb filament that works similarly – it emits a specific color temperature related to its resistive heating state.

    However, strictly speaking, LED and fluorescent lights do not emit light based on their thermal heating properties, and therefore the concept of a Correlated Color Temperature (CCT) was defined and proffered. It expresses a color temperature, based on human perception, that best matches the illuminator light output. Thus, if a broadband light source (i.e. – white light) emits close to the black body Planckian locus, it can be modelled and expressed by the CCT – hence the correct expression of LED white light.
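For sources near the Planckian locus, CCT can be estimated directly from CIE 1931 (x, y) chromaticity coordinates. A common shortcut, not from this document, is McCamy's (1992) cubic approximation; it is valid only roughly between 2,000 and 12,500 K and close to the locus, and is not the normative CIE calculation:

```python
def mccamy_cct(x: float, y: float) -> float:
    """McCamy's (1992) cubic approximation of CCT from CIE 1931 (x, y).
    Useful roughly 2,000-12,500 K near the Planckian locus; an approximation,
    not the normative CIE computation."""
    n = (x - 0.3320) / (y - 0.1858)
    return -449.0 * n**3 + 3525.0 * n**2 - 6823.3 * n + 5520.33

# CIE standard illuminant D65 (x = 0.3127, y = 0.3290) should land near 6500 K:
print(round(mccamy_cct(0.3127, 0.3290)))  # ~6505
```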

    As seen in Fig. 14 (in the body above), white LEDs are actually blue LEDs whose output is converted to a broad-band spectrum that appears white to human visual perception. This is accomplished via a secondary emission from a white phosphor coating inside the LED lens. Early attempts at creating white LED light showed considerable blue content in the spectrum, effectively creating the purplish, cold color temperature most often associated with surgical rooms. As phosphor chemistry improved, we now have multiple color temperature white LEDs, each with a specific profile and relative height of the blue peak and red-orange bulge (Fig. B7).


    Fig B7
    Spectral profiles of white LED light of varying color temperatures. Courtesy of Cree.

    Additionally, we can take a closer look at the white intersection area along the Planckian locus (see black line in Fig. B6), illustrating the color temperature bins an LED manufacturer specifies for one of their white LED models (Fig. B8). After manufacturing, LEDs are sorted into labelled “bins” based on ranges of power and color temperature.


    Fig B8
    White LED color temperature bins (denoted by isotemperature lines). Dashed line represents the Planckian locus derived from a black body perfect frequency generator. Courtesy of Cree.

    There are 2 important light specifications that assist in proper color reproduction:

    Color Rendering Index (CRI)
    Correlated Color Temperature (CCT)

    For example, a common Quality Control inspection process in the automotive industry is differentiating / matching interior plastic panels. In fact, it is common for many Tier 1 or 2 automotive suppliers to produce specific grey color interior parts for multiple auto manufacturers, often each with subtly different “shades” and hues of gray. Suppliers will verify the gray scale of these panels to accurately match them with the corresponding auto manufacturer or model. Fig. B9 illustrates a generic example of the challenges involved with accurately reproducing the gray scale for identification / matching purposes, based on differing illumination CCT.


    Fig B9
    How CCT of lighting can affect the accurate reproduction of grey objects.

    We can see how much a particular color temperature light may affect the image reproduction of flat gray (Fig. B9 – middle column under neutral light) and hence skew the differentiation, identification and matching between subtly differing gray to brown panels. With a substantial amount of preparation and testing, it is possible, however, to effectively calibrate a particular light’s CCT with a known part, thus creating a relative reference and correction factor. This approach usually involves testing all known part variations, and that can be difficult and time-consuming depending on the part source.

    Additionally, for true color reproduction of non-gray objects and especially multiple color objects, specifying a particular CCT value lighting may not be sufficient. In this instance an additional concept must be introduced, Color Rendering Index (CRI). CRI is best described as how well an object’s color is reproduced accurately in acquired images – compared to a standard. The higher the CRI, the better that light is at an accurate color rendering. Most white LED lights have a CRI ranging from 70 to 95.

    A typical question to machine vision LED lighting manufacturers regarding white LED color temperature (CCT) and CRI is whether a specific color temperature range or CRI value can be offered. There is good news and bad news here: the good news is that LED manufacturers now offer a wide selection of both CCT and CRI; the bad news is that it is often impossible to specify a narrow range of CCT values. This limitation is imposed by the LED manufacturer and is related to the LED manufacturing process itself. In any process of this type, there is a certain variation in both CCT and CRI over the large sample size of a batch manufacturing run. LED yields are tested and “binned” according to certain ranges, and of course, to optimize the salable yield, the manufacturer will opt to combine several adjacent sub-bins into 1 larger “bin” for sale, meaning that one cannot typically purchase sub-bins. In practical terms, this implies a range of CCT values that vision lighting manufacturers must accept. We also see in Fig. B8 binning and sub-binning around the ideal black body Planckian locus. In this instance, the combined sub-bins labelled 1B + 1C + 1A + 1D, or 2B + 2C + 2A + 2D, are the only ones available at regular pricing and small volumes. Tighter binning, when offered at all, usually requires premium pricing and large volumes.

    5: Output Pulse Width Calculation for Strobing

    There are a few pieces of information required to calculate the approximate light output pulse width and the corresponding camera sensor exposure time. It is critically important to keep track of unit conversions to end up with a camera exposure time in uSec and a light controller output PW, as commonly used in machine vision. Fig. B10 shows the sensor dimension conventions used in these calculations for the set of conditions outlined below:


    Fig B10
    Sensor width and height conventions. For rectangular sensor formats, the width is usually defined as the largest dimension.

    Gathered Critical Information*:
        Sensor width pixel count: 1000 pixels
        Target width field of view: 90 mm (90,000 um)
        Parts per minute: 600 PPM (10 Hz, or 0.00001 parts / uSec)
        Line speed: 0.65 m / Sec (0.65 um / uSec)
        Image pixel blur allowed: 2 pixels
      Optional Lens Focal Length Calculation:
        Actual sensor width: 6.4 mm
        Desired lens WD to object: 400 mm

    *Note: Because controller output PW and camera exposure times typically are measured in uSec, we convert values above to the same units for ease of calculation. With the above information:

          Image Pixel Size: 90 um
              Target Width FoV / Sensor Width Pixel Count
              [90,000 um / 1000 pixels]

          Light Flash Rate: 10 Hz (0.00001 parts / uSec)
              [600 PPM / 60 Sec, then / 1,000,000 uSec]

          Controller output PW: 277 uSec (round up to 300 uSec)^^
              1 / (Line Speed / Image Pixel Size) * Image Pixel Blur
              [1 / (0.65 um/uSec / 90 um) * 2 pixels]

          Duty Cycle: 0.3 %
              Light Flash Rate * Controller output PW * 100%
              [0.00001 parts/uSec * 300 uSec * 100%]

          Lens Focal Length: 28.4 mm (round to 25 or 30)
              Sensor Width * Lens WD / Target Width FoV
              [6.4 mm * 400 mm / 90 mm]

    ^^ Assumes a 2-pixel blur is acceptable given the 90 um image pixel size, relative to the resolution needed for reading the code image.
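The worked example above can be reproduced as a short calculation script. The values and unit conversions follow the example; the variable names are our own:

```python
# Strobing pulse-width worked example: units are converted so that the
# controller output PW comes out in uSec, as is conventional in machine vision.
sensor_px       = 1000       # sensor width, pixels
fov_um          = 90_000     # target field-of-view width, um (90 mm)
parts_per_min   = 600        # parts per minute
line_speed      = 0.65       # m/s, which is numerically equal to um/uSec
blur_px         = 2          # acceptable motion blur, pixels
sensor_width_mm = 6.4        # for the optional focal-length calculation
lens_wd_mm      = 400        # desired lens working distance, mm

pixel_size_um = fov_um / sensor_px                    # 90 um per image pixel
flash_rate_hz = parts_per_min / 60                    # 10 Hz
pulse_us      = blur_px * pixel_size_um / line_speed  # ~277 us; round up to 300
duty_pct      = (flash_rate_hz / 1_000_000) * 300 * 100   # at the rounded 300 us PW
focal_mm      = sensor_width_mm * lens_wd_mm / (fov_um / 1000)  # FoV back to mm

print(pixel_size_um, flash_rate_hz)         # 90.0 10.0
print(round(pulse_us), round(duty_pct, 2))  # 277 0.3
print(round(focal_mm, 1))                   # 28.4
```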

    5a: Lens Focal Length Calculation

    With respect to lens focal length calculations, there are several caveats to consider, unique to each lens – and those values can vary widely between lens manufacturers and from model to model, so refer to the detailed specifications for the specific manufacturer and model:

    1. Lenses have a Minimum Object (focus) Distance (MOD). Be sure your lens MOD is less than your required lens WD, otherwise the object cannot be focused.
    2. Confirm that the lens MTF (in line pairs / mm) and resolution are suitable for your application, particularly high-resolution edge detection and gauging with megapixel camera sensors.
    3. Lenses must be paired with a suitable sensor format. For example, a 1/2″ format sensor requires a 1/2″ or larger format lens to avoid vignetting (see Fig. B11).


    Fig B11
    Sensor format and size versus lens format. a – A 1/3″ format lens aperture does not completely fill the sensor, leaving a port-hole effect in the image; c – a 2/3″ format lens allows the entire sensor to be filled, but only the center part of the lens is used; b – a 1/2″ format lens and sensor allow optimal use of the lens area and sensor fill, and do not affect the magnification, unlike the over- and underfill scenarios above.

    For further information, write us at Advanced illumination, 440 State Garage Road, P.O. Box 237, Rochester, VT 05767 USA; call 802-767-3830; fax 802-767-3831; or visit our web site.
