A guide to the different lenses used in industrial vision systems.

Optical lenses are used to view objects in imaging systems. Vision system lenses are imaging components that focus an image of the object under examination onto a camera sensor. Depending on the lens, they can remove parallax or perspective error, or provide adjustable magnification, field of view or focal length. Imaging lenses allow an object to be viewed in several ways to bring out the features or characteristics that matter in a given application. Alongside lighting, lens selection is paramount to creating the highest possible contrast between the features of interest and the surrounding background, which is fundamental to achieving the best image quality a vision system can deliver.

In this respect the lens plays an important role in gathering light from the part being inspected and projecting an image onto the camera sensor; the ratio between the sensor size and the field of view is referred to as the primary magnification (PMAG). The amount of light gathered is controlled by the aperture, which is opened or closed to admit more or less light, and by the exposure time, which determines how long the sensor is exposed to the image.

The precision of the image depends on the relationship between the field of view (FOV) and working distance (WD) of the lens, and the number of physical pixels in the camera’s sensor.

FOV is the size of the area you want to capture.

WD is the approximate distance from the front of the lens to the part being inspected; a more exact definition takes the lens structure into account.

The focal length, a common way to specify lenses, is determined from these measurements together with the camera and sensor specifications.
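As a rough illustration only (the thin-lens simplification and the sensor, FOV and WD figures below are assumptions, not from the text), the focal length can be estimated from the working distance, sensor size and field of view:

```python
def focal_length_mm(working_distance_mm, sensor_size_mm, fov_mm):
    """Thin-lens estimate of focal length from WD, sensor size and FOV.

    PMAG = sensor / FOV, and from the thin-lens equation
    f = WD * PMAG / (1 + PMAG). For WD >> f this is often
    simplified further to f ~= WD * sensor / FOV.
    """
    pmag = sensor_size_mm / fov_mm          # primary magnification
    return working_distance_mm * pmag / (1 + pmag)

# Illustrative numbers: 6.4 mm sensor width, 100 mm FOV, 300 mm WD
f = focal_length_mm(300, 6.4, 100)          # roughly 18 mm
```

In practice you would pick the nearest standard focal length and then fine-tune the working distance.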

The performance of a lens is analysed with reference to its modulation transfer function (MTF), which evaluates resolution and contrast performance across a range of spatial frequencies.
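At a single spatial frequency, the idea behind MTF can be sketched as the ratio of the modulation (contrast) measured in the image to the modulation of the target; the grey levels and frequency below are illustrative, not from the text:

```python
def modulation(i_max, i_min):
    """Michelson modulation of a sinusoidal intensity pattern."""
    return (i_max - i_min) / (i_max + i_min)

def mtf(image_modulation, object_modulation):
    """Contrast transferred by the lens at one spatial frequency."""
    return image_modulation / object_modulation

# A full-contrast target (modulation 1.0) imaged with grey levels
# 200 and 50 has modulation 0.6, so MTF at that frequency is 0.6.
m_image = modulation(200, 50)
transfer = mtf(m_image, 1.0)
```

Plotting this ratio against frequency (lp/mm) gives the MTF curve quoted in lens data sheets.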

Types of lens

In order to achieve the highest resolution from any lens/sensor combination, the choice of lens has become an increasingly important factor in machine vision. This is because of the trend towards ever smaller pixels on ever larger sensors.

Ideally, an individual lens would be designed for each specific combination of field of view, working distance and sensor, but such a highly custom approach is impractical from a cost perspective.

As a result, a multitude of optical/lens configurations is available, with some of the key varieties shown below:

Standard resolution lenses
  • Characteristics: fixed focal lengths from 4.5 mm to 100 mm; MTF of 70–90 lp/mm; low distortion; suited to sensor resolutions of less than a megapixel
  • Applications: the most widely used general-purpose lenses
High resolution lenses
  • Characteristics: focal lengths up to 75 mm; MTF in excess of 120 lp/mm; very low distortion (<0.1%)
  • Applications: cameras with a small pixel size; precise measurement
Macro lenses
  • Characteristics: small fields of view, approximately equal to the camera’s sensor size; very good MTF characteristics; negligible distortion; lack flexibility – it is not possible to change the iris or working distance
  • Applications: optimised for ‘close-up’ focusing
Large format lenses
  • Characteristics: often modular in construction, including adapters, helical mounts and spacers; required when the camera sensor exceeds what C-mount lenses can accommodate
  • Applications: most commonly used in line scan applications
Telecentric lenses
  • Characteristics: collimated light eliminates dimensional and geometric image variations; no distortion; constant magnification and no perspective distortion whatever the object distance; to enable collimation the front aperture must be at least as large as the field of view, so lenses for large fields of view are comparatively expensive
  • Applications: specialist metrology
Liquid lenses
  • Characteristics: change shape within milliseconds; enable faster, more compact system designs without complex mechanics; MTBF in excess of 1 billion movements; longer working life due to minimal moving parts
  • Applications: rapidly changing lens focus as object size or distance changes; viewing every surface of an object with as few cameras as possible; complex image shapes
360° – pericentric lenses
  • Characteristics: the specific path of light rays through the lens means that a single image can show detail from the top and sides of an object simultaneously
  • Applications: cylindrical objects
360° – catadioptric lenses
  • Characteristics: the sides of the inspected object are observed over a wide viewing angle, up to 45°, making it possible to inspect complex geometries from a convenient perspective
  • Applications: small objects, down to 7.5 mm diameter
360° – hole inspection optics
  • Characteristics: a large viewing angle (>82°); compatible with a wide range of object diameters and thicknesses
  • Applications: viewing objects containing holes, cavities and containers; imaging both the bottom of a hole and its vertical walls
360° – Polyview lenses
  • Characteristics: provide eight different views of the side and top surfaces of an object
  • Applications: inspection of object features that would otherwise be impossible to acquire with a single camera

You need enough resolution to see the defect or to measure to an appropriate tolerance. Modern vision systems use sub-pixel interpolation, but nothing beats having whole pixels available for the measurement required. Match the right lens to the right camera, and if in doubt, over-specify the camera resolution; most machine vision systems can cut down the image size from the sensor anyway.
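As a rough sketch (the sensor width, FOV and defect size here are assumed for illustration), the number of whole pixels falling on the smallest defect follows directly from the FOV and the sensor's pixel count:

```python
def pixels_on_defect(fov_mm, sensor_pixels, defect_mm):
    """Number of pixels spanning the smallest defect to be detected."""
    mm_per_pixel = fov_mm / sensor_pixels   # spatial resolution per pixel
    return defect_mm / mm_per_pixel

# Illustrative numbers: 50 mm FOV on a 2448-pixel-wide sensor,
# looking for a 0.1 mm defect:
px = pixels_on_defect(50, 2448, 0.1)        # just under 5 pixels
```

A common rule of thumb is to have at least a few pixels on the smallest feature of interest; if this calculation comes out too low, a higher-resolution camera or smaller FOV is needed.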

The 7 elements of a machine vision system.

For today’s post we thought we’d take you back to the beginning. Not all customers have used machine vision or vision systems in their production process before; many will be new to machine vision. So it’s important to understand the basics of a vision inspection system and what the fundamentals of the overall system look like. This helps in understanding how a vision inspection machine operates at a rudimentary level.

The components of a vision system include the following seven basic elements. Although each serves its own individual function and can be found in many other systems, each has a distinct role to play when working together here. For the system to work reliably and generate repeatable results, it is important that these critical components interact effectively.

  • The machine vision process starts with the part or product being inspected.
  • When the part is in the correct place a sensor will trigger the acquisition of the digital image.
  • Structured lighting is used to ensure that the image captured is of optimum quality.
  • The optical lens focuses the image onto the camera sensor.
  • Depending on its capabilities, this digitising sensor may perform some pre-processing to ensure the correct image features stand out.
  • The image is then sent to the processor for analysis against the set of pre-programmed rules.
  • Communication devices are then used to report and trigger automatic events such as part acceptance or rejection.
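The flow above can be sketched in outline. Every class and method name below is hypothetical, a stand-in for real hardware and inspection software rather than any actual vision API:

```python
# Minimal sketch of the seven elements working together (names hypothetical).

class PresenceSensor:
    def part_in_position(self, part):
        return part is not None            # 2. detect the part

class Light:
    def strobe(self):
        pass                               # 3. fire structured lighting

class Camera:
    def acquire(self):
        return [[0, 255], [255, 0]]        # 4./5. lens + sensor capture (stub image)

class Processor:
    def analyse(self, image):
        # 6. stand-in rule: accept if any pixel is bright enough
        bright = any(p > 200 for row in image for p in row)
        return "ACCEPT" if bright else "REJECT"

class IOModule:
    def report(self, result):
        print(result)                      # 7. drive accept/reject signalling

def inspect(part, sensor, light, camera, processor, io):
    """One inspection cycle: trigger, illuminate, capture, analyse, report."""
    if not sensor.part_in_position(part):  # no part, no acquisition
        return None
    light.strobe()
    image = camera.acquire()
    result = processor.analyse(image)
    io.report(result)
    return result
```

In a real system each stub would wrap actual hardware drivers and vision tools, but the control flow between the seven elements looks much like this.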

It all starts with the part or product being inspected. This is because it is the part size, specified tolerances and other parameters which will help to inform the required machine vision solution. To achieve desired results the system will need to be designed so that part placement and orientation is consistent and repeatable.

A sensor, which is often optical or magnetic, is used to detect the part and trigger:

  • the light source to highlight key features and
  • the camera to capture the image

This part of the process may also include what is often referred to as ‘staging’. Imagine a theatre: this is the equivalent of putting the actor centre stage, in the best possible place for the audience to see. Staging is often mechanical and is required to:

  • Ensure the correct part surface is facing the camera. This may require rotation if several surfaces need inspecting
  • Hold the part still for the moment that the camera or lens captures the image
  • Consistently put the part in the same place within the overall image ‘scene’ to make it easy for the processor to analyse.

Lighting is critical because it enables the camera to see the necessary details. In fact, poor lighting is one of the major causes of system failure. For every application there are common lighting goals:

  • Maximising feature contrast of the part or object to be inspected
  • Minimising contrast on features not of interest
  • Removing distractions and variations to achieve consistency

In this respect the positioning and type of lighting is key to maximise contrast of features being inspected and minimise everything else.
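A minimal way to compare candidate lighting set-ups against these goals is to measure the contrast between the mean grey levels of the feature and its background; the values below are illustrative, not from the text:

```python
def feature_contrast(feature_mean, background_mean):
    """Michelson-style contrast between feature and background grey levels."""
    return abs(feature_mean - background_mean) / (feature_mean + background_mean)

# Illustrative grey levels (0-255) under two hypothetical lighting set-ups:
backlit = feature_contrast(20, 230)    # dark silhouette on a bright backlight
frontlit = feature_contrast(120, 150)  # similar greys, much lower contrast
# The set-up with the higher value makes the feature easier to segment.
```

Scoring a handful of test images this way is a quick, objective check when choosing between lighting arrangements.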

Of course, an integrated inspection machine will have all of these aspects already designed and taken care of within the scope of the quality inspection unit, but these are just some of the basic elements which make up the guts of a machine vision system.

Human Vision v Machine Vision (and what the operational benefits really are).

We get asked this all the time: what are the differences between using a human inspector and what can we expect from an intelligent vision system? Advances in artificial intelligence, in particular deep learning, are enabling computers to learn for themselves, and so the gaps in this area continue to decrease. But it’s still safe to say that vision systems work from logic, don’t get tired and don’t have an “off” day. And of course, some production processes are too high speed for an operator to inspect (such as in medical device and pharmaceutical manufacturing).

Some of the key characteristics in comparison between human and machine vision can be seen in the table below:

  • Speed: the human visual system can process 10 to 12 images per second; machine vision runs at high speed – hundreds to thousands of parts per minute (PPM)
  • Resolution: human vision offers high image resolution; machine vision offers high resolution and magnification
  • Interpretation: humans handle complex information and are best for qualitative interpretation of an unstructured scene; machine vision follows its program precisely and is best for quantitative and numerical analysis of a structured scene
  • Light spectrum: human eyes are sensitive to electromagnetic wavelengths ranging from 390 to 770 nanometres (nm); machine vision requires additional lighting to highlight the parts being inspected, but can record light beyond the human visible spectrum – some machine-vision systems function at infrared (IR), ultraviolet (UV) or X-ray wavelengths
  • Consistency, reliability & safety: human inspection is impaired by boredom, distraction and fatigue; machine vision delivers continuous, repeatable performance 24/7 with 100% accuracy

As we can see in the table, machine vision offers some real benefits over human vision, and this is what an increasing number of industries are recognising and exploiting.

The operational benefits of machine vision

So, for the forward-thinking companies exploiting this technology, there are a number of operational benefits of machine vision including:

  • Increased competitiveness – higher productivity and output
  • Lower costs – Less downtime and waste, increased speed and error correction. Automatically identify and correct manufacturing problems on-line with machine vision forming part of the factory control network
  • Improved product quality – 100% quality check for maximum product quality
  • Improved brand reputation management – More stringent compliance with industry regulations, leading to fewer product recalls and a reduced number of complaints
  • Customer complaint handling process improvements – Image archiving and recording throughout the whole process
  • Increased safety – machine vision solutions contribute to a positive, safe working environment
  • Improvements to sustainable working practices – machine vision can improve the use of energy and resources, make material flow smoother, prevent system jams, reduce defects/waste and even save on space. It can also help to enable Just In Time processes through tracking and tracing products and components throughout the production process, avoiding component shortages, reducing inventory and shortening delivery time.
  • Higher innovation – Releasing staff from manual, repetitive tasks for higher value work can lead to greater creativity and problem-solving.


Whilst it’s hopefully clear that there are many benefits to machine vision, there are also some limitations. Machine vision systems are able to deliver results equal to, and often above, human performance, with greater speed, continuity and reliability over longer periods.

However, we should remember that the way the machine ‘sees’ can only be as good as the conditions and rules that are created to enable it to do so. In this respect, it is important that the following basic rules are observed:

  • The inspection task has been described precisely and in detail, in a way appropriate for the special characteristics of machine ‘vision’.
  • All permissible variants of test pieces (with regard to shape, colour, surface etc.) and all types of errors have been taken into account.
  • The environmental conditions (illumination, image capturing, mechanics etc.) have been designed in such a way that the objects or defects to be recognised stand out in an automatically identifiable way.
  • These environmental conditions are kept stable.

In a related point, machine vision can be limited by the relationship between the amount of data required to help train it and the processing power needed to efficiently process the data. However, deep learning, which enables machines to learn rather than train, requires much less data and so provides an opportunity for progress in this respect.

Top 6 medical device manufacturing defects that a vision system must reject

What does every medical device manufacturer crave? Well, the delivery of the perfect part to their pharmaceutical customer, 100% of the time. One of the major stumbling blocks to that perfect score, and to getting the parts-per-million (PPM) defect rate down to zero, is the scratches, cracks and inclusions which can appear (almost randomly) in injection moulded parts. Most of the time, the body in question is a syringe, vial, pen delivery, pipette or injectable product which is transparent or opaque, which means a crack, inclusion or scratch is even more apparent to the end customer. Here we drill down into the top six defects you can expect to see in your moulded medical device and how vision systems are used to automatically find and reject the failure before it goes out of the factory door.

1. Shorts, flash and incomplete junctions.

Some of these failures are within the tube surface and are due to a moulding fault in production. For example, an incomplete injection creates a noticeable short shot in the plastic ends of the product. The forming of junctions at the bottom end of the tube creates a deformation that must be analysed and rejected. Flash is additional material that puts the part out of tolerance.

2. Scratches and weld lines

These can be present on the main body, thread area or top shaft. They often have a slight depth to them and can be evident from some angles and less so from others. They can also be confused with weld lines, which run down the shaft or body of a syringe product – also a defect that is not acceptable in production.

3. Cracks

Cracks in a medical device can have severe consequences to the patient and consumer; this could allow the pharmaceutical product to leak from the device, change the dosage or provide an unclean environment for the drug delivery device. Cracks could be in the main body or sometimes in the thread area, or around the sprue.

4. Bubbles, Black Spots and Foreign Materials

Bubbles and black spots are inclusions and foreign material, which are not acceptable on any product, but on a drug delivery device that is opaque or clear they stand out a lot! These sorts of particulates are typically introduced during the moulding process and need to be automatically rejected. Often the cosmetic nature of these inclusions is more of an issue than any effect on the device’s function.

5. Sink marks, depressions and distortions

Sink marks, pits, and depressions can cause distortion in the components, thus putting the device out of tolerance or off-specification from the drawing. These again are caused by the moulding process and should be automatically inspected out from the batch.

6. Chips

Chips on the device have a similar appearance to short shots but could be caused on exit from the moulding or via physical damage through movement and storage. Chips on the body can cause out of tolerance parts and potential failure of the device in use.

But how does this automatic inspection work for all the above faults?

Well, a combination of optics, lighting and filters is the trick to identifying all the defects in medical plastic products, combined with the ability to rotate the product (if it is cylindrical) or move it in front of several camera stations. Some of the critical lighting techniques used for the automated surface inspection of medical devices are:

Absorbing – a backlight is used to create contrast for particulates, bubbles and cosmetic defects.
Reflecting – on-axis illumination creates reflections from fragments, oils and crystallisation.
Scattering – low-angle lighting highlights defects such as cracks and glass fragments.
Polarised – polarised light highlights fibres and impurities.

These techniques, combined with the use of telecentric optics (optics with constant magnification and no perspective distortion), allow the product to be inspected to its extremities, i.e. all the way to the top and to the bottom. Thus the whole medical device body can be examined in one shot.

100% automated inspection has come a long way in medical device and pharmaceutical manufacturing. Utilising a precision vision system for production is now a prerequisite for all medical device manufacturers.

What are false failures in machine vision (and the art of reducing them.)

Over the last twenty-plus years in the machine vision industry we’ve learnt a lot about customer expectations and the need to provide solutions which are robust, traceable and good value for money. One of the key elements of any installation or inspection machine is the mystical area of false failures in the vision system, and making sure these are reduced to the point of virtual elimination. A small change in the false failure rate in the right direction can have a massive impact on yield, especially when inspection takes place at 500 parts per minute!

For customers new to machine vision, it’s an unfamiliar concept: a few false-reject events have to happen in order to ensure that no false accepts take place. During the design phase of the vision set-up and inspection routine, we review a collection of “good” parts as well as a pool of “bad” parts. Strictly, the statistics would call for an infinite number of each in order to calculate the set points for each measured feature; in practice, it is impossible to create an effective vision solution without such a collection of components. We use various vision software methods to analyse all of the parts and features, building data sets from these good and bad parts.

This is where the system’s true feasibility is determined. Once a vision tool has been created for a given feature, the population of data points will usually show some degree of separation between good and bad parts. If the data does not form two mutually exclusive distributions, some good parts must be rejected to ensure that no faulty parts are passed and allowed to continue to the customer. In other words, there is a cost – rejecting some good parts – to guaranteeing that a failed part is never sent to the customer.
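The trade-off can be sketched numerically. Assuming illustrative measurement values where bad parts read high, placing the reject threshold at the lowest bad-part reading guarantees no false accepts, at the cost of some false rejects among the overlapping good parts:

```python
# Illustrative measurements only, not real inspection data.
good = [9.8, 9.9, 10.0, 10.1, 10.2, 10.3]   # readings from known-good parts
bad  = [10.2, 10.4, 10.6, 10.8]             # readings from known-bad parts

threshold = min(bad)                         # reject anything at or above this
false_rejects = [g for g in good if g >= threshold]
false_reject_rate = len(false_rejects) / len(good)
# Two good parts (10.2, 10.3) are sacrificed so that no bad part can pass.
```

The wider the overlap between the two distributions, the higher the false reject rate needed to keep the false accept rate at zero, which is exactly why separating the distributions is the art.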

You’ll notice that this chart explains the reasons for false rejections. More importantly, it emphasises that if a system is not rejecting any good parts, it may be passing bad ones.

Of course, with all our years of experience in machine vision, we understand this issue – so all our systems and machines are designed so that there is enough difference between good and bad that the distributions are as distinct as possible; that’s the art. This is not so difficult to achieve in presence/absence applications. It is more difficult in gauging and metrology vision, where there is some measurement uncertainty, and extremely difficult in surface flaw detection. Experience in the field of machine vision is everything when it comes to false failure reduction, and that is what we provide through our long-standing knowledge of vision systems.