Machine vision lighting – Proven techniques to boost inspection accuracy and performance

This month, we are discussing one of the main, and sometimes underutilised, elements of vision systems – lighting. Lighting is perhaps one of the most important aspects of machine vision quality control to get right, because it is what enables the camera to see the necessary detail. In fact, poor lighting is one of the major causes of failed machine vision installations and of poor yield from a vision system, so determining the correct lighting approach should be a top priority for any machine vision application. While the latest generation of AI deep learning vision systems can compensate to some extent for fluctuations in lighting, nothing replaces lighting the product accurately so that defective areas can be seen by the camera. Getting these fundamentals right makes the job of the image-processing software much more manageable.

For every vision quality control application, there are common lighting goals:

  1. Maximising feature contrast of the part or object to be inspected.
  2. Minimising contrast on features not of interest.
  3. Removing distractions and variations to achieve consistency.

In this respect, the positioning and type of lighting are key to maximising the contrast of inspected features and minimising everything else. Think of how a manual operator would inspect a part for quality. Typically, they would turn it in front of the light while moving their head to see how the light reflects off specific areas, slowly covering the whole part or the necessary detail, sometimes even holding it up to the light to create a background against which contamination or features can be seen. A human does all this instinctively, but for a digital vision system this somehow has to be replicated – and at high speed!
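
As a rough illustration of these goals, one practical way to compare candidate lighting set-ups is to score the contrast achieved inside a region of interest of a test image. The sketch below does this with OpenCV and NumPy; the image file names and ROI coordinates are placeholder assumptions, and a real lighting trial would look at far more than a single number.

```python
import cv2
import numpy as np

def roi_contrast(image_path, x, y, w, h):
    """Score one lighting set-up by the contrast inside a region of interest.

    Returns Michelson contrast and standard deviation of the ROI; higher
    values generally mean the feature stands out better from its surroundings.
    """
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if img is None:
        raise FileNotFoundError(image_path)
    roi = img[y:y + h, x:x + w].astype(np.float32)
    lo, hi = roi.min(), roi.max()
    michelson = (hi - lo) / (hi + lo + 1e-6)   # avoid division by zero
    return michelson, float(roi.std())

# Example: compare two test shots of the same part under different lighting.
# "ring_light.png" and "dome_light.png" are hypothetical capture files.
for shot in ("ring_light.png", "dome_light.png"):
    m, s = roi_contrast(shot, x=120, y=80, w=200, h=150)
    print(f"{shot}: Michelson contrast={m:.3f}, std-dev={s:.1f}")
```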

Lighting position

The first thing to consider is the lighting position. The position of the lighting will determine the brightness, shadowing, contrast, and what element of light will be seen by the camera system. The overall classification of machine vision lighting positions can be seen below.

Front illumination – the light is positioned on the same side as the camera/optics. Useful for inspecting defects or features on the surface of an object.
Back illumination – the object being illuminated blocks light on its way to the camera. Useful for highlighting the object silhouette, measuring outline shape and detecting presence/absence of holes or gaps, as well as enhancing defects on transparent objects.

Types of machine vision lighting

Within machine vision, the two most common types of lighting used are fibre optics and LED. Fibre optics allows the lighting source to be positioned away from the scene, with a fibre optic cable delivering the light to the vision system scene. This can also allow a higher lux light level to be concentrated into a smaller area.

Fibre optic – delivers light from a halogen, tungsten-halogen or xenon source. Bright, shapeable and focused.
LED lights – by far the most commonly used because of:

  • High speed
  • Pulsing – switched on and off in sequence, only used when necessary
  • Strobing – useful for freezing the motion of fast-moving objects and eliminating the influence of ambient light
  • Flexible positioning into a variety of shapes; directional or diffused
  • Long lifetime
  • High, stable output
  • Colours – available in visible and non-visible wavelengths
Dome lights
  • Designed to provide illumination coming from virtually any direction
  • Avoid reflection and provide even light when inspecting surface features on an object with complex curved geometry
Telecentric lighting
  • Accurate edge detection and defect analysis through silhouette imaging
  • Reflective objects
  • High accuracy & precise measurement
Diffused light
  • Filters create contrast to lighten or darken object features
  • Avoid uneven illumination on reflective surfaces
Direct light
  • Delivers lighting on the same optical path as the camera

Other key elements to consider when designing lighting for a machine vision environment include the direction, brightness and colour/wavelength compared to the target’s colour.

Lighting angle/direction

The position and angle of the LEDs have a direct effect on the ability of the vision system camera to see contrast in certain areas. This allows lighting setups to be designed to provide the optimum lighting conditions to highlight key areas of the product. This makes the image processing of the machine vision system image more consistent and makes it easier to provide a robust solution. There are several different techniques to think about in this area.

Bright field – light reflected by the object being illuminated falls inside the acceptance angle of the optics/camera.
  • Bright field front lighting – light reflected by the flat object being illuminated is collected by the optics/camera. Produces dark characteristics on a bright background.
  • Bright field back lighting – light is either blocked by the object or, if the material is transparent or translucent, transmitted through it.
Dark field – light reflected by the object being illuminated falls outside the acceptance angle of the optics/camera.
  • Dark field front lighting – reflected light is not collected by the optics/camera. Enhances the non-planar features of the surface as brighter characteristics on a dark background.
  • Dark field back lighting – light transmitted by the object and scattered by non-flat features is enhanced as bright on a dark background.
Co-axial illumination – the front light is positioned to hit the object surface at 90°. It can additionally be collimated so that the rays are parallel to the optics/camera axis, which is useful for achieving high contrast when searching for defects on highly reflective surfaces.

Colour/wavelength

Another aspect to think about when designing machine vision lighting is the filters which can be deployed on the front of the lens. These typically screw onto the front of the lens. They can limit the transmission of specific wavelengths of light, allowing the image to be changed to suit a particular condition that is being identified in the machine vision process.

Filters
  • Create contrast to lighten or darken object features.
  • Like colours lighten (e.g. red light makes red features brighter) and opposite colours darken (e.g. red light makes green features darker).
  • Polarizers are types of filters used to reduce glare or hot spots. They enhance contrast, so entire objects can be recognised.
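
To get a feel for the like/opposite colour rule, you can approximate the effect of a red filter (or red illumination) on a colour image by keeping only the red channel and comparing it with a plain grayscale conversion. A minimal sketch, assuming OpenCV and a hypothetical image file:

```python
import cv2

# Load a colour image of the part (OpenCV stores channels as B, G, R).
img = cv2.imread("part.png")          # hypothetical capture of a part
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Keeping only the red channel approximates viewing under red light/filter:
# red features stay bright, green features go dark, so red-vs-green contrast rises.
red_only = img[:, :, 2]

cv2.imwrite("gray_reference.png", gray)
cv2.imwrite("red_filter_simulation.png", red_only)
```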

Of course, the parameters outlined in the preceding tables are used in varying combinations and are key to achieving the optimal lighting solution. However, we also need to consider the immediate inspection environment. In this respect, the choice of effective lighting solutions can be compromised by access to the part or object to be inspected. Ambient lighting such as factory lights or sunlight can also have a significant impact on the quality and reliability of inspection and must be factored into the ultimate solution.

Finally, the interaction between the lighting and the object to be inspected must be considered. The object’s shape, composition, geometry, reflectivity, topography and colour will all help determine how light is reflected to the camera and the subsequent impact on image acquisition, processing, and measurement.

Due to the obvious complexities, there is often no substitute for testing various techniques and solutions. It is imperative to get the best lighting solution in place before starting development of the image-processing requirements in any machine vision quality control application.

Your quick guide to machine vision

Choosing the most appropriate combination of components needed for a machine vision system requires an expert understanding of application requirements and technology capabilities. With that in mind, in this month’s blog post we thought we would provide you with enough knowledge and understanding to have richer, more rewarding conversations about machine vision and vision systems technology. This quick guide will help you understand the basics of machine vision and vision systems, including components, applications, return on investment and the potential for use in your business.

The first thing to understand is the basic elements of a machine vision system. We’ve covered this before in this post, but it’s good to reiterate it here. The basics of a machine vision system, sketched in code after the list below, are as follows:

  • The machine vision process starts with the part or product being inspected. When the part is in the correct place, a sensor triggers the acquisition of the digital image.
  • Structured lighting is used to ensure that the image captured is of optimum quality.
  • The optical lens focuses the image onto the camera sensor.
  • Depending on its capabilities, the digitising sensor may perform some pre-processing to ensure the correct image features stand out.
  • The image is then sent to the processor for analysis against the set of pre-programmed rules.
  • Communication devices are then used to report and trigger automatic events, such as part acceptance or rejection.
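
Those steps map naturally onto a simple software loop. The sketch below shows that skeleton in Python; the camera, PLC and analysis objects are placeholders for whatever SDK, fieldbus protocol and image-processing routine a real system would use.

```python
def inspect_forever(camera, plc, analyse):
    """Skeleton of the trigger -> acquire -> process -> report loop described above.

    `camera`, `plc` and `analyse` are stand-ins for a real camera SDK handle,
    a PLC/fieldbus connection and the image-processing routine.
    """
    while True:
        plc.wait_for_trigger()            # part-in-place sensor fires
        image = camera.acquire()          # lens + sensor capture the image
        image = camera.preprocess(image)  # optional pre-processing step
        result = analyse(image)           # compare against pre-programmed rules
        plc.report(accept=result.passed)  # trigger accept/reject downstream
```
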
What can machine vision be used for?

The applications for machine vision vary widely between industries and product environments. However, to aid understanding, they can be broken down into the following five major categories. As outlined previously, in the section on ‘processor’, these usually combine a number of algorithms and software tools to achieve the desired result.

1. Location, guidance & positioning

When combined with robotics, machine vision offers powerful application options. By accurately determining the position of parts to direct robots, find fiducials, align products and provide positional adjustment feedback, guided robots enable a flexible approach to feeding and product changes. This eliminates the need for costly precision fixtures, allowing different parts to be processed without changing tools. Some typical applications include:

  • Locating and orientation of parts to compare them to specific tolerances
  • Finding or aligning parts so that they are positioned correctly for assembly with other components
  • Picking and placing parts from bins
  • Packaging parts on a conveyor belt
  • Reporting part positioning to a robot or machine controller to ensure correct alignment

 

2. Recognition, identification, coding & sorting

Using state-of-the-art neural network pattern recognition to recognize arbitrary patterns, shapes, features and logos, vision systems enable the comparison of trained features and patterns within fractions of a second. Geometric features are also used to find objects and easily identify models that are translated, rotated and scaled to sub-pixel accuracy. Typical applications include:

  • Decoding 1D & 2D symbols, codes and characters printed on parts, labels, and packages.
  • Reading Direct Part Marking (DPM) – a character, code or symbol marked directly onto a part or its packaging
  • Identifying parts by locating a unique pattern or based on colour, shape, or size.
  • Optical character recognition (OCR) and optical character verification (OCV) systems read alphanumeric characters and confirm the presence of a character string.

3. Gauging & measuring

Measuring in machine vision is used to confirm the dimensional accuracy of components, parts and sub-assemblies automatically, without the need for manual intervention. A key benefit with machine vision is that it is non-contact and so does not contaminate or damage the part being inspected. Many machine vision systems can also measure object features to within 0.0254 millimetres and therefore provide additional benefits over the contact gauge equivalent.
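
In practice, a basic non-contact measurement comes down to calibrating millimetres per pixel from a feature of known size and then converting pixel distances. The sketch below measures the width of a back-lit part from its silhouette using OpenCV; the calibration figure, file name and thresholding choices are assumptions for illustration only.

```python
import cv2

MM_PER_PIXEL = 25.0 / 830.0   # e.g. a 25 mm gauge block imaged as 830 pixels wide

img = cv2.imread("backlit_part.png", cv2.IMREAD_GRAYSCALE)   # hypothetical image
_, mask = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# Take the largest contour as the part silhouette and measure its bounding box.
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
part = max(contours, key=cv2.contourArea)
x, y, w, h = cv2.boundingRect(part)

width_mm = w * MM_PER_PIXEL
print(f"Measured width: {width_mm:.3f} mm")
```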

4. Inspection, detection & verification

Inspection is used to detect and verify functional errors, defects, contaminants and other product irregularities.

Typical applications include:

  • Verifying the presence/absence of parts or components
  • Counting to confirm quantities
  • Detecting flaws
  • Conformance checking to inspect products for completeness.
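
As a minimal illustration of the presence/absence and counting checks above, the sketch below counts dark blobs (for example, tablets in a blister pack) against an expected quantity. The file name, expected count and area threshold are assumptions.

```python
import cv2

EXPECTED_COUNT = 10            # quantity the pack should contain (assumed)
MIN_BLOB_AREA = 400            # pixels; rejects specks of noise (assumed)

img = cv2.imread("blister_pack.png", cv2.IMREAD_GRAYSCALE)   # placeholder image
_, mask = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# Connected-component labelling; stats holds the area of each blob.
count, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
blobs = [i for i in range(1, count) if stats[i, cv2.CC_STAT_AREA] >= MIN_BLOB_AREA]

print(f"Found {len(blobs)} items, expected {EXPECTED_COUNT}")
print("PASS" if len(blobs) == EXPECTED_COUNT else "FAIL")
```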

5. Archiving & Tracking

Finally, machine vision can play an important role in archiving images and tracking parts and products through the whole production process. This can be particularly valuable in tracking the genealogy of parts in a sub-assembly that make up a finished product. Data recorded can be used to drive better customer support, improve production processes and protect brands against costly product recalls.

What are the benefits of machine vision?

The economic case for investing in machine vision systems is usually strong due to the two key following areas:

1. Cost savings through reducing labour, re-work/testing, removing more expensive capital expenditure, material and packaging costs and removing waste

2. Increased productivity through process improvements, greater flexibility, increased volume of parts produced, less downtime, errors and rejections

However, just viewing the benefits from an economic perspective does not do justice to the true value of your investment. Machine vision systems can also add value in all of the following ways. Unfortunately, due to the intangible nature of some of these contributors, it can be difficult to put an actual figure on their value, but that shouldn’t stop attempts to include them.

  • Intellectually
    • By freeing staff from repetitive, boring tasks they are able to focus thinking in ways that add more value and contribute to increasing innovation. This is good for mental health and good for the business.
    • By reducing customer complaints, product recalls and potential fines, machine vision can help to build and protect your brand image in the minds of customers
    • Building a strong image in the minds of potential business customers through demonstrating adoption of the latest technology, particularly when they come and visit your factory!
    • Through the collection of better data and improved tracking, machine vision can help you develop a deeper understanding of your processes
  • Physically
    • The adoption of machine vision can help to complement and even improve health and safety practice
    • Removing operators from hazardous environments or strenuous activity reduces exposure to sickness, absence, healthcare costs or insurance claims
  • Culturally
    • Machine vision can contribute to and even accelerate a culture of continuous improvement and lean manufacturing
    • Through increased competitiveness and improved service levels, machine vision helps build a business your people can be proud of
  • Environmentally
    • Contributing to a positive, safe working environment for staff
    • Through better use of energy and resources, smoother material flow and reduced waste machine vision systems can help reduce your impact on the environment

What are the costs of machine vision systems?

Costs can range from several hundred for smart sensors and cameras up to half a million for complex systems. Of course, this will depend on the size and scope of your operations and may be more or less.

However, even in the case of high levels of capital investment it should be obvious, from the potential benefits outlined above, that a machine vision system can quickly pay for itself.

How to deploy an effective vision system for automated inspection

When we start developing a new vision system project or are asked to quote for a specific project, we return to the first principles of good practice for vision systems design. We have created these first principles from our millions of hours of experience developing machine vision solutions for Original Equipment Manufacturers (OEM) and end-user customers. It’s a holistic approach to a project review, from the initial understanding of the vision system elements to assessment for automation, an understanding of the process and production environment, and finally, control and factory information connectivity.

The key to this is being driven by the specification and development of a machine vision solution that will work and be robust in the long term. This might sound obvious, as why wouldn’t you design something which will work for the customer, provide long-term stability and make support (and everyone’s life) easier? But we sometimes hear of customers who have supposedly found a solution for a lower price or specification, which, when looked at in detail, proves to be a solution which won’t work, provides the customer with a lot of hassle and ends up being a quality control white elephant in their production. Vision systems have to be adequately thought out and understood. First and foremost, it must work and be robust so that the industrial automation in the line continues to run 24/7, without interruption.

So, how do we approach a machine vision project to ensure its effectiveness, I hear you say? Well, as we said, we look at the sequence of steps that will make for an effective vision system solution. This is best practice for vision system deployment and successful delivery. Where do you start?

1. User Requirement Specification (URS)/Specification/Detailed Information. The first thing to have is a very defined URS or specification to work with. Again, this might sound strange, but some customers have no specific written specifications, only an idea of a system. This is fine in the early stages, but to quote and ultimately deploy effectively, a specification is needed detailing the product, the variants, speeds, communication, quality checks required, tolerances, production information, process control and all elements of the production interaction (and rejection) that are required. For a formal medical device/pharma production URS, this will be spelt out in more detail on an individually numbered sentence-by-sentence or paragraph-by-paragraph basis, based on GAMP (Good Automated Manufacturing Practice). However, for others, it could be a few pages of written detail defining the requirements for the vision system. Whichever approach, the project needs to be defined from the get-go.

2. Samples Testing. Testing of samples is always a good idea. These are real-world samples from the production line showing the product variation on a shift-by-shift, week-by-week basis. Samples showing both good product and actual rejects are needed, with varying defect styles, to understand how such a rejection will manifest itself on the product. Depending on how far the project differs from applications and algorithms we already know, a formal Proof of Principle may be suggested to determine how the vision system can effectively identify the rejects.

3. Test Documentation. Our proven and reliable test format drives our engineering to document the sample testing results. This allows us to record all the information on the project and provide tangible data to review in the future if the project goes ahead, so it’s a key document in the fight for effectiveness in the vision system deployment. The sheet is divided into specific areas and completed with the sample testing. It includes all details on the job, including:
-Application type: for example presence verification, gauging, code reading or Optical Character Recognition (OCR)
-Inspected Part: is the sample indicative of the real “in-production” type, how many variants are there?
-Material: is the part metallic, opaque, transparent, etc?
-Surface and Form: is it reflective, painted, rough, etc.?
-Environment: what environment will the vision system operate in? For example, if surface detection of contamination is required, a clean room will help to reduce false failures caused by debris on the surface.
-Background: how is the product presented on a fixture, and what does that look like?
-Vision System specifics: what camera head type is needed, resolution, pixel size, frame rate, spatial resolution, field of view, object distance and optics.
-In-process specifics: How fast is the product moving, what cycle time is needed, and what wavelength of light is used?
-Image processing: once all the above elements are confirmed, the actual machine vision image processing needs to be reviewed. Can traditional algorithms be used, or is it an AI application? How repeatable is the processing based on the product deviation over many samples? Sample images should be saved showing the test conditions, and every sample reviewed.
-Process views and comms: what will we display on the production line HMI, what stats and yields need to be seen, and finally how will the system communicate with the Programmable Logic Controller (PLC) or factory information system?

4. Project Review. A multi-disciplinary team, including vision, controls, mechanical and automation engineering, carries out a review of the results. The team reviews the testing results and assesses how they apply to the production environment, automation and interaction with other production processes. The test documentation will define the specific angles, lighting and filters required for the vision set-up. Can the product be presented consistently to allow the defined fields of view to be seen? The whole team must agree that the project is viable and the vision testing was successful for us to progress – again, another gate to make sure the vision system deployment is effective.

5. Commercial Review. The commercial team will become involved in the project review to understand all elements of the vision system, the specific development needed, and how this translates into a complete system or machine, including all aspects of automation and validation. The commercials are agreed upon for the project.

6. Inspection Specification. Once a project is live, all of the work done in advance is transferred to the engineering team. This leads to the development and publication of an Inspection Specification that runs through the whole project. This specification defines all elements of the vision system and the approach for inspection. It is agreed with the customer and becomes the guiding document through to the project’s completion. For OEM customers who will be purchasing multiple systems, it also ensures a consistent machine vision system from project to project.

7. Installation and Commissioning. As the project progresses, competent and experienced IVS vision engineers perform installation and commissioning. They are guided by all the information already collated and assessed, therefore reducing any guesswork about the vision system’s operation as it’s installed. This measured approach means there are no surprises or unknowns once this stage is reached, derisking the project from both sides.

8. Validation FAT and SAT. Prior to final confirmation of operation, a Factory Acceptance Test (FAT) and Site Acceptance Test (SAT) are conducted based on the vision engineering and inspection specification requirements. This rigorous testing covers all failure conditions of the machine vision system, robust operation over a long period, and confirmation of the calibration of the complete system.


Overall, these steps create a practical framework for the orderly specification and deployment of a robust and fit-for-purpose vision system. Even for our OEM customers who require multiple machines or vision systems over a long contractual period, it pays to follow the standard procedure for effective vision system deployment. The process is designed to minimise risk and provide a robust and long-service vision system that can easily be supported and maintained.

How to boost AI machine vision systems

Artificial Intelligence (AI) machine vision continues to develop, and using AI deep learning (DL) in automated machine vision inspection has become a valid option in those applications where clear trends, learning, and data are available for process inspection. We continue to see strong growth in the use of AI vision systems. But at what point is a decision made on which machine vision process is best for the application deployment, and when should you boost your machine vision inspection to include an element of 100% AI deep learning inspection? And how do you improve and continue to turbo-charge your AI inspection?

Let’s drill down on what we need to apply AI machine vision. To capture training data, we need a consistent setup with known samples to capture a whole series of images. High-resolution images can capture more details, which can significantly improve the accuracy of the vision system. We need images for training the AI classifier engine, images to test the classifier within the training process, and finally, a set of unseen images to confirm the trained classifier is working as it should be. Some applications are naturally more akin to AI machine vision – such as surface inspection, surface anomalies, or subtle changes to a product’s appearance.
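
In practice, that means splitting the captured images into training, validation and unseen test sets before any model is trained. A minimal sketch using scikit-learn's train_test_split; the directory layout, class names and split ratios are assumptions.

```python
from pathlib import Path
from sklearn.model_selection import train_test_split

# Assumed layout: images/<class_name>/<image>.png, e.g. images/good, images/scratch
paths = sorted(Path("images").glob("*/*.png"))
labels = [p.parent.name for p in paths]

# 70% training, 15% validation (used during training), 15% unseen test set.
train, rest, y_train, y_rest = train_test_split(
    paths, labels, test_size=0.30, stratify=labels, random_state=0)
val, test, y_val, y_test = train_test_split(
    rest, y_rest, test_size=0.50, stratify=y_rest, random_state=0)

print(f"train={len(train)}  val={len(val)}  unseen test={len(test)}")
```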

This differs from applications that require specific data to be calculated, such as automated gauging and metrology, where particular parts need to be measured with a vision system to an exact tolerance; this is not an application suited or achievable with AI vision systems (as no such data is available from the AI classifier).

We should also be mindful of how the production systems can be deployed. As AI requires a dataset to train and work with, in many cases the system will need to be installed and images captured from the live line before a determination can be made as to whether AI machine vision is applicable. For example, in medical device and pharmaceutical applications this is difficult, as the solution needs to be proven and the validation paperwork completed before the final installation qualification, and it’s hard to validate an AI model. But in other application areas, cameras can be installed and images captured as part of the installation process. Always remember the pros and cons of Artificial Intelligence in machine vision.

How can we boost the AI machine vision algorithm?

The first step in boosting the AI machine vision algorithm involves showing it new images and more deviations from normal, and giving it the ability to account for skew, changes in size, and overall deviation of the image. So, it’s important to make the most of the available data by using data augmentation techniques. This includes rotating, flipping, scaling, and adding noise to the images. Data augmentation increases the diversity of the training data without needing to collect new images, which helps prevent overfitting and improves the generalisation capabilities of the AI model.
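
A hedged sketch of those augmentations using only OpenCV and NumPy (rotation, flip, scale and additive noise); in production a dedicated augmentation library would often be used, and the parameter ranges and file paths here are purely illustrative.

```python
import cv2
import numpy as np

rng = np.random.default_rng(0)

def augment(img):
    """Return a randomly rotated, flipped, scaled and noise-injected copy of img."""
    h, w = img.shape[:2]
    angle = rng.uniform(-10, 10)                       # small random rotation
    scale = rng.uniform(0.9, 1.1)                      # +/-10% size change
    m = cv2.getRotationMatrix2D((w / 2, h / 2), angle, scale)
    out = cv2.warpAffine(img, m, (w, h), borderMode=cv2.BORDER_REPLICATE)
    if rng.random() < 0.5:                             # random horizontal flip
        out = cv2.flip(out, 1)
    noise = rng.normal(0, 5, out.shape)                # mild Gaussian noise
    return np.clip(out.astype(np.float32) + noise, 0, 255).astype(np.uint8)

# Generate e.g. five augmented variants per original training image.
img = cv2.imread("train/good/sample_001.png")          # placeholder path
for i in range(5):
    cv2.imwrite(f"train/good/sample_001_aug{i}.png", augment(img))
```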

It’s important to ensure that your dataset represents the real-world scenarios your machine vision system will encounter. A balanced dataset that includes various angles, lighting conditions, and object variations is crucial for training a robust AI model. Including diverse examples helps avoid biases and makes the system more adaptable and accurate.

It’s important to ensure all elements are handled, so all versions of the reject are known and seen. The classifier will still flag those parts with deviations it has not seen before, but you can boost the AI by providing it with more data to work with. However, there is a trade-off between the time needed to retrain the classifier on a decent GPU PC and the boost you can give to the AI machine vision calculation. It’s best to utilise hardware accelerators like GPUs, TPUs, or FPGAs to speed up the training and inference of the machine vision AI models.

In general, the more data the AI has and the larger the network, the more accurate the analysis will be. Implementing a methodology for continuous integration and deployment (CI/CD) to streamline updating models in production is crucial.

Another way to boost the AI machine vision system is to combine its functionality with traditional machine vision algorithms. We do this often, knowing AI is not a panacea for all application needs. So, use an algorithm, for example, to find specific error states, and then use an AI classification to drill down on ambiguous or hard-to-spot surface/defect deviations.
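
A sketch of that hybrid approach is shown below: a conventional step (thresholding and blob analysis) finds candidate regions quickly, and only those candidates are passed to an AI classifier. The classifier call is a placeholder for whatever trained model is actually deployed, and the thresholds are illustrative.

```python
import cv2

def find_candidates(gray, min_area=200):
    """Traditional step: threshold + blob analysis to locate possible defects."""
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    n, _, stats, _ = cv2.connectedComponentsWithStats(mask)
    return [stats[i, :4] for i in range(1, n)
            if stats[i, cv2.CC_STAT_AREA] >= min_area]   # (x, y, w, h) boxes

def inspect(gray, classifier):
    """Hybrid step: let the AI model judge only the ambiguous candidate regions."""
    for x, y, w, h in find_candidates(gray):
        crop = gray[y:y + h, x:x + w]
        label, confidence = classifier(crop)     # placeholder trained model call
        if label == "defect" and confidence > 0.8:
            return "REJECT"
    return "PASS"
```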

Continuous Monitoring and Updating of the AI vision system

It’s extremely important to regularly monitor the performance of a machine vision system in production. We generally set up automated systems to track key performance metrics and detect any degradation in performance over time. Continuous monitoring allows timely interventions and updates to maintain the AI model’s high accuracy and reliability. Remember, the environment in which machine vision systems operate can change over time. Periodically update your models with new data to stay accurate and relevant.
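
One lightweight way to implement such monitoring is to track the rolling reject rate against the baseline measured at validation and raise an alert on sustained drift, as in the sketch below; the window size and tolerance are illustrative assumptions.

```python
from collections import deque

class DriftMonitor:
    """Rolling-window check on the reject rate of a deployed vision system."""

    def __init__(self, baseline_reject_rate=0.02, window=500, tolerance=2.0):
        self.baseline = baseline_reject_rate       # rate measured at validation
        self.window = deque(maxlen=window)         # most recent pass/fail results
        self.tolerance = tolerance                 # allowed multiple of baseline

    def record(self, rejected: bool) -> bool:
        """Log one inspection result; returns True if an alert should be raised."""
        self.window.append(1 if rejected else 0)
        if len(self.window) < self.window.maxlen:
            return False                           # not enough data yet
        rate = sum(self.window) / len(self.window)
        return rate > self.baseline * self.tolerance

monitor = DriftMonitor()
# monitor.record(rejected=...) would be called after every inspection result.
```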

Boosting AI machine vision systems involves a holistic approach that includes enhancing data quality, utilising advanced algorithms, ensuring real-time processing capabilities, and implementing robust preprocessing and postprocessing techniques. Additionally, focusing on robustness, generalisation, and continuous monitoring ensures that the system remains accurate and reliable. By following these strategies, you can significantly improve and boost the performance and applicability of your AI machine vision systems, unlocking their full potential across many industry sectors and strengthening their use in industrial automation.


The magic behind machine vision: Clever use of optics and mirrors

Sometimes, we’re tasked with some of the most complex machine vision and automated inspection tasks. In addition to our standard off-the-shelf solutions in the medical device, pharmaceutical, and automotive markets, we often get involved with the development of new applications for machine vision inspection. Usually, this consists of adapting our standard solutions to a client’s specific application requirement. Sometimes, though, a standard machine vision camera or system cannot be applied, due to the complexity of the inspection requirements, the mechanical handling, the part surface condition, cycle time needs or simply limitations in how the part can be presented to the vision system. Something new is needed.

This complex issue is often discussed in customer and engineering meetings, and the IVS engineering team often has to think outside the box and perhaps use some “magic”! This is when the optics (machine vision lenses) need to be used in a unique way to achieve the desired result. This can be done in several ways, typically combining optical components, mirrors, and lighting.

So what optics and lenses are used in machine vision?
The primary function of any lens is to collect light scattered by an object and reproduce a picture of the object on a light-sensitive ‘sensor’ (often CCD or CMOS-based); this is the machine vision camera head. The image captured is then used for high-speed industrial image processing. When selecting optics, several parameters must be considered, including the area to be imaged (field of view), the thickness of the object or features of interest (depth of field), the lens-to-object distance (working distance), the intensity of light, the optics type (e.g. telecentric), and so on.

Therefore, the following fundamental parameters must be evaluated for the optics:
  • Field of View (FoV) – the entire area the lens can see and image on the camera sensor.
  • Working Distance (WD) – the distance between the object and the lens at which the image is in the sharpest focus.
  • Depth of Field (DoF) – the maximum range over which an object appears to be in acceptable focus.
In addition, the vision system magnification (the ratio of sensor size to field of view) and resolution (the shortest distance between two points that can still be distinguished as separate points) must also be appraised.
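
Those parameters can be sanity-checked with simple ratios before any hardware is chosen. The sketch below estimates magnification, field of view and object-space resolution from sensor and lens data using a thin-lens approximation; all the numbers are placeholder assumptions and no substitute for lens-specific data sheets.

```python
# Thin-lens / pinhole approximation: magnification ~ focal_length / working_distance
# (valid when the working distance is much larger than the focal length).

sensor_width_mm = 8.8          # e.g. a 2/3" sensor (assumed)
sensor_pixels_h = 2448         # horizontal resolution (assumed)
focal_length_mm = 25.0         # lens focal length (assumed)
working_distance_mm = 300.0    # lens-to-object distance (assumed)

magnification = focal_length_mm / working_distance_mm
fov_width_mm = sensor_width_mm / magnification
mm_per_pixel = fov_width_mm / sensor_pixels_h

print(f"Magnification: {magnification:.4f}")
print(f"Horizontal field of view: {fov_width_mm:.1f} mm")
print(f"Object-space resolution: {mm_per_pixel * 1000:.1f} microns/pixel")
```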

But what if a direct line of sight cannot be achieved due to a limitation in the presentation or the space available? This is when the magic happens. Optical path planning is used to understand the camera position (or cameras) relative to the part and the angle of view required. Combining a set of mirrors in the line of sight makes it possible to see over a longer working distance than would otherwise be possible. The same parameters apply (FoV, WD, DoF etc.), but a long optical path can be folded with mirrors to fit into a more compact space. This optical path can be folded multiple times using several mirrors to provide one continuous view through a long path. Cameras can also be mounted closely with several mirrored views to offer a 360-degree view of the product. This is especially useful in smaller product inspections like pill or tablet vision inspection checks. Multiple mirrors allow all sides and angles of the pill to be seen, providing a comprehensive automated inspection for surface defects and imperfections. This technique is also applied in the subtle art of needle tip inspection (in medical syringes) using vision systems, with mirror systems allowing a complete view of the tip from all angles, sometimes providing a single image with two views of the same product. The optical paths, the effect of specific machine vision lighting and the mechanical design must all be considered to create the most robust solution.

So when our engineers say they will be using “magic” optics, I no longer look for a wand to be waved but know that a very clever application of optics and mirrors will be designed to provide a robust solution for the automated vision system task at hand.

How to create data-driven manufacturing using automated vision metrology systems

Applying automated vision metrology technologies for process control before, during and after assembly and machining/moulding is now a prerequisite in any production environment, especially in the orthopaedic and additive manufacturing industries. This month, we’re drilling down on the data and knowledge you can create for your manufacturing process when deploying vision metrology systems for production quality control. Let’s face it – we all want to know where we are versus the production schedule!

What are “vision metrology” systems and machines?

It’s the use of non-contact vision systems and machines with metrology-grade optics, lighting and software algorithms to increase the throughput of quality control inspection and to collect data in real time for the production process.

What are the benefits of vision metrology systems?

Before we look at the data available on such systems and how they can be utilised to create data-driven production decisions, we should look at the benefits of automated vision metrology systems. This, combined with the data output, clearly focuses on why such systems should be deployed in today’s production environments.

Increased throughput. One of the main drivers and benefits of using automated vision metrology systems is the higher throughput compared to older contact and probing CMM approaches. Coupled with integration into auto-loading and auto-tending options, the higher production rates achievable with automated vision metrology allow for faster production and increased output from the factory door.

Better yields. As the systems are non-contact, there is no chance of damage or marking of the product, which is the risk with probing and contact inspection solutions. Checking quality through automated vision allows yields to improve across the production process.

Faster reaction to manufacturing issues. Data is king in the fast-moving production environment. Real-time defect detection – seeing spikes in quality and identifying quality issues quicker – allows production managers to react faster to changes or faults in manufacturing. Vision metrology allows immediate review and analysis of statistical process control on a batch, shift and ongoing basis, and the live progress of work orders can be monitored as they happen.

Less downtime. With no contact points and fewer moving parts, maintenance downtime is minimal when applying automated vision metrology machines. Maintenance can be used for other tasks and increases productivity across the manufacturing floor.

Guarantees your quality level. Ultimately, the automated vision metrology machine is a goalkeeper. This allows the quality and production team to sleep easy, knowing that parts are not only being inspected but that a saved photo of every product through production also provides warranty protection.

How do we get data from an automated vision metrology quality control system?

This is where the use of such vision technology gets interesting. The central aspect is the quality control and automated checking of crucial measurements (usually critical-to-quality, CTQ, characteristics), but the data driving the quality decision is just as valuable to the manager. The overall data covers a range of data objects that could potentially be needed when running a modern manufacturing plant. These include OEE data, SPC information, shift records, KPI metrics, continuous improvement information, root cause analysis, and data visualisation.

OEE Data (Overall Equipment Effectiveness). The simplest way to calculate OEE is the ratio of fully productive time to planned production time. Fully productive time is just another way of saying manufacturing only good parts as quickly as possible (at the ideal cycle time) with no stop time. A complete OEE analysis includes availability (run time/planned production time), performance ((ideal cycle time x total count)/run time) and quality (good count/total count). The vision metrology data set allows this data to be easily calculated. The data is stored within the automated vision metrology device or sent directly to the factory information system.
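
The OEE arithmetic above is straightforward to reproduce from the counters a vision metrology system already records, as the short sketch below shows with invented shift figures.

```python
def oee(planned_time, run_time, ideal_cycle_time, total_count, good_count):
    """OEE = availability x performance x quality, as defined above."""
    availability = run_time / planned_time
    performance = (ideal_cycle_time * total_count) / run_time
    quality = good_count / total_count
    return availability * performance * quality, availability, performance, quality

# Invented shift: 8 h planned, 7 h running, 2 s ideal cycle, 11,500 parts, 11,320 good.
score, a, p, q = oee(planned_time=8 * 3600, run_time=7 * 3600,
                     ideal_cycle_time=2.0, total_count=11500, good_count=11320)
print(f"Availability {a:.2%}, Performance {p:.2%}, Quality {q:.2%}, OEE {score:.2%}")
```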

Statistical Process Control (SPC) is used in industrial manufacturing quality control to manage, monitor, and maintain production processes. The formal term is “the use of statistical techniques to control a process or production method”. The idea is to make a process as efficient as possible while producing products within conformance specifications with as little scrap as possible. Vision metrology allows vital characteristics to be analysed at speed on 100% of product, compared to the slow, manual load CMM route. SPC data can be displayed on the vision system HMI, and immediate decisions can be made on the process performance and data presentation.
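
As a minimal illustration of SPC on a vision-measured characteristic, the sketch below computes individuals-chart control limits (mean plus or minus three sigma, with sigma estimated from the moving range) and flags out-of-control measurements; the sample data is invented.

```python
import numpy as np

# Vision-measured diameters in mm for consecutive parts (invented sample data).
x = np.array([9.98, 10.01, 10.00, 9.99, 10.02, 10.00, 9.97, 10.03, 10.05, 10.00])

moving_range = np.abs(np.diff(x))
sigma_est = moving_range.mean() / 1.128      # d2 constant for subgroups of 2
centre = x.mean()
ucl, lcl = centre + 3 * sigma_est, centre - 3 * sigma_est

print(f"Centre line {centre:.3f} mm, control limits [{lcl:.3f}, {ucl:.3f}] mm")
out_of_control = np.where((x > ucl) | (x < lcl))[0]
print("Out-of-control points at indices:", out_of_control.tolist())
```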

Shift records are part of the validation and access control system for the vision system used in vision metrology. This stops operators from accessing certain features and logs all information against a specific shift, operator or team. Data analytics reveal trends in operator handling and contribute to the data provided by the system. You may see a spike in quality concerns from a specific shift or operator; these trends can easily be tracked and traced.

KPI metrics are Key Performance Indicators normally specific to the manufacturing site. However, these invariably include monitoring the quality statistics, OEE, process parameters, environmental factors (e.g. temperature and humidity), shift data, downtime and metrology measurements. This is all part of understanding how certain situations impact the output from processes and how levels can be maintained.

Continuous Improvement is the process of improving a manufacturing facility’s product and service quality. Therefore, the data and images from any automated vision metrology machine play an important role in providing live data related to Six Sigma and real-time monitoring of the ppm (parts per million) failure rate within the facility.

Root cause analysis of manufacturing issues is challenging without tangible data from a quality inspection source. The ability to review product images at the point of inspection (with the date and time stamped information), coupled with the specific measurement data, allows easier and quicker root cause analysis when problems crop up.

Data visualisation is now built into modern automated vision metrology systems. The production manager and operators can see quality concerns, spikes in rejects, counts of good and bad, shift data, and live SPC all in a single screen. This can be displayed locally or sent immediately to the factory information system.

By utilising the latest-generation automated vision metrology systems, manufacturers can now build a fully integrated, data-driven production environment, allowing easier control based on accurate, immediate information and photographs of products running through production.

The ultimate guide to Unique Device Identification (UDI) directives for medical devices (and how to comply!)

This month, we’re talking about the print you see on medical devices and in-vitro products used for the traceability and serialisation of the product – called the UDI or Unique Device Identification mark. In the ever-evolving world of healthcare, new niche manufacturers and big medical device industrial corporations find themselves at a crossroads. Change has arrived within the Medical Device Regulation, and many large and small companies have yet to strike the right chord, grappling with the choices and steps essential to orchestrate compliance with the latest specifications. Automated vision systems are required at every stage of production and packaging to confirm compliance and adherence to the standards for UDI. Vision systems ensure quality and tracking and provide a visible record (through photo save) of every product stage during production, safeguarding medical device, orthopaedic and in-vitro product producers.


What is UDI?

The UDI system identifies medical devices uniformly and consistently throughout their distribution and use by healthcare providers and consumers. Most medical devices are required to have a UDI on their label and packaging, as well as on the product itself for select devices.

Each medical device must have an identification code: the UDI is a one-of-a-kind code that serves as the “access key” to the product information stored on EUDAMED. The UDI code can be numeric or alphanumeric and is divided into two parts:

DI (Device Identifier): A unique, fixed code (maximum of 25 digits) that identifies a device’s version or model and its maker.

Production Identifier (PI): a variable code associated with the device’s production data, such as batch number, expiry date, manufacturing date, etc. UDI-DI codes are issued through authorised agencies such as GS1, HIBCC, ICCBBA or IFA and can be freely selected by the medical device maker.
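
In GS1-style coding, these two parts are concatenated using Application Identifiers, for example (01) for the DI/GTIN and (17), (10) and (21) for expiry date, lot and serial number. The toy parser below splits a human-readable example string; the values are made up, real barcodes omit the brackets and use separator characters, so treat this purely as an illustration of the structure.

```python
import re

# Human-readable GS1-style example (made-up values):
# (01) device identifier, (17) expiry, (10) lot, (21) serial.
udi = "(01)05012345678900(17)261231(10)LOT123A(21)SER0001"

AI_NAMES = {"01": "DI / GTIN", "17": "Expiry (YYMMDD)", "10": "Lot/Batch", "21": "Serial"}

for ai, value in re.findall(r"\((\d{2})\)([^(]+)", udi):
    print(f"AI {ai} {AI_NAMES.get(ai, 'Other'):18s}: {value}")
```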


UDIs have been phased in over time, beginning with the most dangerous equipment, such as heart valves and pacemakers. So, what’s the latest directive on this?

Navigating the Terrain of MDR and IVDR: Anticipating Challenges

The significant challenges presented by the European Medical Devices Regulation (MDR) 2017/745 have come to the fore since its enactment on May 26th, 2021. This regulatory overhaul supersedes the older EU directives, namely MDD and AIMDD. A pivotal aspect of the MDR is its stringent oversight of medical device manufacturing, compelling the display of a unique code for traceability throughout the supply chain – this allows full traceability through the process from the manufacture of the medical device to the final consumer use.

Adding to the regulatory landscape, the In-Vitro Diagnostic Regulation (IVDR) 2017/746 debuted on May 26th, 2022. This regulation seamlessly replaced the previous In-Vitro Diagnostic Medical Devices (IVDMD) regulation, further shaping the intricate framework governing in-vitro diagnostics. The concurrent implementation of MDR and IVDR ushers in a new era of compliance and adaptability for medical devices and diagnostics stakeholders.

What are the different classes of UDI?

The classes run from the highest risk, where Notified Body approval is required, down to the lowest risk, where self-assessment applies:

  • Medical devices Class III / IVD Class D (Notified Body approval required) – e.g. pacemakers, heart valves, implanted cerebral stimulators / Hepatitis B blood-donor screening, ABO blood grouping
  • Medical devices Class IIb / IVD Class C (Notified Body approval required) – e.g. condoms, lung ventilators, bone fixation plates / blood glucose self-testing, PSA screening, HLA typing
  • Medical devices Class IIa / IVD Class B (Notified Body approval required) – e.g. dental fillings, surgical clamps, tracheostomy tubes / pregnancy self-testing, urine test strips, cholesterol self-testing
  • Medical devices Class I / IVD Class A (self-assessment) – e.g. wheelchairs, spectacles, stethoscopes / clinical chemical analysers, specimen receptacles, prepared selective culture media

Deadlines for the classification to come into force

Implementing the new laws substantially influences many medical devices and in-vitro device companies, which is why the specified compliance dates have been prioritised based on the risk class to which the devices belong. Medical device manufacturers must adopt automated inspection into their processes to track and confirm that the codes are on their products in time for the impending deadlines.

The regulation recognises four classes of medical devices and four categories of in-vitro devices, which are classified in ascending order based on the degree of risk they pose to public health or the patient.

  • 2023: UDI marking on medical devices – Class III; direct UDI marking on reusable devices – Class III; UDI marking on in-vitro devices (IVD) – Class D
  • 2025: UDI marking on medical devices – Class I; direct UDI marking on reusable devices – Class II; UDI marking on IVD – Class C & B
  • 2027: direct UDI marking on reusable devices – Class I; UDI marking on IVD – Class A

The term “unique” does not imply that each medical device must be serialised individually but must display a reference to the product placed on the market. In fact, unlike in the pharmaceutical industry, serialisation of devices in the EU is only required for active implantable devices such as pacemakers and defibrillators.

EUDAMED is the European Commission’s IT system for implementing Regulation (EU) 2017/745 on medical devices (MDR) and Regulation (EU) 2017/746 on in-vitro diagnostic medical devices (IVDR). The EUDAMED system contains a central database for medical and in-vitro devices, which is made up of six modules.

EUDAMED is currently used on a voluntary basis for the modules that have been made available. Connection to EUDAMED must take place within six months of all platform modules being released and fully functional.

Data can be exchanged with EUDAMED in three ways, depending on the amount of data to be loaded and the required level of automation.

The “visible” format of the UDI is the UDI Carrier, which must contain legible characters in both HRI (Human Readable Interpretation) and AIDC (Automatic Identification and Data Capture) formats. The UDI must be displayed on the device’s label, primary packaging, and all upper packaging layers representing a sealable unit.

These rules mean medical device manufacturers need reliable, traceable automated vision inspection to provide the track and trace capability for automatic aggregation and confirmation of the UDI process.

The UDI is a critical element of the medical device manufacturing process, and therefore, applying vision systems with automated Optical Character Recognition (OCR) and Optical Character Verification (OCV) in conjunction with Print Quality Inspection (PQI) is required for manufacturers to guarantee that they comply with the legislation.

But what are the differences between Optical Character Recognition (OCR) vs Optical Character Verification (OCV) vs Print Quality Inspection (PQI)?
We get asked this often, and how you apply the vision system technology is critical for traceability. To comply with UDI legislation, medical device manufacturers must check that their products comply with the requirements and track them through the production process. Whether you use OCR, OCV or PQI depends on where you are in the manufacturing process.


Optical Character Recognition (OCR)

Optical character recognition is based on the need to identify a character from the image presented, effectively recognise the character, and output a string based on the characters “read”. The vision system has no prior knowledge of the string. It therefore mimics the human behaviour of reading from left to right (or any other orientation) the sequence it sees.
You can find more information on OCR here.

Optical Character Verification (OCV)
Optical character verification differs from recognition in that the vision system has been told in advance what string it should expect to “read”, and usually which class of characters it should expect in a given location. Most UDI vision systems will utilise OCV, as opposed to OCR, as the master database, line PLC (Programmable Logic Controller), cell PLC and laser/label printer should all know what will be marked. Therefore, the UDI vision system checks that the characters marked are verified and readable in the correct location (and have actually been marked). This then allows traceability via nodes of OCV systems along the production line.
You can find more information on OCV here.

Print Quality Inspection (PQI)

Print quality inspection methods differ considerably from the identification methods discussed so far. The idea is straightforward: an ideal template of the print to be verified is stored; this template does not necessarily have to be taken from a physically existing “masterpiece” – it could also be computer-generated. Next, an image containing all the differences between the ideal template and the current test piece is created. At its simplest, this can be done by subtracting the reference image from the current image. But applying this method directly in a real-world context will very quickly show that it always detects considerable deviations between the reference template and test piece, regardless of the actual print quality. This is a consequence of inevitable position variations and image capture and transfer inaccuracies. A change in position by a single pixel will result in conspicuous print defects along every edge of the image. So, print quality inspection looks for the difference between the print trained and the print found, once alignment has been taken care of. Therefore, PQI will pick up missing parts of characters, breaks in characters, streaks, lines across the pack and general gross defects of the print.

You can find more information on PQI here.
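
A bare-bones version of that subtraction approach is sketched below: the live print is first aligned to the golden template (here with a simple translation found by template matching) before the images are differenced, which is the step that stops one-pixel misalignments reading as defects. Real PQI tools are considerably more sophisticated; the file names and thresholds are placeholders.

```python
import cv2
import numpy as np

template = cv2.imread("golden_print.png", cv2.IMREAD_GRAYSCALE)   # ideal print
live = cv2.imread("live_print.png", cv2.IMREAD_GRAYSCALE)         # current part
# Assumes the live image is slightly larger than the template so it can be located.

# Crude alignment: find where the template best matches inside the live image.
res = cv2.matchTemplate(live, template, cv2.TM_CCOEFF_NORMED)
_, _, _, (dx, dy) = cv2.minMaxLoc(res)
aligned = live[dy:dy + template.shape[0], dx:dx + template.shape[1]]

# Difference image: anything bright here is a deviation from the golden print.
diff = cv2.absdiff(aligned, template)
_, defects = cv2.threshold(diff, 60, 255, cv2.THRESH_BINARY)

defect_pixels = int(np.count_nonzero(defects))
print("PASS" if defect_pixels < 50 else f"FAIL ({defect_pixels} deviating pixels)")
```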


What about Artificial Intelligence (AI) with OCR/OCV for UDI?

Where we’ve been discussing OCR and OCV, we’ve assumed that the training class is based either on a standard OCR font (a unique sans-serif font created in the early days of vision inspection to provide a consistent font, recognisable by humans and vision systems), or a trainable, repeatable fixed font. OCR with AI is a comprehensive deep-learning-based OCR (or OCV) method. This innovative technology advances machine vision towards human reading. AI OCR can localise characters far more robustly than conventional algorithms, regardless of orientation, font type, or polarity. The capacity to automatically arrange characters enables the recognition of entire words. This significantly improves recognition performance because misinterpreting characters with similar appearances is avoided. Large images can be handled more robustly with this technique, and the AI OCR result offers a list of character candidates with matching confidence ratings, which can be utilised to improve the recognition results further. Users benefit from enhanced overall stability and the opportunity to address a broader range of potential applications due to expanded character support. Sometimes, this approach can be used. However, it should be noted that it can make validation difficult due to the need for a data set prior to learning, compared to a traditional setup.

Do all my UDI vision inspection systems require validation?

Yes, all the vision systems for UDI inspection must be validated to GAMP/ISPE levels. Having the systems in place is one thing, but the formal testing and validation paperwork is needed to comply fully with the legislation. You can find more information on validation in one of our blog articles here.


Industrial vision systems allow for the 100% inspection of medical devices at high production speeds, providing a final gatekeeper to prevent any rogue product from exiting the factory gates. These systems, in combination with factory information communication and track & trace capability, allow the complete tracking of products to UDI standards through the factory process.

IVS helps global contact lens manufacturers achieve error-free lenses (at speed!)

This month, we are talking about automated visual inspection of cosmetic defects in contact lens inspection, a major area of expertise in the IVS portfolio. Contact lens production sits between medical device manufacturing and pharmaceutical manufacturing, which means that it falls within the GAMP/ISPE/FDA validation requirements when it comes to manufacturing quality and line validation.

The use of vision technology for the inspection of contact lenses is intricate. Due to the wide variety of lens profiles, powers, and types (such as spherical, toric, aspherical etc.), manufacturers historically relied on human operators to visually verify each lens before it was shipped to a customer. Cosmetic defects in lenses range from scratches and inclusions, through to edge defects, mould polymerisation problems and overall deformity in the contact lens.

The Challenge: Manual Checks are Prone to Human-Error and are Slow!

The global contact lens industry must maintain the highest standards of quality due to the specialist nature of its products. Errors or faults in contact lenses can lead to product recalls, financial losses, or a drop in brand loyalty and integrity – all issues that could be very difficult to rectify. However, manually checking lenses has become extremely time-consuming, inefficient, and error-prone. Due to the brain’s tendency to rectify errors naturally, human error is a common cause of oversights. This makes it next to impossible to spot every fault by hand; as a result, many errors are missed and defective contact lenses can be passed to the consumer.

Coupled with this, many global contact lens manufacturing brands run their production at high speed, which means using human inspectors simply becomes untenable based on the volume and speed of production; so in the last twenty-five years, contact lens manufacturers have moved to high-speed, high-accuracy automated visual inspection, negating the need for manual checks.

To ensure their products’ quality, accuracy, and consistency, many of the world’s top contact lens manufacturing brands have turned to IVS to solve their automated inspection and verification problems. This can be either wet or dry contact lens inspection, depending on the specific nature of the production process.

The Solution: Minimize Errors and Ensure Quality Through IVS Automated Visual Inspection

IVS contact lens inspection solutions have been developed over many years based on a deep and long heritage in contact lens inspection and the required FDA/GAMP validation. The vision system drills down on the specific contact lens defects, allowing fine, highly accurate quality checks and inspection coupled with the latest generation of AI vision inspection techniques. This achieves the maximum yield, coupled with full reporting and statistical analysis to feed data back to the factory information system, providing real-time, reliable information to the contact lens manufacturer. Failed lenses are rejected, allowing 100% quality-inspected contact lenses to be packed and shipped.

IVS have inspection systems for manual load, automated and high-speed contact lens inspection, allowing our solutions to cover the full range of contact lens manufacturers – from small start-ups to the major high-level production volume producers. Whatever the size of your contact lens production, we have validated solutions to FDA/ISPE/GAMP for automated contact lens inspection.

Contact lens manufacturers now have no requirement for manual visual inspection by operators; this can all be achieved through validated automated visual inspection of cosmetic defects, providing high-yield, high-throughput quality inspection results.

More details: https://www.industrialvision.co.uk/wp-content/uploads/2023/05/IVS-Lens-Inspection-Solutions.pdf

Connecting your vision system to the factory, everything you need to know

Over the last few years there has been a rapid move from using vision systems purely for stand-alone, in-line quality control towards drilling down into and using the information that is created, assessed and archived by the automated vision system.

When vision systems first started being used in numbers in the early 1990s, the only real option for connection to the production line PLC (programmable logic controller) was hardwired digital I/O (PNP/NPN), simply acting like a relay to make a decision on product quality and then trigger a reject arm or air blast to remove the rogue product. The next evolution from there was to start to save some of the basic data at the same time as the inspection process took place – simple statistics on how many products had passed the vision system inspection and how many had been rejected. This data could then either be displayed on the PLC HMI in simple form or via rudimentary displays on the vision system. From there, communication started with the RS-232 serial line, and this morphed into USB and on to TCP/IP.

The Fieldbus protocol was the next evolution of communication with vision systems. From the standpoint of the user, the fieldbus looked to be state-based, similar to digital I/O; in reality, the data was exchanged serially over a network. Because these messages were sent on a clock cycle, the information was delayed. The benefit of Fieldbus over digital I/O was that each exchange could carry a data package of several hundred to 1,000 bytes. PROFIBUS, CC-Link, CANopen and DeviceNet were among the first protocols to be created. While first-generation fieldbuses used serial connections to exchange data, second-generation fieldbuses used Ethernet. As a result, the technology came to be referred to as “Industrial Ethernet”, although the term’s meaning is slightly ambiguous.

When compared to serial data transfer, Ethernet enables substantially more data to be sent. However, using Ethernet as a fieldbus medium has the problem of non-deterministic transmission timing. To achieve sufficient real-time performance, Industrial Ethernet protocols such as PROFINET, EtherNet/IP (which carries the Common Industrial Protocol, CIP, over standard Ethernet), Ethernet Powerlink and EtherCAT extend the Ethernet standard. For PC-based vision systems, some of these extensions are implemented in hardware, which necessitates fitting an expansion card in the vision controller unit.

The big evolution came with support for this “Industrial Ethernet” communication: vision systems now use a unified standard supporting real-time Ethernet and Fieldbus systems for PC-based automation. Data can be transferred and exchanged with the PLC through Ethernet protocols on a single connection. The benefits are reduced and less costly cabling, easy integration and fast deployment of vision systems, together with the large amount of data that Ethernet can carry.

This allows for the seamless and simple integration with all standard PLCs, such as Allen-Bradley, Siemens, Schneider, Omron and others – directly to the vision systems controller unit. Vision system data can be displayed on the PLC HMI, along with the vision HMI, where required.

The most common protocols are:
Profinet — This industrial communications protocol is defined by Profibus International and allows vision systems to communicate with Siemens PLCs and other factory automation devices which support the protocol.

EtherNet/IP — This Rockwell-defined protocol enables a vision system to connect to Allen-Bradley PLCs and other devices.

ModBus/TCP — This industrial network protocol is defined by Schneider Electric and permits direct connectivity to PLCs and other devices over Ethernet.
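
To illustrate how little application code is involved once the network layer is in place, the sketch below pushes a pass/fail result and running counts from a vision controller to the line over a plain TCP/IP socket. The host address, port and ASCII message layout are hypothetical; a real deployment would use the PLC’s native protocol (Profinet, EtherNet/IP or ModBus/TCP) via the vendor-supported driver rather than a raw socket.

```python
# Minimal sketch: push an inspection result to the line over TCP/IP.
# The IP address, port and message layout are illustrative only - a real
# installation would use the PLC's native protocol (Profinet, EtherNet/IP,
# ModBus/TCP) through the vendor-supported interface.
import socket

PLC_HOST = "192.168.0.10"   # hypothetical PLC / gateway address
PLC_PORT = 2000             # hypothetical listening port

def send_result(serial_no: str, passed: bool, pass_count: int, fail_count: int) -> None:
    """Send one inspection result as a simple ASCII record."""
    message = f"{serial_no},{'PASS' if passed else 'FAIL'},{pass_count},{fail_count}\r\n"
    with socket.create_connection((PLC_HOST, PLC_PORT), timeout=1.0) as conn:
        conn.sendall(message.encode("ascii"))

if __name__ == "__main__":
    send_result("SN0001234", passed=True, pass_count=1520, fail_count=3)
```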

This progress has been, and still is, accompanied by an increase in the resolution of industrial cameras. Industrial cameras for machine vision inspection do not always keep pace with the commercial world of smartphones, largely because the quality necessary for automatic vision assessment is considerably greater than the quality required for merely viewing a photo (you don’t worry about a dead pixel when looking at your holiday pics!). High-resolution image quality also matters for end-of-line image archiving in an industrial environment. In addition to connecting to the line PLC, vision systems must now link to the whole factory network and its industrial information systems. Vision systems can interact directly with production databases, with every image of every product leaving the factory gates captured, recorded and time-stamped, and even linked to a batch or component number via serial number tracking. Vision systems are therefore not only a goalkeeper stopping bad products going out of the door, but also warranty protection, providing tangible image data for historical record keeping.
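
As a sketch of that record-keeping side, the snippet below logs each inspection event (serial number, batch, result, timestamp and image path) to a database. SQLite is used here purely for illustration, and the table layout and field names are assumptions; a production system would write to the factory information system or a networked SQL server instead.

```python
# Illustrative sketch: archive every inspection result with a time stamp,
# serial number and image path. SQLite and the schema below are assumptions;
# a real line would log to the factory information system / SQL server.
import sqlite3
from datetime import datetime, timezone

conn = sqlite3.connect("inspection_archive.db")
conn.execute(
    """CREATE TABLE IF NOT EXISTS inspections (
           serial_no  TEXT,
           batch_no   TEXT,
           result     TEXT,
           timestamp  TEXT,
           image_path TEXT
       )"""
)

def log_inspection(serial_no: str, batch_no: str, passed: bool, image_path: str) -> None:
    """Store one time-stamped inspection record linked to its saved image."""
    conn.execute(
        "INSERT INTO inspections VALUES (?, ?, ?, ?, ?)",
        (serial_no, batch_no, "PASS" if passed else "FAIL",
         datetime.now(timezone.utc).isoformat(), image_path),
    )
    conn.commit()

log_inspection("SN0001234", "BATCH-42", True, "archive/SN0001234.png")
```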

In conclusion, the industrial ethernet connection is now the easiest and fastest way to integrate vision system devices into an automated environment. The ease of connection, the lower cost of cabling, and the use of standard protocols make it a simple and effective method for high-speed communication, coupled with database connections for further image and data collection from the vision system. Vision systems are now linked to complete factory line control and command centres for fully immersive data collection.

How vision systems are deployed for 100% cosmetic defect detection (with examples)

Cosmetic defects are the bane of manufacturers across all industries. Producing a product with the correct quality in terms of measurement, assembly, form, fit and function is one thing, but cosmetic defects are more easily noticed and flagged by the customer and end-user of the product. Most of the time they have no detrimental effect on the use or function of the product, component or sub-assembly; they just don’t look good! They are notoriously difficult for an inspection operator to spot and are therefore harder to check using traditional quality control methods. Parts can be checked on a batch or 100% basis for measurement and metrology, but you can’t batch-check for a single cosmetic defect in a high-speed production process. So for mass-production medical device manufacturing at 300-500 parts per minute, using vision systems for automated cosmetic defect inspection is the only approach.

For medical device manufacturers, cosmetic defects might manifest themselves as inclusions, black spots, scratches, marks, chips, dents or a foreign body on the part. For automotive manufacturers, it could be the wrong position of a stitch on a seat body, defects on a component, a mark on a sub-assembly, or the gap and flush of the panels on a vehicle.

Using vision systems for automated cosmetic inspection is especially important for products going to the Japanese and wider Asian markets. These markets are particularly sensitive to cosmetic issues, and medical device manufacturers need vision systems to 100% inspect every product produced to confirm that no cosmetic defects are present. This could be syringe bodies, vials, needle tips, ampoules, injector pen parts, contact lenses, medical components and wound care products. All need checking at speed during the manufacturing and high-speed assembly processes.

How are vision systems applied for automated cosmetic checks?

The key to applying vision systems for cosmetic visual inspection is the combination of camera resolution, optics (normally telecentric in nature) and filters to bring out the specific defect, providing the ability to segment it from the natural variation and background of the product. Depending on the type of cosmetic defect being checked, it may also require linescan, areascan or 3D camera acquisition, or a combination of these. For example, syringe bodies or vials can be spun in front of a linescan camera to provide an “unrolled” view of the entire surface of the product.
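
To make that “unrolled” view concrete, the sketch below stacks successive line-scan captures into a single 2D surface image while the part rotates through one revolution. The `grab_line()` acquisition call is a hypothetical stand-in for a camera SDK, and the line width and lines-per-revolution values are assumed.

```python
# Sketch: build an "unrolled" surface image from a rotating part under a
# linescan camera. grab_line() is a hypothetical stand-in for the camera SDK;
# LINE_WIDTH and LINES_PER_REV depend on the optics and rotation speed.
import numpy as np

LINE_WIDTH = 4096        # pixels across the line sensor (assumed)
LINES_PER_REV = 6000     # lines captured in one full rotation (assumed)

def grab_line() -> np.ndarray:
    """Placeholder for one line acquisition from the linescan camera SDK."""
    return np.zeros(LINE_WIDTH, dtype=np.uint8)

def acquire_unrolled_surface() -> np.ndarray:
    """Stack one revolution's worth of lines into a flat surface image."""
    unrolled = np.empty((LINES_PER_REV, LINE_WIDTH), dtype=np.uint8)
    for row in range(LINES_PER_REV):
        unrolled[row, :] = grab_line()
    return unrolled   # rows = angular position, columns = position along the part
```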

Let’s drill down on some of the specific elements which are used, using the syringe body as a reference (though these methods can be applied to other product groups):

Absorbing defects

These defects are typically impurities, particulates, fibres and bubbles. They need a lighting technique combined with linescan acquisition to provide contrast against the surrounding material.

Hidden defects

These defects are typically “hidden” from the linescan method, so a direct on-axis approach with multiple captures from an areascan sensor is used to detect surface defects such as white marks (on clear and glass products) and bubbles.

Cracks and scratch defects

A number of approaches can be taken, but typically axial illumination combined with linescan rotation provides the ability to identify cracks, scratches and clear fragments.
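
Once lighting and acquisition have made a defect visible, the software segmentation itself is often straightforward. The sketch below thresholds dark, absorbing defects (inclusions, particulates, fibres) against a bright back-lit background and filters blobs by area; OpenCV is assumed, and the threshold and minimum-area values are illustrative figures that would be set during validation.

```python
# Sketch: segment dark absorbing defects (inclusions, fibres, particulates)
# in a bright, back-lit image using a fixed threshold and a blob area filter.
# Threshold and area limits are illustrative values set during validation.
import cv2
import numpy as np

def find_absorbing_defects(image: np.ndarray,
                           dark_threshold: int = 60,
                           min_area_px: int = 25) -> list[tuple[int, int, int, int]]:
    """Return bounding boxes (x, y, w, h) of dark blobs larger than min_area_px."""
    # Dark defects fall below the threshold on a bright background.
    _, mask = cv2.threshold(image, dark_threshold, 255, cv2.THRESH_BINARY_INV)
    num_labels, _, stats, _ = cv2.connectedComponentsWithStats(mask)
    boxes = []
    for label in range(1, num_labels):          # label 0 is the background
        x, y, w, h, area = stats[label]
        if area >= min_area_px:
            boxes.append((int(x), int(y), int(w), int(h)))
    return boxes
```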

Why do we use telecentric optics in cosmetic defect detection?

Telecentric lenses are optics that only gather collimated light ray bundles (those parallel to the optical axis), hence avoiding perspective distortion. Because only rays parallel to the optical axis are admitted, telecentric lens magnification is independent of object position. This distinguishing property makes telecentric lenses ideal for measurement and cosmetic inspection applications where perspective effects and variations in magnification might result in inconsistent measurements. The front element of a telecentric lens must be at least as large as the intended field of view, which makes telecentric lenses unsuitable for imaging very large components.

Fixed focal length lenses are entocentric, catching rays that diverge from the optical axis. This allows them to span wide fields of view, but because magnification varies with working distance, these lenses are not suitable for determining the real size of an item. Therefore, telecentric optics are well suited for cosmetic and surface defect detection, often combined with collimated lighting to provide the perfect silhouette of a product.

In conclusion, machine vision systems can be deployed effectively for 100% automated cosmetic defect detection. By combining the correct optics, lighting, filters and camera technology, manufacturers can use a robust method for 100% automated visual inspection of cosmetic and surface defects.

How to calibrate optical metrology systems to ensure precise measurements

Optical metrology systems play a crucial role in various industries, enabling accurate and reliable measurements for quality control, inspection, and manufacturing processes. To ensure precise and consistent results, it is essential to calibrate these systems meticulously. By understanding the significance of calibration, considering key factors, and implementing regular calibration practices, organisations can optimise the performance of these systems. Accurate measurements obtained through properly calibrated optical metrology systems empower industries to maintain quality control, enhance efficiency, and drive continuous improvement.

How to calibrate your industrial vision system

We’re often asked about the process for carrying out calibration on our automated vision inspection machines. After all, vision systems are based on pixels whose real-world size depends on the sensor resolution, field of view and optical quality. It’s important that any measurements are validated and confirmed from shift to shift and day to day. Many vision inspection machines replace slow, stand-alone probing metrology systems, while an in-line system will be performing multiple measurements on a device at high speed. Automating metrology measurement helps reduce cycle time and boost productivity in medical device manufacturing; therefore, accuracy and repeatability are critical.

In the realm of optical metrology, the utilisation of machine vision technology has brought about a transformative shift. However, to ensure that the results of the vision equipment have a meaning that everyone understands, the automated checks must be calibrated against recognised standards, facilitating compliance with industry regulations, certifications, and customer requirements.

The foundational science that instils confidence in the interpretation and accuracy of measurements is known as metrology. It encompasses all aspects of measurement, from meticulous planning and execution to careful record-keeping and evaluation, along with the computation of measurement uncertainties. The objectives of metrology extend beyond the mere execution of measurements; they encompass the establishment and maintenance of measurement standards, the development of innovative and efficient techniques, and the promotion of widespread recognition and acceptance of measurements.

For metrology measurements, we need a correlation between the pixels and the real-world environment. For those of you who want the technical detail, we’re going to dive down into the basis for the creation of individual pixels and how the base pixel data is created. In reality, the users of our vision systems need not know this level of technical detail, as the main aspect is the calibration process itself – so you can skip the next few paragraphs if you want to!

In machine vision, the camera sensor is split into individual pixels. Each pixel represents a tiny light-sensitive region that, when exposed to light, creates an electric charge. When an image is acquired, the vision system captures the amount of charge produced at each pixel position and stores it in a memory array. A convention is used to produce a uniform reference system for pixel positions: the top left corner of the image is regarded as the origin, and a coordinate system is used with the X-axis running horizontally across the rows of the sensor and the Y-axis running vertically down the columns.

The pixel positions within the image may be described using X and Y coordinates in this coordinate system. For example, the top left pixel is (0,0), the pixel to its right is (1,0), the pixel below it is (0,1), and so on. This convention enables the image’s pixels to be referred to in a consistent and standardised manner and, in combination with sub-pixel processing, provides a standard reference coordinate position for a set of pixels combined to form an edge, feature or “blob”.

In machine vision, metrology calibration involves mapping the pixel coordinates of the vision system sensor back to real-world coordinates. This “mapping” ties the distances measured back to real-world measurements, i.e. back to millimetres, microns or another defined unit. In the absence of a calibration, any edges, lines or points measured would be expressed in pixels only. But quality engineers need real, tangible measurements to validate the production process, so all systems must be pre-calibrated to known measurements.
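
A minimal illustration of that mapping: if a calibration slide feature of known length is measured in pixels, the resulting scale factor converts subsequent pixel distances into millimetres. The values below are hypothetical examples; a real calibration also handles lens distortion and, for 2D systems, perspective.

```python
# Sketch: map pixel measurements to real-world units using a calibration
# slide feature of known length. The numbers are hypothetical examples.
KNOWN_LENGTH_MM = 10.000      # certified length of the calibration feature
MEASURED_LENGTH_PX = 2487.3   # same feature measured in the image (sub-pixel)

MM_PER_PIXEL = KNOWN_LENGTH_MM / MEASURED_LENGTH_PX

def pixels_to_mm(distance_px: float) -> float:
    """Convert a pixel distance to millimetres using the calibrated scale."""
    return distance_px * MM_PER_PIXEL

# A measured edge-to-edge distance of 1243.6 px becomes roughly 5.0 mm.
print(f"{pixels_to_mm(1243.6):.4f} mm")
```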

The process of calibration ensures traceability of measurements to recognised standards, facilitating compliance with industry regulations, certifications, and customer requirements. It enhances the credibility and trustworthiness of measurement data.

How do you calibrate the vision system in real-world applications?

Utilising certified calibration artefacts or reference objects with known dimensions and properties is essential. These standards serve as benchmarks for system calibration, enabling the establishment of accurate measurement references. Proper calibration guarantees that measurements are free from systematic errors, ensuring the reliability and consistency of the data collected. The method of calibration will depend on whether the system is a 2D or 3D vision system.

For a 2D vision system, common practice is to use slides with scales etched or printed on them, referred to as micrometre slides, stage graticules or calibration slides. There is a diverse range of sizes, subdivision accuracies and scale patterns. These slides are used to calibrate and verify vision systems, and they are traceable back to national standards, which is key to calibrating the vision inspection machine effectively. A machined piece with traceability certification can also be used; this is convenient when you need a specific measurement to calibrate from for your vision metrology inspection.

One form of optical distortion is linear perspective distortion. This type of distortion occurs when the optical axis is not perpendicular to the object being imaged. A chart with a pre-defined pattern can be used to compensate for this distortion in software. Note that this calibration does not compensate for spherical distortions and aberrations introduced by the lens, so this is something to be aware of.
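
The sketch below shows one common way such a compensation can be done in software: the four corner points of a pre-defined calibration pattern, as found in the image, are mapped to their known geometry and the image is rectified with a perspective transform. OpenCV is assumed, and the corner coordinates are placeholders that would in practice come from pattern-finding on the calibration chart.

```python
# Sketch: compensate linear perspective distortion using a pre-defined
# calibration pattern. The corner coordinates below are placeholders; in
# practice they come from pattern-finding on the calibration chart.
import cv2
import numpy as np

# Corners of the calibration pattern as detected in the distorted image (px).
image_pts = np.float32([[112, 98], [1830, 121], [1808, 1415], [135, 1392]])

# Where those corners should sit in a distortion-free, fronto-parallel view (px).
world_pts = np.float32([[0, 0], [1700, 0], [1700, 1300], [0, 1300]])

H = cv2.getPerspectiveTransform(image_pts, world_pts)

def rectify(image: np.ndarray) -> np.ndarray:
    """Warp the image so the calibration plane appears perpendicular to the axis."""
    return cv2.warpPerspective(image, H, (1700, 1300))
```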

For 3D vision, you no longer need pricey bespoke artefacts or specially prepared calibration sheets. Either the sensor is factory calibrated, or a single round artefact in the shape of a sphere suffices. Calibrate and synchronise the vision sensor by sending calibrated component measurements to the IVS machine; you will instantly receive visual feedback that you can check and assess during the calibration process.

How often do I need to re-calibrate a vision system?

We often get asked this question, since the optimal performance and accuracy of optical metrology systems can diminish over time due to factors like wear and tear or component degradation. Establishing a regular calibration schedule ensures consistent and reliable measurements. However, it often comes down to the application requirements and the customer’s needs based on their validation procedures. Re-calibration can take place once a shift, every two weeks, once a month or even once a year.

One final aspect is storage: all our vision inspection machines come with calibration storage built into the machine itself. It’s important to store the calibration piece carefully, use it as determined by the validation process, and keep it clean and free from dirt.

Overall, calibration is often automatic, and the user need never know this level of detail about how the calibration procedure operates. But it’s useful to have an understanding of the pixel-to-real-world mapping and to know that all systems are calibrated back to traceable national standards.

Clinical insight: 3 ways to minimise automated inspection errors in your medical device production line

Your vision system monitoring production quality is finally validated and running 24/7; with the PQ behind you, it’s time to relax. Or so you thought! It’s important that vision systems are not simply consigned to maintenance without an understanding of the potential automated inspection errors and what to look out for once your machine vision system is installed and running. Here are three immediate ways to minimise inspection errors in your medical device, pharma or life science vision inspection solution.

1. Vision system light monitor.
As part of the installation of the vision system, most modern machine vision solutions in medical device manufacturing will have light controllers and the ability to compensate for changes as the light degrades over time. Fading is a slow process but needs to be monitored. LEDs are made up of various materials such as semiconductors, substrates, encapsulants and connectors; over time, these materials can degrade, leading to reduced light output and colour shift. It’s important in a validated solution to understand these issues and to have either an automated approach to changing light conditions through closed-loop control based on grey-level monitoring, or a manual assessment at a regular interval. It’s definitely something to keep an eye on.
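
A hedged sketch of what automated grey-level monitoring can look like in practice: the mean grey level of a fixed reference region is tracked against the value recorded at validation and flagged when it drifts beyond a tolerance. The region, baseline and tolerance values below are illustrative assumptions.

```python
# Sketch: monitor LED light output via the mean grey level of a fixed
# reference region in each image. Baseline and tolerance are illustrative
# values that would be fixed during validation.
import numpy as np

BASELINE_GREY = 182.0        # mean grey level recorded at validation (assumed)
TOLERANCE_PCT = 5.0          # allowed drift before flagging (assumed)
REF_REGION = (slice(20, 120), slice(20, 220))   # rows, cols of reference patch

def check_light_level(image: np.ndarray) -> bool:
    """Return True if illumination is within tolerance of the baseline."""
    mean_grey = float(np.mean(image[REF_REGION]))
    drift_pct = abs(mean_grey - BASELINE_GREY) / BASELINE_GREY * 100.0
    if drift_pct > TOLERANCE_PCT:
        print(f"Light drift {drift_pct:.1f}% exceeds {TOLERANCE_PCT}% - check LEDs")
        return False
    return True
```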

2. Calibration pieces.
For vision machines performing automated metrology inspection, calibration is an important aspect. The calibration process will have been defined as part of the validation and production plan. The calibration of a vision system will normally take the form of a calibrated slide with graticules, a datum sphere or a machined piece with traceability certification. Following on from this would be the MSA Type 1 gauge study, the starting point prior to a Gauge R&R, used to determine the difference between an average set of measurements and a reference value (the bias) of the vision system. Finally, the system would be validated with a Gauge R&R, an industry-standard methodology used to investigate the repeatability and reproducibility of the measurement system. The calibration piece is therefore a critical part of the automated inspection calibration process. It’s important to store the calibration piece carefully, use it as determined by the validation process and keep it clean and free from debris. Make sure your calibration pieces are protected.
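
As an illustration of what a Type 1 gauge study computes, the sketch below takes repeated measurements of the calibration piece and reports the bias against the certified reference value together with a capability index Cg. The 20%-of-tolerance share and 6-sigma spread used here are common conventions but differ between MSA procedures, so treat the constants and the example readings as assumptions.

```python
# Sketch: Type 1 gauge study statistics for a vision metrology system.
# Bias = mean measurement - certified reference value.
# Cg here uses a common 20%-of-tolerance / 6-sigma convention, but the
# constants differ between MSA procedures - treat them as assumptions.
import statistics

REFERENCE_MM = 5.000        # certified value of the calibration piece (assumed)
TOLERANCE_MM = 0.050        # total tolerance of the characteristic (assumed)

def type1_study(measurements: list[float]) -> tuple[float, float]:
    """Return (bias, Cg) for repeated measurements of the reference piece."""
    mean = statistics.mean(measurements)
    s = statistics.stdev(measurements)
    bias = mean - REFERENCE_MM
    cg = (0.2 * TOLERANCE_MM) / (6 * s)
    return bias, cg

readings = [5.0021, 5.0018, 5.0024, 5.0019, 5.0022, 5.0020] * 5  # 30 repeats
bias, cg = type1_study(readings)
print(f"bias = {bias:+.4f} mm, Cg = {cg:.2f}")
```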

3. Preventative maintenance.
Vision system preventative maintenance is essential in the manufacturing of medical devices because it helps to ensure that the vision system performs effectively over its intended lifespan. Medical devices are used to diagnose, treat and monitor patients, and they play an important part in the delivery of healthcare. If these devices fail, malfunction, or are not properly calibrated, substantial consequences can occur, including patient harm, higher healthcare costs, and legal culpability for the manufacturer. Therefore, any automated machine vision system which is making a call on the quality of the product (at speed) must be maintained and checked regularly. Preventative maintenance of the vision system involves inspecting, testing, cleaning and calibrating the system on a regular basis, as well as replacing old or damaged parts.

Medical device makers benefit from implementing a preventative maintenance programme for the vision system in the following ways:
Continued reliability: Regular maintenance of the machine vision system can help identify and address potential issues before they become serious problems, reducing the risk of device failure and increasing device reliability.
Extend operational lifespan: Regular maintenance can help extend the lifespan of the vision system, reducing the need for costly repairs and replacements.
Ensure regulatory compliance: Medical device manufacturers are required to comply with strict regulatory standards (FDA, GAMP, ISPE), and regular maintenance is an important part of meeting these standards.

These three steps will ultimately help to lessen the manufacturer’s exposure to production faults and stop errors being introduced into the medical device vision inspection process. By reducing errors in the machine vision system, the manufacturer can keep production running smoothly, increase yield and reduce downtime.