Machine vision lighting – Proven techniques to boost inspection accuracy and performance

This month, we are discussing one of the main, and sometimes underutilised, elements of vision systems – lighting. Lighting in machine vision quality control is perhaps one of the most important aspects to get right: it is what enables the camera to see the necessary detail. In fact, poor lighting is one of the major causes of machine vision installation failures and of poor yield from a vision system. Determining the correct lighting approach should therefore be a top priority for any machine vision application. While the latest generation of AI deep learning vision systems can compensate to some extent for fluctuations in lighting, nothing replaces the ability to light a product accurately so that defective areas are visible to the camera. Getting these fundamentals right makes the job of the image-processing software much more manageable.

For every vision quality control application, there are common lighting goals:

  1. Maximising feature contrast of the part or object to be inspected.
  2. Minimising contrast on features not of interest.
  3. Removing distractions and variations to achieve consistency.

In this respect, the positioning and type of lighting are key to maximising the contrast of inspected features and minimising everything else. Think of how a manual operator would inspect a part for quality. Typically, they would turn it in front of the light while moving their head to see how the light reflects off specific areas, slowly covering the whole part or the necessary detail. Sometimes they will even hold the part up to the light to create a backlit background that reveals contamination or features. A human does all this instinctively, but for a digital vision system this somehow has to be replicated – and at high speed!
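Those lighting goals can also be checked quantitatively during setup trials. As a minimal sketch of how feature contrast might be measured on a test capture (using OpenCV; the file name and region coordinates are purely illustrative):

```python
import cv2

def michelson_contrast(gray_roi):
    """Michelson contrast: (Imax - Imin) / (Imax + Imin), in the range 0-1."""
    lo, hi = float(gray_roi.min()), float(gray_roi.max())
    return (hi - lo) / (hi + lo) if (hi + lo) > 0 else 0.0

# Illustrative file name and coordinates - substitute your own trial capture.
img = cv2.imread("lighting_trial.png", cv2.IMREAD_GRAYSCALE)
feature = img[120:180, 200:300]     # region containing the feature of interest
background = img[120:180, 40:140]   # neighbouring background region

print(f"Feature contrast: {michelson_contrast(feature):.2f}")
print(f"Feature vs background: {abs(float(feature.mean()) - float(background.mean())):.1f} grey levels")
```

Comparing these numbers across candidate lighting setups gives an objective basis for choosing one before any image-processing development begins.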

Lighting position

The first thing to consider is the lighting position. The position of the lighting will determine the brightness, shadowing, contrast, and what element of light will be seen by the camera system. The overall classification of machine vision lighting positions can be seen below.

Front illumination – the light is positioned on the same side as the camera/optics. Useful for inspecting defects or features on the surface of an object.

Back illumination – the object being illuminated blocks light on its way to the camera. Useful for highlighting the object silhouette, measuring outline shape and detecting the presence/absence of holes or gaps, as well as enhancing defects on transparent objects.

Types of machine vision lighting

Within machine vision, the two most common types of lighting are fibre optic and LED. Fibre optics allows the light source to be positioned away from the scene, with a fibre optic cable delivering the light to the inspection area. This can also allow a higher lux level to be concentrated into a smaller area.

Fibre optic
  • Delivers light from a halogen, tungsten-halogen or xenon source
  • Bright, shapeable and focusable

LED lights – by far the most commonly used because of:
  • High speed
  • Pulsing – switched on and off in sequence, used only when necessary
  • Strobing – useful for freezing the motion of fast-moving objects and eliminating the influence of ambient light
  • Flexible positioning in a variety of shapes, directional or diffused
  • Long lifetime
  • High, stable output
  • Colours – available in visible and non-visible wavelengths

Dome lights
  • Designed to provide illumination from virtually any direction
  • Avoid reflections and provide even light when inspecting surface features on objects with complex curved geometry

Telecentric lighting
  • Accurate edge detection and defect analysis through silhouette imaging
  • Well suited to reflective objects
  • High-accuracy, precise measurement

Diffused light
  • Creates soft contrast to lighten or darken object features
  • Avoids uneven illumination on reflective surfaces

Direct light
  • Delivers lighting on the same optical path as the camera

Other key elements to consider when designing lighting for a machine vision environment include the direction, the brightness, and the colour/wavelength of the light relative to the target's colour.

Lighting angle/direction

The position and angle of the LEDs have a direct effect on the vision system camera's ability to see contrast in specific areas. This allows lighting setups to be designed to provide optimum conditions for highlighting key areas of the product, which makes the image processing more consistent and makes it easier to deliver a robust solution. There are several different techniques to consider in this area.

Bright field – light reflected by the object being illuminated falls inside the optics/camera acceptance range:
  • Bright field front lighting – light reflected by a flat object is collected by the optics/camera, producing dark features on a bright background.
  • Bright field back lighting – light is either blocked by the object or, if the material is transparent or translucent, transmitted through it.

Dark field – light reflected by the object being illuminated falls outside the optics/camera acceptance range:
  • Dark field front lighting – reflected light is not collected by the optics/camera, enhancing non-planar surface features as brighter characteristics on a dark background.
  • Dark field back lighting – light transmitted by the object and scattered by non-flat features is enhanced as bright on a dark background.

Co-axial illumination – the front light is positioned to hit the object surface at 90°. It can additionally be collimated so that rays are parallel to the optics/camera axis, which is useful for achieving high contrast when searching for defects on highly reflective surfaces.

Colour/wavelength

Another aspect to think about when designing machine vision lighting is the filters which can be deployed on the front of the lens. These typically screw onto the front of the lens. They can limit the transmission of specific wavelengths of light, allowing the image to be changed to suit a particular condition that is being identified in the machine vision process.

Filters
  • Create contrast to lighten or darken object features.
  • Like-colour filters lighten (e.g. red light makes red features brighter) and opposite-colour filters darken (e.g. red light makes green features darker).
  • Polarisers are filters used to reduce glare or hot spots; they enhance contrast so that entire objects can be recognised.
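The like-colour/opposite-colour effect can be previewed in software before buying a physical filter: a filter in front of a mono camera behaves roughly like selecting one channel of a colour capture. A rough sketch with OpenCV (file names are illustrative):

```python
import cv2

img = cv2.imread("parts_colour.png")      # OpenCV loads images as BGR
blue, green, red = cv2.split(img)

# A red filter on a mono camera approximately passes only the red channel:
# red features render bright, green features render dark.
cv2.imwrite("red_filter_preview.png", red)

# A green filter does the opposite: green features bright, red features dark.
cv2.imwrite("green_filter_preview.png", green)
```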

Of course, the parameters outlined in the preceding tables are used in varying combinations and are key to achieving the optimal lighting solution. However, we also need to consider the immediate inspection environment. In this respect, the choice of effective lighting solutions can be compromised by access to the part or object to be inspected. Ambient lighting such as factory lights or sunlight can also have a significant impact on the quality and reliability of inspection and must be factored into the ultimate solution.

Finally, the interaction between the lighting and the object to be inspected must be considered. The object’s shape, composition, geometry, reflectivity, topography and colour will all help determine how light is reflected to the camera and the subsequent impact on image acquisition, processing, and measurement.

Due to the obvious complexities, there is often no substitute for testing various techniques and solutions. It is imperative to get the best lighting solution in place before starting development of the image-processing requirements in any machine vision quality control application.

The future of AI-powered vision systems in medical device and pharmaceutical inspection

In the highly regulated fields of medical device and pharmaceutical manufacture, quality control comes first. It is not only a question of efficiency: patient safety depends on every product meeting strict industry requirements. Traditional inspection techniques, long the backbone of quality control in this industry, depend on human monitoring and standard rule-based vision systems. But as production volumes rise and manufacturing processes become more complex, the limits of these methods are clear. The integration of artificial intelligence into machine vision systems is transforming the inspection landscape, enabling more precise, flexible and effective quality control.

Vision systems driven by artificial intelligence are significantly changing how manufacturers approach defect detection, compliance verification and production efficiency. Unlike rule-based systems, which run on pre-programmed criteria to find flaws, AI models, especially those using deep learning, can spot subtle variations in products that previously went undetected. This is extremely helpful in the medical and pharmaceutical sectors, where variances can be minute yet still significant enough to affect product integrity.

One of the key domains where AI-powered vision systems excel is detecting discrepancies on high-volume manufacturing lines. Medical devices such as syringes, catheters and surgical tools must be made exactly right; even small flaws can affect their efficacy. Conventional machine vision systems require extensive manual programming to distinguish between good and bad products. Trained on large datasets of images, AI models can learn to accept permissible deviations while flagging anomalies that might escape attention under a rule-based approach. By greatly reducing false positives and false negatives, this ensures that only genuinely faulty goods are removed from the supply chain.

AI-enhanced vision technology is also proving essential in pharmaceutical packaging and labelling. Regulatory compliance depends critically on batch code verification, expiry date printing and label placement accuracy. Standard OCR (optical character recognition) techniques, although efficient, can struggle with variations in font clarity, small distortions or packaging material. AI models trained on thousands of packaging samples can learn to spot errors far more reliably, even in demanding conditions such as curved surfaces or low-contrast printing. The self-learning character of these systems also helps them adapt to new materials and packaging designs without significant reprogramming.
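For context, the conventional rule-based OCR baseline that AI improves upon might look like the following minimal sketch, assuming the pytesseract wrapper around the Tesseract engine (the file name and expected batch code are illustrative):

```python
import cv2
import pytesseract

EXPECTED_BATCH = "LOT2024A17"  # illustrative expected code

img = cv2.imread("label_crop.png", cv2.IMREAD_GRAYSCALE)
# Otsu binarisation helps Tesseract cope with low-contrast printing.
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
text = pytesseract.image_to_string(binary, config="--psm 7").strip()  # psm 7: single text line

print("PASS" if text == EXPECTED_BATCH else f"FAIL: read '{text}'")
```

It is exactly the brittleness of this kind of fixed threshold-plus-OCR pipeline on curved or distorted packaging that trained models address.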

Another area where AI-driven visual inspection is making great progress is the sterility and integrity of medical packaging. Seal validation on blister packs, vials and ampoules has long relied on human inspectors combined with static rule-based visual checks. These techniques can, however, struggle with borderline cases, such as minor punctures that compromise sterility or slightly uneven heat sealing. Combining AI-based vision inspection with hyperspectral imaging or X-ray analysis can help identify hidden flaws missed by conventional techniques, preventing contaminated goods from being released into use. This is especially important in the pharmaceutical industry, where even small packaging flaws can pose contamination hazards.

The capacity of AI-powered vision systems to improve traceability and compliance with strict legal requirements adds yet another benefit. Medical device manufacturers must follow international guidelines such as EU MDR, ISO 13485 and FDA 21 CFR Part 820. AI vision systems can interface with manufacturing execution systems (MES) and enterprise resource planning (ERP) platforms to generate real-time inspection data, guaranteeing thorough documentation of quality control methods. For recalls and regulatory audits, this degree of traceability is invaluable, since it lets producers track and isolate faulty batches with unprecedented speed and accuracy.

AI-based visual inspection is also being used to reduce industrial waste and increase productivity. The conservative flaw-detection thresholds of traditional inspection techniques can cause excessively high rejection rates. With their capacity to recognise subtle differences in product quality, AI models can distinguish between actual flaws and benign deviations. Over time this reduces unnecessary waste without sacrificing safety criteria, delivering significant cost savings. Moreover, AI-driven vision systems operate continuously without fatigue, enabling consistent inspection quality across long manufacturing runs.

One of the difficulties that AI-based vision systems present in pharmaceutical and medical production is the need for extensive training data. Unlike rule-based systems, which can be deployed with predefined criteria, AI models need large volumes of high-quality images for training. Ensuring robust model performance means curating varied datasets that cover changes in illumination, viewing angle and material properties. Synthetic data generation and transfer learning, however, are addressing these issues and making AI deployment viable even for companies with limited access to large datasets.
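As an illustration of the transfer-learning route, a network pre-trained on a large generic image set can be repurposed for a defect/no-defect decision with only its final layer retrained. A minimal sketch using PyTorch and torchvision (the two-class setup and frozen backbone are assumptions, not a production recipe):

```python
import torch
import torch.nn as nn
from torchvision import models

# Start from ImageNet weights; freeze the pre-trained feature extractor.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False

num_classes = 2  # assumption: "acceptable" vs "defective"
model.fc = nn.Linear(model.fc.in_features, num_classes)  # new trainable head

optimiser = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()
# A standard training loop over a labelled image DataLoader would follow here.
```

Because only the small classification head is trained, a few hundred labelled product images can be sufficient rather than the millions a model trained from scratch would need.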

Another consideration is integration with existing manufacturing infrastructure. Many plants run legacy equipment that was never designed to accommodate AI-driven vision systems. Modern AI solutions, however, can be deployed as modular upgrades that complement existing technology and extend its capability rather than replacing it outright. Cloud-based AI vision systems are also emerging as a good alternative, letting companies use substantial computing capacity without large on-site hardware investment.

As AI-powered vision systems develop, their potential applications in pharmaceutical and medical production will only grow. The next generation of vision technology will probably include real-time adaptive learning, so that systems can dynamically refine their flaw-detection models as they process fresh input. Furthermore, AI-driven predictive analytics can help companies foresee potential quality problems before they materialise, transforming quality management from a reactive into a proactive activity.

The adoption of AI-driven visual inspection in pharmaceutical and medical manufacturing marks a fundamental change in quality control strategy. By raising accuracy, cutting waste, strengthening compliance and boosting efficiency, AI vision systems are redefining the sector. Although data requirements and implementation still present difficulties, the long-term advantages far exceed the initial outlay. As the technology develops, manufacturers trying to satisfy the highest standards of quality and safety in an increasingly complicated regulatory environment will find it an invaluable tool.

The Ultimate Guide to Operator Guidance Systems

In today’s rapidly evolving manufacturing landscape, the role of operator guidance systems has taken centre stage. As businesses strive to enhance productivity, minimise errors, and maintain workforce engagement, innovative technologies such as augmented reality (AR) and digital work instructions are transforming the way operators interact with their tasks. This guide explores the pivotal elements of operator guidance, addressing common challenges and offering actionable insights to optimise manufacturing processes.

1. Understanding Operator Guidance

Operator guidance refers to the methods and tools designed to assist workers in executing their tasks with precision and efficiency. Whether through augmented reality, vision systems, digital instructions, or automated feedback systems, these tools aim to:

• Simplify complex tasks
• Reduce errors and rework
• Shorten training durations
• Enhance workforce versatility

The integration of such systems into manufacturing environments transforms workstations into interactive, digital ecosystems that support operators in real time. By offering real-time feedback, operators are less likely to make errors, ensuring consistency and quality throughout the manufacturing process. It also allows new operators to understand a task immediately.

Additionally, operator guidance systems improve communication on the shop floor. By aligning visual aids, auditory signals, and data analytics, they ensure that workers have access to clear and concise information. This holistic approach boosts efficiency and fosters a culture of continuous improvement.

2. The Shift Toward Digital Work Instructions

Digital work instructions have emerged as a game-changer in manufacturing, replacing traditional paper manuals with dynamic, interactive platforms. The benefits include:

Cost Efficiency: Eliminating printing and distributing manuals saves resources while promoting sustainability.
Real-Time Updates: Digital platforms ensure operators have access to the latest procedures, reducing the risk of outdated information causing errors.
Interactive Learning: Multimedia elements such as videos and interactive checkpoints enhance comprehension and retention, shorten training times, and improve task accuracy.
Streamlined Processes: Instructions tailored to specific tasks or product variants eliminate the need for manual searches, delivering the right information at the right time.

Companies adopting digital work instructions also benefit from seamless integration with Manufacturing Execution Systems (MES) and Enterprise Resource Planning (ERP) systems. This integration enables data-driven insights for continuous improvement and fosters transparency and adaptability, ensuring production goals are consistently met.

Digital work instructions are particularly valuable in high-mix, low-volume production environments where the ability to adapt quickly to new product lines is critical. The ability to easily modify and distribute updated instructions enhances operational flexibility and reduces downtime.

3. Overcoming Challenges in Operator Guidance

3.1 Finding and Retaining Skilled Labour

The global shortage of skilled operators is a significant challenge for manufacturers. High turnover rates, coupled with extended training periods, can disrupt operations. Implementing operator guidance systems minimises the impact of these challenges by:

• Reducing the dependency on pre-existing expertise
• Accelerating onboarding with intuitive training modules
• Enabling lower-skilled workers to perform complex tasks effectively

By incorporating gamification elements into training modules, manufacturers can further engage employees. This approach improves learning outcomes and increases job satisfaction, reducing turnover rates. Such systems can also adapt work instructions to different languages automatically.

3.2 Adapting to Increased Demand

Fluctuations in customer demand often stress production capacities. Flexible guidance systems help manufacturers scale operations by:

• Training temporary workers swiftly
• Standardising processes to ensure consistent quality
• Allowing real-time adjustments based on workload variations

Advanced analytics integrated into guidance systems allow managers to predict and prepare for demand surges, optimising resource allocation.

3.3 Addressing Operator Resistance

Operator scepticism can hinder the adoption of new technologies. Concerns about micromanagement and loss of autonomy are common. To counter this, systems are designed with operator-centric features such as:

• Experience Level Functionality: Customisable instruction intensity based on operator proficiency
• Freedom in Task Execution: Allowing operators to perform tasks in their preferred sequence where feasible
• Error Prevention as Support: Real-time feedback focuses on preventing mistakes rather than penalising them

Involving operators in designing and implementing guidance systems can also mitigate resistance. By incorporating user feedback, organisations can ensure that the systems are perceived as tools for empowerment rather than oversight.

4. The Role of Augmented Reality in Operator Guidance

Augmented reality (AR) enhances operator guidance by overlaying digital information directly onto physical workspaces. This immersive approach offers:

Real-Time Visual Feedback: Operators receive step-by-step instructions projected onto their workbench, ensuring clarity and precision.
Reduced Cognitive Load: By focusing on one task at a time, operators can work more efficiently without memorising sequences.
Error Prevention: AR systems, integrated with sensors and smart tools, provide immediate warnings if a task is performed incorrectly.

Types of AR Implementation:

Wearables: AR glasses and headsets provide hands-free access to instructions but may cause discomfort during extended use.
Handheld Devices: Tablets offer portability but interrupt workflow when operators need both hands.
Projected AR: Systems that project instructions directly onto workspaces offer ergonomic, non-intrusive solutions ideal for industrial settings. This approach is particularly valued for its standalone nature, eliminating reliance on battery power and enhancing safety.

Projected AR also offers significant scalability, making it suitable for both small-scale operations and large industrial setups. Its ability to integrate seamlessly with existing tools and workflows ensures minimal disruption during implementation.

5. Benefits of Implementing Operator Guidance Systems

5.1 Enhanced Training
Guidance systems can reduce training durations by up to 50%, enabling temporary and new workers to become productive faster. Interactive features foster skill acquisition and confidence, while visual tools accelerate learning and reduce dependency on senior operators.

By incorporating real-time feedback mechanisms, these systems create a continuous learning environment. Operators can identify and correct errors on the spot, enhancing their overall proficiency over time.

5.2 Improved Quality and Efficiency
Standardised processes ensure that every operator performs tasks uniformly, minimising errors and waste. Systems that validate task completion further enhance quality assurance. By minimising rework and reducing scrap, these systems help manufacturers maintain competitive advantages.

Data from guidance systems can also be used to implement continuous improvement initiatives. By analysing patterns in errors and inefficiencies, managers can identify opportunities for process optimisation.

5.3 Adaptability and Flexibility
With customisable instructions, operators can seamlessly switch between tasks or product variants. This flexibility supports high-mix, low-volume production environments. The ability to integrate temporary or seasonal workers smoothly ensures operational continuity.

Moreover, guidance systems support cross-training initiatives, enabling workers to develop skills across multiple roles. This versatility enhances workforce agility and resilience.

5.4 Data-Driven Insights
Embedded analytics identify bottlenecks and error-prone steps, providing actionable data to optimise workflows and improve productivity. Reports generated from these systems allow supervisors to target specific areas for improvement, enhancing overall operational efficiency.

The integration of IoT devices further enhances data collection, offering deeper insights into machine performance and operator interactions. This holistic view enables more informed decision-making.

6. Empowering Operators: A Broader Perspective

Instead of focusing on a single case, the real power of operator guidance systems lies in their adaptability across diverse industries. Whether it is automotive manufacturing, electronics assembly, or medical device assembly, these systems are engineered to overcome common challenges such as complex task sequences and quality control. By integrating features like real-time feedback, customisable training modules, and seamless MES/ERP connectivity, businesses of all sizes can achieve:

• Up to 50% faster onboarding of new staff
• Streamlined multi-variant production processes
• A significant reduction in waste and rework
• Enhanced adaptability for workforce changes

This versatility highlights the universal benefits of operator guidance systems, making them indispensable tools for any forward-thinking manufacturing operation.

The ability to customise systems based on industry-specific requirements ensures that organisations can address their unique challenges effectively. From ensuring compliance in regulated industries to optimising throughput in high-volume sectors, these systems deliver measurable value.

7. The Future of Operator Guidance

As Industry 4.0 continues to evolve, operator guidance systems will play an even more critical role in bridging human expertise with advanced automation. Emerging trends include:

AI-Driven Insights: Predictive analytics to pre-empt errors and optimise training
IoT Integration: Seamless communication between tools, sensors, and guidance platforms
Personalised Experiences: Tailoring guidance to individual operator preferences and skill levels
Eco-Conscious Manufacturing: Incorporating sustainable practices through optimised resource management

By aligning with these trends, operator guidance systems will continue to serve as a cornerstone of modern manufacturing strategies. The focus on sustainability and worker well-being ensures that these systems contribute to broader organisational goals beyond operational efficiency.

Conclusion

Operator guidance with vision systems represents a pivotal advancement in manufacturing, addressing challenges from labour shortages to quality assurance. By leveraging technologies like augmented reality and digital work instructions, manufacturers can empower their workforce, reduce costs, and achieve unparalleled operational excellence. Embracing these systems is not just a step forward—it is an essential leap into the future of manufacturing. These systems’ adaptability, efficiency, and support for human-centric operations make them indispensable for achieving long-term success in a competitive global manufacturing market.

More details on IVS Operator Guidance Systems can be found here: www.industrialvision.co.uk/products/operator-guidance-platform

Your quick guide to machine vision

Choosing the most appropriate combination of components for a machine vision system requires expert understanding of application requirements and technology capabilities. With that in mind, in this month's blog post we thought we would provide you with enough knowledge and understanding to have richer, more rewarding conversations about machine vision and vision systems technology. This quick guide will help you understand the basics of machine vision and vision systems, including components, applications, return on investment and the potential for use in your business.

The first thing to understand is the basic elements of a machine vision system. We've covered this before on the blog, but it's worth reiterating here. The basics of a machine vision system are as follows:

  • The machine vision process starts with the part or product being inspected. When the part is in the correct place, a sensor triggers the acquisition of a digital image.
  • Structured lighting is used to ensure that the image captured is of optimum quality.
  • The optical lens focuses the image onto the camera sensor.
  • Depending on its capabilities, this digitising sensor may perform some pre-processing to ensure the correct image features stand out.
  • The image is then sent to the processor for analysis against a set of pre-programmed rules.
  • Communication devices are then used to report and trigger automatic events, such as part acceptance or rejection.
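Strung together, that chain is only a few lines of code in its simplest form. A toy sketch of the trigger-capture-analyse-report loop (using OpenCV; the webcam capture, blob rule and threshold are stand-ins for a triggered industrial camera and real inspection rules):

```python
import cv2

def inspect(frame, min_area=500.0):
    """Toy rule: pass if a sufficiently large bright blob is present in the image."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return any(cv2.contourArea(c) >= min_area for c in contours)

cam = cv2.VideoCapture(0)   # stand-in for a triggered industrial camera
ok, frame = cam.read()      # in practice, acquire on the part-in-place trigger
if ok:
    print("ACCEPT" if inspect(frame) else "REJECT")  # stand-in for a reject signal
cam.release()
```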
What can machine vision be used for?

The applications for machine vision vary widely between industries and product environments. However, to aid understanding, they can be broken down into the following five major categories, each of which usually combines a number of algorithms and software tools to achieve the desired result.

1. Location, guidance & positioning

When combined with robotics, machine vision offers powerful application options. By accurately determining the position of parts to direct robots, find fiducials, align products and provide positional adjustment feedback, guided robots enable a flexible approach to feeding and product changes. This eliminates the need for costly precision fixtures, allowing different parts to be processed without changing tools. Some typical applications include:

  • Locating and orientation of parts to compare them to specific tolerances
  • Finding or aligning parts so that they are positioned correctly for assembly with other components
  • Picking and placing parts from bins
  • Packaging parts on a conveyor belt
  • Reporting part positioning to a robot or machine controller to ensure correct alignment

2. Recognition, identification, coding & sorting

Using state-of-the-art neural network pattern recognition to recognise arbitrary patterns, shapes, features and logos, vision systems enable the comparison of trained features and patterns within fractions of a second. Geometric features are also used to find objects and identify models that are translated, rotated and scaled, to sub-pixel accuracy. Typical applications include:

  • Decoding 1D & 2D symbols, codes and characters printed on parts, labels, and packages.
  • Direct Part Marking (DPM) a character, code or symbol onto a part or packaging
  • Identifying parts by locating a unique pattern or based on colour, shape, or size.
  • Optical character recognition (OCR) and optical character verification (OCV) systems read alphanumeric characters and confirms the presence of a character string.

3. Gauging & measuring

Measuring in machine vision is used to confirm the dimensional accuracy of components, parts and sub-assemblies automatically, without the need for manual intervention. A key benefit with machine vision is that it is non-contact and so does not contaminate or damage the part being inspected. Many machine vision systems can also measure object features to within 0.0254 millimetres and therefore provide additional benefits over the contact gauge equivalent.
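That 0.0254 mm figure follows from the optics and sensor resolution rather than from the software alone; a back-of-envelope feasibility check looks like this (all numbers are illustrative assumptions):

```python
fov_mm = 50.0          # horizontal field of view (assumption)
sensor_px = 4096       # horizontal sensor resolution in pixels (assumption)
subpixel_factor = 0.1  # typical sub-pixel edge-location capability (assumption)

pixel_size_mm = fov_mm / sensor_px                      # object-space pixel size
measurement_resolution_mm = pixel_size_mm * subpixel_factor

print(f"Object-space pixel size: {pixel_size_mm:.4f} mm")             # ~0.0122 mm
print(f"Approximate measurement resolution: {measurement_resolution_mm:.5f} mm")
```

If the required tolerance is tighter than the result, a smaller field of view, a higher-resolution sensor, or multiple cameras are the usual remedies.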

4. Inspection, detection & verification

Inspection is used to detect and verify functional errors, defects, contaminants and other product irregularities.

Typical applications include:

  • Verifying the presence/absence of parts or components
  • Counting to confirm quantities
  • Detecting flaws
  • Conformance checking to inspect products for completeness.

5. Archiving & Tracking

Finally, machine vision can play an important role in archiving images and tracking parts and products through the whole production process. This can be particularly valuable in tracking the genealogy of parts in a sub-assembly that make up a finished product. Data recorded can be used to drive better customer support, improve production processes and protect brands against costly product recalls.

What are the benefits of machine vision?

The economic case for investing in machine vision systems is usually strong due to the following two key areas:

1. Cost savings – through reducing labour, re-work and testing, avoiding more expensive capital expenditure, cutting material and packaging costs, and removing waste

2. Increased productivity – through process improvements, greater flexibility, an increased volume of parts produced, and less downtime, fewer errors and fewer rejections

However, just viewing the benefits from an economic perspective does not do justice to the true value of your investment. Machine vision systems can add value in all of the following additional ways. Unfortunately, due to the intangible nature of some of these contributors, it can be difficult to put an actual figure on their value, but that shouldn't stop attempts to include them.

  • Intellectually
    • By freeing staff from repetitive, boring tasks they are able to focus thinking in ways that add more value and contribute to increasing innovation. This is good for mental health and good for the business.
    • By reducing customer complaints, product recalls and potential fines machine vision can help to build and protect your brand image in the minds of customers
    • Building a strong image in the minds of potential business customers through demonstrating adoption of the latest technology, particularly when they come and visit your factory!
    • Through the collection of better data and improved tracking, machine vision can help you develop a deeper understanding of your processes
  • Physically
    • The adoption of machine vision can help to complement and even improve health and safety practice
    • Removing operators from hazardous environments or strenuous activity reduces exposure to sickness, absence, healthcare costs or insurance claims
  • Culturally
    • Machine vision can contribute and even accelerate a culture of continuous improvement and lean manufacturing
    • Through increased competitiveness and improving service levels machine vision helps build a business your people can be proud of
  • Environmentally
    • Contributing to a positive, safe working environment for staff
    • Through better use of energy and resources, smoother material flow and reduced waste machine vision systems can help reduce your impact on the environment

What are the costs of machine vision systems?

Costs can range from a few hundred for smart sensors and cameras up to half a million for complex systems; the actual figure will depend on the size and scope of your operations.

However, even in the case of high levels of capital investment it should be obvious, from the potential benefits outlined above, that a machine vision system can quickly pay for itself.

Crush Defects: The Power of Semi-Automatic Inspection

This month, we're discussing the elements of semi-automatic inspection. This is the halfway house that pharmaceutical manufacturers move to from manual bench inspection of individual products when volumes start to increase but don't yet warrant a fully automated inspection solution. The catalyst for change is when the inspection room is full of quality inspectors and it's time to increase the throughput of quality assessment. A semi-automatic inspection solution presents vials (liquid and lyophilised), syringes, cartridges and ampoules to the operator at speed for review in front of a magnifier, with rejection still completed manually. This allows a single operator to do the job of a room of quality inspectors. The process mimics the standard GAMP manual inspection performed in black-and-white background booths, while allowing high volumes to be processed. However, how does semi-automatic inspection fit into the validation process and allow validation at the relevant levels?

Validation of such semi-automatic inspection must adhere to the United States Pharmacopeia (USP). The following chapters from the USP are regarded as the main literature and source for regulatory information surrounding the visual inspection of injectables:

Chapter 〈790〉 VISIBLE PARTICULATES IN INJECTIONS
Chapter 〈1790〉 VISUAL INSPECTION OF INJECTIONS
Chapter 〈788〉 PARTICULATE MATTER IN INJECTIONS

Chapter 〈790〉 establishes the expectation that each unit of injectable product will be inspected as part of the routine manufacturing process. This inspection should take place at a point when defects are most easily detected; for example, prior to labelling or insertion into a device or combination product. Semi-automated inspection should only be performed by trained, qualified inspectors. The intent of this inspection is to detect and remove any observed defect. When in doubt, units should be removed.

Defect Types

IVS semi-automatic inspection machines with operators allow for the inspection of cosmetic defects in the container and particulate matter within the fluid. The machines will enable the inspection of the following defects:

Cosmetic Defects:
• Glass cracks
• Scratches
• Missing stoppers/caps
• Improper cap/stopper closure
• Glass inclusions

Particulate Matter:
• Fibers
• Glass
• Metal
• Product Related
• Rubber

What are the particle definitions which apply to Semi-Automatic Inspection?

Extrinsic – Highest risk
Particles may originate from many sources. Those that are foreign to the manufacturing process are considered exogenous or “extrinsic” in origin; these include hair, non-process-related fibres, starch, minerals, insect parts, and similar inorganic and organic materials. Extrinsic material is generally a one-time occurrence and should result in the rejection of the affected container in which it is seen; however, elevated levels in the lot may implicate a broader contribution from the same source. These particles may carry an increased risk of microbiological or extractable contamination because less is known about their path before deposition in the product container or their interaction with the product.

Intrinsic – Medium Risk
Other particles are considered “intrinsic”, from within the process. Intrinsic particles may come from processing equipment or primary packaging materials that were either added during processing or not removed during container preparation. These primary product-contact materials may include stainless steel, seals, gaskets, packaging glass and elastomers, fluid transport tubing, and silicone lubricant. Such particles still pose the risk of a foreign body, but generally come from sterile or sanitized materials and more is known about their interactions when in contact with the product.

Inherent – Lower Risk
“Inherent” particles are considered the lowest risk as they are known to be or intended to be associated with specific product formulations. The physical form or nature of inherent particles varies from product to product and includes solutions, suspensions, emulsions, and other drug delivery systems that are designed as particle assemblies (agglomerates, aggregates). Product formulation-related particulate formation should be studied in the development phase and in samples placed on stability to determine the normal characteristics and time-based changes that can occur.

Defects are commonly grouped into classifications based on patient and compliance risk. The most common system uses three groups: critical, major, and minor. Critical defects are those that may cause serious adverse reaction or death of the patient if the product is used. This classification includes any nonconformity that compromises the integrity of the container and thereby risks microbiological contamination of the sterile product. Major defects carry the risk of a temporary impairment or medically reversible reaction, or involve a remote probability of a serious adverse reaction. This classification is also assigned to any defect which causes impairment to the use of the product. These may result in a malfunction that makes the product unusable. Minor defects do not impact product performance or compliance; they are often cosmetic in nature, affecting only product appearance or pharmaceutical elegance.

Inspection criteria

On semi-automatic inspection machines, the vials are spun at speed prior to reaching the inspection area. This sets any visible particles in motion, which aids detection, since stationary particles are difficult to detect. Upon 100% inspection, visible extrinsic and intrinsic particles should be reliably removed. The test method allows inherent particles to be accepted if the product appearance specification permits those particle types. The size of particle reliably detected (≥70% probability of detection) is generally 150 µm or larger. This probability of detection (POD) is dependent on the container characteristics (e.g., size, shape, transparency), inspection conditions (lighting and duration), formulation characteristics (colour and clarity), and particle characteristics (size, shape, colour, and density). For syringes, the products are agitated and inverted so that particulate matter does not remain hidden at the bung and can be seen.
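The same spin-then-inspect principle is what fully automated systems exploit: once the liquid is swirling, moving particles can be separated from static marks on the glass by comparing successive frames. A minimal frame-differencing sketch (OpenCV; file names and thresholds are illustrative assumptions):

```python
import cv2

# Two frames of the same spun vial, captured a short interval apart (names illustrative).
f1 = cv2.imread("vial_t0.png", cv2.IMREAD_GRAYSCALE)
f2 = cv2.imread("vial_t1.png", cv2.IMREAD_GRAYSCALE)

# Moving particles appear in the difference image; static marks on the glass cancel out.
diff = cv2.absdiff(f1, f2)
_, motion = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)

particle_pixels = cv2.countNonZero(motion)
print("Possible particles in motion" if particle_pixels > 50 else "No moving particles detected")
```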

Critical Inspection Conditions

Light intensity
The results of the inspection process are influenced by the intensity of the light in the inspection zone. In general, increasing the intensity of the light that illuminates the container being inspected will improve inspection performance; 〈790〉 recommends light levels of 2,000-3,750 lux at the point of inspection for routine inspection of clear glass containers. Special attention should be given to assure that inspection is not performed below the lower limit of 2,000 lux.

Background and Contrast
Contrast between the defect of interest and the surrounding background is required for detection, and increased contrast improves detection. The use of both black and white backgrounds is described in 〈790〉, as well as other global pharmacopoeias. The use of both backgrounds provides good contrast for a wide range of particulate and container defects, which can be light or dark in appearance.

Inspection Rate
This is controlled by the roller travel speed. Sufficient time must be provided to allow thorough inspection of each container; chapter 〈790〉 specifies a reference time of 10 s per container (5 s each against the black and white backgrounds).
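That 10 s reference time fixes the throughput ceiling for a single inspector, which is useful when sizing a semi-automatic station against batch volumes. A quick illustrative calculation (batch size and shift length are assumptions):

```python
seconds_per_container = 10   # <790> reference: 5 s each against black and white backgrounds
batch_size = 20_000          # illustrative batch
shift_hours = 7.5            # illustrative effective inspection time per shift

containers_per_shift = shift_hours * 3600 / seconds_per_container
shifts_needed = batch_size / containers_per_shift
print(f"{containers_per_shift:.0f} containers per shift; {shifts_needed:.1f} shifts for the batch")
```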

Magnification
Some inspection processes use a large magnifier to increase image size and thus increase the probability of detecting and rejecting containers with defects near the threshold of detection. Most lenses used during this inspection process are around 2× magnification.

Overall, Semi-Automatic Inspection is a necessary step when pharmaceutical manufacturers move into medium-volume production. Validating such systems allows production to ramp up to the required volume without the need for manual benches and provides capacity for future volumes before fully automated vision inspection with vision systems. More details on how to use Semi-Automatic inspection for regulatory compliance can be found here.

Machine vision: The key to safer, smarter medical devices

In this month’s blog, we are discussing the highly regulated field of medical device manufacturing, where precision and compliance are not just desirable—they are mandatory. The growth of the medical devices industry means there have been substantial advances in the manufacturing of more complex devices. The complexity of these products demands high-end technology to ensure each product is of the correct quality for human use, so what’s the key to producing safer, smarter medical devices? Machine vision is one of the most important technologies that allow for the achievement of precision and compliance within the required regulations. In this article, we will touch on the aspect of machine vision in medical device manufacturing and its role in ensuring precision while serving regulatory compliance.

How Machine Vision Works in Manufacturing

Machine vision (often referred to in a factory context as industrial vision or vision systems) uses imaging techniques and software to perform visual inspections on the manufacturing line, in the many applications where quality control inspection is required at speed. Using cameras, sensors and complex algorithms, it gives machines a sense of sight: images are captured by the camera(s) and processed so that decisions can be made from the visual data.

When it comes to the medical device manufacturing process, machine vision systems play a vital role in ensuring the highest levels of precision and quality. Examples include systems that can detect the smallest imperfections, measure components with sub-micron precision, and confirm every product is produced to the tightest of tolerances—all automatically.

The Significance of Medical Device Manufacturing Accuracy

Surgical instruments, injectable insulin pens, contact lenses, asthma devices and medical syringes all need high-precision manufacturing, for several reasons. The devices not only affect patient health and safety but must also, as a legal requirement, be produced to validated specifications. Even tiny damage or a slight deviation from those specifications can lead to product failure and hence impact human life. The drive towards miniaturisation in medical devices means that more precise parts have to be manufactured than ever before: as devices get smaller and more complex, the margin for error decreases, making precision even more critical.

Machine vision systems can maintain precision at these levels because of the high resolutions at which they operate. Vision systems can use advanced imaging techniques, such as 3D vision, to measure dimensions, check alignments and verify that components are manufactured within tolerance. Every unit is quality-checked as it comes off the assembly line, preventing defects and recalls further down the line.

Meeting Compliance Regulations

The medical device industry is heavily regulated by organizations like the FDA (Food and Drug Administration) in the United States, European Medicines Agency (EMA), and other national and international bodies that set relevant standards. Regulations govern every dimension of device manufacture, from design and production to testing and distribution.

The Current Good Manufacturing Practice (CGMP) regulations issued by the FDA set out the quality requirements that apply throughout the manufacturing process. Machine vision systems can automatically inspect components and end products against specifications, allowing manufacturers to validate to the relevant regulations.

Machine vision systems can produce comprehensive inspection records with the traceability required to prove compliance against established regulatory standards. This is crucial for audit purposes: if needed, manufacturers must be able to show evidence that their product was manufactured according to the regulations.

Using Machine Vision in Medical Device Production

Machine vision is used at each stage of production to ensure precision and compliance in the final product. A few of the main uses are:
Surface Inspection: Vision systems can detect scratches, cracks, and foreign material present on the surface during the manufacturing process, which may lead to a device failure in operation.
Dimensional Measurement: Vision metrology is used for measuring component dimensions to ensure they meet design specifications.
Assembly Verification: Machine vision ensures that components are assembled correctly, reducing defect risk from the assembly process.
Label Verification: Knowing which label goes on which batch is essential for compliance. Machine vision systems can verify that labels are properly positioned and that the printed content is accurate.
Packaging Inspection: Proper packaging ensures that devices remain sterile and protected while in transit. Machine vision systems ensure that packaging is not defective and is properly sealed.
Code Reading: Machine vision systems check and verify datamatrix & barcodes on products and packaging to maintain correct product identification throughout the supply chain.
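A rough sketch of such a code-reading check, assuming the pyzbar wrapper for 1D/2D barcodes (Data Matrix codes would need a library such as pylibdmtx; the file name and expected value are illustrative):

```python
import cv2
from pyzbar import pyzbar

EXPECTED = "0123456789"  # illustrative expected code content

img = cv2.imread("pack_code.png")
for code in pyzbar.decode(img):          # decoded symbols carry a type and a payload
    payload = code.data.decode("utf-8")
    status = "MATCH" if payload == EXPECTED else "MISMATCH"
    print(f"{code.type}: {payload} -> {status}")
```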

Optimising Efficiency and Lowering Costs

Beyond precision and compliance, machine vision systems also increase manufacturing efficiency and reduce costs. Automated inspection and measurement can be performed much faster and more consistently than by a team of human inspectors, speeding up throughput.

Machine vision also reduces human error, which can be extremely costly in medical device manufacturing: a simple mistake can force a manufacturer to recall defective products, leading to lawsuits and reputational damage. Catching defects early helps minimise these risks and costs, which is why machine vision offers such a huge benefit.

Manufacturers can also use inspection data to analyse trends and uncover underlying issues before they lead to defects, allowing for higher uptime with preventative maintenance of the lines.

Future Trends in Medical Device Manufacturing and Machine Vision

As technology advances, the function of machine vision systems will continue to develop in medical device manufacturing. Here are some of the likely trends we can expect to see over time.
Integration of Artificial Intelligence (AI) with Machine Vision Systems: AI and machine learning are increasingly incorporated into machine vision systems, further advancing image analysis and pattern recognition. This will enable even greater precision and the possibility of detecting less visible imperfections that conventional vision systems might overlook.
3D Vision: Where 2D vision systems have traditionally been used, the third dimension is increasingly being exploited. 3D vision systems provide a complete view of components (compared to a single-plane 2D view), offering better measurement and inspection capabilities for complex shapes and assemblies.
High-Speed Vision Systems: The push for higher-speed vision systems goes hand in hand with the increased manufacturing speeds being implemented today, and the higher speeds of machines increase demand for accurate inspection.
Edge Computing: With the advent of edge computing, machine vision systems can process data locally with greater power and accuracy to eliminate latency for real-time decisions. This is crucial in applications that need instant feedback to change the manufacturing process.

Conclusion

Medical device manufacturing can be significantly enhanced by the incorporation of machine vision, which provides superior levels of precision and compliance. By automating inspections and measurements, machine vision systems also help manufacturers satisfy rigorous regulatory requirements. The technology is still advancing, and its role in this industry is only likely to become more critical as time goes on and further improvements are made in efficiency, accuracy and overall manufacturing quality.

The magic behind machine vision: Clever use of optics and mirrors

Sometimes, we're tasked with some of the most complex machine vision and automated inspection tasks. In addition to our standard off-the-shelf solutions in the medical device, pharmaceutical, and automotive markets, we often get involved with the development of new applications for machine vision inspection. Usually, this consists of adapting our standard solutions to a client's specific application requirement. Sometimes, however, a standard machine vision camera or system cannot be applied, due to the complexity of the inspection requirements, the mechanical handling, the part's surface condition, cycle time needs, or simply limitations on how the part can be presented to the vision system. Something new is needed.

This complex issue is often discussed in customer and engineering meetings, and the IVS engineering team often has to think outside the box and perhaps use some “magic”! This is when the optics (machine vision lenses) need to be used in a unique way to achieve the desired result. This can be done in several ways, typically combining optical components, mirrors, and lighting.

So what optics and lenses are used in machine vision?
The primary function of any lens is to collect light scattered by an object and reproduce a picture of the object on a light-sensitive ‘sensor’ (often CCD or CMOS-based); this is the machine vision camera head. The image captured is then used for high-speed industrial image processing. When selecting optics, several parameters must be considered, including the area to be imaged (field of view), the thickness of the object or features of interest (depth of field), the lens-to-object distance (working distance), the intensity of light, the optics type (e.g. telecentric), and so on.

Therefore, the following fundamental parameters must be evaluated when selecting optics.
Field of View (FoV) is the entire area the lens can see and image on the camera sensor.
Working distance (WD) is the distance between the object and the lens at which the image is in the sharpest focus.
Depth of Field (DoF) is the maximum range over which an object seems to be in acceptable focus.
In addition, the vision system magnification (ratio of sensor size to field-of-view) and resolution (the shortest distance between two points that can still be distinguished as separate points) must also be appraised.
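Because magnification is simply the ratio of sensor size to field of view, a candidate camera/lens pairing can be sanity-checked before any hardware is ordered. A quick worked example (all values are illustrative assumptions):

```python
sensor_width_mm = 8.45   # assumption: a 2/3" sensor is roughly 8.45 mm wide
sensor_px = 2448         # assumption: horizontal pixel count
fov_mm = 120.0           # required horizontal field of view (assumption)

magnification = sensor_width_mm / fov_mm   # ratio of sensor size to FoV
object_pixel_mm = fov_mm / sensor_px       # object-space size of one pixel

print(f"Magnification: {magnification:.3f}x")                 # ~0.070x
print(f"Object-space pixel size: {object_pixel_mm:.4f} mm")   # ~0.049 mm
```

If the smallest feature of interest spans only a pixel or two at this resolution, a different sensor, lens or field of view is needed before any "magic" begins.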

But what if the direct line of sight cannot be achieved due to a limitation in the presentation or the space available? This is when the magic happens. Optical path planning is used to understand the camera position (or positions) relative to the part and the angle of view required. Combining a set of mirrors in the line of sight makes it possible to see over a longer working distance than would otherwise be possible. The same parameters apply (FoV, WD, DoF etc.), but a long optical path can be folded with mirrors to fit into a more compact space. The optical path can be folded multiple times using several mirrors to provide one continuous view through a long path. Cameras can also be mounted close together with several mirrored views to offer a 360-degree view of the product. This is especially useful in smaller product inspections, such as pill or tablet vision inspection checks, where multiple mirrors allow all sides and angles of the pill to be seen, providing a comprehensive automated inspection for surface defects and imperfections. The technique is also applied in the subtle art of needle tip inspection (in medical syringes), with mirror systems allowing a complete view of the tip from all angles, sometimes providing a single image with two views of the same product. The optical paths, the effect of specific machine vision lighting and the mechanical design must all be considered to create the most robust solution.

So when our engineers say they will be using “magic” optics, I no longer look for a wand to be waved but know that a very clever application of optics and mirrors will be designed to provide a robust solution for the automated vision system task at hand.

Why industrial vision systems are replacing operators in factory automation (and what they aren’t doing!)

This month, we’re following on from last month’s post.
We've all seen films showing the dystopian factory landscape a few hundred years in the future: flying cars against a grey backdrop of industrial scenery, robots serving customers at a fictitious future bar and, if they showed a production line, it would probably be full of automatons and robots. Here we will drill down on the role that industrial vision systems can have on the real factory production line. Is it the case that in the future, all production processes will be replaced with automated assembly, coupled with real-time quality inspection? Intelligent robots which can handle every production process?

Let's start with what's happening now, today. It's certainly the case that machine vision has developed rapidly over the last thirty years. We've gone from slow scientific image processing on a 386 computer (look it up!) to AI deep learning running at hundreds of frames per second, allowing real-time computation and assessment of an image in computer vision. But this doesn't mean that every process has been (or can be) replaced with industrial vision systems. So what's stopping it from being so?

Well, the main barrier is the replication of human dexterity and the ability to manoeuvre the product in delicate ways (compared to an industrial automated process). In tandem with this is a human's ability to switch immediately to a different product mix, size and type of product. You could have a human operator construct a simple bearing in a matter of seconds and inspect it, then ask them to build a completely different product next, such as a small medical device. With some training and show-how, this information exchange can be done and the human operator adapts immediately. But a robot would struggle. Not to say it couldn't be done, but with today's technology it takes effort. This is the goal of Industry 5.0, the next generation on from Industry 4.0 flexible manufacturing. Industry 5.0 is a human-centric approach, recognising that humans working in tandem with robotics, vision and automation is the best way forward.

So we need to look at industrial vision in the context of not just whether the image processing can be completed (be it with traditional algorithms or AI deep learning), but also how we can present the product meaningfully at the correct angles to assess it. Usually, these vision inspection checks are part and parcel of an assembly operation, which requires specific handling and movement (which a human is good at). This is the reason most adoption of machine vision happens on high-throughput, low-variation manufacturing lines – such as large-scale medical device production, where the same validated product will be manufactured in the same way over many years. This makes sense – the payback is there, and automation can be applied for assembly and inspection in one.

But what are the drivers for replacing people on production lines with automated vision inspection?
If the task can be done with automation and vision inspection, it makes sense to do it. Vision inspection is highly accurate, works at speed and is easy to maintain. Then there is the comparison with a human operator. Vision systems don’t take breaks, don’t get tired, and robots don’t go to parties (I’ve never seen a robot at a party) – so they don’t start work in the morning with their minds not on the task at hand! So, it makes sense to move towards automated machine vision inspection wherever possible in a production process, and this is driving huge growth in the adoption of industrial vision systems in the coming years.

If the job is highly complex in terms of intricate build, with many parts and variants, then robots, automation and vision systems are not so easy to deploy. However, with the promise of Industry 5.0, we have the template of the future – moving towards an appreciation of including the human operator in factory automation – combining humans with robots, automation, augmented reality, AI and automated inspection. So, the dystopian future might not be as bad as the filmmakers make out, with human operators still being an integral part of the production process.

Catalysts of change: How robots, machine vision and AI are changing the automotive manufacturing landscape

This month, we are discussing the technological and cultural shift in the automotive manufacturing environment caused by the advent of robotics and machine vision, coupled with the development of AI vision systems. We also drill down on the drivers of change and how augmented reality will shape the future of manufacturing technology for automotive production.

Robots in Manufacturing
Robots in manufacturing have been used for over sixty years, with the first commercial robots used in mass production in the 1960s – these were leviathans with large, unwieldy pneumatic arms, but they paved the way for what was to come in the 1970s. At this time, it was estimated that the USA had 200 such robots [1] used in manufacturing; by 1980, this was 4,000 – and now there are estimated to be more than 3 million robots in operation [2]. During this time the machine vision industry has grown to provide the “eyes” for the robot. Machine vision uses camera sensor technology to capture images from the environment for analysis to confirm either location (when it comes to robot feedback), quality assessment (from presence verification through to gauging checks) or simply for photo capture for warranty protection.

The automotive industry is a large user of mass-production robots and vision systems. This is primarily due to the overall size of the end unit (i.e. a built car), along with the need for vision systems to confirm quality, given the extremely low acceptable parts-per-million failure rate. Robots allow repetitive and precise assembly tasks to be completed accurately every time, reducing the need for manual labour and providing a faster speed of manufacturing.

Automating with industrial robots is one of the most effective ways to reduce automotive manufacturing expenses. Factory robots help reduce labour, material and utility costs. Robotic automation reduces human involvement in manufacturing, lowering the costs of wages, benefits and worker injury claims.

AI Machine Vision Systems
Deep learning in the context of industrial machine vision teaches robots and machines to do what comes naturally to humans, i.e. to learn by example. New multi-layered “bio-inspired” deep neural networks allow the latest machine vision solutions to mimic the human brain activity in learning a task, thus allowing vision systems to recognise images, perceive trends and understand subtle changes in images that represent defects. [3]

Machine vision performs well at quantitatively measuring a highly structured scene with a consistent camera resolution, optics and lighting. Deep learning can handle defect variations that require an understanding of the tolerable deviations from the control medium, for example, where there are changes in texture, lighting, shading or distortion in the image. Deep-learning vision systems can be used in surface inspection, object recognition, component detection and part identification. AI deep learning helps in situations where traditional machine vision may struggle, such as parts with varying size, shape, contrast and brightness due to production and process constraints.

Augmented Reality in Production Environments
In industrial manufacturing, machine vision is primarily concerned with quality control: the automatic visual identification of a component, product or subassembly to ensure that it is correct. This can refer to measurement, the presence of an object, reading a code or verifying a print. Combining augmented and mixed reality with automated machine vision operations creates a platform for increased efficiency. There is a shift towards utilising AI machine vision systems (where applicable!) to improve the reliability of some quality control checks, and then combining that assessment with the operator’s own work.

Consider an operator or assembly worker sitting at a workstation wearing an augmented reality headset such as the HoloLens or Apple Vision Pro. The operator could be putting together a sophisticated unit with several pieces. They can see the tangible objects around them, including components and assemblies, while still interacting with digital content, such as assembly instructions or a shared document that updates in real time to the cloud. That is essentially the promise of mixed reality.

The mixed-reality device uses animated prompts, 3D projections and instructions to guide the operator through the process. A machine vision camera is situated above the operator, providing a view of the scene below. As each step is completed and a part is assembled, the vision system automatically inspects the product. Pass and fail criteria can be automatically projected into the operator’s field of sight in the mixed reality environment, allowing them to continue building while knowing the part has been inspected. In the event of a rejected part, the operator receives a new sequence “beamed” into their projection, containing instructions on how to proceed with the failed assembly and where to place it. Integrated with machine vision inspection, mixed reality becomes a standard aspect of the production process: data, statistics and important quality information from the machine vision system are shown in real time in the operator’s field of view.

Robots, AI Vision Systems and Augmented Reality
Let’s fast forward a few years and see what the future looks like. Well, it’s a combination of all these technologies as they continue to develop and mature. AI vision systems mounted on cobots help with manual assembly, while an augmented reality headset directs and guides the process, so that even an unskilled worker can complete it. Workers are tracked, all tasks are quality-assessed and confirmed, and the data is stored for traceability and warranty protection.

In direct automotive manufacturing, vision systems will become easier to use as large AI deep-learning datasets become more available for specific quality control tasks. These datasets will continue to evolve as more automotive and tier one and two suppliers use AI vision to monitor their production, allowing quicker deployment and high-accuracy quality control assessment across the factory floor.


Why you should transition your quality inspection from shadowgraphs to automated vision inspection

The shadowgraph used to be the preferred method, or perhaps the only method, for the quality control department to check adherence to measurement specifications. These projectors are typically vertical: parts are laid on a plate bed and the light path passes up through them. This shows the operators a silhouette and profile of the part as an exact representation in black, projected at a much higher magnification. These projectors are best for flat, flexible parts – washers, O-rings, gaskets – and parts that are too small to hold, such as very small shafts and screws. They are also used in orthopaedic manufacturing to view profiles and edges of joint parts.

So, this old-fashioned method involves projecting a silhouette of the edge of the product or component. This projection is then overlaid with a clear (black-lined) drawing of the product, showing additional areas the component shouldn’t stray into. The operator chooses the overlay from the set available, based on the part number, lays it over the projection and manually lines it up, with tolerances shown as a thick band in key areas for quality checking. For example, if you’re an orthopaedic joint manufacturer coating your product with an additive material, you might use a shadowgraph to check the edges of the new layer against the magnified representation of the component.

Some optical comparators and shadowgraphs have developed to the point where they are digital, combining a more modern graphical user interface (GUI) that makes overlays easier to check with the ability to take manual measurements.

Ultimately, though, all these types of manual shadowgraph are time-consuming, manually intensive and totally reliant on an operator to make the final decision on what is a pass and what is a fail. You also have no record of what the projection looked like. This all leads to a loss of productivity and makes your Six Sigma quality levels dependent on an operator’s quality check – and, as we all know, manual inspection (even 200% manual inspection) is not a reliable method for manufacturers to keep their quality levels in check. Operators get tired, and there is a lack of consistency between different people in the quality assessment team.

But for those products with critical-to-quality measurements, the old shadowgraph methodology can now be replaced with a fully automated metrology vision system to reduce the reliance on the operator in making decisions, coupled with the ability to save SPC data and information.

In this case, the integration of ultra-high-resolution vision systems, in tandem with cutting-edge optics and collimated lighting, presents a formidable alternative to manual shadowgraphs, ushering in an era of automation. With this advancement, operators can load the product, press a button, and swiftly obtain automatic results confirming adherence to specified tolerances. Furthermore, all pertinent data can be recorded, with the option to enable serial number tracking, preserving individual product photos for future warranty claims. This transition not only streamlines the inspection process but elevates quality towards true Six Sigma levels. Moreover, these systems can be rigorously validated to GAMP/ISPE standards, giving customers assurance of consistently superior quality outcomes.
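As a rough illustration of what replaces the overlay-and-eyeball routine, here is a minimal sketch of a silhouette width check under collimated backlighting. The file name, calibration factor and tolerance are all invented for the example; a production system would add subpixel edge detection, distortion correction and full validation:

```python
import cv2
import numpy as np

# Assumed values for illustration: a calibrated pixel size and a nominal
# width with tolerance, as would appear on the part drawing.
MM_PER_PIXEL = 0.0125
NOMINAL_MM, TOL_MM = 6.000, 0.025

# With collimated backlighting the part appears as a clean silhouette,
# so Otsu thresholding cleanly separates part from background.
img = cv2.imread("silhouette.png", cv2.IMREAD_GRAYSCALE)
_, silhouette = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# Measure the widest horizontal extent of the silhouette in calibrated pixels.
cols = np.where(silhouette.max(axis=0) > 0)[0]
width_mm = (cols[-1] - cols[0] + 1) * MM_PER_PIXEL

passed = abs(width_mm - NOMINAL_MM) <= TOL_MM
print(f"width = {width_mm:.3f} mm -> {'PASS' if passed else 'FAIL'}")
```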

For your upcoming quality initiatives or product launches, consider pivoting towards automated vision system machines, transcending the limitations of antiquated shadowgraph technology and operator-dependent assessments. Embrace the superior approach that is now within reach, and steer your manufacturing quality towards a future defined by precision, efficiency, and reliability.

By embracing automated vision systems, manufacturers streamline processes and ensure unparalleled precision and consistency in product quality. This shift represents a pivotal leap forward in manufacturing excellence, empowering businesses to meet and exceed customer quality expectations while navigating the ever-evolving landscape of modern production.

The ultimate guide to Unique Device Identification (UDI) directives for medical devices (and how to comply!)

This month, we’re talking about the print you see on medical devices and in-vitro products, used for the traceability and serialisation of the product – the UDI, or Unique Device Identification, mark. In the ever-evolving world of healthcare, new niche manufacturers and big medical device corporations find themselves at a crossroads. Change has arrived within the Medical Device Regulation, and many large and small companies have yet to strike the right chord, grappling with the choices and steps essential to orchestrate compliance with the latest specifications. Automated vision systems are required at every stage of production and packaging to confirm compliance and adherence to the UDI standards. Vision systems ensure quality and tracking and provide a visible record (through photo save) of every product stage during production, safeguarding medical device, orthopaedic and in-vitro product producers.


What is UDI?

The UDI system identifies medical devices uniformly and consistently throughout their distribution and use by healthcare providers and consumers. Most medical devices are required to carry a UDI on their label and packaging and, for certain devices, on the product itself.

Each medical device must have an identification code: the UDI is a one-of-a-kind code that serves as the “access key” to the product information stored on EUDAMED. The UDI code can be numeric or alphanumeric and is divided into two parts:

DI (Device Identifier): A unique, fixed code (maximum of 25 digits) that identifies a device’s version or model and its maker.

PI (Production Identifier): A variable code associated with the device’s production data, such as batch number, expiry date and manufacturing date. UDI-DI codes are supplied by authorised issuing agencies such as GS1, HIBCC, ICCBBA or IFA; the MD maker is free to choose which agency to use.
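As a concrete illustration, the sketch below splits the human-readable form of a GS1-style UDI string into its DI and PI parts. The sample string is invented, and real GS1 application identifiers can be two to four digits long; this simplified parser assumes two-digit AIs in bracketed form:

```python
import re

# Invented sample: (01) is the GS1 Device Identifier (DI); (17) expiry date
# and (10) lot number are Production Identifiers (PI).
udi = "(01)00844588003288(17)261231(10)A123B"

# Capture each two-digit application identifier and its value.
fields = dict(re.findall(r"\((\d{2})\)([^(]+)", udi))

di = fields.get("01")                                   # Device Identifier
pi = {ai: v for ai, v in fields.items() if ai != "01"}  # Production Identifiers
print("DI:", di)   # DI: 00844588003288
print("PI:", pi)   # PI: {'17': '261231', '10': 'A123B'}
```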


UDIs have been phased in over time, beginning with the most dangerous equipment, such as heart valves and pacemakers. So, what’s the latest directive on this?

Navigating the Terrain of MDR and IVDR: Anticipating Challenges

The significant challenges presented by the European Medical Devices Regulation (MDR) 2017/745 have come to the fore since its enactment on May 26th, 2021. This regulatory overhaul supersedes the older EU directives, namely the MDD and AIMDD. A pivotal aspect of the MDR is its stringent oversight of medical device manufacturing, compelling the display of a unique code that provides full traceability through the supply chain, from the manufacture of the medical device to final consumer use.

Adding to the regulatory landscape, the In-Vitro Diagnostic Regulation (IVDR) 2017/746 debuted on May 26th, 2022, replacing the previous In-Vitro Diagnostic Medical Devices Directive (IVDD) and further shaping the framework governing in-vitro diagnostics. The concurrent implementation of MDR and IVDR ushers in a new era of compliance and adaptability for medical device and diagnostics stakeholders.

What are the different classes of UDI?

Devices are classed from high risk down to low risk. All but the lowest class require Notified Body approval; the lowest class is self-assessed:

- Class III MD / Class D IVD – Notified Body approval required. Medical devices: pacemakers, heart valves, implanted cerebral stimulators. In-vitro examples: Hepatitis B blood-donor screening, ABO blood grouping.
- Class IIb MD / Class C IVD – Notified Body approval required. Medical devices: condoms, lung ventilators, bone fixation plates. In-vitro examples: blood glucose self-testing, PSA screening, HLA typing.
- Class IIa MD / Class B IVD – Notified Body approval required. Medical devices: dental fillings, surgical clamps, tracheostomy tubes. In-vitro examples: pregnancy self-testing, urine test strips, cholesterol self-testing.
- Class I MD / Class A IVD – self-assessment. Medical devices: wheelchairs, spectacles, stethoscopes. In-vitro examples: clinical chemical analysers, specimen receptacles, prepared selective culture media.

Deadlines for the classification to come into force

Implementing the new laws substantially influences many medical device and in-vitro device companies, which is why the specified compliance dates have been prioritised based on the risk class to which the devices belong. Medical device manufacturers must adopt automated inspection into their processes to track and confirm that the codes are on their products in time for the impending deadlines.

The regulation recognises four classes of medical devices and four classes of in-vitro devices, ranked in ascending order of the risk they pose to public health or the patient.

Deadlines by device class:

- UDI marking on medical devices (DM): Class III by 2023; Class I by 2025.
- Direct UDI marking on reusable DM: Class III by 2023; Class II by 2025; Class I by 2027.
- UDI marking on in-vitro devices (IVD): Class D by 2023; Class C & B by 2025; Class A by 2027.

The term “unique” does not imply that each medical device must be serialised individually, but rather that each must display a reference to the product placed on the market. In fact, unlike in the pharmaceutical industry, serialisation of devices in the EU is only required for active implantable devices such as pacemakers and defibrillators.

EUDAMED is the European Commission’s IT system for implementing Regulation (EU) 2017/745 on medical devices (MDR) and Regulation (EU) 2017/746 on in-vitro diagnostic medical devices (IVDR). The EUDAMED system contains a central database for medical and in-vitro devices, which is made up of six modules.

EUDAMED is currently used on a voluntary basis for the modules that have been made available. Connection to EUDAMED becomes mandatory within six months of all platform modules being released as fully functional.

Data can be exchanged with EUDAMED in one of three ways, depending on the amount of data to be loaded and the level of automation required.

The “visible” format of the UDI is the UDI Carrier, which must contain legible characters in both HRI (Human Readable Interpretation) and AIDC (Automatic Identification and Data Capture) formats. The UDI must be displayed on the device’s label, primary packaging, and all upper packaging layers representing a sealable unit.

These rules mean medical device manufacturers need reliable, traceable automated vision inspection to provide the track and trace capability for automatic aggregation and confirmation of the UDI process.

The UDI is a critical element of the medical device manufacturing process, and therefore, applying vision systems with automated Optical Character Recognition (OCR) and Optical Character Verification (OCV) in conjunction with Print Quality Inspection (PQI) is required for manufacturers to guarantee that they comply with the legislation.

But what are the differences between Optical Character Recognition (OCR), Optical Character Verification (OCV) and Print Quality Inspection (PQI)?
We get asked this often, and how you apply the vision system technology is critical for traceability. To comply with UDI legislation, medical device manufacturers must check that their products meet the requirements and track them through the production process. Whether you use OCR, OCV or PQI depends on where you are in the manufacturing process.


Optical Character Recognition (OCR)

Optical character recognition is based on the need to identify a character from the image presented, effectively recognise the character, and output a string based on the characters “read”. The vision system has no prior knowledge of the string. It therefore mimics the human behaviour of reading from left to right (or any other orientation) the sequence it sees.
You can find more information on OCR here.
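As a minimal sketch of the idea, the open-source Tesseract engine (via its pytesseract Python wrapper) can stand in for an industrial OCR tool; the image file name is an assumption for illustration:

```python
import cv2
import pytesseract  # Python wrapper around the open-source Tesseract OCR engine

# Hypothetical image of a marked lot code; the file name is illustrative only.
img = cv2.imread("lot_code.png", cv2.IMREAD_GRAYSCALE)

# OCR has no prior knowledge of the string: it simply reads whatever
# characters it finds and returns them as text.
text = pytesseract.image_to_string(img).strip()
print("Read:", text)
```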

Optical Character Verification (OCV)
Optical character verification differs from recognition in that the vision system has been told in advance what string it should expect to “read”, usually with a class of characters it should expect in a given location. Most UDI vision systems will use OCV rather than OCR, as the master database, line PLC (Programmable Logic Controller), cell PLC and laser/label printer should all know what will be marked. The UDI vision system therefore checks that the characters have been marked, are readable and are verified in the correct location. This then allows traceability through the process via nodes of OCV systems along the production line.
You can find more information on OCV here.
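A hedged sketch of the verification step itself, with the expected string and character-class pattern invented for illustration:

```python
import re

# The master database / line PLC already knows what was marked, so the task
# is verification, not recognition. Both values below are invented examples.
EXPECTED_STRING = "AB1234/07"
CHARACTER_CLASSES = re.compile(r"^[A-Z]{2}\d{4}/\d{2}$")  # 2 letters, 4 digits, /, 2 digits

def verify_mark(read_string: str) -> bool:
    """Pass only if the OCR read matches the expected string and its character classes."""
    return bool(CHARACTER_CLASSES.match(read_string)) and read_string == EXPECTED_STRING

print(verify_mark("AB1234/07"))  # True  -> marked, readable and correct
print(verify_mark("AB1234/97"))  # False -> classes fit, but not the expected string
```

The point of the design is that the check fails safe: anything the reader returns that is not exactly the string handed down from the master database is rejected.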

Print Quality Inspection (PQI)

Print quality inspection methods differ considerably from the identification methods discussed so far. The idea is straightforward: an ideal template of the print to be verified is stored; this template does not necessarily have to be taken from a physically existing “masterpiece” – it could also be computer-generated. Next, an image containing all the differences between the ideal template and the current test piece is created. In the simplest case, this is done by subtracting the reference image from the current image.

But applying this method directly in a real-world context quickly shows that it will always detect considerable deviations between the reference template and the test piece, regardless of the actual print quality. This is a consequence of inevitable position variations and image capture and transfer inaccuracies. A change in position of a single pixel will result in conspicuous print defects along every edge of the image. So, print quality inspection looks for the difference between the print trained and the print found, within a tolerance. PQI will therefore pick up missing parts of characters, breaks in characters, streaks, lines across the pack and general gross print defects.

You can find more information on PQI here.
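The sketch below shows one common way to implement the tolerance described above: register the test image against the golden template, then flag only pixels that fall outside a grey-value band formed by dilating and eroding the template. File names are invented, and OpenCV is used as a stand-in for an industrial library:

```python
import cv2
import numpy as np

# File names are illustrative. Both images are assumed to be grayscale prints.
template = cv2.imread("golden_print.png", cv2.IMREAD_GRAYSCALE)
test = cv2.imread("current_label.png", cv2.IMREAD_GRAYSCALE)

# 1. Register: find the template in the test image to remove gross position offset.
res = cv2.matchTemplate(test, template, cv2.TM_CCOEFF_NORMED)
_, _, _, (x, y) = cv2.minMaxLoc(res)
h, w = template.shape
aligned = test[y:y + h, x:x + w]

# 2. Tolerance band: grey-value dilation/erosion of the template means a residual
#    one-pixel shift along character edges is not flagged as a defect.
kernel = np.ones((3, 3), np.uint8)
upper = cv2.dilate(template, kernel)  # brightest acceptable value per pixel
lower = cv2.erode(template, kernel)   # darkest acceptable value per pixel

# 3. Difference: only pixels outside the tolerance band count as print defects.
defects = (aligned > upper) | (aligned < lower)
print("Defective pixels:", int(np.count_nonzero(defects)))
```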


What about Artificial Intelligence (AI) with OCR/OCV for UDI?

Where we’ve been discussing OCR and OCV, we’ve assumed that the training class is based either on a standard OCR font (a sans-serif font created in the early days of vision inspection to provide a consistent font recognisable by both humans and vision systems) or on a trainable, repeatable fixed font. AI OCR is a comprehensive deep-learning-based OCR (or OCV) method. This innovative technology moves machine vision closer to human reading. AI OCR can localise characters far more robustly than conventional algorithms, regardless of orientation, font type or polarity. The capacity to automatically group characters enables the recognition of entire words, which significantly improves recognition performance because misinterpreting characters with similar appearances is avoided. Large images can be handled more robustly with this technique, and the AI OCR result offers a list of character candidates with matching confidence ratings, which can be used to improve the recognition results further. Users benefit from enhanced overall stability and the opportunity to address a broader range of potential applications thanks to expanded character support. This approach can sometimes be used for UDI, though it should be noted that it can make validation more difficult, since a data set is needed prior to learning, compared to a traditional setup.
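As a minimal sketch, the open-source EasyOCR library can stand in for the deep-learning OCR engines described above; the image file name is hypothetical:

```python
import easyocr  # open-source deep-learning OCR, used here as a stand-in

# One reader per language set; the network weights are downloaded on first use.
reader = easyocr.Reader(["en"])

# readtext() returns (bounding box, text, confidence) per detected string -
# the list of candidates with confidence ratings described above.
for bbox, text, confidence in reader.readtext("udi_label.png"):
    print(f"{text!r} read with {confidence:.2f} confidence")
```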

Do all my UDI vision inspection systems require validation?

Yes, all the vision systems for UDI inspection must be validated to GAMP/ISPE levels. Having the systems in place is one thing, but the formal testing and validation paperwork is needed to comply fully with the legislation. You can find more information on validation in one of our blog articles here.


Industrial vision systems allow for the 100% inspection of medical devices at high production speeds, providing a final gatekeeper to prevent any rogue product from exiting the factory gates. These systems, in combination with factory information communication and track & trace capability, allow the complete tracking of products to UDI standards through the factory process.

Machine vision vs computer vision vs image processing

We’re often asked: what’s the difference between machine vision and computer vision, and how do they both compare with image processing? It’s a valid question, as the seemingly similar fields of machine vision and computer vision both relate to the interpretation of digital images to produce a result.

What is machine vision?
Machine vision has become a key technology in the area of quality control and inspection in manufacturing. This is due to increasing quality demands and the improvements it offers over and above human vision and manual operation.

Definition
It is the ability of a machine to sense an object, capture an image of that object, and then process and analyse the information for decision-making.

In essence, machine vision enables manufacturing equipment to ‘see’ and ‘think’. It is an exciting field that combines technologies and processes to automate complex or mundane visual inspection tasks and precisely guide manufacturing activity. In an industrial context, it is often referred to as ‘Industrial Vision’ or ‘Vision Systems’. As raw images in machine vision are generally captured by a camera connected to the system in real time (in contrast to computer vision), the disciplines of physics relating to optical technology, lighting and filtering are also part of the machine vision world.

So, how does this compare to computer vision and image processing?
There is a great deal of overlap between the various disciplines, but there are clear distinctions between the groups.

- Image processing – Input: an image, processed using algorithms to correct, edit or enhance it. Output: a new, enhanced image.
- Computer vision – Input: images/video analysed using algorithms, often in uncontrollable or unpredictable circumstances. Output: image understanding, prediction and learning to inform actions such as segmentation, recognition and reconstruction.
- Machine vision – Input: camera/video images analysed in industrial settings under more predictable circumstances. Output: image understanding and learning to inform manufacturing processes.

Image processing has its roots in neurobiology and scientific analysis. This was primarily down to the limitations of processor speeds when image processing came to the fore in the 1970s and 1980s – for example, processing single images captured from a microscope-mounted camera.

When processors became quicker and algorithms for image processing became more adept at high-throughput analysis, image processing moved into the industrial environment, and industrial line control was added to the mix. For the first time, this allowed components, parts and assemblies on an automated assembly line to be quantified, checked and routed depending on the quality assessment – and so machine vision (the “eyes of the production line”) was born. Unsurprisingly, the first industries to adopt this technology en masse were the PCB and semiconductor manufacturers, as the enormous boom in electronics took place during the 1980s.

Computer vision crosses the two disciplines, creating a Venn diagram of overlap between the three areas, as shown below.

[Figure: Venn diagram showing the overlap between machine vision, computer vision and image processing]

As you can see, work on artificial intelligence relating to deep learning has its roots in computer vision, but over the last few years it has moved across into machine vision, as image processing learnt from biological and cognitive vision slowly enters the area of general-purpose machine vision and vision systems. The traffic is rarely two-way, although some machine vision research does feed back into the computer vision and general image processing worlds. Given the more extensive reach of the computer vision industry compared with the more niche machine vision industry, new algorithms and developments tend to be driven by the broader computer vision field.

Machine vision, computer vision and image processing are now broadly intertwined, and the developments across each sector have a knock-on effect on the growth of new algorithms in each discipline.