How new vision tech revolutionises medical packaging quality

For manufacturers of medical devices, quality control is no longer a matter of simple spot checks — it’s a regulatory requirement that demands precision, consistency, and traceability. One global manufacturer recently sought to address longstanding issues on its packaging line: errors in labelling, carton completeness, and aggregation processes that could not be reliably caught by human inspectors. Industrial Vision Systems (IVS) was called upon to implement a comprehensive, non-contact machine vision inspection system. The result was a tightly integrated, multi-point solution that brought automated quality control to every stage of the packaging process — from individual labels through to final case loading.

The Challenge: Visual Inspection Across the Packaging Line
Operating under strict international regulations such as ISO 13485 and the European MDR (Medical Device Regulation), the customer required a robust, end-to-end vision inspection system to:
• Verify correct label placement and content
• Ensure completeness and conformity of each carton
• Inspect aggregated cases to confirm correct load patterns and packaging integrity
• Log inspection data to support full traceability and audit readiness
Manual checks had proven insufficient, missing occasional errors and adding operational overhead.

IVS’s Integrated Vision Inspection Strategy
IVS designed and delivered a modular vision inspection system across the client’s primary packaging stages, using high-speed cameras and intelligent image processing software to capture, inspect and validate every product and pack as it moved through the line.

1. Label Inspection Station
IVS high-resolution cameras were mounted on the labelling unit to verify label presence, alignment, and content legibility on every bottle or vial. Labels with damage, misplacement, or illegible text were identified in real time. Any faulty product was automatically rejected before downstream processing.

2. Carton Inspection Unit
After products were inserted into cartons along with patient information leaflets, a dedicated vision inspection station confirmed that:
• Cartons were present and correctly formed
• Required contents (e.g. device, cap, leaflet) were in place
• Carton flaps were properly closed and sealed
• External carton markings (e.g. pharmacode, branding) were undamaged and correctly oriented
This step removed the risk of incomplete or malformed units reaching customers.

3. Case Aggregation and Final Visual Checks
At the final stage of packaging, as individual cartons were loaded into larger shipping cases, a vision system checked that:
• The correct number and orientation of cartons were present
• Cartons were properly aligned and free from damage
• Outer case seals were intact and packaging was undisturbed
These checks ensured that every unit shipped was complete, compliant, and visually verified.

Data Logging and Traceability
All inspection results — including pass/fail status, timestamp, station location, and stored image snapshots — were automatically logged in a secure database. This created a complete digital audit trail, essential for both internal QA and regulatory inspections.
Operators and QA managers could view this information in real time via IVS’s user interface, or export batch-level data for documentation and trend analysis.

Results Achieved: Quality, Efficiency and Compliance
Following deployment, the manufacturer reported:
• Significant reduction in packaging errors, especially related to missing or incorrectly applied labels and misassembled cartons
• Complete removal of manual visual inspection, freeing up skilled labour for higher-value tasks
• Zero impact on throughput, with inspections running at full line speed
• Stronger audit readiness, with automated records available for every unit produced
Most importantly, the system provided confidence that no incomplete, damaged, or non-compliant product would leave the facility undetected.

Why Machine Vision Is Essential in Medical Device Packaging
Automated machine vision systems are now vital for modern medical packaging operations, offering:
• Contact-free inspection for sterile environments
• Consistent, objective pass/fail decisions
• Detection of visual defects too subtle for human eyes
• Scalable inspection across multiple packaging formats
• Real-time correction via integrated reject systems
For medical device manufacturers under regulatory pressure, this represents a vital tool in reducing risk while improving operational efficiency.

Key Capabilities of the IVS System
The IVS-installed system includes:
• High-speed industrial camera solutions with precision optics
• Edge, contour and label analysis to detect misalignment, damage, or missing items
• Lighting control to eliminate glare and shadows from complex packaging materials
• Modular station design adaptable to any point on the packaging line
• Audit-ready data collection with full traceability and image history
• User-friendly HMI for operators and QA teams

Why Choose Industrial Vision Systems (IVS)?
IVS is an internationally experienced machine vision provider, with a long track record in the medical, pharmaceutical, and life sciences sectors. Clients choose IVS for:
• Custom-built inspection systems tailored to exact product and line requirements
• Proven performance in validated and audited environments
• Comprehensive service from consultancy through to installation, validation and ongoing support
• Systems designed for real-world reliability, even under high-speed or multi-format production conditions

Final Thoughts: Vision Inspection as a New Standard
In a market where quality is paramount and compliance is critical, vision systems for medical devices are becoming a standard feature of well-run production lines. As demonstrated in this case, automated inspection not only reduces risk — it streamlines production, supports traceability, and ultimately ensures that only flawless products reach patients and healthcare providers.

If your business is still relying on human eyes for critical packaging checks, now is the time to explore how vision technology from IVS can raise your quality standards and future-proof your production.

How to boost in-line metrology with QA vision systems

In today’s increasingly automated manufacturing environments, machine vision is no longer just about reading barcodes or spotting product defects—it plays a critical role in precision in-line metrology. When properly implemented, in-line machine vision systems can deliver repeatable, accurate, non-contact measurements that significantly improve product quality and reduce inspection time. However, achieving this consistency and accuracy isn’t automatic—it requires careful consideration of imaging design, system calibration, and application-specific factors.

This article explores the key challenges and best practices for implementing repeatable and accurate machine vision metrology, with a focus on the practical realities of real-world factory floors.

Why Repeatability and Accuracy Matter in Machine Vision

In industrial settings, repeatability—the ability of a system to produce the same measurement result under consistent conditions—is often more valuable than pure accuracy. If a vision system produces consistent results, manufacturers can calibrate around any fixed bias and trust that in- and out-of-tolerance parts are reliably identified. That said, accuracy—how close a measured value is to the true value—still plays a vital role. The ideal metrology system combines high repeatability (precision) with high trueness, delivering results that are both reliable and close to the actual specification.

In high-volume, high-speed production environments, a failure in either can lead to:

  • Increased scrap or rework
  • Undetected defects reaching customers
  • Regulatory compliance issues
  • Wasted time and materials

More details on running a Gauge R&R (repeatability and reproducibility) study for machine vision can be found here.

The Unique Challenge of Non-Contact Measurement

Machine vision systems operate on non-contact principles, meaning they measure visible features of parts using cameras and light, rather than physically touching the parts as traditional gauges do. This fundamental difference introduces a range of practical challenges. For instance, consider a bore hole in a machined part. A manual caliper can measure the internal diameter at depth. However, a vision system can only assess the surface edges of the hole visible in the camera’s field of view. If the edges are chamfered, angled, or not perfectly flat, this surface-based measurement may not reflect the true inner dimension. The discrepancy between what is visible and what is intended must be understood and, where possible, corrected for. In many cases, if repeatability is high, engineers can apply known compensation factors to reconcile vision results with true dimensional targets.

Part Presentation: The Hidden Variable

One of the most critical and often overlooked contributors to measurement error in vision systems is part presentation—how the part is positioned relative to the camera and lighting system.

For example:

  • A perfectly perpendicular bore hole will appear circular.
  • A tilted part, even by a few degrees, will distort the bore into an elliptical shape, causing false readings.

Lighting setups like backlighting, which highlight the silhouette of a part, are particularly sensitive to orientation. Telecentric lenses can reduce some of these effects, but they can’t eliminate all presentation-induced distortions.

To ensure repeatability, manufacturers must:

  • Minimise variation in part positioning and orientation
  • Use fixtures or robotic handling for consistent presentation
  • Simulate and test worst-case misalignment scenarios
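The cost of tilt can be estimated directly: a circular bore of true diameter d, tilted by angle θ relative to the optical axis, projects an ellipse whose minor axis is d·cos θ. A minimal sketch of the geometry (the 10 mm bore and 3° tilt are illustrative values, not from the case above):

```python
import math

def apparent_minor_axis(true_diameter_mm: float, tilt_deg: float) -> float:
    """Projected minor axis of a circular bore tilted away from the optical axis."""
    return true_diameter_mm * math.cos(math.radians(tilt_deg))

# A 10 mm bore tilted by just 3 degrees:
d_apparent = apparent_minor_axis(10.0, 3.0)
error_um = (10.0 - d_apparent) * 1000  # measurement error in microns
```

Even a 3° tilt costs nearly 14 µm on a 10 mm bore — already larger than the gauge resolution many tight tolerances demand, which is why fixturing and consistent presentation matter so much.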

Understanding Measurement Metrics

Let’s explore three critical metrics that underpin repeatable and accurate measurements:

Precision (Repeatability):

  • Describes how close measurements are to each other under unchanged conditions.
  • Often expressed as standard deviation (SD)—a lower SD indicates higher precision.
  • Critical for in-line systems that must classify parts as pass/fail consistently.

Trueness:

  • How close the average measurement is to the actual value.
  • Important for ensuring product conformity to specifications.

Accuracy:

  • A combination of high precision and high trueness.
  • While important, accuracy can often be adjusted post-measurement if precision is dependable.
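These definitions map directly onto simple statistics over repeat measurements of a known reference. A minimal sketch, with illustrative readings of a hypothetical 5.000 mm reference feature:

```python
import statistics

def assess_measurements(readings: list[float], true_value: float) -> dict:
    """Summarise repeatability (sample SD) and trueness (bias of the mean)
    for a set of repeat measurements against a known reference value."""
    mean = statistics.mean(readings)
    return {
        "precision_sd": statistics.stdev(readings),  # lower = more repeatable
        "trueness_bias": mean - true_value,          # offset of the mean from truth
        "mean": mean,
    }

# Ten repeat measurements of a 5.000 mm reference feature (illustrative)
readings = [5.012, 5.011, 5.013, 5.012, 5.010,
            5.012, 5.011, 5.013, 5.012, 5.011]
result = assess_measurements(readings, true_value=5.000)
# Tight SD but a consistent ~0.012 mm bias: precise but not true —
# a fixed offset of this kind can be calibrated out.
```

This is the practical consequence of the point above: a stable bias with low scatter is a solvable problem, whereas high scatter is not.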

Spatial Resolution: The Foundation of Precision

The resolution of your imaging setup directly determines the smallest detectable feature size. It’s calculated by dividing the field of view (FOV) by the number of pixels on the sensor. For example:

A 1000-pixel wide sensor covering a 25mm FOV yields a pixel resolution of 0.025mm/pixel.

But more importantly, gauge resolution—the smallest unit that can be reliably measured—should be at least 1/10th of the tolerance band. So for a ±0.1mm tolerance, a system must reliably resolve to 0.01mm.

This often results in surprisingly high-resolution requirements. Compromising here can render a system ineffective. Choosing the right sensor, lens, and lighting combination is essential for achieving the required resolution without sacrificing image quality.
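The two rules — pixel resolution = FOV ÷ sensor pixels, and gauge resolution of at least 1/10th of the tolerance band — combine into a quick feasibility check, sketched here with the figures from the text:

```python
def pixel_resolution_mm(fov_mm: float, sensor_pixels: int) -> float:
    """Projected size of one pixel in the field of view."""
    return fov_mm / sensor_pixels

def required_gauge_resolution_mm(tolerance_band_mm: float) -> float:
    """Rule of thumb: resolve to at least 1/10th of the tolerance band."""
    return tolerance_band_mm / 10

res = pixel_resolution_mm(25.0, 1000)       # 0.025 mm/pixel, as in the text
gauge = required_gauge_resolution_mm(0.1)   # ±0.1 mm tolerance -> 0.01 mm

# Minimum sensor width if one gauge unit must map to at least one pixel
# (sub-pixel algorithms can relax this requirement)
min_pixels = 25.0 / gauge                   # 2500 px across a 25 mm FOV
```

The jump from 1000 px to 2500 px for the same field of view illustrates why resolution requirements are often higher than first expected.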

Going Beyond Pixel-Level Precision: Sub-Pixel Algorithms

To improve resolution without simply increasing sensor size, many systems leverage sub-pixel measurement algorithms:

  • Edge gradient analysis
  • Regression fits (e.g., best-fit circles or lines)
  • Shape connectivity analysis

These tools can achieve repeatability finer than one pixel—sometimes to 1/10th or even 1/20th of a pixel—but this comes with caveats. Sub-pixel claims must be empirically validated under real production conditions, not just assumed based on vendor specifications.
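As an illustration of the regression-fit approach, a least-squares (Kåsa) circle fit pools many pixel-quantised edge points into a single estimate far finer than any individual point. A sketch with synthetic edge data (the 40 px circle and 200-point edge are illustrative assumptions):

```python
import numpy as np

def fit_circle(xs: np.ndarray, ys: np.ndarray):
    """Least-squares (Kasa) circle fit: solves x^2 + y^2 + D*x + E*y + F = 0
    as a linear system, then recovers centre (cx, cy) and radius r."""
    A = np.column_stack([xs, ys, np.ones_like(xs)])
    b = -(xs**2 + ys**2)
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = -D / 2, -E / 2
    r = np.sqrt(cx**2 + cy**2 - F)
    return cx, cy, r

# 200 edge points on a true 40.0 px circle, quantised to the pixel grid
theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
xs = np.round(100 + 40.0 * np.cos(theta))
ys = np.round(100 + 40.0 * np.sin(theta))

cx, cy, r = fit_circle(xs, ys)
# Each point carries up to +/-0.5 px quantisation error, yet the pooled
# fit recovers the radius to a small fraction of a pixel
```

This is also why vendor sub-pixel claims must still be validated empirically: the fit's real performance depends on edge count, contrast, and noise in the actual application, not on the algorithm alone.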

The Optics and Lighting Equation

You can’t achieve good measurements without a good image. That means selecting:

  • High-quality lenses (often telecentric for metrology)
  • Stable, directional lighting
  • Mechanical isolation to reduce vibration

In larger FOV applications, maintaining uniform illumination becomes challenging. Shadows, reflections, and hot spots can reduce feature visibility and distort measurement edges. These distortions reduce repeatability even if the system is otherwise high resolution.

Consider testing:

  • Dark field lighting for edge enhancement
  • Collimated or structured lighting for surface features
  • Multiple camera angles if single-view systems can’t capture all required data

3D Vision: When 2D Isn’t Enough

For complex parts with depth variations, a 2D view is sometimes insufficient. This is where 3D imaging systems—such as structured light, stereo vision, or laser triangulation—shine. They provide depth information, enabling full dimensional inspection.

However, 3D systems come with their own implementation challenges:

  • Higher data processing loads
  • More complex calibration
  • Increased sensitivity to surface finish

Still, for applications like automotive castings, gear inspection, or aerospace components, 3D systems may be the only way to achieve true spatial accuracy and repeatability.

Calibration: Turning Pixels into Real-World Measurements

Calibration transforms pixel-based measurements into real-world units (mm or microns) and is vital for traceability and compliance.

Benefits include:

  • Real-world accuracy across different systems
  • Compensation for perspective errors
  • Multi-camera correlation

Regular re-calibration using certified calibration plates ensures long-term accuracy and helps identify when components drift or degrade over time.
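In its simplest linear form, calibration reduces to deriving a mm-per-pixel scale from a certified artefact of known size, then applying it to subsequent measurements. A minimal sketch (the figures are illustrative; a full calibration against a certified plate also corrects lens distortion and perspective):

```python
def calibrate_scale(known_feature_mm: float, measured_pixels: float) -> float:
    """Derive mm-per-pixel from a certified artefact of known size."""
    return known_feature_mm / measured_pixels

def to_mm(pixels: float, scale_mm_per_px: float) -> float:
    """Convert a pixel measurement into real-world units."""
    return pixels * scale_mm_per_px

# A certified 10.000 mm gauge feature measured as 402.3 px
scale = calibrate_scale(10.000, 402.3)   # ~0.02486 mm/px
width_mm = to_mm(845.6, scale)           # a part feature measured in px
```

Re-running this derivation at intervals against the same certified artefact is one way drift in optics or mounting shows up before it affects production data.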

Testing and Validation: Static vs. Dynamic Accuracy

Performance testing must go beyond lab trials. Two essential tests are:

  • Static Testing: Measuring a part multiple times in a fixed position to assess core system repeatability.
  • Dynamic Testing: Evaluating measurement variation during real production runs—this highlights error contributions from automation, vibration, lighting variation, or handling inconsistencies.

The dynamic error is usually higher than the static one and is the best predictor of actual system performance. If static performance is good but dynamic results are poor, focus on mechanical presentation improvements, not software tweaks.
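The static/dynamic comparison is itself a simple calculation: measure the same part repeatedly under both conditions and compare the spread. A minimal sketch with illustrative readings:

```python
import statistics

def repeatability_sd(readings: list[float]) -> float:
    """Sample standard deviation of repeat measurements."""
    return statistics.stdev(readings)

# Static: same part, fixed position, re-measured (illustrative values)
static = [12.503, 12.504, 12.503, 12.502, 12.503, 12.504]
# Dynamic: same part cycled through the line between measurements
dynamic = [12.501, 12.507, 12.498, 12.506, 12.500, 12.509]

sd_static = repeatability_sd(static)
sd_dynamic = repeatability_sd(dynamic)
# sd_dynamic being several times sd_static points to handling and
# presentation, not the vision software, as the dominant error source
```

A gap of this kind between the two numbers is the signal described above: fix the mechanics first.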

Key Takeaways for Manufacturers

For manufacturers striving to meet tight tolerance requirements, boost productivity, and ensure regulatory compliance, investing in a well-designed, repeatable, and accurate machine vision metrology system is a smart move.

But success hinges on:

  • Proper application-specific imaging design
  • Understanding the limits of non-contact measurement
  • Minimising part presentation variation
  • Using high-quality components and validated software tools
  • Performing rigorous testing, both static and dynamic

When approached methodically, machine vision metrology doesn’t just replace manual inspection—it transforms it, offering traceable, reliable measurement at production speed.

Want to Learn More?

If you’re looking to improve your factory’s in-line measurement capabilities, speak with us. Whether it’s for medical device production, orthopaedic joints, automotive production or pharmaceutical manufacturing – repeatable machine vision metrology is within reach—with the right expertise.

Why vision systems are critical in medical device blister strip and pack inspection

In the highly regulated world of medical device manufacturing, every aspect of production must meet stringent quality control standards. One critical area is the packaging of devices, particularly in the form of blister packs and blister strips. Blister packs are commonly used for tablets, test strips, and other small medical components, while blister strips are typically used in inhaler-based products, where they serve as precision dosage systems for powder-based medications. In both formats, ensuring the integrity, accuracy, and traceability of the packaging is essential, not just for product quality, but for patient safety.

This is where machine vision systems play a vital role. Inspection powered by advanced vision technology ensures consistency, reduces human error, and maintains regulatory compliance.
Furthermore, aligning such systems with GAMP (Good Automated Manufacturing Practice) validation principles guarantees that vision systems are implemented, validated, and maintained in a way that meets both regulatory and operational expectations.

Let’s explore why vision systems are used for medical device blister pack and blister strip inspection, how they work, and how GAMP validation supports their reliable deployment.

What Are Medical Device Blister Packs and Why Inspect Them?

Blister packs are thermoformed plastic packaging sealed with a backing of aluminium or paperboard. In the pharmaceutical and medical device industries, they are used to hold individual units—be they tablets, test strips, or surgical staples—securely in compartments. Blister strips, often configured in a continuous or semi-continuous format, are used in dry powder inhalers or similar drug delivery systems. These strips must be accurately formed and dimensioned to function correctly in their respective medical devices.

Inspection of these packages ensures that:

  • All cavities are correctly filled
  • Products are in the correct orientation
  • Foreign objects or contaminants are absent
  • Seals are intact and properly aligned
  • Labelling and print (e.g., expiry dates, lot numbers) is clear and accurate
  • Blister dimensions are within tolerance for mechanical compatibility
Manual inspection is time-consuming, prone to errors, and difficult to scale. As a result, machine vision has become the preferred method for high-speed, high-accuracy inspection.

Why Use Vision Systems for Inspection?

Machine vision systems use cameras, lighting, and software algorithms to “see” and analyse what’s happening on the packaging line. They are capable of:

  1. High-Speed Inspection
    Vision systems can inspect thousands of units per minute—far beyond human capability—while maintaining high levels of accuracy.
  2. Non-Contact and Non-Invasive Analysis
    Vision systems inspect blister packs and strips without physically touching them, ensuring sterile or delicate products are not compromised.
  3. Consistent and Objective Judgments
    Vision systems apply the same criteria to every unit, eliminating subjective decision-making and operator fatigue.
  4. Comprehensive Data Logging
    Every inspection event can be logged for traceability and compliance. This data is critical for audits and root-cause analysis.
  5. Immediate Feedback and Integration
    Vision systems can trigger alarms, remove faulty products from the production line, or adjust machine settings in real-time.

Typical Machine Vision Inspection Tasks in Blister Packs and Strips

Vision systems can be deployed at various stages of the packaging process. Typical inspection functions include:

  • Presence/Absence Verification
    Ensuring that each cavity contains a product and that no multiple fillings have occurred.
  • Shape and Size Detection
    Confirming that the product in the blister matches the expected shape and size.
  • Color and Contrast Analysis
    Detecting colour-coded items or ensuring that items have not been discoloured due to defects.
  • Print Inspection
    Verifying legibility and accuracy of lot codes, barcodes, and expiration dates using OCR (optical character recognition).
  • Seal Integrity Checks
    Detecting misaligned or incomplete sealing between the blister and backing.
  • Foreign Object Detection
    Identifying any unexpected items in the blister pack that could indicate a contamination event.
  • Blister Form and Dimension Verification
    Using metrology tools within vision systems to confirm that blister cavities are correctly formed. Accurate cavity depth, shape, and spacing ensure the strip will fit and move properly within a corresponding medical device.
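The simplest of these checks — presence/absence — often reduces to intensity statistics over a fixed region of interest per cavity. A minimal sketch with synthetic image data (the threshold value and backlit-contrast assumption are illustrative; production systems set such limits during validation):

```python
import numpy as np

def cavity_filled(roi: np.ndarray, threshold: float = 60.0) -> bool:
    """Presence/absence check for one cavity's region of interest.
    Under fixed backlighting a filled cavity darkens its ROI relative
    to an empty one, so a mean intensity below `threshold` indicates a
    product is present. (Threshold is an illustrative assumption.)"""
    return float(roi.mean()) < threshold

# Synthetic 20x20 greyscale ROIs: empty cavity is bright, filled is dark
empty = np.full((20, 20), 200, dtype=np.uint8)
filled = np.full((20, 20), 30, dtype=np.uint8)

results = [cavity_filled(r) for r in (filled, empty)]
```

Real systems layer the more sophisticated checks in the list above (shape, colour, OCR, metrology) on top of this kind of fast per-cavity screen.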

How Machine Vision Systems Inspect Blister Strips

Blister strip inspection involves a sequence of highly specialised tasks that go beyond mere presence detection. These systems typically consist of multiple inspection stations, each optimised for specific checks:

  1. Form Verification (Metrology Checks)
    Before filling, blister cavities are inspected for correct shape and dimensions. This involves topographical imaging or laser triangulation to measure cavity depth, width, wall thickness, and pitch. Accurate forming is crucial as these strips often interface with automated drug delivery systems or diagnostic readers.
  2. Fill Verification
    After forming, vision systems check that the correct item is placed in each cavity. 2D or 3D imaging confirms the product presence, orientation, and fit. Deep learning algorithms may be used to classify subtle shape variants or defects.
  3. Sealing Validation
    During the sealing phase, systems use reflective or infrared lighting to detect sealing anomalies like wrinkles, bubbles, or partial seals that could affect sterility.
  4. Print and Label Verification
    After sealing, vision modules check for correct label placement, print alignment, and clarity. OCR ensures that the printed information is correct and legible.
  5. Dimensional Strip Measurement
    Blister strips must often fit into tightly toleranced mechanical guides. Vision systems use calibrated optics to measure total length, width, and cavity spacing with micron-level precision.
  6. Reject and Sort Mechanism
    Any detected non-conformance leads to automatic rejection using air jets or servo-driven diverters. Faulty items are collected and optionally imaged again for defect classification and process improvement.

Through this tightly integrated process, vision systems ensure not just product quality but functional compatibility with downstream devices.

GAMP and Validation of Vision Systems

Using a vision system in a regulated environment requires careful validation. GAMP 5 (Good Automated Manufacturing Practice) provides a structured approach for validating automated systems used in GxP (Good Practice) environments, including medical device manufacturing.

Key GAMP Principles Relevant to Vision Systems

  1. Risk-Based Approach
    GAMP promotes focusing validation efforts based on system complexity and risk to patient safety. A vision system used for final product inspection clearly ranks high in risk and must be rigorously validated.
  2. Lifecycle Approach
    Validation isn’t a one-time task—it spans the entire lifecycle of the system, from user requirements to system retirement.
  3. Documentation Standards
    GAMP requires traceable documentation of:

    • User Requirements Specification (URS)
    • Functional Specification (FS)
    • Design Specification (DS)
    • Installation Qualification (IQ)
    • Operational Qualification (OQ)
    • Performance Qualification (PQ)
  4. Supplier Involvement
    Since third-party vendors supply many vision systems, GAMP emphasises assessing supplier quality and leveraging vendor documentation and testing when appropriate.
  5. Change Control and Maintenance
    Any updates to vision software or configuration must follow strict change control procedures, with revalidation as needed.

GAMP Category for Vision Systems

Vision systems typically fall under GAMP Category 4 or 5, depending on the degree of customisation:

  • Category 4: Configured software, where standard vision platforms are tailored via scripting or parameter setting.
  • Category 5: Custom software, often involving bespoke algorithms or AI that cannot be validated using standard test cases.

Understanding the category helps define the depth of validation required.

Best Practices for Vision System Validation and Deployment

  1. Early URS Definition
    Clearly define what the system should do—detection thresholds, false reject limits, image retention requirements, etc.
  2. Vendor Selection
    Choose vendors with experience in medical-grade vision systems and established quality management systems (QMS).
  3. FAT and SAT Testing
    Perform Factory Acceptance Testing (FAT) at the vendor’s site and Site Acceptance Testing (SAT) upon installation to confirm functionality.
  4. Simulated Defect Testing
    Use known-good and known-defect packs to test system sensitivity and false positive/negative rates.
  5. Audit Trails and Access Control
    Ensure systems log user activity and image data. Role-based access should prevent unauthorised changes.
  6. Periodic Review
    Implement a schedule for system health checks, calibration, and revalidation where necessary.

Conclusion

Vision systems have become indispensable in the inspection of medical device blister packs and blister strips, providing fast, accurate, and scalable quality control that meets the high standards of healthcare manufacturing. For blister strips in particular—used in powder-based inhalers—precision is critical, as any dimensional or content error could impair the device’s ability to deliver the correct dosage. The role of vision systems extends beyond quality assurance to ensuring that these strips interact properly with mechanical systems in drug delivery devices.

However, deploying these systems isn’t just a technical decision—it’s a regulatory commitment. Aligning implementation with GAMP validation principles ensures that vision systems operate reliably, remain compliant, and deliver measurable value throughout their lifecycle.

As technology continues to evolve, vision systems will only grow more capable, intelligent, and integral to the manufacturing process. For organisations that prioritise safety, efficiency, and compliance, investing in a validated vision inspection system is not optional—it’s essential.

Guarantee inhaler quality through automated vision inspection

The latest generation of inhalers are complex devices made up of many constituent parts. Each is moulded, machined, stamped or 3D printed through a whole series of steps prior to arrival at the final assembly of the full inhaler medical device. It's critical that these components arrive at final assembly 100% automatically inspected, and that the whole assembly is then checked as it runs through the manufacturing process. IVS has been involved with inhaler production for many years, keeping an eye on quality 24/7 with precision vision systems. The latest generation of IVS vision allows for the automated checking of even more complex inhaler products through the use of innovative optics, lighting and vision algorithms – increasing yield and guaranteeing quality.

One large multi-national medical device company tasked IVS with adapting vision technology to the specific application requirements of inhaler production. As the product is assembled, some aspects of the valve, body and o-ring need to be checked at speed to confirm adherence to tolerances and correct assembly, along with the ability to reject gross defects and particulates found in the device. This last element is especially critical in the pharmaceutical industry. In addition, all vision systems are validated to ISPE GAMP/FDA protocols to confirm adherence to all necessary regulations and approvals. Validated vision systems are critical for inhaler production.

The vision system consists of multiple vision cameras mounted around the dial plate to allow synchronous inspection of several elements of the build process. Having cameras at varying angles with specific lighting arrays allows inspection in difficult-to-access areas. Each camera must be triggered and processed, and results must be given back to the Programmable Logic Controller (PLC) within the two-and-a-half-second cycle time for the machine, running two up at a time. Connection to the PLC is through industrial ethernet, with image data and results being sent to the factory information system to allow overall shift information to be easily accessible to the production management.

Statistical process control traces are available on the main vision system monitor, mounted on the production machinery. At a glance, operators can see the trends from the production process, such as if the assembly build is slowly going out of control, and can take corrective action at speed. The IVS vision cameras become the “eyes” of the operators, allowing quick determination of process control. This allows a single operator to cover the operation and maintenance of a number of production processes, which previously would have been done by two or three operators.

IVS vision systems have proved to be a critical element in the production of inhalers, from the inspection of o-rings and inserts, to the complex assembly of the product. All the while, it guarantees that the quality of the medical device meets industry and production standards.

80 years of machine vision history explained: Key milestones from 1945 to 2025

Introduction

This month, we’re digging into how it all began. The current era of “Smart Manufacturing”, encompassing machine vision and vision systems as a technology, can be traced back to just after the Second World War. In this article, we’re going to see how the field has developed over the last 80 years, ultimately leading to the current trend of “real” AI machine vision.

Machine vision is a crucial subset of computer vision, primarily applied within industrial and manufacturing environments. It enables machines to visually inspect, analyse, and make decisions based on digital images—automating what once required human eyesight and judgement. As computing capabilities and imaging technologies have evolved, so has machine vision, transitioning from a research curiosity to a critical pillar of modern automation. We’ll explore the evolution of machine vision from its conceptual birth in the mid-20th century to its rapid advancements up to the present day in 2025.


Origins: Post-War Foundations (1945–1956)

The genesis of machine vision can be traced back to the broader development of artificial intelligence (AI) in the post-World War II era. The early foundations were laid not with industrial inspection in mind, but rather as part of philosophical and military inquiries into replicating human cognition and intelligence using emerging computer technologies.

Researchers began to depart from biological models like artificial neurons, instead focusing on how human thought could be more effectively emulated using modern digital computers (modern for the time!). During this phase, the concept of a machine “seeing” was not yet considered. The vision element would only emerge as AI matured.

Early AI and the Seeds of Vision (1956–1963)

A pivotal moment occurred in 1956 at the Dartmouth Conference, widely considered the birth of AI as a field. This event attracted pioneers from Carnegie Mellon, MIT, Stanford, and IBM, sparking an explosion of research.

Two major branches emerged—logical reasoning and pattern recognition. The latter would eventually evolve into machine vision. LISP, a programming language developed at MIT for AI applications, became instrumental in early image-based reasoning experiments, laying the groundwork for future vision systems.

The 1960s–1970s: First Visual Machines

Between 1963 and 1980, computer vision began taking form. At MIT, the “Blocks World” experiment marked one of the first serious efforts to enable computers to interpret real-world scenes. Cameras were used to detect blocks, and the system relayed information to a robotic arm—establishing an early link between image analysis and physical action.

David Marr, a British cognitive scientist working at MIT, became a central figure in vision theory during this era. His models of visual perception—emphasising how visual systems interpret depth, shape, and motion—remain foundational in vision science today.

The 1980s: The Birth of Machine Vision as an Industry

By the 1980s, machine vision had emerged as a discrete field, with image processing technologies being developed and real systems deployed in industrial contexts.

The XCON system, developed at Carnegie Mellon for Digital Equipment Corporation, automated configuration tasks and saved the company tens of millions. Meanwhile, Cognex (short for "Cognition Experts"), founded out of MIT in 1981, became one of the first commercial machine vision firms.

Key technological milestones included:
– Transition from LISP to C-based processing on standard microcomputers.
– Emergence of 8-bit greyscale image processing.
– First industrial cameras and plug-in image processing boards.
– The Windows OS revolution, starting with Windows 1.0 in 1985.

However, the industry suffered setbacks—such as the 1988 collapse of Machine Vision International—highlighting the volatility of early adoption.

The 1990s: Commercial Expansion

With the release of Windows 95 and widespread adoption of 32-bit operating systems, the 1990s saw explosive growth in machine vision.

Key trends:
– Broader adoption across electronics and semiconductor industries.
– Sharp decline in hardware costs, making machine vision accessible to smaller firms.
– Emergence of standalone “smart cameras” with onboard processing power.
– Expanding market with integration into quality control and robotics.

These advances allowed machine vision to leave the labs and integrate seamlessly into automated production lines, significantly boosting efficiency and quality assurance.

The Early 2000s: Consolidation and Interface Evolution

By the early 2000s, machine vision entered a phase of consolidation. Key developments included:
– Faster PC hardware and more user-friendly Windows interfaces.
– Migration from code-based libraries to graphical user interfaces (GUIs).
– Ergonomic designs for easy factory-floor integration.
– New digital interfaces like FireWire (IEEE 1394), allowing high-speed image transfer without frame grabbers.

The market grew steadily, supported by increasing demand for consistent quality and traceability across manufacturing sectors.

By 2004, the global machine vision market was valued at approximately $500 million, with projections approaching $1 billion over the following decade. However, proprietary systems, capital costs, and the need for highly skilled developers were noted as barriers to broader adoption.

The 2000s–2010s: Smart Systems and AI Re-emerge

From 2004 onward, the machine vision landscape evolved significantly with the reintroduction of AI—this time powered by vast computational advances, cheaper sensors, and big data.

Key milestones:
– Smart cameras became ubiquitous, offering embedded computing and high-resolution imaging in compact formats.
– Machine learning began to enhance image classification, particularly in pattern recognition tasks such as defect detection or surface inspection.
– The rise of GigE Vision, USB 3.0, and CoaXPress allowed faster, longer-distance data transmission between cameras and computers.
– Adoption across non-industrial sectors: medical imaging, agriculture (crop inspection), pharmaceuticals, and automotive (driver assistance systems).
– Growth of 3D vision systems and multi-camera stereo setups for robotics and logistics.

2010s–2020s: Deep Learning and Edge Computing

The last decade brought a seismic shift: deep learning revolutionised machine vision capabilities.

Highlights:
– Convolutional Neural Networks (CNNs) proved highly effective at visual classification tasks, outperforming traditional rule-based algorithms.
– Vision systems began to learn from data rather than being explicitly programmed.
– Open-source frameworks like TensorFlow and PyTorch made it easier to train and deploy custom models.
– Edge computing allowed real-time vision processing on-device, reducing latency and eliminating the need for centralised servers.
– Vision systems became integral to Industry 4.0 (and then Industry 5.0), enabling closed-loop quality control, predictive maintenance, and real-time decision-making.
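To make the CNN idea above concrete, the sketch below (illustrative Python/NumPy, not tied to any particular framework) applies a single hand-crafted convolution kernel to a tiny image. A CNN layer performs exactly this sliding-window operation, except that its kernel weights are learned from labelled images rather than designed by hand.

```python
import numpy as np

def conv2d(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Valid-mode 2-D cross-correlation - the core operation a CNN layer repeats.
    In a trained network the kernel weights come from data, not a designer."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A hand-crafted vertical-edge kernel fires only where intensity changes left to right
edge_kernel = np.array([[-1.0, 1.0]])
step_image = np.array([[0.0, 0.0, 1.0, 1.0]] * 4)  # dark left half, bright right half
response = conv2d(step_image, edge_kernel)          # peaks at the dark-to-bright edge
```

A rule-based system would need that kernel (and every other feature detector) specified by an engineer; a deep network stacks thousands of such kernels and fits them to examples.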

2020s to 2025: The Age of Autonomous and Context-Aware Vision

In the 2020s, machine vision is no longer just a tool: it is a strategic enabler for autonomous systems and context-aware intelligence.

Trends defining this era:
– AI-on-the-edge vision systems handle complex decision-making in real-time.
– Integration with robotic arms, drones, and AMRs (Autonomous Mobile Robots) in warehouses and factories.
– Adoption of hyperspectral and thermal imaging in areas such as agriculture, recycling, and security.
– Synthetic data and simulation environments like NVIDIA Omniverse train models without real-world images, solving the data scarcity issue.
– Continued miniaturisation and energy-efficiency improvements make machine vision viable in wearable devices and IoT sensors.
– 5G and cloud-native architectures enable distributed vision networks—real-time data sharing between cameras and AI across facilities.

Today, machine vision is embedded in almost every smart manufacturing production line, from inspecting microelectronics to identifying anomalies in real-time during automotive manufacturing. It plays a vital role in ESG compliance (e.g., reducing waste), safety (e.g., human-machine interaction monitoring), and even creative applications like art restoration and retail analytics.

2025 onwards: Modern AI machine vision

Modern machine vision AI systems can be integrated across a wide range of industrial technologies. They offer seamless compatibility with industrial cameras (both 2D and 3D), programmable logic controllers (PLCs), robotic systems, enterprise resource planning (ERP) platforms, and cloud infrastructure. This level of integration enables manufacturers to automate and manage data across every stage of production with greater efficiency and control.

A key feature of these platforms is their no-code architecture, which allows manufacturers to develop highly accurate, domain-specific vision models without the need for programming expertise. By building on robust foundational models, users can quickly adapt systems to detect, inspect, or analyse product features tailored to specific use cases or production needs. This is the promise of AI machine vision.

Flexibility is central to the design of these platforms. Manufacturers can customise entire workflows to match the demands of their operations. This includes setting up vision systems for robotic arms or fixed cameras, managing stationary or free-moving products, adjusting AI models for different product variants, configuring trigger mechanisms, and defining output responses and integrations with wider business systems.

In addition, many modern platforms include built-in tools for creating custom dashboards and analytics. These features support tasks such as root cause analysis, process optimisation, and end-to-end traceability — all essential for maintaining quality and improving decision-making on the factory floor.

By offering such adaptability and ease of use, machine vision AI platforms are playing a growing role in modernising manufacturing, helping organisations boost productivity, enhance product quality, and stay competitive in an increasingly data-driven industry.

Conclusion

Machine vision has journeyed from a theoretical curiosity within AI to an indispensable technology across modern industry. Its evolution has mirrored, and often driven, developments in computing, optics, and artificial intelligence.

From the 1940s AI labs to 2025’s edge-intelligent robotic systems, machine vision has constantly adapted to meet the growing demands of efficiency, precision, and intelligence. The next frontier may include general-purpose visual intelligence, capable of operating across dynamic, unstructured environments—ushering in truly adaptable visual machines.

As we look ahead, the convergence of quantum computing, neuromorphic processors, and further AI integration promises to keep machine vision at the forefront of technological innovation.

Machine vision lighting – Proven techniques to boost inspection accuracy and performance

This month, we are discussing one of the main, and sometimes underutilised, elements of vision systems: lighting. Lighting is perhaps the single most important aspect of machine vision quality control to get right, because it is what enables the camera to see the necessary detail. Poor lighting is one of the major causes of failed machine vision installations and of poor yield performance, so determining the correct lighting approach should be a top priority for any machine vision application. While the latest generation of AI deep learning vision systems can compensate to some extent for fluctuations in lighting, nothing replaces lighting a product accurately so that defective areas are visible to the camera. Getting these fundamentals right makes the job of the image processing software much more manageable.

For every vision quality control application, there are common lighting goals:

  1. Maximising feature contrast of the part or object to be inspected.
  2. Minimising contrast on features not of interest.
  3. Removing distractions and variations to achieve consistency.

In this respect, the positioning and type of lighting are key to maximising the contrast of inspected features and minimising everything else. Think of how a manual operator would inspect a part for quality. Typically, they would turn it in front of the light while moving their head to see how the light reflects off specific areas, slowly covering the whole part or the necessary detail. Sometimes they will even hold the part up to the light to create a background against which contamination or features stand out. A human does all this instinctively, but a digital vision system has to replicate it somehow – and at high speed!
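The first lighting goal, maximising feature contrast, can be made measurable. The snippet below is a minimal illustration (plain Python, with hypothetical grey-level values) of a mean-intensity contrast score between a feature region and its background; real systems use richer measures, but the principle is the same.

```python
def feature_contrast(feature, background):
    """Mean-intensity contrast between a feature region and its background,
    in the range 0..1. Higher contrast makes segmentation more reliable."""
    f = sum(feature) / len(feature)
    b = sum(background) / len(background)
    return abs(f - b) / (f + b) if (f + b) else 0.0

# Hypothetical 8-bit grey levels sampled from two lighting setups
well_lit = feature_contrast([200.0] * 64, [30.0] * 64)     # bright feature, dark background
poorly_lit = feature_contrast([120.0] * 64, [100.0] * 64)  # feature barely separated
```

A threshold-based inspection step that works comfortably at the first score will fail intermittently at the second, which is exactly the yield problem poor lighting causes.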

Lighting position

The first thing to consider is the lighting position. The position of the lighting will determine the brightness, shadowing, contrast, and what element of light will be seen by the camera system. The overall classification of machine vision lighting positions can be seen below.

• Front illumination – the light is positioned on the same side as the camera/optics. Useful for inspecting defects or features on the surface of an object.
• Back illumination – the object being illuminated blocks light on its way to the camera. Useful for highlighting the object silhouette, measuring outline shape and detecting the presence/absence of holes or gaps, as well as enhancing defects on transparent objects.

Types of machine vision lighting

Within machine vision, the two most common types of lighting used are fibre optics and LED. Fibre optics allows the lighting source to be positioned away from the scene, with a fibre optic cable delivering the light to the vision system scene. This can also allow a higher lux light level to be concentrated into a smaller area.

Fibre optic – delivers light from a halogen, tungsten-halogen or xenon source. Bright, shapeable and focused.
LED lights – by far the most commonly used, because they offer:

  • High speed
  • Pulsing – switched on and off in sequence, used only when necessary
  • Strobing – useful for freezing the motion of fast-moving objects and for eliminating ambient light influence
  • Flexible positioning in a variety of shapes, directional or diffused
  • Long lifetime
  • High, stable output
  • Colours – available in visible and non-visible wavelengths

Dome lights
  • Designed to provide illumination from virtually any direction
  • Avoid reflections and provide even light when inspecting surface features on an object with complex curved geometry
Telecentric lighting
  • Accurate edge detection and defect analysis through silhouette imaging
  • Well suited to reflective objects
  • High accuracy and precise measurement
Diffused light
  • Creates contrast to lighten or darken object features
  • Avoids uneven illumination on reflective surfaces
Direct light
  • Delivers lighting on the same optical path as the camera

Other key elements to consider when designing lighting for a machine vision environment include the direction, brightness and colour/wavelength compared to the target’s colour.

Lighting angle/direction

The position and angle of the LEDs have a direct effect on the ability of the vision system camera to see contrast in certain areas. This allows lighting setups to be designed to provide the optimum lighting conditions to highlight key areas of the product. This makes the image processing of the machine vision system image more consistent and makes it easier to provide a robust solution. There are several different techniques to think about in this area.

Bright field – light reflected by the object being illuminated falls inside the optics/camera range.
  • Brightfield front lighting – light reflected by the flat object being illuminated is collected by the optics/camera. Produces dark characteristics on a bright background.
  • Brightfield back lighting – light is either stopped or, if the material is transparent or translucent, transmitted through the surface.
Dark field – light reflected by the object being illuminated falls outside the optics/camera range.
  • Dark field front lighting – reflected light is not collected by the optics/camera. Enhances the non-planar features of the surface as brighter characteristics on a dark background.
  • Dark field back lighting – light transmitted by the object and scattered by non-flat features is enhanced as bright on a dark background.
Co-axial illumination – the front light is positioned to hit the object surface at 90°. It can additionally be collimated so that rays are parallel to the optics/camera axis, which is useful for achieving high contrast when searching for defects on highly reflective surfaces.
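The bright field versus dark field distinction reduces to simple geometry. The toy function below (an illustrative sketch, not production code) captures the rule for a flat, mirror-like surface viewed along its normal: the surface images bright only when the specular reflection of the light source falls inside the camera's acceptance cone.

```python
def surface_appears_bright(light_angle_deg: float, camera_acceptance_deg: float) -> bool:
    """For a flat mirror-like surface with the camera on the surface normal,
    light arriving at `light_angle_deg` from the normal reflects at the same
    angle on the opposite side. If that reflected ray falls within the camera's
    acceptance cone the surface images bright (bright field); otherwise it
    images dark (dark field), and only scratches or raised features scatter
    light back into the lens."""
    return abs(light_angle_deg) <= camera_acceptance_deg
```

For example, a ring light at 10° off the camera axis with a 15° acceptance cone gives bright field; move the same light out to 45° and the flat surface goes dark, leaving defects as the only bright features.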

Colour/wavelength

Another aspect to think about when designing machine vision lighting is the filters which can be deployed on the front of the lens. These typically screw onto the front of the lens. They can limit the transmission of specific wavelengths of light, allowing the image to be changed to suit a particular condition that is being identified in the machine vision process.

Filters
  • Create contrast to lighten or darken object features.
  • Like-colour filters lighten (e.g. red light makes red features brighter) and opposite-colour filters darken (e.g. red light makes green features darker).
  • Polarisers are filters used to reduce glare or hot spots. They enhance contrast so that entire objects can be recognised.
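The like-colour/opposite-colour rule can be demonstrated in a few lines of Python. The sketch below (a toy model with made-up reflectance values) treats a monochrome camera's grey level as the channel-by-channel product of surface reflectance and illumination colour.

```python
def mono_response(reflectance_rgb, light_rgb):
    """Grey level a monochrome camera records: surface reflectance weighted
    channel-by-channel by the illumination colour."""
    return sum(r * l for r, l in zip(reflectance_rgb, light_rgb))

red_feature   = (0.9, 0.1, 0.1)   # reflects mostly red
green_feature = (0.1, 0.9, 0.1)   # reflects mostly green
red_light     = (1.0, 0.0, 0.0)
white_light   = (1.0, 1.0, 1.0)

# Under red light: red feature bright (0.9), green feature dark (0.1) - strong contrast.
# Under white light: both record 1.1 - the contrast between them vanishes.
```

The same arithmetic applies whether the colour selection is done by the light source or by a filter on the lens.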

Of course, the parameters outlined in the preceding tables are used in varying combinations and are key to achieving the optimal lighting solution. However, we also need to consider the immediate inspection environment. In this respect, the choice of effective lighting solutions can be compromised by access to the part or object to be inspected. Ambient lighting such as factory lights or sunlight can also have a significant impact on the quality and reliability of inspection and must be factored into the ultimate solution.

Finally, the interaction between the lighting and the object to be inspected must be considered. The object’s shape, composition, geometry, reflectivity, topography and colour will all help determine how light is reflected to the camera and the subsequent impact on image acquisition, processing, and measurement.

Due to the obvious complexities, there is often no substitute for testing various techniques and solutions. It is imperative to get the best lighting solution in place before starting development of the image processing requirements in any machine vision quality control application.

The future of AI-powered vision systems in medical device and pharmaceutical inspection

In the highly regulated fields of medical device and pharmaceutical manufacturing, quality control comes first. It is not only a question of efficiency; patient safety depends on every product meeting strict industry requirements. Traditional inspection techniques, long the backbone of quality control in this industry, depend on human monitoring and standard rule-based vision systems. But as production volumes rise and manufacturing processes become more complex, the limits of traditional methods are clear. The integration of artificial intelligence into machine vision systems is changing the inspection landscape, allowing more precise, flexible and effective quality control.

Vision systems driven by artificial intelligence are significantly altering how producers handle production efficiency, compliance verification and defect detection. Unlike rule-based systems, which run on pre-programmed criteria to find flaws, artificial intelligence models, especially those using deep learning, can spot minute variations in products that previously went undetected. This is extremely helpful in the medical and pharmaceutical sectors, where variances can be tiny yet still significant enough to affect product integrity.

Detecting discrepancies on high-volume manufacturing lines is one of the domains where AI-powered vision systems shine. Medical devices including syringes, catheters and surgical tools must be made to exacting standards; even small flaws can affect their efficacy. Conventional machine vision systems must be extensively programmed by hand to distinguish between good and bad products. By training on large datasets of images, artificial intelligence models can learn to accept normal deviations while flagging abnormalities that might escape a rule-based approach. This greatly lowers false positives and false negatives, ensuring that only genuinely faulty goods are removed from the supply chain.
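The principle of learning acceptable variation from examples can be illustrated with a deliberately simple statistical stand-in for a deep learning model. The sketch below (hypothetical feature values; a real system would operate on learned image features, not two hand-picked measurements) fits the spread of known-good parts and flags anything too many standard deviations away.

```python
from statistics import mean, pstdev

class GoodPartModel:
    """Toy stand-in for a learned model: fit the spread of known-good feature
    vectors, then flag anything more than k standard deviations from the mean."""

    def __init__(self, k: float = 3.0):
        self.k = k

    def fit(self, good_samples):
        cols = list(zip(*good_samples))
        self.mu = [mean(c) for c in cols]
        self.sigma = [pstdev(c) or 1e-9 for c in cols]  # guard against zero spread
        return self

    def is_defect(self, features) -> bool:
        return any(abs(x - m) / s > self.k
                   for x, m, s in zip(features, self.mu, self.sigma))

# Hypothetical measurements (e.g. seal width and print height, mm) from good parts
model = GoodPartModel().fit([[5.00, 2.00], [5.10, 2.10], [4.90, 1.90],
                             [5.05, 1.95], [4.95, 2.05]])
```

Because the acceptance band is learned from real good parts rather than fixed by a programmer, normal process variation passes while genuine outliers are rejected, which is the mechanism behind the reduction in false positives and negatives described above.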

Pharmaceutical packaging and labelling is another area where AI-enhanced vision technologies are proving essential. Regulatory compliance depends critically on batch code verification, expiry date printing and label placement precision. Standard OCR (optical character recognition) techniques, although efficient, can struggle with variations in font clarity, small distortions or packaging material. AI models trained on thousands of packaging samples can learn to spot mistakes with far greater reliability, even in demanding conditions such as curved surfaces or low-contrast printing. The self-learning nature of these systems also helps them adapt to new materials and packaging designs without significant reprogramming.
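Downstream of the OCR stage, verification itself is straightforward rule logic. The sketch below (with an entirely hypothetical batch-code format; real formats are customer-specific) shows how a decoded batch code and expiry date might be validated once the AI/OCR stage has read them off the label.

```python
import re
from datetime import date

# Hypothetical label rule for illustration: "LOT" followed by six alphanumerics
BATCH_RE = re.compile(r"^LOT[A-Z0-9]{6}$")

def verify_label(batch_code: str, expiry_iso: str, today: date) -> bool:
    """Validate an OCR-decoded batch code against the expected pattern and
    confirm the printed expiry date (ISO format) lies in the future."""
    if not BATCH_RE.fullmatch(batch_code):
        return False
    try:
        expiry = date.fromisoformat(expiry_iso)
    except ValueError:  # OCR misread produced an unparseable date
        return False
    return expiry > today
```

Splitting the task this way keeps the hard perceptual problem (reading distorted print) with the AI model, while the compliance decision remains a deterministic, auditable rule.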

The sterility and integrity of medical packaging is another area where AI-driven visual inspection is making great progress. Seal validation for blister packs, vials and ampoules has long relied on a combination of human inspectors and static rule-based visual checks. These techniques can, however, struggle with borderline cases such as minor punctures that compromise sterility or slightly uneven heat sealing. Combining AI-based vision inspection with hyperspectral imaging or X-ray analysis helps identify hidden flaws missed by conventional techniques, preventing contaminated goods from being released into use. This is especially important in the pharmaceutical industry, where even minor packaging flaws can pose contamination hazards.

A further benefit is the capacity of AI-powered vision systems to improve traceability and compliance with strict regulatory criteria. Manufacturers of medical devices must follow international guidelines including EU MDR, ISO 13485 and FDA 21 CFR Part 820. AI vision systems can integrate with manufacturing execution systems (MES) and enterprise resource planning (ERP) platforms to generate real-time inspection data, guaranteeing thorough documentation of quality control processes. This degree of traceability is invaluable for recalls and regulatory audits, since it lets producers monitor and isolate faulty batches with unprecedented speed and accuracy.
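The traceability data such systems feed to MES/ERP platforms is, at its simplest, one structured record per inspection event. A minimal sketch (the field names are illustrative, not a standard schema):

```python
import json
from datetime import datetime, timezone

def inspection_record(station: str, batch: str, passed: bool, defects: list) -> str:
    """Serialise one inspection event as JSON for an MES/ERP audit trail.
    Field names are illustrative; real schemas are defined per integration."""
    return json.dumps({
        "station": station,
        "batch": batch,
        "result": "PASS" if passed else "FAIL",
        "defects": defects,
        "timestamp": datetime.now(timezone.utc).isoformat(),  # UTC for auditability
    })
```

Emitting a timestamped record for every unit, pass or fail, is what makes it possible to isolate an affected batch during a recall rather than quarantining whole production runs.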

AI-driven visual inspection is also being used to lower industrial waste and increase productivity. The conservative defect thresholds of traditional inspection techniques can cause excessive rejection rates. With their capacity to recognise subtle differences in product quality, artificial intelligence models can distinguish between actual flaws and benign deviations. Over time, this reduces unnecessary waste without sacrificing safety criteria, delivering significant cost savings. Moreover, AI-driven vision systems work continuously without fatigue, enabling consistent inspection quality across long production runs.

One of the difficulties that AI-based vision systems present in pharmaceutical and medical production is the need for extensive training data. Unlike rule-based systems, which can be deployed with predefined criteria, artificial intelligence models need large volumes of high-quality images for training. Ensuring robust model performance means curating varied datasets that cover changes in illumination, viewing angles and material properties. Synthetic data generation and transfer learning, however, are addressing these issues and making AI implementation viable even for companies with limited access to large datasets.

Another factor is integration with existing manufacturing infrastructure. Many manufacturing plants run legacy machinery that was never designed to accommodate AI-driven vision systems. Modern AI solutions, however, can be deployed as modular upgrades that complement existing equipment and extend its capability rather than replacing it entirely. Cloud-based AI vision systems are also emerging as a good alternative, letting companies access powerful computing capacity without large on-site hardware investments.

As AI-powered vision systems develop, their potential uses in pharmaceutical and medical production will only grow. The next generation of vision technology will likely include real-time adaptive learning, so that systems can dynamically update their defect detection models as they process fresh input. Furthermore, AI-driven predictive analytics can help companies foresee potential quality problems before they materialise, transforming quality management from a reactive to a proactive discipline.

The adoption of AI-driven visual inspection in the medical and pharmaceutical industries marks a fundamental change in quality control strategy. By raising accuracy, cutting waste, boosting compliance and improving efficiency, AI vision systems are redefining the sector. Although data requirements and implementation still present difficulties, the long-term advantages far exceed the initial outlay. For manufacturers trying to satisfy the highest standards of quality and safety in an increasingly complex regulatory environment, the technology will prove an invaluable tool as it matures.

The Ultimate Guide to Operator Guidance Systems

In today’s rapidly evolving manufacturing landscape, the role of operator guidance systems has taken centre stage. As businesses strive to enhance productivity, minimise errors, and maintain workforce engagement, innovative technologies such as augmented reality (AR) and digital work instructions are transforming the way operators interact with their tasks. This guide explores the pivotal elements of operator guidance, addressing common challenges and offering actionable insights to optimise manufacturing processes.

1. Understanding Operator Guidance

Operator guidance refers to the methods and tools designed to assist workers in executing their tasks with precision and efficiency. Whether through augmented reality, vision systems, digital instructions, or automated feedback systems, these tools aim to:

• Simplify complex tasks
• Reduce errors and rework
• Shorten training durations
• Enhance workforce versatility

The integration of such systems into manufacturing environments transforms workstations into interactive, digital ecosystems that support operators in real time. By offering real-time feedback, operators are less likely to make errors, ensuring consistency and quality throughout the manufacturing process. It also allows new operators to understand a task immediately.

Additionally, operator guidance systems improve communication on the shop floor. By aligning visual aids, auditory signals, and data analytics, they ensure that workers have access to clear and concise information. This holistic approach boosts efficiency and fosters a culture of continuous improvement.

2. The Shift Toward Digital Work Instructions

Digital work instructions have emerged as a game-changer in manufacturing, replacing traditional paper manuals with dynamic, interactive platforms. The benefits include:

Cost Efficiency: Eliminating printing and distributing manuals saves resources while promoting sustainability.
Real-Time Updates: Digital platforms ensure operators have access to the latest procedures, reducing the risk of outdated information causing errors.
Interactive Learning: Multimedia elements such as videos and interactive checkpoints enhance comprehension and retention, shorten training times, and improve task accuracy.
Streamlined Processes: Instructions tailored to specific tasks or product variants eliminate the need for manual searches, delivering the right information at the right time.

Companies adopting digital work instructions also benefit from seamless integration with Manufacturing Execution Systems (MES) and Enterprise Resource Planning (ERP) systems. This integration enables data-driven insights for continuous improvement and fosters transparency and adaptability, ensuring production goals are consistently met.

Digital work instructions are particularly valuable in high-mix, low-volume production environments, where the ability to adapt quickly to new product lines is critical. The ability to easily modify and distribute updated instructions enhances operational flexibility and reduces downtime.
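Delivering the right instruction for the right product variant is, at heart, a keyed lookup against a central instruction store. A minimal sketch (the product names and steps are invented for illustration):

```python
# Hypothetical instruction store keyed by (product variant, step number);
# in practice this would come from an MES or document management system.
INSTRUCTIONS = {
    ("valve-A", 1): "Fit the standard seal ring, groove facing up",
    ("valve-A", 2): "Torque the cap to 2.5 Nm",
    ("valve-B", 1): "Fit the PTFE seal ring, flat side up",
}

def next_instruction(variant: str, step: int) -> str:
    """Deliver the instruction for this variant and step, with a safe fallback
    rather than leaving the operator to guess."""
    return INSTRUCTIONS.get((variant, step),
                            "No instruction defined - call a supervisor")
```

Because the store is central, updating one entry updates every workstation at once, which is what eliminates the outdated-paper-manual problem described above.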

3. Overcoming Challenges in Operator Guidance

3.1 Finding and Retaining Skilled Labour

The global shortage of skilled operators is a significant challenge for manufacturers. High turnover rates, coupled with extended training periods, can disrupt operations. Implementing operator guidance systems minimises the impact of these challenges by:

• Reducing the dependency on pre-existing expertise
• Accelerating onboarding with intuitive training modules
• Enabling lower-skilled workers to perform complex tasks effectively

By incorporating gamification elements into training modules, manufacturers can further engage employees. This approach improves learning outcomes and increases job satisfaction, reducing turnover rates. Guidance systems can also present work instructions in each operator's own language automatically.

3.2 Adapting to Increased Demand

Fluctuations in customer demand often stress production capacities. Flexible guidance systems help manufacturers scale operations by:

• Training temporary workers swiftly
• Standardising processes to ensure consistent quality
• Allowing real-time adjustments based on workload variations

Advanced analytics integrated into guidance systems allow managers to predict and prepare for demand surges, optimising resource allocation.

3.3 Addressing Operator Resistance

Operator scepticism can hinder the adoption of new technologies. Concerns about micromanagement and loss of autonomy are common. To counter this, systems are designed with operator-centric features such as:

• Experience Level Functionality: Customisable instruction intensity based on operator proficiency
• Freedom in Task Execution: Allowing operators to perform tasks in their preferred sequence where feasible
• Error Prevention as Support: Real-time feedback focuses on preventing mistakes rather than penalising them

Involving operators in designing and implementing guidance systems can also mitigate resistance. By incorporating user feedback, organisations can ensure that the systems are perceived as tools for empowerment rather than oversight.
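Experience-level functionality can be as simple as rendering the same step at different levels of detail. A hypothetical sketch (the step content and level names are invented):

```python
# One step, authored once, rendered differently per operator proficiency
STEP = {
    "summary": "Fit connector J3",
    "detail": "Orient the red wire towards the board edge; press until it clicks",
}

def render_step(step: dict, operator_level: str) -> str:
    """Experienced operators get a terse prompt; everyone else gets the full
    walkthrough. Levels here are illustrative."""
    if operator_level == "expert":
        return step["summary"]
    return f"{step['summary']}: {step['detail']}"
```

Keeping a single authored step and varying only its presentation means the instruction content never forks between skill levels.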

4. The Role of Augmented Reality in Operator Guidance

Augmented reality (AR) enhances operator guidance by overlaying digital information directly onto physical workspaces. This immersive approach offers:

Real-Time Visual Feedback: Operators receive step-by-step instructions projected onto their workbench, ensuring clarity and precision.
Reduced Cognitive Load: By focusing on one task at a time, operators can work more efficiently without memorising sequences.
Error Prevention: AR systems, integrated with sensors and smart tools, provide immediate warnings if a task is performed incorrectly.

Types of AR Implementation:

Wearables: AR glasses and headsets provide hands-free access to instructions but may cause discomfort during extended use.
Handheld Devices: Tablets offer portability but interrupt workflow when operators need both hands.
Projected AR: Systems that project instructions directly onto workspaces offer ergonomic, non-intrusive solutions ideal for industrial settings. This approach is particularly valued for its standalone nature, eliminating reliance on battery power and enhancing safety.

Projected AR also offers significant scalability, making it suitable for both small-scale operations and large industrial setups. Its ability to integrate seamlessly with existing tools and workflows ensures minimal disruption during implementation.

5. Benefits of Implementing Operator Guidance Systems

5.1 Enhanced Training
Guidance systems can reduce training durations by up to 50%, enabling temporary and new workers to become productive faster. Interactive features foster skill acquisition and confidence, while visual tools accelerate learning and reduce dependency on senior operators.

By incorporating real-time feedback mechanisms, these systems create a continuous learning environment. Operators can identify and correct errors on the spot, enhancing their overall proficiency over time.

5.2 Improved Quality and Efficiency
Standardised processes ensure that every operator performs tasks uniformly, minimising errors and waste. Systems that validate task completion further enhance quality assurance. By minimising rework and reducing scrap, these systems help manufacturers maintain competitive advantages.

Data from guidance systems can also be used to implement continuous improvement initiatives. By analysing patterns in errors and inefficiencies, managers can identify opportunities for process optimisation.

5.3 Adaptability and Flexibility
With customisable instructions, operators can seamlessly switch between tasks or product variants. This flexibility supports high-mix, low-volume production environments. The ability to integrate temporary or seasonal workers smoothly ensures operational continuity.

Moreover, guidance systems support cross-training initiatives, enabling workers to develop skills across multiple roles. This versatility enhances workforce agility and resilience.

5.4 Data-Driven Insights
Embedded analytics identify bottlenecks and error-prone steps, providing actionable data to optimise workflows and improve productivity. Reports generated from these systems allow supervisors to target specific areas for improvement, enhancing overall operational efficiency.

The integration of IoT devices further enhances data collection, offering deeper insights into machine performance and operator interactions. This holistic view enables more informed decision-making.
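Identifying error-prone steps from inspection events is a simple aggregation. The sketch below (with invented event data) ranks steps by failure count, the kind of bottleneck report embedded analytics might produce:

```python
from collections import Counter

def error_prone_steps(events, top_n: int = 3):
    """Rank assembly steps by failure count from (step_id, passed) event
    tuples, most failures first."""
    failures = Counter(step for step, passed in events if not passed)
    return failures.most_common(top_n)

# Hypothetical event log: (step_id, passed)
log = [(1, True), (2, False), (2, False), (3, False), (1, True)]
report = error_prone_steps(log)  # step 2 tops the list with two failures
```

A supervisor reading this report knows step 2 deserves attention first, turning raw pass/fail events into a concrete improvement target.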

6. Empowering Operators: A Broader Perspective

Instead of focusing on a single case, the real power of operator guidance systems lies in their adaptability across diverse industries. Whether it is automotive manufacturing, electronics assembly, or medical device assembly, these systems are engineered to overcome common challenges such as complex task sequences and quality control. By integrating features like real-time feedback, customisable training modules, and seamless MES/ERP connectivity, businesses of all sizes can achieve:

• Up to 50% faster onboarding of new staff
• Streamlined multi-variant production processes
• A significant reduction in waste and rework
• Enhanced adaptability for workforce changes

This versatility highlights the universal benefits of operator guidance systems, making them indispensable tools for any forward-thinking manufacturing operation.

The ability to customise systems based on industry-specific requirements ensures that organisations can address their unique challenges effectively. From ensuring compliance in regulated industries to optimising throughput in high-volume sectors, these systems deliver measurable value.

7. The Future of Operator Guidance

As Industry 4.0 continues to evolve, operator guidance systems will play an even more critical role in bridging human expertise with advanced automation. Emerging trends include:

• AI-Driven Insights: Predictive analytics to pre-empt errors and optimise training
• IoT Integration: Seamless communication between tools, sensors, and guidance platforms
• Personalised Experiences: Tailoring guidance to individual operator preferences and skill levels
• Eco-Conscious Manufacturing: Incorporating sustainable practices through optimised resource management

By aligning with these trends, operator guidance systems will continue to serve as a cornerstone of modern manufacturing strategies. The focus on sustainability and worker well-being ensures that these systems contribute to broader organisational goals beyond operational efficiency.

Conclusion

Operator guidance with vision systems represents a pivotal advancement in manufacturing, addressing challenges from labour shortages to quality assurance. By leveraging technologies like augmented reality and digital work instructions, manufacturers can empower their workforce, reduce costs, and achieve unparalleled operational excellence. Embracing these systems is not just a step forward—it is an essential leap into the future of manufacturing. These systems’ adaptability, efficiency, and support for human-centric operations make them indispensable for achieving long-term success in a competitive global manufacturing market.

More details on IVS Operator Guidance Systems can be found here: www.industrialvision.co.uk/products/operator-guidance-platform

How automated vision technology ensures perfect pills and tablets

When you pick up a pill or tablet, it may appear small and modest, but have you ever considered how much precision goes into making it perfect? In the pharmaceutical industry, the quality of pills and tablets is determined by more than their chemical content. These tiny products must also meet demanding size, shape, weight, and appearance requirements. A single flaw or defect can jeopardise not just the product’s integrity but also consumer trust, so quality control is a major responsibility.

Automated vision technology plays a critical role in pharmaceutical manufacturing, ensuring precision and compliance throughout the entire process. These systems deliver innovative solutions that ensure each pill meets stringent industry standards and regulatory criteria. From detecting cosmetic faults to complying with tight MHRA, FDA, and GxP requirements, advanced vision systems bring precision, speed, and dependability to an otherwise daunting process.

Why Does Perfection Matter in Pharmaceutical Manufacturing?
In the pharmaceutical sector, flaws are more than just an aesthetic issue. A tablet with obvious flaws, such as black spots, cracks, or discolouration, may raise concerns about its quality and safety. Furthermore, any deviation from the specified dimensions, weight, or appearance may affect the medication’s efficacy or raise regulatory concerns.

Pharmaceutical firms must comply with strict regulatory frameworks such as MHRA requirements, GAMP guidelines, and GxP regulations. Automated vision systems help meet these standards by performing rapid, comprehensive inspections, reducing human error, and ensuring consistent quality control. Manual inspection is simply not feasible on production lines running at high speed, so automation is an integral part of the process.

Tablet and Pill Inspection: A Complicated Process
Pills, tablets, and capsules come in all shapes and sizes. Tablets typically measure 5-12mm in diameter and 2-8mm thick, with curved tablets reaching lengths of up to 21mm. Capsules, on the other hand, are classified according to their size, which typically ranges from size 0 (biggest) to size 5. Regardless of these variations, producers must detect cosmetic flaws down to the micron level—a feat well beyond the human eye’s capabilities.

Some frequent defect categories that vision systems must recognise are:

  • Surface problems include dirt, foreign particles, scratches, cracks (lengthwise and diagonal), splits, and holes.
  • Anomalies in shape include dents, bends, double capping, and collapsed tablets.
  • Colour discrepancies, including discolouration and uneven finishes.

How Does Automated Vision Technology Work?
The inspection procedure starts with a high-speed feeding mechanism. Pills are carefully transferred from hoppers or bowls into the inspection channel and positioned for optimal viewing by the vision system. To guarantee both sides of the pill are inspected, modern systems frequently use a two-step process:
  • First Side Inspection: A vacuum dial plate holds the pill in place while cameras capture images of the top surface.
  • Second Side Inspection: The pill is transferred to a second plate and held securely by vacuum while cameras analyse the bottom.
This approach ensures that each tablet is fully scrutinised with no blind spots.

The Magic of Multi-Camera Vision Systems
The true value of automated vision technology comes from its capacity to deliver a 360-degree view of each tablet or pill. Multi-camera modules, frequently coupled with custom optics and mirrors, enable thorough inspection. By strategically positioning cameras and mirrors, manufacturers can capture images of all sides and angles of the pill, resulting in eight distinct perspectives of a single product.

These photos are evaluated by advanced algorithms that detect even the smallest imperfections, guaranteeing that only faultless products make it into packaging. The system’s capacity to eliminate blind spots is vital for performing the high-speed, high-accuracy inspections demanded by current pharmaceutical manufacturing.
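The accept/reject logic behind such a multi-view inspection can be sketched in a few lines. This is an illustrative sketch only (the function name and defect labels are hypothetical, not an IVS API): a real system runs image-processing algorithms per view, but the decision rule is the same — a part passes only if every one of its views is defect-free.

```python
# Hypothetical sketch: combine verdicts from eight camera views into a
# single accept/reject decision. Each view reports the defects it found;
# an empty list means that view saw no faults.

def inspect_tablet(view_defects: list[list[str]]) -> tuple[bool, list[str]]:
    """Accept a tablet only if every camera view is defect-free."""
    all_defects = [d for view in view_defects for d in view]
    return (len(all_defects) == 0, all_defects)

# Eight views: a crack visible from view 3 only is still a reject --
# eliminating blind spots means any single view can fail the part.
views = [[] for _ in range(8)]
views[3] = ["crack"]
accepted, defects = inspect_tablet(views)
print(accepted, defects)  # False ['crack']
```

The design point is that the views are OR-ed for defects, never averaged: one bad angle is enough to divert the tablet before packaging.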

Beyond Detection: The advantages of vision systems
Automated vision systems offer distinct benefits that correspond to the pharmaceutical industry’s rigorous requirements:

  • High-Speed Throughput: Vision systems can inspect hundreds of pills per minute without sacrificing accuracy, keeping manufacturing lines running at optimal efficiency.
  • Micron-Level Accuracy: Advanced imaging can detect faults as small as a few microns, invisible to the human eye.
  • Predictive Quality: Real-time data analytics help manufacturers spot trends, foresee concerns, and take preventive action to preserve product quality.
  • Effective Compliance: Built-in checks aligned with FDA, GAMP, MHRA and GxP standards ease regulatory conformity, decreasing the risk of infractions and assuring consistent product quality.
  • Reduced Waste: Vision systems cut waste by recognising damaged pills early in the production process, saving money and resources.

These qualities make automated vision systems indispensable in modern pharmaceutical manufacturing, where precision and compliance are critical.

Innovation on the Horizon
As technology advances, so does the potential of automated vision systems. Emerging technologies such as artificial intelligence (AI) and machine learning are improving defect detection capabilities by allowing systems to learn from prior inspections and improve over time. Additionally, integration with Internet of Things (IoT) devices enables real-time monitoring and data analysis, giving businesses actionable insights to further improve manufacturing operations.

Final Thoughts
Automated vision technology is transforming the pharmaceutical industry by ensuring that every pill and tablet meets the highest levels of quality and safety. These technologies improve manufacturing efficiency while maintaining consumer trust by combining speed, precision, and innovation.

The next time you take a pill, remember the sophisticated processes and cutting-edge technology that ensured its perfection. Automated vision inspection enables manufacturers to confidently provide clean, high-quality products, one pill at a time.

Your quick guide to machine vision

Choosing the most appropriate combination of components for a machine vision system requires expert understanding of application requirements and technology capabilities. With that in mind, in this month’s blog post we thought we would provide you with enough knowledge and understanding to have richer, more rewarding conversations about machine vision and vision systems technology. This quick guide will help you understand the basics of machine vision and vision systems, including components, applications, return on investment and potential for use in your business.

The first thing to understand is the basic elements of a machine vision system. We’ve covered this before in this post, but it’s worth reiterating here. The basics of a machine vision system are as follows:

  • The machine vision process starts with the part or product being inspected.
  • When the part is in the correct place, a sensor triggers the acquisition of a digital image.
  • Structured lighting is used to ensure that the image captured is of optimum quality.
  • The optical lens focuses the image onto the camera sensor.
  • Depending on capabilities, the digitising sensor may perform some pre-processing to ensure the correct image features stand out.
  • The image is then sent to the processor for analysis against the set of pre-programmed rules.
  • Communication devices are then used to report and trigger automatic events, such as part acceptance or rejection.
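That capture-process-analyse-communicate sequence can be sketched end to end in a few lines. This is a toy model under stated assumptions — the function names, the brightness-window "rule", and the return strings are all illustrative, not a real vision API — but it shows how the trigger, pre-processing, rule check, and communication steps chain together.

```python
# Minimal sketch of the machine vision pipeline described above.
# Placeholder logic stands in for real hardware and image algorithms.

def run_inspection(part_present: bool, image: list[int], rules: dict) -> str:
    if not part_present:
        return "no-trigger"  # the sensor has not seen a part in position
    # Pre-processing: clamp pixel values so features stand out consistently
    processed = [min(255, max(0, p)) for p in image]
    # Analysis against pre-programmed rules (here, a mean-brightness window)
    mean = sum(processed) / len(processed)
    ok = rules["min_mean"] <= mean <= rules["max_mean"]
    # Communication: report the accept/reject event downstream
    return "accept" if ok else "reject"

result = run_inspection(True, [120, 130, 125], {"min_mean": 100, "max_mean": 150})
print(result)  # accept
```

In a production system each step is of course far richer — triggered cameras, structured lighting, dedicated algorithms — but the control flow follows this same shape.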
What can machine vision be used for?

The applications for machine vision vary widely between industries and product environments. However, to aid understanding, they can be broken down into the following five major categories. As outlined previously, in the section on ‘processor’, these usually combine a number of algorithms and software tools to achieve the desired result.

1. Location, guidance & positioning

When combined with robotics, machine vision offers powerful application options. By accurately determining the position of parts to direct robots, find fiducials, align products and provide positional adjustment feedback, guided robots enable a flexible approach to feeding and product changes. This eliminates the need for costly precision fixtures, allowing different parts to be processed without changing tools. Some typical applications include:

  • Locating and orientation of parts to compare them to specific tolerances
  • Finding or aligning parts so that they are positioned correctly for assembly with other components
  • Picking and placing parts from bins
  • Packaging parts on a conveyor belt
  • Reporting part positioning to a robot or machine controller to ensure correct alignment


2. Recognition, identification, coding & sorting

Using state-of-the-art neural network pattern recognition to recognise arbitrary patterns, shapes, features and logos, vision systems enable the comparison of trained features and patterns within fractions of a second. Geometric features are also used to find objects and easily identify models that are translated, rotated and scaled, to sub-pixel accuracy. Typical applications include:

  • Decoding 1D & 2D symbols, codes and characters printed on parts, labels, and packages
  • Reading Direct Part Marking (DPM) – a character, code or symbol marked directly onto a part or its packaging
  • Identifying parts by locating a unique pattern, or based on colour, shape, or size
  • Optical character recognition (OCR) and optical character verification (OCV), reading alphanumeric characters and confirming the presence of a character string

3. Gauging & measuring

Measuring in machine vision is used to confirm the dimensional accuracy of components, parts and sub-assemblies automatically, without the need for manual intervention. A key benefit of machine vision is that it is non-contact, so it does not contaminate or damage the part being inspected. Many machine vision systems can also measure object features to within 0.0254 millimetres and therefore provide additional benefits over the contact gauge equivalent.
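To see where a figure like 0.0254 mm comes from, the back-of-envelope rule is that the smallest increment a system can resolve is roughly the field of view divided by the sensor's pixel count, improved further by sub-pixel measurement algorithms. The figures below are illustrative, not from a specific system:

```python
# Back-of-envelope spatial resolution: field of view divided by the
# number of pixels spanning it, with an optional sub-pixel factor for
# edge-finding algorithms that interpolate between pixels.

def spatial_resolution_mm(fov_mm: float, pixels: int,
                          subpixel_factor: float = 1.0) -> float:
    """Millimetres represented by one (effective) pixel."""
    return fov_mm / (pixels * subpixel_factor)

# A 50 mm field of view imaged across 2,000 pixels:
per_pixel = spatial_resolution_mm(50.0, 2000)          # 0.025 mm per pixel
# With a 1/10-pixel edge-detection algorithm:
with_subpixel = spatial_resolution_mm(50.0, 2000, 10)  # 0.0025 mm effective
print(per_pixel, with_subpixel)
```

This is why camera resolution, optics and field of view must be chosen together: halving the field of view or doubling the pixel count halves the smallest measurable increment.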

4. Inspection, detection & verification

Inspection is used to detect and verify functional errors, defects, contaminants and other product irregularities.

Typical applications include:

  • Verifying the presence/absence of parts or components
  • Counting to confirm quantities
  • Detecting flaws
  • Conformance checking to inspect products for completeness.

5. Archiving & Tracking

Finally, machine vision can play an important role in archiving images and tracking parts and products through the whole production process. This can be particularly valuable in tracking the genealogy of parts in a sub-assembly that make up a finished product. Data recorded can be used to drive better customer support, improve production processes and protect brands against costly product recalls.

What are the benefits of machine vision?

The economic case for investing in machine vision systems is usually strong, thanks to the following two key areas:

1. Cost savings: reduced labour, re-work and testing, avoidance of more expensive capital equipment, lower material and packaging costs, and less waste

2. Increased productivity: process improvements, greater flexibility, higher volumes of parts produced, and less downtime, fewer errors and fewer rejections
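Those two areas can be turned into a simple payback calculation. All figures below are hypothetical, purely to show the shape of the sum: annual savings from labour, rework and waste set against the system's capital cost.

```python
# Illustrative payback calculation for a vision system investment.
# Every figure here is hypothetical, for demonstration only.

def payback_months(capital_cost: float, annual_savings: float) -> float:
    """Months until cumulative savings cover the purchase price."""
    return 12 * capital_cost / annual_savings

# A 60,000 system saving 10,000/year in labour, 8,000 in re-work
# and 6,000 in scrap pays for itself in 30 months:
months = payback_months(60_000, 10_000 + 8_000 + 6_000)
print(months)  # 30.0
```

In practice the intangible contributors discussed below (brand protection, staff engagement, data quality) shorten the real payback further, even though they resist being reduced to a single figure.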

However, viewing the benefits purely from an economic perspective does not do justice to the true value of your investment. Machine vision systems can add value in all of the following additional ways. Due to the intangible nature of some of these contributors it can be difficult to put a figure on their value, but that shouldn’t stop you attempting to include them.

  • Intellectually
    • By freeing staff from repetitive, boring tasks they are able to focus thinking in ways that add more value and contribute to increasing innovation. This is good for mental health and good for the business.
    • By reducing customer complaints, product recalls and potential fines machine vision can help to build and protect your brand image in the minds of customers
    • Building a strong image in the minds of potential business customers through demonstrating adoption of the latest technology, particularly when they come and visit your factory!
    • Through the collection of better data and improved tracking, machine vision can help you develop a deeper understanding of your processes
  • Physically
    • The adoption of machine vision can help to complement and even improve health and safety practice
    • Removing operators from hazardous environments or strenuous activity reduces exposure to sickness, absence, healthcare costs or insurance claims
  • Culturally
    • Machine vision can contribute and even accelerate a culture of continuous improvement and lean manufacturing
    • Through increased competitiveness and improving service levels machine vision helps build a business your people can be proud of
  • Environmentally
    • Contributing to a positive, safe working environment for staff
    • Through better use of energy and resources, smoother material flow and reduced waste machine vision systems can help reduce your impact on the environment

What do machine vision systems cost?

Costs can range from several hundred for smart sensors and cameras up to half a million for complex systems; where your application falls will depend on the size and scope of your operations.

However, even in the case of high levels of capital investment it should be obvious, from the potential benefits outlined above, that a machine vision system can quickly pay for itself.

How to deploy an effective vision system for automated inspection

When we start developing a new vision system project or are asked to quote for a specific project, we return to the first principles of good practice for vision systems design. We have created these first principles from our millions of hours of experience developing machine vision solutions for Original Equipment Manufacturers (OEM) and end-user customers. It’s a holistic approach to a project review, from the initial understanding of the vision system elements to assessment for automation, an understanding of the process and production environment, and finally, control and factory information connectivity.

The key to this is being driven by the specification and development of a machine vision solution that will work and be robust in the long term. This might sound obvious, as why wouldn’t you design something which will work for the customer, provide long-term stability and make support (and everyone’s life) easier? But we sometimes hear of customers who have supposedly found a solution for a lower price or specification, which, when looked at in detail, proves to be a solution which won’t work, provides the customer with a lot of hassle and ends up being a quality control white elephant in their production. Vision systems have to be adequately thought out and understood. First and foremost, it must work and be robust so that the industrial automation in the line continues to run 24/7, without interruption.

So, how do we approach a machine vision project to ensure its effectiveness, I hear you say? Well, as we said, we look at the sequence of steps that will make for an effective vision system solution. This is best practice for vision system deployment and successful delivery. Where do you start?

1. User Requirement Specification (URS)/Specification/Detailed Information. The first thing to have is a very defined URS or specification to work with. Again, this might sound strange, but some customers have no specific written specifications, only an idea of a system. This is fine in the early stages, but to quote and ultimately deploy effectively, a specification is needed detailing the product, the variants, speeds, communication, quality checks required, tolerances, production information, process control and all elements of the production interaction (and rejection) that are required. For a formal medical device/pharma production URS, this will be spelt out in more detail on an individually numbered sentence-by-sentence or paragraph-by-paragraph basis, based on GAMP (Good Automated Manufacturing Practice). However, for others, it could be a few pages of written detail defining the requirements for the vision system. Whichever approach, the project needs to be defined from the get-go.

2. Samples Testing. Testing of samples is always a good idea. These are real-world samples from the production line showing the product variation on a shift-by-shift, week-by-week basis. Samples that show both good and actual reject samples are needed. Samples of varying defect styles are required to understand how such a rejection will manifest itself on the product. Depending on the project’s difference from those already known algorithms, a formal Proof of Principle may be suggested to determine how the vision system can effectively identify the rejects.

3. Test Documentation. Our proven and reliable test format drives our engineering to document the sample testing results. This allows us to record all the information on the project and provide tangible data to review in the future if the project goes ahead, so it’s a key document in the fight for effectiveness in the vision system deployment. The sheet is divided into specific areas and completed with the sample testing. It includes all details on the job, including:
-Application type: for example, presence verification, gauging, code reading or Optical Character Recognition (OCR)
-Inspected Part: is the sample indicative of the real “in-production” type, and how many variants are there?
-Material: is the part metallic, opaque, transparent, etc.?
-Surface and Form: is it reflective, painted, rough, etc.?
-Environment: what environment will the vision system operate in? For example, if surface detection of contamination is required, a clean room will help reduce false failures from debris on the surface.
-Background: how is the product presented on a fixture, and what does that look like?
-Vision System specifics: what camera head type is needed — resolution, pixel size, frame rate, spatial resolution, field of view, object distance and optics?
-In-process specifics: how fast is the product moving, what cycle time is needed, and what wavelength of light is used?
-Image processing: once all the above elements are confirmed, the actual machine vision image processing needs to be reviewed. Can traditional algorithms be used, or is it an AI application? How repeatable is the processing based on the product deviation over many samples? Sample images should be saved showing the test conditions, and every sample reviewed.
-Process views and comms: what will we display on the production line HMI, what stats and yields need to be seen, and finally, how will the system communicate with the Programmable Logic Controller (PLC) or factory information system?
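The fields above map naturally onto a structured record. The sketch below is one way such a test-documentation sheet could be captured in software; it is illustrative only, and the field names are our own, not IVS's actual document format.

```python
# Sketch: the sample-testing document fields as a structured record,
# so results are captured consistently from project to project.
from dataclasses import dataclass, field

@dataclass
class SampleTestRecord:
    application_type: str            # e.g. "presence verification", "OCR"
    inspected_part: str
    variants: int
    material: str                    # metallic, opaque, transparent, ...
    surface_and_form: str            # reflective, painted, rough, ...
    environment: str
    background: str
    camera_spec: dict = field(default_factory=dict)  # resolution, FOV, optics
    in_process: dict = field(default_factory=dict)   # line speed, cycle time
    image_processing: str = ""       # traditional algorithms vs AI
    comms: str = ""                  # HMI views, PLC / factory links

record = SampleTestRecord(
    application_type="code reading",
    inspected_part="syringe label",
    variants=3,
    material="transparent",
    surface_and_form="reflective",
    environment="clean room",
    background="fixture, matt black",
)
print(record.variants)  # 3
```

Keeping the sheet as structured data rather than free text is what makes it usable later as "tangible data to review in the future" — it can be queried, compared across projects, and fed into the inspection specification.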

4. Project Review. A multi-disciplinary team, including vision engineering, controls, mechanical engineering, and automation engineering, carries out a review of the results. The team reviews testing results and assesses how this applies to the production environment, automation and interaction with other production processes. The test documentation will provide some specific angles, lighting and filters required for the vision set-up. Can the product be presented consistently to allow the defined fields of view to be seen? The whole team must agree that the project is viable and the vision testing was successful for us to progress – again, another gate to make sure the vision system deployment is effective.

5. Commercial Review. The commercial team will become involved in the project review to understand all elements of the vision system, the specific development needed, and how this translates into a complete system or machine, including all aspects of automation and validation. The commercials are agreed upon for the project.

6. Inspection Specification. Once a project is live, all of the work done in advance is transferred to the engineering team. This will lead to the development and publication of an Inspection Specification that will run through the whole project. This specification defines all elements of the vision system and the approach for inspection. It is agreed with the customer and becomes the guiding document through the project’s completion. For OEM customers who will be purchasing multiple systems, it also provides the ability to provide a consistent machine vision system from project to project.

7. Installation and Commissioning. As the project progresses, competent and experienced IVS vision engineers perform installation and commissioning. They are guided by all the information already collated and assessed, therefore reducing any guesswork about the vision system’s operation as it’s installed. This measured approach means there are no surprises or unknowns once this stage is reached, derisking the project from both sides.

8. Validation FAT and SAT. Prior to final confirmation of operation, a Factory Acceptance Test (FAT) and Site Acceptance Test (SAT) are conducted based on the vision engineering and inspection specification requirements. This rigorous document tests all fail conditions of the machine vision system, robust operation over a long period, and confirmation of the calibration of the complete system.

Overall, these steps create a practical framework for the orderly specification and deployment of a robust and fit-for-purpose vision system. Even for our OEM customers who require multiple machines or vision systems over a long contractual period, it pays to follow the standard procedure for effective vision system deployment. The process is designed to minimise risk and provide a robust and long-service vision system that can easily be supported and maintained.

Crush Defects: The Power of Semi-Automatic Inspection

This month, we’re discussing the elements of semi-automatic inspection. This is the halfway house that pharmaceutical manufacturers move to from manual quality inspection (of individual products) at a bench when volumes start to increase but don’t yet warrant a fully automated inspection solution. The catalyst for change is when the inspection room is full of quality inspectors, and it’s time to increase the throughput of quality assessment. A semi-automatic inspection solution presents vials (liquid and lyo), syringes, cartridges and ampoules to the operator at speed, for review in front of a magnifier, enabling manual rejection to be completed. This allows a single operator to do the job of a room of quality inspectors. The process mimics the standard GAMP manual inspection against the black and white background booths found in manual inspection, while allowing for the throughput of high volumes. However, how does semi-automatic inspection fit into the validation process and allow validation at the relevant levels?

Validation of such semi-automatic inspection must adhere to the United States Pharmacopeia (USP). The following chapters from the USP are regarded as the main literature and source for regulatory information surrounding the visual inspection of injectables:

Chapter 〈790〉 VISIBLE PARTICULATES IN INJECTIONS
Chapter 〈1790〉 VISUAL INSPECTION OF INJECTIONS
Chapter 〈788〉 PARTICULATE MATTER IN INJECTIONS

Chapter 〈790〉 establishes the expectation that each unit of injectable product will be inspected as part of the routine manufacturing process. This inspection should take place at a point when defects are most easily detected; for example, prior to labelling or insertion into a device or combination product. Semi-automated inspection should only be performed by trained, qualified inspectors. The intent of this inspection is to detect and remove any observed defect. When in doubt, units should be removed.

Defect Types

IVS semi-automatic inspection machines with operators allow for the inspection of cosmetic defects in the container and particulate matter within the fluid. The machines will enable the inspection of the following defects:

Cosmetic Defects:
• Glass cracks
• Scratches
• Missing stoppers/caps
• Improper cap/stopper closure
• Glass inclusions

Particulate Matter:
• Fibers
• Glass
• Metal
• Product Related
• Rubber

What are the particle definitions which apply to Semi-Automatic Inspection?

Extrinsic – Highest risk
Particles may originate from many sources. Those that are foreign to the manufacturing process are considered exogenous or “extrinsic” in origin; these include hair, non-process-related fibres, starch, minerals, insect parts, and similar inorganic and organic materials. Extrinsic material is generally a one-time occurrence and should result in the rejection of the affected container in which it is seen; however, elevated levels in the lot may implicate a broader contribution from the same source. These particles may carry an increased risk of microbiological or extractable contamination because less is known about their path before deposition in the product container or their interaction with the product.

Intrinsic – Medium Risk
Other particles are considered “intrinsic”, from within the process. Intrinsic particles may come from processing equipment or primary packaging materials that were either added during processing or not removed during container preparation. These primary product-contact materials may include stainless steel, seals, gaskets, packaging glass and elastomers, fluid transport tubing, and silicone lubricant. Such particles still pose the risk of a foreign body, but generally come from sterile or sanitized materials and more is known about their interactions when in contact with the product.

Inherent – Lower Risk
“Inherent” particles are considered the lowest risk as they are known to be or intended to be associated with specific product formulations. The physical form or nature of inherent particles varies from product to product and includes solutions, suspensions, emulsions, and other drug delivery systems that are designed as particle assemblies (agglomerates, aggregates). Product formulation-related particulate formation should be studied in the development phase and in samples placed on stability to determine the normal characteristics and time-based changes that can occur.

Defects are commonly grouped into classifications based on patient and compliance risk. The most common system uses three groups: critical, major, and minor. Critical defects are those that may cause serious adverse reaction or death of the patient if the product is used. This classification includes any nonconformity that compromises the integrity of the container and thereby risks microbiological contamination of the sterile product. Major defects carry the risk of a temporary impairment or medically reversible reaction, or involve a remote probability of a serious adverse reaction. This classification is also assigned to any defect which causes impairment to the use of the product. These may result in a malfunction that makes the product unusable. Minor defects do not impact product performance or compliance; they are often cosmetic in nature, affecting only product appearance or pharmaceutical elegance.
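The three-tier grouping above can be sketched as a simple lookup. Which defect maps to which tier is product- and process-specific, so the example assignments below are illustrative only, loosely following the definitions in the text (container integrity → critical, impaired use → major, appearance only → minor):

```python
# Sketch of the critical / major / minor classification described above.
# The defect-to-tier assignments are illustrative, not a fixed standard.

CLASSIFICATION = {
    "critical": {"glass crack", "missing stopper"},    # container integrity at risk
    "major": {"improper closure", "glass inclusion"},  # impairs use of the product
    "minor": {"cosmetic scratch"},                     # appearance / elegance only
}

def classify(defect: str) -> str:
    for tier, defects in CLASSIFICATION.items():
        if defect in defects:
            return tier
    return "unclassified"

print(classify("glass crack"))  # critical
```

In a validated process, the classification list is agreed up front and drives both the accept/reject decision and the acceptable quality limits applied to each tier.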

Inspection criteria

On semi-automatic inspection machines, the vials are spun at speed prior to reaching the inspection area. This sets any visible particles in motion, which aids detection, as stationary particles are difficult to detect. Upon 100% inspection, visible extrinsic and intrinsic particles should be reliably removed. The test method allows inherent particles to be accepted if the product appearance specification allows inherent particle types. The size of particles reliably detected (≥70% probability of detection) is generally 150 µm or larger. This Probability of Detection (POD) is dependent on the container characteristics (e.g., size, shape, transparency), inspection conditions (lighting and duration), formulation characteristics (colour and clarity), and particle characteristics (size, shape, colour, and density). For syringes, the products are agitated and turned over so that no particulate matter remains hidden in the bung where it cannot be seen.

Critical Inspection Conditions

Light intensity
The results of the inspection process are influenced by the intensity of the light in the inspection zone. In general, increasing the intensity of the light that illuminates the container being inspected will improve inspection performance; 〈790〉 recommends light levels of 2,000-3,750 lux at the point of inspection for routine inspection of clear glass containers. Special attention should be given to assure that inspection is not performed below the lower limit of 2,000 lux.

Background and Contrast
Contrast between the defect of interest and the surrounding background is required for detection, and increased contrast improves detection. The use of both black and white backgrounds is described in 〈790〉, as well as other global pharmacopoeias. The use of both backgrounds provides good contrast for a wide range of particulate and container defects, which can be light or dark in appearance.

Inspection Rate
This is controlled by the roller travel speed. Sufficient time must be provided to allow for thorough inspection of each container; chapter 〈790〉 specifies a reference time of 10 s/container (5 s each against the black and white backgrounds).
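That 10 s/container reference time puts a hard ceiling on throughput per inspection station, which is the arithmetic behind setting the roller speed. A quick illustrative check:

```python
# The USP <790> reference time of 10 s/container bounds how many
# containers one inspection position can process per hour.

def max_containers_per_hour(seconds_per_container: float = 10.0) -> float:
    return 3600 / seconds_per_container

rate = max_containers_per_hour()
print(rate)  # 360.0 containers/hour per inspection position
```

Roller travel speed is therefore set so that each container still receives its full viewing time against both backgrounds; running the rollers faster than this trades away the probability of detection.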

Magnification
Some inspection processes use a large magnifier to increase image size and thus increase the probability of detecting and rejecting containers with defects near the threshold of detection. Most lenses used during this inspection process are around 2× magnification.

Overall, Semi-Automatic Inspection is a necessary step when pharmaceutical manufacturers move into medium-volume production. Validating such systems allows production to ramp up to the required volume without the need for manual benches and provides capacity for future volumes before fully automated vision inspection with vision systems. More details on how to use Semi-Automatic inspection for regulatory compliance can be found here.