Machine vision vs computer vision vs image processing

We’re often asked – what’s the difference between machine vision and computer vision, and how do they both compare with image processing? It’s a valid question, as these seemingly similar fields all relate to the interpretation of digital images, processing photos or video frames to produce a result.

What is machine vision?
Machine vision has become a key technology in the area of quality control and inspection in manufacturing. This is due to increasing quality demands and the improvements it offers over and above human vision and manual operation.

Definition
It is the ability of a machine to sequentially sense an object, capture an image of that object, and then process and analyse the information for decision-making.

In essence, machine vision enables manufacturing equipment to ‘see’ and ‘think’. It is an exciting field that combines technologies and processes to automate complex or mundane visual inspection tasks and precisely guide manufacturing activity. In an industrial context, it is often referred to as ‘Industrial Vision’ or ‘Vision Systems’. As the raw images in machine vision are generally captured in real-time by a camera connected to the system (in contrast to computer vision), the disciplines of physics relating to optical technology, lighting and filtering are also part of the machine vision body of knowledge.

So, how does this compare to computer vision and image processing?
There is a great deal of overlap between the various disciplines, but there are clear distinctions between the groups.

Image processing
  Input: An image, which is processed using algorithms to correct, edit or otherwise transform it.
  Output: A new, enhanced image.

Computer vision
  Input: An image or video, analysed using algorithms, often in uncontrollable or unpredictable circumstances.
  Output: Image understanding, prediction and learning to inform actions such as segmentation, recognition and reconstruction.

Machine vision
  Input: Camera/video images of parts, analysed in industrial settings under more predictable circumstances.
  Output: Image understanding and learning to inform manufacturing processes.

Image processing has its roots in neurobiology and scientific analysis. This was primarily down to the limitations of processor speeds when image processing came to the fore in the 1970s and 1980s: for example, processing single images captured from a microscope-mounted camera.

When processors became quicker and image processing algorithms became more adept at high-throughput analysis, image processing moved into the industrial environment, and industrial line control was added to the mix. For the first time, components, parts and assemblies on an automated assembly line could be quantified, checked and routed depending on the quality assessment, and so machine vision (the “eyes of the production line”) was born. Unsurprisingly, the first industries to adopt this technology en masse were PCB and semiconductor manufacturing, as the enormous boom in electronics took place during the 1980s.

Computer vision crosses the two disciplines, creating a Venn diagram of overlap between the three areas, as shown below.

[Figure: Venn diagram showing the overlap between machine vision, computer vision and image processing]

As you can see, the work on artificial intelligence relating to deep learning has its roots in computer vision. Over the last few years, though, it has moved across into machine vision, as image processing learnt from biological vision, coupled with cognitive vision, moves slowly into general-purpose machine vision and vision systems. It’s rarely a two-way street, but some machine vision research does move back into the computer vision and general image processing worlds. Given the broader reach of the computer vision industry compared to the more niche machine vision industry, new algorithms and developments tend to be driven by computer vision.

Machine vision, computer vision and image processing are now broadly intertwined, and the developments across each sector have a knock-on effect on the growth of new algorithms in each discipline.

IVS helps global contact lens manufacturers achieve error-free lenses (at speed!)

This month, we are talking about automated visual inspection of cosmetic defects in contact lens inspection, a major area of expertise in the IVS portfolio. Contact lens production falls between medical device manufacturing and pharmaceutical manufacturing, which means it falls within the GAMP/ISPE/FDA validation requirements for manufacturing quality and line validation.

The use of vision technology for the inspection of contact lenses is intricate. Due to the wide variety of lens profiles, powers, and types (such as spherical, toric, aspherical etc.), manufacturers historically relied on human operators to visually verify each lens before it was shipped to a customer. Cosmetic defects in lenses range from scratches and inclusions, through to edge defects, mould polymerisation problems and overall deformity in the contact lens.

The Challenge: Manual Checks are Prone to Human-Error and are Slow!

The global contact lens industry must maintain the highest standards of quality due to the specialist nature of its products. Errors or faults in contact lenses can lead to product recalls, financial losses, or a drop in brand loyalty and integrity – all issues that can be very difficult to rectify. However, manually checking lenses is extremely time-consuming, inefficient, and error-prone. Because the brain tends to ‘correct’ what it sees, human inspectors routinely overlook faults, so many errors are missed and defective contact lenses can be passed to the consumer.

Coupled with this, many global contact lens manufacturing brands run their production at high speed, which makes human inspection untenable given the volume and pace of production. Over the last twenty-five years, contact lens manufacturers have therefore moved to high-speed, high-accuracy automated visual inspection, negating the need for manual checks.

To ensure their products’ quality, accuracy, and consistency, many of the world’s top contact lens manufacturing brands have turned to IVS to solve their automated inspection and verification problems. This can be either wet or dry contact lens inspection, depending on the specific nature of the production process.

The Solution: Minimize Errors and Ensure Quality Through IVS Automated Visual Inspection

IVS contact lens inspection solutions have been developed over many years, based on a deep heritage in contact lens inspection and the required FDA/GAMP validation. The vision system drills down on specific contact lens defects, allowing fine-grained, highly accurate quality checks and inspection, coupled with the latest generation of AI vision inspection techniques. This delivers the maximum achievable yield, together with full reporting and statistical analysis feeding data back to the factory information system, providing real-time, reliable information to the contact lens manufacturer. Failed lenses are rejected, allowing 100% quality-inspected contact lenses to be packed and shipped.

IVS has inspection systems for manual-load, automated and high-speed contact lens inspection, allowing our solutions to cover the full range of contact lens manufacturers – from small start-ups to the major high-volume producers. Whatever the size of your contact lens production, we have solutions validated to FDA/ISPE/GAMP for automated contact lens inspection.

Contact lens manufacturers now have no requirement for manual visual inspection by operators; this can all be achieved through validated automated visual inspection of cosmetic defects, providing high-yield, high-throughput quality inspection results.

More details: https://www.industrialvision.co.uk/wp-content/uploads/2023/05/IVS-Lens-Inspection-Solutions.pdf

How vision systems are deployed for 100% cosmetic defect detection (with examples)

Cosmetic defects are the bane of manufacturers across all industries. Producing a product with the correct quality in terms of measurement, assembly, form, fit and function is one thing – but cosmetic defects are more easily noticed and flagged by the customer and end-user of the product. Most of the time, they have no detrimental effect on the use or function of the product, component or sub-assembly; they just don’t look good! They are notoriously difficult for an inspection operator to spot and are therefore harder to check using traditional quality control methods. Parts can be checked on a batch or 100% basis for measurement and metrology, but you can’t batch-check for a single cosmetic defect in a high-speed production process. So for mass-production medical device manufacturing at 300-500 parts per minute, using vision systems for automated cosmetic defect inspection is the only viable approach.

For medical device manufacturers, cosmetic defects might manifest as inclusions, black spots, scratches, marks, chips, dents or a foreign body on the part. For automotive manufacturers, it could be the wrong position of a stitch on a seat body, defects on a component, a mark on a sub-assembly, or the gap and flush of the panels on a vehicle.

Using vision systems for automated cosmetic inspection is especially important for products going to the Japanese and wider Asian markets. These markets are particularly sensitive to cosmetic issues, and medical device manufacturers need vision systems to 100% inspect every product produced, confirming that no cosmetic defects are present. This could be syringe bodies, vials, needle tips, ampoules, injector pen parts, contact lenses, medical components and wound care products. All need checking at speed during manufacturing and high-speed assembly.

How are vision systems applied for automated cosmetic checks?

The key to applying vision systems for cosmetic visual inspection is the combination of camera resolution, optics (normally telecentric in nature) and filters to bring out the specific defect, so that it can be segmented from the natural variation and background of the product. Depending on the type of cosmetic defect being checked, a combination of linescan, areascan or 3D camera acquisition may also be required. For example, syringe bodies or vials can be spun in front of a linescan camera to provide an “unrolled” view of the entire surface of the product.
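
To make the linescan idea concrete, here’s a rough sketch (in Python, with illustrative numbers rather than any particular camera SDK) of how an “unrolled” surface map is built up, one line per encoder tick, as the part spins. The grab_line() helper is a hypothetical stand-in for the real acquisition call:

```python
import numpy as np

# Illustrative sketch: build an "unrolled" surface map from a spinning part.
LINES_PER_REV = 2000   # encoder ticks per full rotation (assumed)
LINE_WIDTH = 4096      # pixels per linescan row (assumed)

def grab_line() -> np.ndarray:
    # Placeholder: a real system would fetch one row from the linescan camera.
    return np.random.randint(0, 256, LINE_WIDTH, dtype=np.uint8)

# Stack one row per encoder tick: x = position along the part,
# y = angular position around the circumference.
unrolled = np.vstack([grab_line() for _ in range(LINES_PER_REV)])

# Crude defect segmentation: flag pixels far from the average background.
background = unrolled.mean()
mask = np.abs(unrolled.astype(np.int16) - background) > 40  # illustrative threshold
print(f"candidate defect pixels: {mask.sum()}")
```

In a real deployment, the simple threshold at the end would be replaced by the tuned lighting, filtering and segmentation described below.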

Let’s drill down on some of the specific elements which are used, using the syringe body as a reference (though these methods can be applied to other product groups):

Absorbing defects

These defects are typically impurities, particulates, fibres and bubbles. They need a lighting technique, combined with linescan acquisition, that provides a contrasting element for these defects.

Hidden defects

These defects are typically “hidden” from view with the linescan method, so a direct on-axis approach with multiple captures from an areascan sensor provides the ability to detect surface defects such as white marks (on clear and glass products) and bubbles.

Cracks and scratch defects

A number of approaches can be taken here, but typically applying axial illumination combined with linescan rotation provides the ability to identify cracks, scratches and clear fragments.

Why do we use telecentric optics in cosmetic defect detection?

Telecentric lenses are optics that only gather collimated light ray bundles (those that are parallel to the optical axis), hence avoiding perspective distortion. Because only rays parallel to the optical axis are admitted, telecentric lens magnification is independent of object position. Due to this distinguishing property, telecentric lenses are ideal for measurement and cosmetic inspection applications where perspective effects and variations in magnification would result in inconsistent measurements. The front element of a telecentric lens must be at least as large as the intended field of view, which renders telecentric lenses unsuitable for imaging very large components.

Fixed focal length lenses are entocentric, collecting rays that diverge from the optical axis. This allows them to span wide fields of view, but because magnification varies with working distance, these lenses are not suitable for determining the real size of an item. Telecentric optics are therefore well suited to cosmetic and surface defect detection, often combined with collimated lighting to provide a perfect silhouette of the product.
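
To see why this matters numerically, here’s a small sketch using the thin-lens model; the focal length and working distance are made-up illustrative values, not a specific lens:

```python
# Thin-lens model showing why entocentric magnification drifts with
# working distance while telecentric magnification stays fixed.
f = 50.0           # focal length in mm (assumed)
nominal_d = 200.0  # nominal working distance in mm (assumed)

def entocentric_mag(d: float) -> float:
    # Thin-lens magnification |m| = f / (d - f)
    return f / (d - f)

for d in (199.0, 200.0, 201.0):  # +/- 1 mm part placement error
    m = entocentric_mag(d)
    print(f"d = {d:.0f} mm -> magnification {m:.4f} "
          f"({100 * (m / entocentric_mag(nominal_d) - 1):+.2f}% vs nominal)")

# Output shows roughly +/- 0.7% apparent size change for 1 mm of placement
# error; a telecentric lens holds magnification constant over its
# telecentric depth, so the same placement error causes no size change.
```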

In conclusion, machine vision systems can be deployed effectively for 100% automated cosmetic defect detection. By combining the correct optics, lighting, filters and camera technology, manufacturers can use a robust method for 100% automated visual inspection of cosmetic and surface defects.

The complete guide to Artificial Intelligence (AI) and Deep Learning (DL) for machine vision systems.

Where are we at, and what is the future?

We had a break from the blog last month while we worked on this latest piece, as it’s a bit of a beast. This month we discuss artificial intelligence (AI) for machine vision systems: an area with much hype due to the introduction of AI language models such as ChatGPT and AI image models like DALL-E. AI is starting to take hold in vision systems, but what’s it all about? And is it viable?

Where are we at?

Let’s start at the beginning. The term “AI” is a new label for a process that has been around for the last twenty years in machine vision but has only now become more prolific (or should we say, the technology has matured to the point of mass deployment). We provided vision systems in the early 2000s using low-level neural networks for feature differentiation and classification. These were primarily used for simple segmentation and character verification. The networks were basic and had limited capability. Nonetheless, the process we used then to train a network is the same as it is now, just on a much larger scale. Back then, we called it a “neural network”; now it’s termed artificial intelligence.

The term artificial intelligence is used because the network has some level of “intelligence”: it learns by example and expands its knowledge over iterative training cycles. The initial research into computer vision AI discovered that human vision has a hierarchical structure in how neurons respond to various stimuli. Simple features, such as edges, are detected by neurons, which then feed into more complex features, such as shapes, which finally feed into more complex visual representations. This builds individual blocks into “images” the brain can perceive. You can think of pixel data in industrial vision systems as the building blocks of synthetic image data. Pixel data is collected on the image sensor and transmitted to the processor as millions of individual pixels of differing greyscale or colour. For example, a line is simply a row of similar grey-level pixels, bounded by edges where the grey level changes from one pixel to the next.
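
As a toy illustration of that last point, the sketch below treats an “edge” as nothing more than a sharp grey-level transition between adjacent pixels in a row; the values and threshold are made up:

```python
import numpy as np

# An "edge" is just a sharp grey-level transition between adjacent pixels.
row = np.array([12, 13, 12, 14, 200, 201, 199, 200], dtype=np.int16)

gradient = np.diff(row)                     # pixel-to-pixel grey-level change
edges = np.where(np.abs(gradient) > 50)[0]  # illustrative threshold
print(edges)  # -> [3]: the edge sits between pixels 3 and 4
```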

AI vision based on neural networks attempts to recreate how the human brain learns edges, shapes and structures. Neural networks consist of layers of processing units. The sole function of the first layer is to receive the input signals and transfer them to the next layer, and so on. The following layers are called “hidden layers” because they are not directly visible to the user; these do the actual processing of the signals. The final layer translates the internal representation of the pattern created by this processing into an output signal, directly encoding the class membership of the input pattern. Neural networks of this type can realise arbitrarily complex relationships between feature values and class designations. The relationship incorporated in the network depends on the weights of the connections between the layers and the number of layers, and can be derived by special training algorithms from a set of training patterns. The end result is a “web” of non-linear connections between the layers, all trying to mimic how the brain learns.
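
The sketch below shows that layered structure in its simplest possible form: an input layer, one hidden layer, and an output layer whose values encode class membership. The layer sizes and random weights are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

x = rng.random(64)                                    # e.g. a flattened 8x8 patch
W1, b1 = rng.standard_normal((16, 64)), np.zeros(16)  # input -> hidden layer
W2, b2 = rng.standard_normal((2, 16)), np.zeros(2)    # hidden -> output layer

hidden = np.maximum(0, W1 @ x + b1)             # non-linear activation (ReLU)
logits = W2 @ hidden + b2
probs = np.exp(logits) / np.exp(logits).sum()   # softmax: e.g. "good" vs "bad"
print(probs)

# Training algorithms adjust W1, W2, b1 and b2 so these outputs match the
# labelled training patterns - the "weights of the connections" above.
```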

With this information in hand, vision engineering developers have concentrated their efforts on digitally reproducing human neural architecture, which is how AI development came about. When it comes to perceiving and analysing visual stimuli, computer vision systems employ a hierarchical approach, just like their biological counterparts. Traditional machine vision inspection continued in this vein, but AI requires a holistic view of all the data to compare and develop against.

So, the last five years have seen an explosion in the development of artificial intelligence deep learning vision systems. These systems are based on neural network development in computer vision, in tandem with the development of AI in other software engineering fields. All are generally built on an automatic differentiation library to implement neural networks in a simple, easy-to-configure solution. Deep learning AI vision solutions incorporate artificial-intelligence-driven defect finding, combined with visual and statistical correlations, to help pinpoint the root cause of a quality control problem.

To keep up with this development, vision systems have evolved into manufacturing AI and data-gathering platforms, as vision data for training has become an essential element of all AI vision systems, far more so than for traditional machine vision algorithms. Traditional vision algorithms still need image data for development, but not to the same extent as deep learning. Vision platforms have therefore developed to integrate visual and parametric data into a single digital thread, from the development stage of the vision project all the way through to production quality control deployment on the shop floor.

In tandem with the proliferation of AI in vision systems is an explosion in suppliers of data-gathering “AI platforms”. These systems are more of a housekeeping exercise for image gathering, segmentation and human classification before submission to the neural network, rather than a quantum leap in image processing or a clear differentiator compared to traditional machine vision. (Note: most of these companies have “AI” in their titles.) These platforms allow clear presentation of the images and easy submission to the network for computation of the neural algorithm, but all are based on the same overall architecture.

The deep learning frameworks – TensorFlow and PyTorch – what are they?

TensorFlow and PyTorch are the two main deep learning frameworks used by developers of machine vision AI systems. They were developed by Google and Facebook, respectively. Both are used as a base for developing AI models at a low level, generally with a graphical user interface (GUI) above them for image sorting.

TensorFlow is a symbolic maths toolkit best suited to dataflow programming across a variety of workloads. It provides many abstraction levels for modelling and training, and it’s a promising and rapidly growing deep learning option for machine vision developers, with a flexible and comprehensive ecosystem of community resources, libraries and tools for building and deploying machine learning applications. TensorFlow has also integrated Keras, a high-level neural network API, directly into the framework.

PyTorch is a highly optimised deep learning tensor library built on Python and Torch. Its primary use is for applications running on graphics processing units (GPUs) and central processing units (CPUs). Some vision system vendors favour PyTorch over other deep learning frameworks such as TensorFlow and Keras because it employs dynamic computation graphs and has a Python-first design. It gives researchers, software engineers and neural network debuggers the ability to test and run sections of code in real-time, so users do not have to wait for the entire codebase to be developed before determining whether a portion of it works.
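
For a feel of what such a network looks like in practice, here is a minimal PyTorch sketch of a two-class (“good”/“bad”) defect classifier. The layer sizes, input resolution and class count are assumptions for illustration, not a production architecture:

```python
import torch
import torch.nn as nn

class DefectNet(nn.Module):
    def __init__(self, num_classes: int = 2):  # e.g. "good" / "bad"
        super().__init__()
        # Two convolutional stages: edges -> shapes -> higher-level structure.
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # Final layer encodes class membership, as described above.
        self.classifier = nn.Linear(32 * 56 * 56, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = DefectNet()
dummy = torch.randn(1, 1, 224, 224)  # one greyscale 224x224 image
print(model(dummy).shape)            # -> torch.Size([1, 2])
```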

Whichever solution is used for deployment, the same pros and cons apply in general to machine vision solutions utilising deep learning.

What are the pros and cons of using artificial intelligence deep learning in machine vision systems?

Well, there are a few key takeaways when considering artificial intelligence deep learning in vision systems. These pros and cons help determine whether AI is the appropriate tool for an industrial vision system deployment.

Cons

Industrial machine vision systems must have high yield: we are talking 100% correct identification of faults, while accepting some level of false-failure rate. There is always a trade-off when setting up vision systems: you err on the side of caution so that real failures are guaranteed to be picked up and automatically rejected. But with AI inspection, the yields are far from perfect, primarily because the user has no control over the processing that decides what is deemed a failure; a result of pass or fail is simply given, the neural network having been trained on a vast array of both good and bad examples. This “low” yield (though still above 96%) is absolutely fine for most applications of deep learning (e.g. e-commerce, language models, general computer vision), but for industrial machine vision it is not acceptable in most application requirements. This needs to be considered when thinking about deploying deep learning.

It takes a lot of image data. Sometimes thousands, or tens of thousands, of training images are required before the AI machine vision system can start the process. This shouldn’t be underestimated. Think about the implications for deployment in a manufacturing facility: you have to install and run the system to gather data before the learning part can begin. With traditional machine vision solutions this is not the case; development can be completed before deployment, so the time-to-market is quicker.

Most processing is completed on reduced-resolution images. It should be understood that most (if not all) deep learning vision systems will reduce the image down to a manageable size in order to process it in a timely manner. Resolution is therefore immediately lost, from a megapixel image down to a few hundred pixels on a side: data is compromised and lost.

There are no existing datasets for the specific vision system task. Unlike AI deep learning vision systems used in other industries, such as driverless cars or crowd scene detection, there are usually no pre-defined datasets to work from. In those industries, if you want to detect a “cat”, “dog” or “human”, there are data sources available for fast-tracking the development of your AI vision system. In industrial machine vision we are invariably looking at a specific widget or part fault with no pre-determined visual data source to refer to; the image data has to be collected, sorted and trained.

You need good, bad and test images. You must be very careful in sorting the images into the correct category for processing. A “bad” image in a “good” pile of 10,000 images is hard to spot and will train the network to recognise bad as good. The deep learning system is only as good as the data provided to it for training and how it is categorised. You must also have a set of reference images to test the network with.
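
One common way to enforce that discipline is to encode the categories in the dataset’s folder structure. The sketch below uses torchvision’s ImageFolder convention; the directory names and transforms are assumptions for illustration:

```python
# Assumed layout:
#   dataset/train/good/*.png   images verified as defect-free
#   dataset/train/bad/*.png    images verified as defective
#   dataset/test/...           held-out reference images, never trained on

from torchvision import datasets, transforms

tfm = transforms.Compose([
    transforms.Grayscale(),
    transforms.Resize((224, 224)),  # note the resolution reduction mentioned above
    transforms.ToTensor(),
])

train_set = datasets.ImageFolder("dataset/train", transform=tfm)
test_set = datasets.ImageFolder("dataset/test", transform=tfm)
print(train_set.classes)  # -> ['bad', 'good']; one mislabelled image here
                          # quietly teaches the network to accept defects
```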

You sometimes don’t get definitive data on the reason for failure. You can think of the AI deep learning algorithm as a black box: you feed the data in and it provides a result. Most of the time you won’t know why it passed or failed a part, only that it did. This makes deploying such AI vision systems in validated industries (such as medical devices, life sciences and pharmaceuticals) problematic.

You need a very decent GPU and PC for training. A standard PC won’t be sufficient for the processing required to train the neural network. While the runtime PC need be nothing special, your developers will need a PC with a top-of-the-range processor and plenty of graphics memory. Don’t underestimate this.

Pros

They do make good decisions in challenging circumstances. Imagine the part requiring automated inspection comes from a supplier whose surface texture constantly changes. This would be a headache for traditional vision systems, and is almost unsolvable with them; AI machine vision is naturally suited to this sort of application.

They are great for detecting anomalies which are out of the ordinary. One of the main benefits of AI-based vision systems is the ability to spot a defect that has never been seen before and is “left field” from what was expected. With traditional machine vision systems, the algorithm is developed to predict a specific condition, be it the grey scale of a feature, the size in pixels that deems a part a failure, or the colour match that confirms a good part. But if your one-in-a-million failure has a slight flaw that hasn’t been accounted for, the traditional machine vision system might miss it, whereas the AI deep learning machine vision system might well spot it.

They are useful as a secondary inspection tactic. You’ve exhausted all traditional methods of image processing, but you still have a small percentage of faults which are hard to classify against your known classification database. You have the image data, and the part fails when it should, but you want to drill down and complete a final analysis to improve your yield even further. This is the perfect scenario for AI deep learning deployment: you have the data, understand the fault, and can train on lower-resolution segments for more precise classification. This is probably where AI deep learning vision adoption will grow over the coming years.

They are helpful when traditional algorithms can’t be used. You’ve tried all conventional methods – segmentation, pixel measurement, colour matching, character verification, surface inspection, general analysis – and nothing works! This is where AI can step in. The probable cause of the traditional algorithms failing is a lack of consistency in the part itself, which is when deep learning should be tried.

Finally, it might just be the case that deep learning AI is not required for the vision system application. Traditional vision system algorithms are still being developed, and they remain entirely appropriate in many applications where deep learning has recently become the first port of call. Think of artificial intelligence in vision systems as a great supplement and a potential tool in the armoury, to be used wisely and appropriately for the correct machine vision application.


Automated Vision Metrology: The Future of Manufacturing Quality Control

In-line visual quality control is more crucial than ever in today’s competitive medical device production environment. To keep ahead of the competition, manufacturers must ensure that their products fulfil the highest standards. One of the most effective ways to attain this goal is through automated vision metrology using vision technology.

So what do we mean by this? There are several traditional methods that a quality manager uses to keep tabs on production quality. In the past, this would have been a collection of slow CMM metrology machines inspecting products one at a time. CMMs measure the coordinates of points on a part using a touch probe; this information can then be used to calculate the dimensions, surface quality and tolerances of the part. So we are not talking about high-speed vision inspection in this context, but precise, micron-level metrology inspection for measurements and surface defects.

The use of computer-controlled vision inspection equipment to measure and examine products is known as automated vision metrology. These vision systems offer several advantages over manual CMM-type probing methods. For starters, automated metrology is faster: machines can perform measurements with great precision and repeatability at speed, and there is no need to touch the product because it is non-contact vision assessment. Second, real-time inspection of parts is possible, which enables firms to detect and rectify flaws early in the manufacturing process, before they become costly issues.

Advantages of Automated Vision Metrology
The use of automated metrology in manufacturing has many intrinsic benefits. These include:

  • Decreased inspection time: Compared with manual inspection procedures, automated vision metrology systems can inspect parts far faster, saving a significant amount of time during the manufacturing process.
  • Specific CTQ: Vision inspection systems can be set up to measure specific critical-to-quality (CTQ) features with great precision and reproducibility.
  • Non-contact vision inspection: As the vision camera system has no need to touch the product, there is no chance of scratching the part or adding to the surface quality problems that a touch probe inevitably brings. This is especially important in industries such as orthopaedic joint and medical device tooling manufacturing.
  • Enhanced productivity: Automated vision metrology systems can be equipped with autoloaders, allowing fast throughput of products without an operator manually loading each product into a costly fixture. By freeing human workers from manual inspection tasks, manufacturers can focus them on work that demands human judgement and inventiveness.

Modern production relies heavily on automated vision metrology. It has several advantages, including enhanced accuracy, reduced inspection time, improved quality control, increased production and higher product quality. With automated loading options now readily available, precision metrology measurements can be completed at real production rates.

Overall, automated vision metrology is a potent tool for improving the quality and efficiency of production processes. It is an excellent addition to any manufacturing organisation wanting to increase quality control at speed, and make inspection more efficient compared to traditional CMM probing methods.

Clinical insight: 3 ways to minimise automated inspection errors in your medical device production line

Your vision system monitoring your production quality is finally validated and running 24/7; with the PQ (performance qualification) behind you, it’s time to relax. Or so you thought! It’s important that vision systems are not simply handed over to maintenance without an understanding of the potential automated inspection errors and what to look out for once your machine vision system is installed and running. Here are three immediate ways to minimise inspection errors in your medical device, pharma or life science vision inspection solution.

1. Vision system light monitor.
As part of the installation of the vision system, most modern machine vision solutions in medical device manufacturing will have light controllers and the ability to compensate for changes in light degradation over time. Fading is a slow process but needs to be monitored. LEDs are made up of various materials such as semiconductors, substrates, encapsulants and connectors. Over time, these materials can degrade, leading to reduced light output and colour shift. In a validated solution it’s important to understand these issues and to have either an automated approach to changing light conditions, through closed-loop control based on grey-level monitoring, or a manual assessment at a regular interval. It’s definitely something to keep an eye on.
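
As a simple illustration of grey-level monitoring, the sketch below checks the mean grey level of a fixed reference patch against a baseline recorded at validation time. The patch position, baseline and tolerance are illustrative assumptions, not values from a specific system:

```python
import numpy as np

BASELINE = 128.0   # mean grey level of the reference patch at validation (assumed)
TOLERANCE = 5.0    # allowed drift in grey levels before intervention (assumed)

def check_light_level(image: np.ndarray) -> bool:
    reference_patch = image[0:50, 0:50]  # fixed patch over a known target
    drift = reference_patch.mean() - BASELINE
    if abs(drift) > TOLERANCE:
        # In a real system: raise an alarm, or command the light controller
        # to adjust drive current to compensate for the fade.
        print(f"light drift {drift:+.1f} grey levels - check LED source")
        return False
    return True

check_light_level(np.full((480, 640), 121, dtype=np.uint8))  # triggers the alarm
```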

2. Calibration pieces.
For vision machines performing automated metrology inspection, calibration is an important aspect. The calibration process will have been defined as part of the validation and production plan. Typically, the calibration of a vision system will take the form of a calibrated slide with graticules, a datum sphere or a machined piece with traceability certification. Following this would have been the MSA Type 1 gauge studies: the starting point, prior to a Gauge R&R, for determining the difference between an average set of measurements and a reference value (bias) of the vision system. Finally, the system would be validated with a Gauge R&R, an industry-standard methodology used to investigate the repeatability and reproducibility of the measurement system. The calibration piece is therefore a critical part of the automated inspection calibration process. It’s important to store the calibration piece carefully, use it as determined by the validation process, and keep it clean and free from debris. Make sure your calibration pieces are protected.
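
As a rough illustration of the Type 1 gauge study arithmetic, the sketch below computes bias and the commonly used Cg/Cgk capability indices from repeated measurements of a calibration piece. The readings, tolerance and the 20%-of-tolerance convention are illustrative assumptions; your validation plan defines the actual acceptance criteria:

```python
import statistics

reference = 10.000   # certified value of the calibration artefact, mm (assumed)
tolerance = 0.100    # total feature tolerance T, mm (assumed)
readings = [10.002, 10.001, 10.003, 10.002, 10.001,
            10.003, 10.002, 10.002, 10.001, 10.003]  # repeated measurements

bias = statistics.mean(readings) - reference
s = statistics.stdev(readings)

# One common convention: Cg compares 20% of the tolerance to the gauge
# spread; Cgk additionally penalises bias. >= 1.33 is a typical target.
cg = (0.2 * tolerance) / (6 * s)
cgk = (0.1 * tolerance - abs(bias)) / (3 * s)
print(f"bias = {bias:+.4f} mm, Cg = {cg:.2f}, Cgk = {cgk:.2f}")
```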

3. Preventative maintenance.
Vision system preventative maintenance is essential in the manufacturing of medical devices because it helps ensure that the vision system performs effectively over its intended lifespan. Medical devices are used to diagnose, treat and monitor patients, and they play an important part in the delivery of healthcare. If these devices fail, malfunction, or are not properly calibrated, substantial consequences can occur, including patient harm, higher healthcare costs, and legal liability for the manufacturer. Therefore, any automated machine vision system which is making a call on the quality of the product (at speed) must be maintained and checked regularly. Preventative maintenance of a vision system involves inspecting, testing, cleaning and calibrating the system on a regular basis, as well as replacing worn or damaged parts.

Medical device makers benefit from implementing a preventative maintenance programme for the vision system in the following ways:
Continued reliability: Regular maintenance of the machine vision system can help identify and address potential issues before they become serious problems, reducing the risk of device failure and increasing reliability.
Extended operational lifespan: Regular maintenance can help extend the lifespan of the vision system, reducing the need for costly repairs and replacements.
Regulatory compliance: Medical device manufacturers are required to comply with strict regulatory standards (FDA, GAMP, ISPE), and regular maintenance is an important part of meeting these standards.

These three steps will ultimately help to lessen the manufacturer’s exposure to production faults and stop errors being introduced into the medical device vision inspection process. By reducing errors in the machine vision system, the manufacturer can keep production running smoothly, increase yield and reduce downtime.

Why pharmaceutical label inspection using vision systems is adapting to the use of AI deep learning techniques

Automated inspection of pharmaceutical labels is a critical part of an automated production process in the pharmaceutical and medical device industries. The inspection process ensures that the correct labels are applied to the right products and that the labels contain relevant and validated information. These are generally a combination of Optical Character Recognition (OCR), Optical Character Verification (OCV), Print Quality Inspection and measurement of label positioning. Some manufacturers also require a cosmetic inspection of the label for debris, inclusions, smudges and marks. The use of vision inspection systems can significantly improve the efficiency and accuracy of the automation process, while also reducing the potential for human error.

Automated vision inspection systems can also help to ensure compliance with regulatory requirements for labelling, and provide manufacturers with a cost-effective and efficient way to improve the quality of their products. With increasing pressure to improve production efficiency and reduce costs, more and more pharmaceutical and medical device manufacturers are turning to automated vision inspection systems to improve their production processes and ensure quality products for their customers.

Over the last few years, more vision inspection systems for pharmaceutical label checks have been adapting to the use of deep learning and artificial intelligence neural networks, such as Convolutional Neural Networks (CNNs) or Recurrent Neural Networks (RNNs).

Using AI and deep learning for optical character recognition (OCR) can significantly improve the performance of automated inspection systems for pharmaceutical labels. Traditional OCR systems rely on pre-defined templates and rules to recognise and interpret text on labels, which limits their ability to recognise text accurately across different fonts, sizes and layouts.

AI-based OCR systems, such as those using deep learning, can be trained on a large dataset of labelled images, allowing them to learn and recognise different fonts, sizes and layouts of text. This makes them more robust and accurate at recognising text in real-world scenarios, where labels may vary in appearance. Deep learning machine vision systems can also be trained to recognise text that is partially obscured or distorted, a common problem in real-world scenarios. This allows the system to make educated guesses about the text, improving accuracy and reliability.

On the flip side, while the level of recognition may improve for situations that are hard for traditional machine vision inspection, the system must still be validated to set criteria for FDA/GAMP validation. How can this be achieved for a neural deep learning system?

It is true that gathering data for training an AI-based vision inspection system can be a hassle, but it is a crucial step in ensuring the system’s performance. One way to ease this is to use synthetic, validated data generated as computer-rendered images. This reduces the need for natural images and allows a broader range of variations to be included in the dataset. These synthetic images can be tested and validated on a traditional machine vision set-up before being transferred to the training set for the AI-based vision inspection. Another way is to use transfer learning, in which a pre-trained, validated model is fine-tuned on a smaller dataset of images specific to the task. This can significantly reduce the amount of data and resources needed to train a new model.
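
For a flavour of what synthetic label generation can look like, here is a minimal sketch using Pillow; the fonts, text strings and distortions are placeholders for illustration, not a validated recipe:

```python
from PIL import Image, ImageDraw, ImageFont
import random

def make_label(text: str) -> Image.Image:
    img = Image.new("L", (400, 80), color=255)  # white label strip
    draw = ImageDraw.Draw(img)
    font = ImageFont.load_default()             # swap in the real label fonts
    x = random.randint(5, 40)                   # vary the print position
    draw.text((x, 25), text, fill=0, font=font)
    # Slight random skew mimics label application variation.
    return img.rotate(random.uniform(-2, 2), fillcolor=255)

# Generate variations of a lot/expiry string for the training set.
for i in range(3):
    make_label("LOT A1234  EXP 2025-06").save(f"synthetic_label_{i}.png")
```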

In conclusion, validated industries such as medical devices and pharmaceuticals continue to adapt to new, robust methods for traceability and quality print checking. Deep learning is evolving to meet the unique validation requirements of these industries.

How to select the correct machine vision lighting for your application

Machine vision lighting is one of the most critical elements in automated visual inspection. Consistent lighting at the correct angle, of the correct wavelength with the right lux level allows automated inspection to be robust and reliable. But how do you select the correct light for your machine vision application?

Where the inspection takes place in the production process has to be decided according to the requirements of the quality check. This part of the process may also include what is often referred to as ‘staging’. Imagine a theatre: this is the equivalent of putting the actor centre stage, in the best possible place for the audience to see. Staging in machine vision is often mechanical and is required to:

  • Ensure the correct part surface is facing the camera. This may require rotation if several surfaces need inspecting
  • Hold the part still for the moment that the camera captures the image
  • Consistently put the part in the same place within the overall image ‘scene’ to make it easy for the processor to analyse
  • Provide fixed machine vision lighting for each inspection process

Lighting is critical because it enables the camera to see the necessary details. In fact, poor lighting is one of the major causes of failure of machine vision systems. For every application, there are common lighting goals (the first of which is made measurable in the sketch after this list):

  • Maximising feature contrast of the part or object to be inspected
  • Minimising contrast on features not of interest
  • Removing distractions and variations to achieve consistency
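
One simple way to make the first of these goals measurable when comparing candidate lights is to compute a contrast metric between a feature patch and a background patch, as in the sketch below; the patch positions and grey levels are illustrative assumptions:

```python
import numpy as np

def michelson_contrast(feature: np.ndarray, background: np.ndarray) -> float:
    a, b = float(feature.mean()), float(background.mean())
    hi, lo = max(a, b), min(a, b)
    return (hi - lo) / (hi + lo + 1e-9)

image = np.full((100, 100), 40, dtype=np.uint8)  # dark background
image[40:60, 40:60] = 180                        # bright feature patch
score = michelson_contrast(image[40:60, 40:60], image[0:20, 0:20])
print(f"contrast = {score:.2f}")  # re-measure for each candidate light set-up
```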

In this respect, the positioning and type of lighting are key to maximising the contrast of the features being inspected and minimising everything else. The positioning matrix of machine vision lighting is shown below.

So it’s important to select the machine vision light according to its position relative to the part being inspected.

Machine vision lights are used in varying combinations and are key to achieving the optimal lighting solution. However, we also need to consider the immediate inspection environment. The choice of effective lighting solutions can be compromised by access to the part or object to be inspected, and ambient lighting such as factory lights or sunlight can have a significant impact on the quality and reliability of inspection; this must be factored into the final solution.

Finally, the interaction between the lighting and the object to be inspected must be considered. The object’s shape, composition, geometry, reflectivity, topography and colour will all help determine how light is reflected to the camera and the subsequent impact on image acquisition, processing, and measurement.

Due to the obvious complexities, there is often no substitute for testing various techniques and solutions. It is imperative to get the best machine vision lighting solution in place to improve the yield, robustness and long-term effectiveness of the automated inspection solution.

How to use semi-automatic inspection of pharmaceutical vials and syringes for regulatory compliance.

Semi-automatic (or semi-automated) inspection machines for the pharmaceutical industry are intended for easy and effective examination of vials, ampoules, cartridges and syringes carrying injectable liquids, powders or freeze-dried products. This is the step up from a manual inspection approach, which would typically be performed by a trained inspector rotating the vial in front of a black and white background to analyse for certain defects, particles and particulates. The semi-automatic approach allows higher volumes of pharmaceutical products to be inspected at greater speeds.

In simple terms, “semi-automatic” means the ability to rotate the product and present it, at the correct lighting lux level, against a black and white background so that an operator can visually identify defects in the container or contents. This typically takes the form of a stand-alone semi-automated machine, or part of an in-feed or in-line production system. It shouldn’t be confused with fully automated visual inspection systems.

The key documents relating to the inspection of pharmaceutical vials and ampoules for contamination are driven by the United States Pharmacopeia (USP). Chapter <1> Injections and Implanted Drug Products (Parenterals) – Product Quality Tests states that injectable drug preparations should be designed to exclude particulate matter as defined in USP Chapters <787> Subvisible Particulate Matter in Therapeutic Protein Injections, <788> Particulate Matter in Injections, and <789> Particulate Matter in Ophthalmic Solutions. Each final container should be inspected for particulate matter, as defined in Chapter <790> Visible Particulates in Injections. Containers that show the presence of visible particulates must be rejected.

Particulates can take all forms: contamination and debris particulates, coating fragments, air bubbles or inconsistencies, visual defects in primary containers, and polymer particles, through to glass and metallic particles and microscopic cracks or inclusions in the glass or polymer. All these defects need to be identified and rejected.

Every product needs to undergo a visual inspection to determine whether or not it contains any particulate matter. The initial 100% check can be automated, performed manually, or partially automated. The subsequent inspection to ensure that the acceptable quality level (AQL) has been met must be carried out manually. The presumed value for the lower limit of the visible range is 100µm; however, this value can shift based on the product container, the nature of the drug product, and the qualities of the particulate matter (shape, colour, refractive index etc.).
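
A quick back-of-envelope check shows what that 100µm limit implies for camera selection in automated or semi-automated set-ups; the field of view and sensor resolution below are illustrative assumptions:

```python
# Can a given camera resolve a 100 um particle across a chosen field of view?
fov_mm = 40.0        # field of view across the container, mm (assumed)
sensor_px = 4096     # horizontal pixels of the camera (assumed)
particle_um = 100.0  # lower limit of the visible range

um_per_px = fov_mm * 1000.0 / sensor_px
px_on_particle = particle_um / um_per_px
print(f"{um_per_px:.1f} um/pixel -> {px_on_particle:.1f} pixels per particle")
# ~9.8 um/pixel puts ~10 pixels on a 100 um particle; a common rule of
# thumb is to want at least 3-5 pixels across the smallest defect.
```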

For semi-automated visual inspection, the sample in its primary container (vial, prefilled syringe or glass cartridge) is rotated under standardised conditions, making particulates roughly above 50-100µm visible. The liquid in the spinning, and subsequently stopped, container can then be viewed to confirm compliance with quality levels. This is the way to reach regulatory compliance by applying semi-automatic inspection.

Although the regulation of particulate matter in finished pharmaceutical products is as detailed above, there is no regulatory guidance on the limits of particulate matter in primary packaging components. Instead, the specifications are determined through collaboration between customers and suppliers. Pharmaceutical manufacturers and packaging suppliers must work together to lessen the amount of particulate matter in finished drug products. In particular, this can be accomplished by using components that have minimal levels of loose, embedded and adhered particulates. But no matter what contingencies are put in place, the production process should include a level of semi-automatic inspection so that rogue particulates can be rejected from the final product.

Human inspectors are adaptable and can respond to something they have never seen before or something that “doesn’t look right”. They can also more easily tolerate typical variations in containers, particularly those generated by moulding, which reduces the frequency of wrongly rejected good products. This compares to fully automated visual inspection, which has rigid boundaries for pass and fail. However, people are restricted in the number of inspections they can perform (i.e. the number of containers per minute or hour that they can inspect). Semi-automatic inspection is therefore the solution that allows a high volume of product to be inspected at speed by an operator while still complying with the relevant regulatory levels.

Operators are also prone to weariness and require numerous pauses in order to work at a high level for extended periods. All of these constraints contribute to a greater degree of variation in the outcomes of manual inspection; this variation can be reduced by semi-automated inspection, combined with sufficient training and adherence to standard operating procedures.

Manufacturers should conduct the 100% inspection at the stage where there is the greatest likelihood that visible particulates will be detected in the final container (for example, before labelling, to maximise container clarity). Manufacturers should design both the equipment and the physical environment in which the visual inspection is performed to reduce the amount of unpredictability in the inspection process while increasing the amount of information that can be gleaned from it.

After the 100% semi-automated inspection is completed, a manual inspection based on ISO 2859-1 / ANSI/ASQ Z1.4 should be performed. The size of the AQL sample depends on the batch size; Inspection Level II should be used.

Overall, Semi-automatic inspection of pharmaceutical products provides a route to regulatory compliance prior to the adoption of fully automated visual quality control.

Mixed Reality with Machine Vision, and what it means for automated visual inspection in production

We are about to see a huge crossover in the application of Mixed Reality combined with automated machine vision. While they are two separate and distinct application areas in their own right, we are now at a point where the two disciplines can be combined to aid automation and increase productivity in manufacturing.

What is Mixed Reality?

Mixed Reality combines the physical and digital realms. These two realities represent the opposite ends of a spectrum known as the virtuality continuum, often called the mixed reality spectrum. On one end of the spectrum is the physical reality of our existence as humans; on the opposite end is digital reality. In layman’s terms, it’s the ability to mix what you perceive and see in front of you with a digital map or projection in your line of sight: the ability of a system to display digital objects as if they existed in the real world. You can imagine the heads-up displays (HUDs) of fighter jets, which allowed pilots to see a projection in front of them, as the precursor to current mixed reality. The difference is that mixed reality is now carried with the user, providing a real-time and immersive experience. Mixed reality experiences are the result of serious technical advances over the last few years.

What is Mixed Reality in production?

Let’s drill down on the hardware. One of the key players in this market is the Microsoft HoloLens 2 headset. This mixed reality unit is designed to operate untethered (no cables) from any controller. It’s a self-contained holographic device which allows industrial applications to be loaded directly onto the unit, allowing for complex mixed reality industrial operations. It has a see-through display, so the user can see the physical environment while wearing the headset (unlike virtual reality).

An operator, technician or engineer will typically put on the unit at the start of a shift. It places a screen in front of the line of sight, allowing instructions, graphics, prompts and signs to be projected directly onto the engineer’s vision for a mixed reality industrial environment. The unit also has multiple cameras for eye tracking and can broadcast what the user sees to a remote location. All these aspects combine to give a mixed and augmented reality experience.

How can Mixed Reality be applied in industry?

Mixed reality can be applied in several different ways. Let’s drill down on some of the application areas it could be used for.

– Task Guidance. Imagine you have a machine on the other side of the world and this machine has a fault. Your engineers can assist remote workers by walking them through issues as if standing in front of the device themselves.

– Remote Inspection. The ability to inspect a remote area for maintenance, damage or repair requirements. It reduces the cost of quality assurance and allows experts to assess issues from a distance immediately.

– Assembly Assistance. Assisting workers by walking them through complex tasks using animated prompts and step-by-step guides. Projecting drawings, schematics and 3D renders of how the assembly should look helps facilitate mixed reality use in manufacturing.

– Training. Industrial training courses can be integrated directly onto the mixed reality headset. This is especially important when manufacturers rely on transitory contract workers who need training repeatedly.

– Safety. Improving health and safety training and situational awareness. Projecting prompts and holograms relating to safety in the workplace guides workers in factory environments.

Mixed Reality assembly and machine vision.

Machine vision in industrial manufacturing is primarily concerned with quality control, i.e. verifying a product through automated visual identification that a component, product or sub-assembly is correct. This can be related to measurement, presence of an object, reading a code or verifying print. Combining augmented mixed reality with the automated machine vision operation provides a platform to boost productivity.

Consider an operator or assembly worker sitting at a workbench wearing tech such as the HoloLens, assembling a complex unit with many parts. They can see the physical items around them, such as components and assemblies, but they can also interact with digital content, such as a shared document that updates to the cloud in real-time, or instruction animations for the assembly. That, in essence, is the promise of mixed reality.

The mixed reality unit provides animated prompts, 3D projections and instructions to walk the operator step-by-step through the build. With this, a machine vision camera is mounted above the operator, looking down on the scene. As each sequence is run and a part assembled, the vision system will automatically inspect the product. Pass and fail criteria can be automatically projected into the line of sight in the mixed reality environment, allowing the operator to continue the build knowing the part has been inspected. In the case of a rejected part, the operator receives a new sequence “beamed” into their projection, with instructions on how to proceed with the failed assembly and where to place it. Now the mixed reality is part of the normal production process when combined with machine vision inspection. Data, statistics and critical quality information from the machine vision system are all provided in real-time in front of the operator’s field of view.

What’s the future of Mixed Reality and Machine Vision?

Imagine the engineering director walking the production line, now wearing a mixed reality unit, where each machine vision system has a QR code “anchor” on the outside of the machine. By looking in that direction, the engineering director has all the statistics of the vision system operation beamed in front of them: yield, parts inspected, reasons for failure, real-time data and even an image of the last part that failed through the system. Data combined with graphical elements allows better control of yield and ultimately the factory’s productivity. Machine vision systems will have integrated solutions for communicating directly with wearable technology. Coupled with this, some integrated vision system solutions will have specific needs for operators to direct image capture for artificial intelligence image gathering.

And perhaps ultimately, such a mixed reality unit becomes part of the standard supply of a machine vision inspection machine to help with not just the running of the machine but to facilitate faster back-up, maintenance and training of such machine vision systems.

How robotic vision systems and industrial automation can solve your manufacturing problems

In this post we’re going to drill down on robotic vision systems and how, in the context of industrial automation, they can help you boost productivity, increase yield and decrease quality concerns in your manufacturing process.

The question of how to increase productivity in manufacturing is an ongoing debate, especially in the UK, where productivity has been dampened over recent years compared with other industrialised nations. The key to unlocking and kickstarting productivity in manufacturing (output per hour worked) is the increased use of automation: producing more with less labour overhead. This comes in the form of general industrial automation and robotic vision systems; by combining these two disciplines, manufacturers can boost productivity and become less reliant on transient labour and a temporary workforce that requires continuous training and replacement.

So where do you start? Well, a thorough review of the current manufacturing process is a good place to begin. Where are the bottlenecks in flow? What tasks are labour-intensive? Where do processes slow down and start up again? Are there processes where temporary labour might not build to your required quality level? What processes require a lot of training? Which stations could easily lend themselves to automating?

When manufacturers deploy robot production cells, there can be a worry about losing the human element in the manufacturing process. How will a defect be spotted when no human is involved in production? We've always had the quality supervisor and his team checking quality as they build it, so who's going to do it now? The robot will never see what they saw. And they kept a tally chart of the failures, so we're going to lose all our data!

All these questions go through the minds of the quality director and the engineering manager. The answer is to integrate vision systems at the same time as deploying the robotic automation, either as robotic vision or as a separate stand-alone vision inspection process. The vision system verifies that each process step has been completed and becomes the eyes of the robot. Quality concerns relating to build level, presence of components, correct fitment of components and measured tolerances can all be addressed.

For example, a medical device company may automate the packing of syringe products. Perhaps the syringes are pre-filled, labelled and placed into trays, all with six-axis robots and automation. A machine vision system could then check that the pre-fill level is correct and that all components of the plastic device are fitted in the correct position, with a final automated metrology check to confirm the device measures to tolerance. And remember the quality supervisor's tally chart of failures? With modern vision systems all that data is stored automatically: batch information, failure rates, yield, and images of every product before it went out the door, for traceability and warranty protection.

So how can robots replace operators? It normally comes down to moving some of the simpler tasks over to robots, including:

  • Robot Inspection Cells – robotics and vision combine to provide fully automated inspection of raw parts, sub-assemblies or finished assemblies, replacing tedious manual inspection of the task.
  • Robot Pick and Place – the ability for the robot to pick from a conveyor, at speed, replacing the human element in the process.
  • Automated Bin Picking – by combining 3D vision with robot control feedback, a robot can autonomously pick from a random bin so that parts are presented to the next process, a machine tool is loaded, or parts are handed to a human operator to keep the line to its takt time.
  • Robot Construction and Assembly – a task robots have performed for many years, but the advent of vision systems allows quality inspection to be combined with the assembly process.
  • Industrial Automation – using vision systems to poka-yoke (mistake-proof) processes and deliver zero defects forward in the manufacturing process, critical in a kanban, Just-in-Time (JIT) environment.
  • Robot and Vision Packing – picking and packing into boxes, containers and pallets using a combination of robots and vision for precise feedback, reducing the human element in packing.

So, maybe you can deploy a robotic vision system and industrial automation to help solve your manufacturing problems. It requires capital investment up front, but the return is streamlined production, increased productivity and reduced reliance on untrained or casual workers in the manufacturing process.

3 simple steps to improve your machine vision system

What's the secret to a perfect vision system installation? That's what every industrial automation and process control engineer wants to know. The engineering and quality managers have identified a critical process requiring automated inspection, so how do you guarantee that the machine vision system is set up to the best possible engineering standard? And how can you apply this knowledge to refine the vision systems already on the line? Let's look at the critical parts of the vision inspection chain and see which elements are vital and what needs to be addressed. IVS has been commissioning machine vision systems for over two decades, so we understand the core elements of vision system installs inside and out. Use some of our tricks of the trade, in the 3 simple steps below, to make immediate improvements to your vision system.

Step 1 - Lighting and filters
What wavelength of light are you using, and how does it compare to the sensitivity of the camera and the colours of the part being inspected? An intelligent approach to segmenting the scene starts with the correct lighting set-up: the angle of the machine vision lighting and how contrast is created by the colour of the part either reflecting light towards the camera or away from it. Coloured illumination can intensify or suppress particular colours in monochrome imaging. Used this way, contrast highlights the relevant features, which is decisive for an application-specific, optimally matched vision solution. For example, blue light cast onto a multi-colour surface will only be reflected by the blue content: the more blue in the object, the more light is reflected and the brighter the object appears, while red content illuminated in blue appears exceptionally dark. The numeric sketch below illustrates the effect.
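To make the effect concrete, here is a small numpy sketch (not from any particular system) that models a monochrome camera's response as surface reflectance weighted by the illumination colour. The per-channel reflectance values for the two patches are illustrative.

```python
# A small numpy sketch of why illumination colour drives monochrome contrast.
# The mono pixel value is modelled as surface reflectance weighted by the
# illumination spectrum, collapsed to three RGB bands for simplicity.
import numpy as np

# Per-channel reflectance [R, G, B] for two illustrative surface patches.
blue_patch = np.array([0.1, 0.2, 0.9])
red_patch  = np.array([0.9, 0.1, 0.1])

blue_light  = np.array([0.0, 0.0, 1.0])  # blue LED illumination
white_light = np.array([1.0, 1.0, 1.0])  # broadband illumination

def mono_intensity(reflectance, illumination):
    """Monochrome camera response: reflected energy summed across bands."""
    return float(np.dot(reflectance, illumination))

# Under blue light the blue patch is bright and the red patch nearly black,
# so the two segment easily; under white light they are almost identical.
print(mono_intensity(blue_patch, blue_light), mono_intensity(red_patch, blue_light))    # 0.9 vs 0.1
print(mono_intensity(blue_patch, white_light), mono_intensity(red_patch, white_light))  # 1.2 vs 1.1
```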

In combination, a filter mounted on the front of the lens blocks unwanted ambient light and passes only the wavelength of light required, increasing contrast and resolution. The choice of filter depends on the result you are trying to achieve, but if ambient light is a problem, selecting an appropriate filter may well improve your current system's image quality.

Bandpass filters are frequently one of the simplest methods to significantly improve image quality. They have a Gaussian transmission curve and are ideal for investigating the effects of monochromatic imaging. But what are the different filters available?

Multi Bandpass Filters
Multi-Bandpass Filters use a single filter to transmit two or more distinct wavelength bands. They provide natural colour reproduction during the day and near-infrared illumination at night, resulting in accurate, high-contrast pictures. They are well suited to applications such as intelligent traffic systems, security monitoring and agricultural inspection.

Longpass Filters
Longpass Filters enable a smooth transition from reflection to transmission and are available in various wavelengths based on the system’s requirements. Longpass filters are an efficient method for blocking excitation light in fluorescent applications and are best employed in a controlled environment where several wavelengths must be passed.

Shortpass/NIR Cut Filters
Shortpass filters offer a sharp transition from transmission to reflection and superb contrast. They work best in colour imaging to ensure natural colour reproduction and prevent infrared saturation.

Polarising Filters
Polarising filters minimise reflection, increase contrast and reveal flaws in transparent materials. They're ideally suited to tasks involving a lot of reflection, such as analysing items on glossy or very shiny surfaces.

Neutral Density Filters
Neutral Density Filters are known in the machine vision industry as “sunglasses for your system” because they diminish light saturation. Absorptive and reflective styles are available and are ideally suited for situations with excessive light. They offer excellent methods for controlling lens aperture and improving field depth.

So how do you match the filter to the light? The LED's peak wavelength in nanometres (nm) typically corresponds to the bandpass rating of the filter. So if you work in UV at 365 or 395nm, you would use an equivalently rated filter, such as a BP365. If you're using a green LED at 520nm, this would be a 520 bandpass filter, and so forth. Most filters come with a transmission curve chart showing where light is cut off relative to the filter rating. A simple lookup along these lines is sketched below.
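As an illustration, a filter-selection helper might look like the sketch below. The wavelength-to-part-number table is an assumption based on the BP naming convention above, not a supplier catalogue, so always verify against the filter's actual transmission curve.

```python
# A hedged sketch of matching an LED to a bandpass filter by peak wavelength.
# The part numbers follow the BP<nm> convention described above; the table is
# illustrative, not a real catalogue.
LED_TO_BANDPASS = {
    365: "BP365",  # UV
    395: "BP395",  # UV
    470: "BP470",  # blue
    520: "BP520",  # green
    660: "BP660",  # red
    850: "BP850",  # near-infrared
}

def filter_for_led(peak_nm, tolerance_nm=15):
    """Return the bandpass filter whose centre is closest to the LED's peak wavelength."""
    centre = min(LED_TO_BANDPASS, key=lambda nm: abs(nm - peak_nm))
    if abs(centre - peak_nm) > tolerance_nm:
        raise ValueError(f"No bandpass filter within {tolerance_nm} nm of {peak_nm} nm")
    return LED_TO_BANDPASS[centre]

print(filter_for_led(520))  # -> BP520
```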

Step 2 - Camera resolution and fields of view (FOV)
A vision system's spatial resolution refers to the sensor's active available pixels. When looking to improve your set-up and understand where a fault may lie, it's important to consider how many pixels are available for image processing relative to the size of the defect or measurement you are trying to resolve. For example, suppose you are trying to detect a 100 micron (0.1mm) x 100 micron (0.1mm) inclusion on the surface of a product, but the surface is 4m x 4m. Using a standard 5MP camera (say 2456 x 2054 pixels), each pixel covers at best 4000mm/2054 ≈ 1.9mm. This is nowhere near the level of detail the imaging system needs. So, to improve your machine vision system, establish the size of defect you need to see and work out your field of view (FOV) relative to the pixels available. Bear in mind that a single pixel across your measurement is not good enough, as the system will fail Measurement Systems Analysis (MSA) and Gauge R&R studies (the tests used to determine the accuracy and repeatability of measurements).

In this case, you need a minimum of 20 pixels across the measurement tolerance: for a 0.2mm tolerance, 0.2/20 = 0.01mm, so you will need 10 microns per pixel for accurate, repeatable measurement. Over 4m, that is 400,000 pixels. The 5MP camera originally specified will therefore never inspect this in one shot: either the camera or the object will have to be moved across smaller fields of view, or higher-resolution cameras specified, for the system to meet the inspection criteria. The short sketch below runs through the arithmetic.
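Here's a short Python sketch of that arithmetic, so you can plug in your own field of view and tolerance. The 20-pixels-across-tolerance rule is the one used in the example above; your own gauge R&R requirements may demand more.

```python
# A short sketch of the Step 2 arithmetic: how many pixels does a given
# field of view and measurement tolerance actually demand?
def required_pixels(fov_mm, tolerance_mm, pixels_across_tolerance=20):
    """Pixels needed along one axis so the tolerance spans enough pixels for gauge R&R."""
    mm_per_pixel = tolerance_mm / pixels_across_tolerance
    return fov_mm / mm_per_pixel, mm_per_pixel

# Worked example from the text: 4m field of view, 0.2mm tolerance.
pixels, mm_per_pixel = required_pixels(fov_mm=4000, tolerance_mm=0.2)
print(f"{mm_per_pixel * 1000:.0f} microns/pixel -> {pixels:,.0f} pixels across the field")
# 10 microns/pixel -> 400,000 pixels: far beyond a single 5MP sensor
# (2456 x 2054), which only resolves ~4000/2054 = 1.9mm per pixel here.
```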

Step 3 - Vision algorithm optimisation
One of the major steps in improving a machine vision system is a thorough review of the vision system software and the underlying inspection sequence of the image processing system. If the system is a simple, intelligent vision sensor, it might just be a matter of reviewing how the logical flow of the system has been set. For more complex PC-based vision applications, there are some fundamental areas which can be reviewed and checked.

1. Thresholding. Thresholding is a simple look-up operation that maps pixels to binary 1 or 0 depending on whether they sit above, below or between two levels. The resulting image is pure black and white and is used to segment an area from the background or from another object. Dynamic thresholding is also available, which adaptively sets the threshold level based on the local pixel value distribution. But because thresholding uses the raw pixel grey level to determine the segmentation, it is easily affected by changes in light, which shift the pixel values and force changes to the threshold level. This is one of the most common faults in a system that has not been set up with a controlled lighting level. So if you find yourself adjusting a threshold level, also check the lighting and filters used on the system, as covered in Step 1. A minimal sketch of fixed versus dynamic thresholding follows.
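For PC-based systems built on a library such as OpenCV, the two approaches might look like the sketch below. The filename, threshold level and neighbourhood size are illustrative and would need tuning to your parts and lighting.

```python
# A minimal OpenCV sketch of fixed vs dynamic (adaptive) thresholding,
# assuming a greyscale inspection image on disk.
import cv2

gray = cv2.imread("part.png", cv2.IMREAD_GRAYSCALE)

# Fixed threshold: every pixel above 128 goes to white, the rest to black.
# Sensitive to lighting drift, since the cut-off is an absolute grey level.
_, fixed = cv2.threshold(gray, 128, 255, cv2.THRESH_BINARY)

# Dynamic threshold: the cut-off is computed per neighbourhood from the
# local pixel distribution, so it tolerates gradual lighting changes.
adaptive = cv2.adaptiveThreshold(
    gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY, 51, 5
)

cv2.imwrite("fixed.png", fixed)
cv2.imwrite("adaptive.png", adaptive)
```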

2. Calibration. Perhaps not calibration in the true sense of the word, but if the camera on your vision system has been knocked out of place (even slightly), it may have a detrimental effect on the inspection outcome and the yield of the vision system, and re-calibration may be needed. This can be checked first: most modern systems have a cross-hair type target built into the software, allowing a comparison image from the original install to be compared with a live image. Check the camera is in line and the field of view is still the same; focus and light should also be consistent. Some measurement-based machine vision systems ship with a calibration piece or a dot-and-square calibration target, generally supplied with a NIST traceability certificate so you know the artefact is traceable to national standards. This calibration target should be placed in the field of view and the system checked against it, along the lines of the sketch below.
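If your target is a dot grid, a hedged sketch of a scale check using OpenCV's circle-grid detector might look like this. The grid size and dot spacing are assumptions and must match the certificate supplied with your own target.

```python
# A hedged sketch of re-checking scale against a dot calibration target
# using OpenCV's circle-grid detector; grid size and spacing are assumed.
import cv2
import numpy as np

GRID = (7, 7)          # dots per row/column on the hypothetical target
DOT_SPACING_MM = 5.0   # certified centre-to-centre spacing (assumed)

img = cv2.imread("calibration_shot.png", cv2.IMREAD_GRAYSCALE)
found, centres = cv2.findCirclesGrid(img, GRID, flags=cv2.CALIB_CB_SYMMETRIC_GRID)

if found:
    # Mean pixel distance between adjacent dots gives the live scale.
    pts = centres.reshape(GRID[1], GRID[0], 2)
    px_spacing = np.mean(np.linalg.norm(np.diff(pts, axis=1), axis=2))
    print(f"Current scale: {DOT_SPACING_MM / px_spacing:.4f} mm/pixel")
    # Compare against the value recorded at install; drift suggests the
    # camera has been knocked and the system needs re-calibrating.
else:
    print("Target not found - check focus, lighting and camera alignment")
```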

So these 3 simple steps to improve your machine vision system may just help you increase your vision system's yield and stop the engineering manager breathing down your neck, ultimately enhancing production productivity and advancing the systems in use. And remember, you should always look at ways to reduce false failures in a vision system; this is another way to boost performance and improve yield.