Why industrial vision systems are replacing operators in factory automation (and what they aren’t doing!)

This month, we’re following on from last month’s post.
We’ve all seen films showing the dystopian factory landscape a few hundred years in the future: flying cars against a grey backdrop of industrial scenery, robots serving customers at a fictitious future bar – and if a production line were shown, it would no doubt be staffed entirely by automatons and robots. This month, we drill down into the role industrial vision systems actually play on the real factory production line. Will every production process eventually be replaced by automated assembly coupled with real-time quality inspection, with intelligent robots handling every step?

Let’s start with what’s happening today. Machine vision has certainly developed rapidly over the last thirty years. We’ve gone from slow scientific image processing on a 386 computer (look it up!) to AI deep learning running at hundreds of frames per second, allowing real-time computation and assessment of an image. But not every process has been (or can be) replaced with industrial vision systems – so what’s stopping it?

Well, the main barrier is replicating human dexterity – the ability to manoeuvre a product in ways that are delicate compared with an industrial automated process. In tandem with this is a human’s ability to switch immediately to a different product mix, size and type. You could have an operator assemble a simple bearing in a matter of seconds, inspect it, and then ask them to build a completely different product next, such as a small medical device. With some training and show-how, the operator adapts immediately. A robot would struggle. That’s not to say it couldn’t be done, but with today’s technology it takes effort. This is the goal of Industry 5.0, the next generation beyond Industry 4.0 flexible manufacturing. Industry 5.0 is a human-centric approach: industry recognising that humans working in tandem with robotics, vision and automation is the best way forward.

So we need to look at industrial vision in the context of not just whether the image processing can be completed (be it with traditional algorithms or AI deep learning), but also how we can present the product meaningfully, at the correct angles, to assess it. Usually, these vision inspection checks are part and parcel of an assembly operation, which requires specific handling and movement (which a human is good at). This is why most adoption of machine vision happens on high-throughput, low-variation manufacturing lines – such as large-scale medical device production, where the same validated product will be manufactured in the same way over many years. This makes sense – the payback is there, and automation can be applied for assembly and inspection in one.

But what are the drivers for replacing people on production lines with automated vision inspection?
If the task can be done with automation and vision inspection, it makes sense to do it. Vision inspection is highly accurate, works at speed and is easy to maintain. Then there are the comparisons with a human operator. Vision systems don’t take breaks, don’t get tired, and robots don’t go to parties (I’ve never seen a robot at a party) – so they don’t start work in the morning with their minds elsewhere! It therefore makes sense to move towards automated machine vision inspection wherever possible in a production process, and this points to huge growth in the adoption of industrial vision systems in the coming years.

If the job is highly complex in terms of intricate build, with many parts and variants, then robots, automation and vision systems are not so easy to deploy. However, with the promise of Industry 5.0, we have the template of the future – moving towards an appreciation of including the human operator in factory automation – combining humans with robots, automation, augmented reality, AI and automated inspection. So, the dystopian future might not be as bad as the filmmakers make out, with human operators still being an integral part of the production process.

Catalysts of change: How robots, machine vision and AI are changing the automotive manufacturing landscape

This month, we are discussing the technological and cultural shift in the automotive manufacturing environment caused by the advent of robotics and machine vision, coupled with the development of AI vision systems. We also drill down on the drivers of change and how augmented reality will shape the future of manufacturing technology for automotive production.

Robots in Manufacturing
Robots in manufacturing have been used for over sixty years, with the first commercial robots deployed in mass production in the 1960s – these were leviathans with large, unwieldy pneumatic arms, but they paved the way for what was to come in the 1970s. At that time, it was estimated that the USA had 200 such robots [1] in manufacturing; by 1980, this was 4,000 – and now there are estimated to be more than 3 million robots in operation [2]. During this time, the machine vision industry has grown to provide the “eyes” for the robot. Machine vision uses camera sensor technology to capture images from the environment for analysis – to confirm location (for robot feedback), to assess quality (from presence verification through to gauging checks) or simply to capture photos for warranty protection.

The automotive industry is a large user of mass-production robots and vision systems. This is primarily due to the overall size of the end unit (i.e. a built car), along with the need for vision systems to confirm quality, since the acceptable parts-per-million failure rate is extremely low. Robots allow repetitive and precise assembly tasks to be completed accurately every time, reducing the need for manual labour and speeding up manufacturing.

Automating with industrial robots is one of the most effective ways to reduce automotive manufacturing expenses. Factory robots help reduce labour, material, and utility expenses. Robotic automation reduces human involvement in manufacturing, lowering wages, benefits, and worker injury claims.

AI Machine Vision Systems
Deep learning in the context of industrial machine vision teaches robots and machines to do what comes naturally to humans, i.e. to learn by example. New multi-layered “bio-inspired” deep neural networks allow the latest machine vision solutions to mimic human brain activity in learning a task, allowing vision systems to recognise images, perceive trends and understand subtle changes in images that represent defects. [3]

Machine vision performs well at quantitatively measuring a highly structured scene with a consistent camera resolution, optics and lighting. Deep learning can handle defect variations that require an understanding of the tolerable deviations from the control medium, for example, where there are changes in texture, lighting, shading or distortion in the image. Deep-learning vision systems can be used in surface inspection, object recognition, component detection and part identification. AI deep learning helps in situations where traditional machine vision may struggle, such as parts with varying size, shape, contrast and brightness due to production and process constraints.
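The “learning by example” idea can be illustrated with a deliberately tiny sketch: a nearest-neighbour classifier over two hand-picked image features. Real AI vision systems train deep neural networks on thousands of labelled images; the feature names and values below are invented purely for illustration of the principle.

```python
from math import dist

def train(examples):
    """'Training' here is just memorising labelled feature vectors.
    Real deep-learning systems fit millions of weights instead."""
    return list(examples)

def classify(model, features):
    """Label a part by its nearest labelled example (1-NN)."""
    label, _ = min(((lbl, dist(vec, features)) for lbl, vec in model),
                   key=lambda pair: pair[1])
    return label

# Hypothetical features per part: (mean brightness, edge density).
labelled = [("good", (0.82, 0.10)), ("good", (0.80, 0.12)),
            ("scratch", (0.55, 0.40)), ("scratch", (0.50, 0.45))]
model = train(labelled)
print(classify(model, (0.79, 0.11)))  # → good
```

The point of the toy is that no rule for “scratch” is ever written down – the decision comes entirely from labelled examples, which is what lets learned systems tolerate the variation that breaks hand-tuned rules.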

Augmented Reality in Production Environments
In industrial manufacturing, machine vision is primarily concerned with quality control: the automatic visual inspection of a component, product or subassembly to ensure that it is correct. This can mean measurement, checking the presence of an object, reading a code or verifying a print. Combining augmented and mixed reality with automated machine vision operations creates a platform for increased efficiency. There is a shift towards utilising AI machine vision systems (where applicable!) to improve the reliability of some quality control checks, and then combining that assessment with the operator.

Consider an operator or assembly worker sitting at a workstation wearing an augmented reality headset such as the HoloLens or Apple Vision Pro. The operator could be putting together a sophisticated unit with several pieces. They can see the tangible objects around them, including components and assemblies, while still interacting with digital content, such as assembly instructions or a shared document that updates in real time to the cloud. That is essentially the promise of mixed reality.

The mixed-reality device uses animated prompts, 3D projections, and instructions to guide the operator through the process. A machine vision camera is situated above the operator, providing a view of the scene below. As each step is completed and a part is assembled, the vision system automatically inspects the product. Pass and fail criteria can be projected automatically into the operator’s field of sight in the mixed-reality environment, allowing them to continue building while knowing the part has been inspected. In the event of a rejected part, the operator receives a new sequence “beamed” into their projection, containing instructions on how to proceed with the failed assembly and where to place it. When integrated with machine vision inspection, mixed reality becomes a standard part of the production process, with data, statistics and important quality information from the machine vision system shown in real time in the operator’s field of view.

Robots, AI Vision Systems and Augmented Reality
Let’s fast forward a few years and see what the future looks like. Well, it’s a combination of all these technologies as they continue to develop and mature. AI vision systems mounted on cobots help with manual assembly, while an augmented reality headset directs and guides the process, even for an unskilled worker. Tasks are quality-assessed and confirmed as they happen, while data is stored for traceability and warranty protection.

In direct automotive manufacturing, vision systems will become easier to use as large AI deep-learning datasets become more available for specific quality control tasks. These datasets will continue to evolve as more automotive manufacturers and tier-one and tier-two suppliers use AI vision to monitor their production, allowing quicker deployment and high-accuracy quality control assessment across the factory floor.


References

Why you should transition your quality inspection from shadowgraphs to automated vision inspection

The shadowgraph used to be the preferred method – perhaps the only method – for the quality control department to check adherence to measurement data. These projectors typically have a plate bed on which parts are laid, with a vertical light path. This projects a silhouette and profile of the part for the operator: an exact representation of the part in black, but at a much higher magnification. These projectors are best for flat, flexible parts – washers, O-rings, gaskets – and parts that are too small to hold, such as very small shafts and screws. They are also used in orthopaedic manufacturing to view the profiles and edges of joint parts.

So, this old-fashioned method involves projecting a silhouette of the edge of the product or component. The projection is then overlaid with a clear (black-lined) drawing of the product showing the areas the component shouldn’t stray into. The operator chooses the overlay from a set of available overlays, based on the part number, lays it over the projection and manually lines it up, with the tolerances shown as a thick band in key areas for quality checking. For example, if you’re an orthopaedic joint manufacturer coating your product with an additive material, you might use a shadowgraph to check the edges of the new layer against a magnified representation of the component.

Some optical comparators and shadowgraphs have developed to the point where they are digital and combined with a modern graphical user interface (GUI), making overlays easier to check and allowing manual measurements to be taken.

Ultimately, though, all these manual shadowgraphs take time to use, are labour-intensive and rely entirely on an operator to make the final decision on what is a pass and what is a fail. You also have no record of what the projection looked like. This all leads to a loss of productivity and makes your Six Sigma quality levels dependent on an operator’s judgement – and, as we all know, manual inspection (even 200% manual inspection) is not a reliable way for manufacturers to keep their quality levels in check. Operators get tired, and there is a lack of consistency between different people in the quality assessment team.

But for those products with critical-to-quality measurements, the old shadowgraph methodology can now be replaced with a fully automated metrology vision system to reduce the reliance on the operator in making decisions, coupled with the ability to save SPC data and information.

In this case, the integration of ultra-high-resolution vision systems with cutting-edge optics and collimated lighting presents a formidable alternative to manual shadowgraphs. Operators can load the product, press a button, and swiftly obtain automatic results confirming adherence to specified tolerances. Furthermore, all pertinent data can be recorded, with the option of serial number tracking, preserving individual product photos for future warranty claims. This transition lifts manufacturing processes towards Six Sigma quality levels. Moreover, these systems can be rigorously validated to GAMP/ISPE standards, giving customers assurance of consistently superior quality outcomes.

For your upcoming quality initiatives or product launches, consider pivoting towards automated vision system machines, transcending the limitations of antiquated shadowgraph technology and operator-dependent assessments. Embrace the superior approach that is now within reach, and steer your manufacturing quality towards a future defined by precision, efficiency, and reliability.

By embracing automated vision systems, manufacturers streamline processes and ensure unparalleled precision and consistency in product quality. This shift represents a pivotal leap forward in manufacturing excellence, empowering businesses to meet and exceed customer quality expectations while navigating the ever-evolving landscape of modern production.

How to create data-driven manufacturing using automated vision metrology systems

Applying automated vision metrology technologies for process control before, during and after assembly and machining/moulding is now a prerequisite in any production environment, especially in the orthopaedic and additive manufacturing industries. This month, we’re drilling down on the data and knowledge you can create for your manufacturing process when deploying vision metrology systems for production quality control. Let’s face it – we all want to know where we are versus the production schedule!

What are “vision metrology” systems and machines?

It’s the use of non-contact vision systems and machines with metrology-grade optics, lighting and software algorithms to increase the throughput of quality control inspection and collect data in real time for the production process.

What are the benefits of vision metrology systems?

Before we look at the data available on such systems and how they can be utilised to create data-driven production decisions, we should look at the benefits of automated vision metrology systems. This, combined with the data output, clearly focuses on why such systems should be deployed in today’s production environments.

Increased throughput. One of the main drivers and benefits of automated vision metrology systems is higher throughput compared to older contact and probing CMM approaches. Coupled with integration into autoloading and autotending options, automated vision metrology allows faster production and increased output through the factory door.

Better yields. As the systems are non-contact, there is no chance of damage or marking of the product, which is the risk with probing and contact inspection solutions. Checking quality through automated vision allows yields to improve across the production process.

Faster reaction to manufacturing issues. Data is king in the fast-moving production environment. Real-time defect detection – seeing spikes in quality concerns and identifying issues quicker – allows production managers to react faster to changes or faults in manufacturing. Vision metrology allows immediate review and analysis of statistical process control on a batch, shift and ongoing basis, and the live progress of work orders can be monitored as they happen.

Less downtime. With no contact points and fewer moving parts, maintenance downtime is minimal with automated vision metrology machines. Maintenance staff can be deployed to other tasks, increasing productivity across the manufacturing floor.

Guarantees your quality level. Ultimately, the automated vision metrology machine is a goalkeeper. It allows the quality and production teams to sleep easy, knowing that parts are not only being inspected but also photographed, with every product’s image saved through production for warranty protection.

How do we get data from an automated vision metrology quality control system?

This is where the use of such vision technology gets interesting. The central aspect is the quality control and automated checking of crucial measurements (usually critical-to-quality, CTQ, characteristics), but the data driving the quality decision is just as valuable to the manager. It covers a range of data objects needed when running a modern manufacturing plant: OEE data, SPC information, shift records, KPI metrics, continuous improvement information, root cause analysis and data visualisation.

OEE Data (Overall Equipment Effectiveness). The simplest way to calculate OEE is the ratio of fully productive time to planned production time. Fully productive time is just another way of saying manufacturing only good parts as quickly as possible (at the ideal cycle time) with no stop time. A complete OEE analysis multiplies availability (run time / planned production time), performance ((ideal cycle time × total count) / run time) and quality (good count / total count). The vision metrology data set allows this to be calculated easily, with the data stored on the automated vision metrology device or sent directly to the factory information system.

Statistical Process Control (SPC) is used in industrial manufacturing quality control to manage, monitor, and maintain production processes. The formal definition is “the use of statistical techniques to control a process or production method”. The idea is to make a process as efficient as possible while producing products within conformance specifications with as little scrap as possible. Vision metrology allows vital characteristics to be analysed at speed on 100% of products, compared to the slow, manually loaded CMM route. SPC data can be displayed on the vision system HMI, allowing immediate decisions on process performance.

Shift records are part of the validation and access control system for the vision system. Access control stops operators from using certain features, and all information is logged against a specific shift operator or team. Data analytics can reveal trends in operator handling and add to the data provided by the system: you may see a spike in quality concerns from a specific shift or operator, and these trends can easily be tracked and traced.

KPI metrics are Key Performance Indicators normally specific to the manufacturing site. However, these invariably include monitoring the quality statistics, OEE, process parameters, environmental factors (e.g. temperature and humidity), shift data, downtime and metrology measurements. This is all part of understanding how certain situations impact the output from processes and how levels can be maintained.

Continuous Improvement is the ongoing process of improving a manufacturing facility’s product and service quality. The data and images from an automated vision metrology machine play an important role here, providing live data for Six Sigma initiatives and real-time monitoring of the ppm (parts per million) failure rate within the facility.

Root cause analysis of manufacturing issues is challenging without tangible data from a quality inspection source. The ability to review product images at the point of inspection (with the date and time stamped information), coupled with the specific measurement data, allows easier and quicker root cause analysis when problems crop up.

Data visualisation is now built into modern automated vision metrology systems. The production manager and operators can see quality concerns, spikes in rejects, counts of good and bad parts, shift data and live SPC on a single screen, displayed locally or sent immediately to the factory information system.

By utilising the latest generation of automated vision metrology systems, manufacturers can now build a fully integrated, data-driven production environment, with easier control based on accurate, immediate information and photos of products running through production.

Machine vision vs computer vision vs image processing

We’re often asked: what’s the difference between machine vision and computer vision, and how do they both compare with image processing? It’s a valid question, as the seemingly similar fields of machine vision and computer vision both involve interpreting digital images to produce a result.

What is machine vision?
Machine vision has become a key technology in the area of quality control and inspection in manufacturing. This is due to increasing quality demands and the improvements it offers over and above human vision and manual operation.

Definition
It is the ability of a machine to automatically sense an object, capture an image of that object, and then process and analyse the information for decision-making.

In essence, machine vision enables manufacturing equipment to ‘see’ and ‘think’. It is an exciting field that combines technologies and processes to automate complex or mundane visual inspection tasks and precisely guide manufacturing activity. In an industrial context, it is often referred to as ‘industrial vision’ or ‘vision systems’. As the raw images in machine vision are generally captured in real time by a camera connected to the system (in contrast to computer vision), the disciplines of physics relating to optics, lighting and filtering are also part of the machine vision world.

So, how does this compare to computer vision and image processing?
There is a great deal of overlap between the various disciplines, but there are clear distinctions between the groups.

Input and output by discipline:

Image processing – Input: an image, processed using algorithms to correct, edit or enhance it. Output: an enhanced image is returned.
Computer vision – Input: images/video analysed using algorithms, often in uncontrollable/unpredictable circumstances. Output: image understanding, prediction and learning to inform actions such as segmentation, recognition and reconstruction.
Machine vision – Input: camera/video images analysed in industrial settings under more predictable circumstances. Output: image understanding and learning to inform manufacturing processes.

Image processing has its roots in neurobiology and scientific analysis. This was primarily down to the limitations of processor speeds when image processing came to the fore in the 1970s and 1980s – for example, processing single images captured from a microscope-mounted camera.

When processors became quicker and image processing algorithms became more adept at high-throughput analysis, image processing moved into the industrial environment, and industrial line control was added to the mix. For the first time, components, parts and assemblies on an automated assembly line could be quantified, checked and routed depending on the quality assessment – and so machine vision (the “eyes of the production line”) was born. Unsurprisingly, the first industries to adopt the technology en masse were the PCB and semiconductor manufacturers, as the enormous boom in electronics took place during the 1980s.

Computer vision crosses the two disciplines, creating a Venn diagram of overlap between the three areas, as shown below.

[Figure: Venn diagram showing the overlap between machine vision, computer vision and image processing]

As you can see, the work on artificial intelligence relating to deep learning has its roots in computer vision. Over the last few years, though, it has moved into machine vision, as image processing learnt from biological vision, coupled with cognitive vision, slowly enters general-purpose machine vision and vision systems. It’s rarely a two-way street: some machine vision research does move back into the computer vision and general image processing worlds, but given the more extensive reach of the computer vision industry compared to the more niche machine vision industry, new algorithms and developments are mostly driven by the broader computer vision side.

Machine vision, computer vision and image processing are now broadly intertwined, and the developments across each sector have a knock-on effect on the growth of new algorithms in each discipline.

4 common red flags in your product design that will cause you trouble during automated vision inspection

This month, we’re going to discuss the aspects of your product’s design which could affect the ability to apply automated visual inspection to the production process. This can be frustrating for engineering and quality managers: they’re asked to use machine vision to stop bad products going out of the door, but no attempt was made up the chain, at the product design stage, to design the product so that it is simple to manufacture or to apply vision inspection to the process. Once the design has been signed off and validated, it’s often extremely difficult to get even small changes instigated, and engineering is then tasked with the mass production and assembly of a product with in-line visual inspection required.

Very often in medical device manufacturing, due to the long lifetimes and cycles of products, the design was conceived months or even years before the product goes through small-batch production and then moves to mass production at high run rates – all with the underlying requirement of validation to GAMP and FDA production protocols. This is the point at which automated inspection becomes integral to quality and to safeguarding the consumer as part of the manufacturing process.

From our experience, we see the following red flags – all of which could have been addressed at the design phase – that make the vision system more difficult to design and run.

1. Variants. Often, there are too many variants of a product, with additional variants of sub-assemblies, meaning the number of final product variations can run to hundreds. While the vision system may be able to cope with this variation, it makes set-up more costly and ongoing maintenance more difficult. Sometimes these variations exist because multiple customers down the line want subtle changes or unique part numbers, which the design team happily accommodate with no thought for the impact on the manufacturing process. The more variation that can be removed at the design phase, the easier and more cost-effective a machine vision inspection solution will be.

2. Lack of specification of surface/material conditions. The settings for a machine vision system depend on a product’s surface and material conditions. This is especially critical in surface inspection, or in any process requiring segmentation of the foreground from the background – presence verification, inclusion detection, and even edge detection for gauging and measurement. If conditions vary too much because no colour or surface specification was included in the design, the vision system becomes more susceptible to false failures or higher maintenance. While latest-generation artificial intelligence (AI) vision systems make such analysis easier and less prone to failure in these conditions, it makes sense to have as little deviation as possible in the incoming image quality for vision inspection. The design should include a specification of the surface conditions expected for repeatable manufacturing and inspection – which, for plastic mould colour, for example, can be challenging to pin down.

3. Inability to see the inspection area. In the past, we’ve been asked to automatically inspect surfaces which can’t be seen clearly from any angle. The product’s design may place datum edges and points too far away or make them inaccessible, or an overhang or other feature may obscure the view. This is often the case in large sub-assemblies, where the datum edge can be in a completely different area, and at a different angle, to the area where vision inspection is applied. Design engineers should assess the feasibility of applying automated inspection to the process, accounting for how easily the vision system can access certain areas and conditions. We often discuss such applications with manufacturing engineers, who are frustrated that the design engineers could have incorporated cut-outs or areas providing a clear line of sight for the vision system.

4. Handling. Products should be designed to be easily handled, fed and presented to a vision system as part of the process. There will nearly always be a contact surface on the handling tooling, which limits the area the vision system can view. Sometimes this surface requires critical inspection, but it is obscured due to the product’s design. For example, in syringe body inspection, the plastic body can be held at the top and bottom and rotated, meaning some areas of the mould are not visible. If a lip had been designed in, the part could have been inspected twice, held differently each time, providing 100% inspection of the entire surface. Small design changes can impact both the manufacturing and the automated vision inspection process. Smaller products can be inspected on glass dials and fixtures to mitigate this issue, but the design team should consider these issues at the start of the design process.

Product design needs input from the manufacturing and quality teams early in the design process. Now that vision systems are a standard part of the modern automated production process and an invaluable tool in the Industry 4.0 flexible production concept, thought needs to be given to how subtle design changes, and more detailed specification of colour and surface quality at the design stage, can help the integration and robust use of vision systems.

Connecting your vision system to the factory, everything you need to know

Over the last few years there has been a rapid move from using vision systems purely for stand-alone, in-line quality control towards drilling down into and exploiting the information which is created, assessed and archived by the automated vision system.

When vision systems first started being used in numbers in the early 1990s, the only real option for connecting to the production line PLC (programmable logic controller) was hardwired digital I/O (PNP/NPN), simply acting like a relay: make a decision on product quality, then trigger a reject arm or air blast to remove the rogue product. The next evolution was to save some basic data at the same time as the inspection took place: simple statistics on how many products had passed the vision system inspection and how many had been rejected. This data could then be displayed in simple form on the PLC HMI or via rudimentary displays on the vision system itself. From there, communication started with the RS-232 serial line, and this morphed into USB and on to TCP/IP.
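To make the modern end of that evolution concrete, here is a minimal sketch of a vision system pushing a pass/fail verdict to a line controller over plain TCP/IP. The message format (`PART:1234;RESULT:PASS`), port handling and single-connection flow are all assumptions for illustration, not any particular vendor’s protocol:

```python
import socket
import threading

def line_controller(server_sock, results):
    # Accept one connection from the vision system and read the
    # verdict until the sender closes the socket
    conn, _ = server_sock.accept()
    with conn:
        data = b""
        while chunk := conn.recv(64):
            data += chunk
    results.append(data.decode())

# Listen on an ephemeral localhost port (stand-in for the PLC side)
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]

received = []
t = threading.Thread(target=line_controller, args=(server, received))
t.start()

# The "vision system" connects and sends a pass/fail verdict for a part
with socket.create_connection(("127.0.0.1", port)) as vision:
    vision.sendall(b"PART:1234;RESULT:PASS")

t.join()
server.close()
print(received[0])  # PART:1234;RESULT:PASS
```

In a real deployment the framing, retry behaviour and endianness would be dictated by the chosen industrial protocol rather than hand-rolled like this.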

The fieldbus protocol was the next evolution in communication with vision systems. From the user's standpoint, a fieldbus looked state-based, similar to digital I/O; in reality, the data was exchanged serially over a network. Because these messages were sent on a clock cycle, the information arrived with a delay. The benefit of fieldbus over digital I/O was that each exchange could carry a data package of several hundred to 1,000 bytes. PROFIBUS, CC-Link, CANopen, and DeviceNet were among the first protocols to be created. While first-generation fieldbuses used serial connections to exchange data, second-generation fieldbuses used Ethernet. As a result, the technology came to be referred to as “Industrial Ethernet” as it evolved, although the term’s meaning is slightly ambiguous.

When compared to serial data transfer, Ethernet enables substantially more data to be sent. However, using Ethernet as a fieldbus medium has the problem of non-deterministic transmission timings. To obtain sufficient real-time performance, Industrial Ethernet protocols such as PROFINET, EtherNet/IP, Ethernet Powerlink, or EtherCAT typically extend the Ethernet standard (EtherNet/IP, for example, layers the Common Industrial Protocol, CIP, on top of standard Ethernet). For PC-based vision systems, some of these extensions are implemented in hardware, which necessitates fitting an expansion card in the vision controller unit.

So, the big evolution came with support for this Industrial Ethernet communication: vision systems now use a unified standard supporting real-time Ethernet and fieldbus systems for PC-based automation. This allows data to be transferred and exchanged with the PLC through Ethernet protocols on a single connection. The benefits are a reduction in complex and costly cabling, easy integration and fast deployment of vision systems. Ethernet is also characterised by the large amount of data that can be transferred.

This allows for the seamless and simple integration with all standard PLCs, such as Allen-Bradley, Siemens, Schneider, Omron and others – directly to the vision systems controller unit. Vision system data can be displayed on the PLC HMI, along with the vision HMI, where required.

The most common protocols are:
PROFINET — This industrial communications protocol is defined by PROFIBUS &amp; PROFINET International (PI) and allows vision systems to communicate with Siemens PLCs and other factory automation devices which support the protocol.

EtherNet/IP — This Rockwell-defined protocol enables a vision system to connect to Allen-Bradley PLCs and other devices.

Modbus/TCP — This industrial network protocol is defined by Schneider Electric and permits direct connectivity to PLCs and other devices over Ethernet.
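Whichever protocol is used, the data ultimately travels as 16-bit words (holding registers, in the Modbus/TCP case). The sketch below shows one way a pass/fail flag and a measurement might be packed into registers; the register layout and values are purely an assumed example, not any vendor's register map:

```python
import struct

def pack_result(passed: bool, width_um: float) -> list[int]:
    """Pack a pass/fail flag and a width reading (microns) into
    16-bit registers, the word size Modbus/TCP exchanges."""
    # Register 0: status word (1 = pass, 0 = fail) -- assumed layout
    status = 1 if passed else 0
    # Registers 1-2: width as a 32-bit float split into two big-endian words
    hi, lo = struct.unpack(">HH", struct.pack(">f", width_um))
    return [status, hi, lo]

def unpack_result(regs: list[int]) -> tuple[bool, float]:
    # Reassemble the two words back into the 32-bit float
    width = struct.unpack(">f", struct.pack(">HH", regs[1], regs[2]))[0]
    return regs[0] == 1, width

regs = pack_result(True, 1250.5)
print(unpack_result(regs))  # (True, 1250.5)
```

Word order for 32-bit values (high word first vs low word first) varies between PLC vendors, so in practice this is something to confirm against the device documentation.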

This progress has been accompanied by an increase in the resolution of industrial cameras. Industrial cameras for machine vision inspection do not always keep pace with the commercial world of smartphones. This is largely because the quality necessary for automatic vision assessment is considerably greater than the quality required for merely viewing a photo (i.e. you don’t worry about a dead pixel when looking at your holiday pics!). High-resolution image quality is now important for end-of-line image archiving in an industrial environment. In addition to connecting to the line PLC, vision systems must now link to the whole factory network environment and industrial information systems. Vision systems can now interact directly with production databases, with every image of every product leaving the factory gates being taken, recorded and time-stamped, and even linked to a batch or component number via serial number tracking. Vision systems are thus used not only as a goalkeeper to stop bad products going out the door, but also as warranty protection, providing tangible image data for historical record-keeping.
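As a sketch of what that database link looks like, the following uses an in-memory SQLite table (with made-up serial numbers, batch IDs and image paths) to show the kind of time-stamped, serial-number-linked record keeping described above:

```python
import sqlite3
from datetime import datetime, timezone

# In-memory database stands in for the factory's production database
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE inspections (
    serial TEXT, batch TEXT, result TEXT,
    image_path TEXT, inspected_at TEXT)""")

def log_inspection(serial, batch, result, image_path):
    # Time-stamp each record so every archived image can be traced
    # back to a specific part and batch
    db.execute("INSERT INTO inspections VALUES (?, ?, ?, ?, ?)",
               (serial, batch, result, image_path,
                datetime.now(timezone.utc).isoformat()))

log_inspection("SN-0001", "B42", "PASS", "/archive/B42/SN-0001.png")
log_inspection("SN-0002", "B42", "FAIL", "/archive/B42/SN-0002.png")

fails = db.execute(
    "SELECT serial FROM inspections WHERE result = 'FAIL'").fetchall()
print(fails)  # [('SN-0002',)]
```

A production system would use the site's actual database server and schema, but the principle — one traceable row per inspected part — is the same.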

In conclusion, the industrial ethernet connection is now the easiest and fastest way to integrate vision system devices into an automated environment. The ease of connection, the lower cost of cabling, and the use of standard protocols make it a simple and effective method for high-speed communication, coupled with database connections for further image and data collection from the vision system. Vision systems are now linked to complete factory line control and command centres for fully immersive data collection.

How to calibrate optical metrology systems to ensure precise measurements

Optical metrology systems play a crucial role in various industries, enabling accurate and reliable measurements for quality control, inspection, and manufacturing processes. To ensure precise and consistent results, it is essential to calibrate these systems meticulously. By understanding the significance of calibration, considering key factors, and implementing regular calibration practices, organizations can optimise the performance of these systems. Accurate measurements obtained through properly calibrated optical metrology systems empower industries to maintain quality control, enhance efficiency, and drive continuous improvement.

How to calibrate your industrial vision system.

We’re often asked about the process for carrying out calibration on our automated vision inspection machines. After all, vision systems are based on pixels, whose real-world size depends on the sensor resolution, field of view and optical quality. It’s important that any measurements are validated and confirmed from shift to shift, day to day. Many vision inspection machines are replacing stand-alone, slow, probing metrology-based systems, or, if in-line, are performing multiple measurements on a device at high speed. Automating metrology measurement helps reduce cycle time and boost productivity in medical device manufacturing; therefore, accuracy and repeatability are critical.

In the realm of optical metrology, the utilisation of machine vision technology has brought about a transformative shift. However, to ensure that the results of the vision equipment have a meaning that everyone understands, the automated checks must be calibrated against recognised standards, facilitating compliance with industry regulations, certifications, and customer requirements.

The foundational science that instils confidence in the interpretation and accuracy of measurements is known as metrology. It encompasses all aspects of measurement, from meticulous planning and execution to record-keeping and evaluation, along with the computation of measurement uncertainties. The objectives of metrology extend beyond the mere execution of measurements; they encompass the establishment and maintenance of measurement standards, the development of innovative and efficient techniques, and the promotion of widespread recognition and acceptance of measurements.

For metrology measurements, we need a correlation between the pixels and the real-world environment. For those of you who want the technical detail, we’re going to dive down into the basis for the creation of individual pixels and how the base pixel data is created. In reality, the users of our vision systems need not know this level of technical detail, as the main aspect is the calibration process itself – so you can skip the next few paragraphs if you want to!

In machine vision, the camera sensor is split into individual pixels. Each pixel represents a tiny light-sensitive region that, when exposed to light, creates an electric charge. When an image is acquired, the vision system captures the amount of charge produced at each pixel position and stores it in a memory array. A convention is used to produce a uniform reference system for pixel positions: the top left corner of the picture is regarded as the origin, and a coordinate system is used with the X-axis running horizontally across the rows of the sensor and the Y-axis running vertically down the columns.

The pixel positions inside the image may be described using X and Y coordinates in this coordinate system. For example, the top left pixel is called (0,0), the pixel to its right is (1,0), the pixel below it is (0,1), and so on. This convention enables the image’s pixels to be referred to in a consistent and standardised manner, and (in combination with sub-pixel processing) it provides a standard reference coordinate position for a set of pixels combined to form an edge, feature or “blob”.
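A short sketch of this convention (using an assumed 640×480 sensor) shows how an (X, Y) pixel coordinate maps into the row-major memory array:

```python
import numpy as np

WIDTH, HEIGHT = 640, 480  # example sensor resolution (assumed)

# An image is stored row-major: pixel (x, y) lives at flat index y*WIDTH + x
image = np.zeros((HEIGHT, WIDTH), dtype=np.uint8)

def pixel_index(x: int, y: int) -> int:
    return y * WIDTH + x

assert pixel_index(0, 0) == 0       # origin, the top-left pixel
assert pixel_index(1, 0) == 1       # one step right along the X-axis
assert pixel_index(0, 1) == WIDTH   # one row down along the Y-axis

# Note NumPy indexes as image[row, column], i.e. image[y, x]
image[2, 5] = 255
print(image.flat[pixel_index(5, 2)])  # 255
```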

In machine vision, metrology calibration involves the mapping of the pixel coordinates of the vision system sensor back to real-world coordinates. This “mapping” ties the distances measured back to the real-world measurements, i.e., back to millimetres, microns or other defined measurement system. In the absence of a calibration, any edges, lines or points measured would all be relative to pixel measurements only. But quality engineers need real, tangible measurements to validate the production process – so all systems must be pre-calibrated to known measurements.
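In its simplest 2D form, this mapping is a scale factor derived from an artefact of certified length; the 10 mm artefact and 500-pixel reading below are assumed values for illustration:

```python
# Minimal 2D calibration sketch: derive a scale factor (mm per pixel)
# from an artefact of known length, then convert pixel measurements.

ARTEFACT_LENGTH_MM = 10.0   # certified length of the calibration slide
artefact_length_px = 500.0  # length of that artefact as measured in pixels

mm_per_pixel = ARTEFACT_LENGTH_MM / artefact_length_px  # 0.02 mm/px

def to_millimetres(pixels: float) -> float:
    # Tie a pixel distance back to the real-world measurement system
    return pixels * mm_per_pixel

print(round(to_millimetres(637.5), 4))  # 12.75
```

Real systems refine this with per-axis scale factors and distortion correction, but the principle — anchoring pixels to a traceable physical length — is exactly this.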

The process of calibration ensures traceability of measurements to recognised standards, facilitating compliance with industry regulations, certifications, and customer requirements. It enhances the credibility and trustworthiness of measurement data.

How do you calibrate the vision system in real-world applications?

Utilising certified calibration artifacts or reference objects with known dimensions and properties is essential. These standards serve as benchmarks for system calibration, enabling the establishment of accurate measurement references. Proper calibration guarantees that measurements are free from systematic errors, ensuring the reliability and consistency of the data collected. The method of calibration will depend on whether the system is a 2D or 3D vision system.

For a 2D vision system, common practice is to use slides with scales carved or printed on them, referred to as micrometre slides, stage graticules, or calibration slides. There is a diverse range of sizes, subdivision accuracies, and scale patterns. These slides are used to calibrate and verify vision systems, and the calibration pieces are traceable back to national standards, which is key to calibrating the vision inspection machine effectively. A machined piece with traceability certification can also be used; this is convenient when you need a specific measurement to calibrate against for your vision metrology inspection.

One form of optical distortion is called linear perspective distortion. This type of distortion occurs when the optical axis is not perpendicular to the object being imaged. A chart can be used with a pre-defined pattern to compensate for this distortion through software. The calibration system does not compensate for spherical distortions and aberrations introduced by the lens, so this is something to be aware of.
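One common software approach to compensating linear perspective distortion is to fit a homography from four (or more) chart points whose real-world positions are known. The sketch below, in plain NumPy with assumed point values, shows the idea:

```python
import numpy as np

# Four chart corners: where they appear in the image (pixels, assumed
# values) and where they really are on the calibration chart (mm)
img_pts = [(100, 120), (520, 100), (540, 400), (90, 380)]
world_pts = [(0, 0), (40, 0), (40, 30), (0, 30)]

def fit_homography(src, dst):
    # Direct Linear Transform: stack two equations per point pair and
    # take the null-space vector of the resulting 8x9 system via SVD
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.array(rows, dtype=float))
    return vt[-1].reshape(3, 3)

def map_point(H, x, y):
    # Apply the homography, then normalise by the projective coordinate
    u, v, w = H @ [x, y, 1.0]
    return u / w, v / w

H = fit_homography(img_pts, world_pts)
# Each chart corner should map back onto its known world position
for (x, y), (u, v) in zip(img_pts, world_pts):
    assert np.allclose(map_point(H, x, y), (u, v), atol=1e-6)
```

As the text notes, this corrects the perspective (projective) component only; lens aberrations such as spherical distortion need a separate, non-linear distortion model.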

For 3D vision, you no longer need pricey bespoke artefacts or specially prepared calibration sheets. Either the sensor is factory calibrated or a single spherical artefact suffices. Calibrate and synchronise the vision sensor by sending calibrated component measurements to the IVS machine. You will instantly receive visible feedback that you can check and assess during the calibration process.

How often do I need to re-calibrate a vision system?

We often get asked this question, since the optimal performance and accuracy of optical metrology systems can diminish over time due to factors like wear and tear or component degradation. Establishing a regular calibration schedule ensures consistent and reliable measurements. However, it often comes down to the application requirements and the customer's needs based on their validation procedures. Re-calibration can take place once a shift, every two weeks, once a month or even once a year.
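However the interval is chosen, the check itself can be simple: measure the certified artefact and flag re-calibration if the reading drifts outside an agreed band. A minimal sketch, with assumed nominal and tolerance values:

```python
# Shift-start verification sketch: compare the measured artefact length
# against its certified nominal. Nominal and tolerance are assumed values;
# in practice they come from the validation procedure.

NOMINAL_MM = 10.0     # certified artefact length
TOLERANCE_MM = 0.005  # allowed drift band (5 microns)

def needs_recalibration(measured_mm: float) -> bool:
    return abs(measured_mm - NOMINAL_MM) > TOLERANCE_MM

print(needs_recalibration(10.002))  # False
print(needs_recalibration(10.011))  # True
```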

One final aspect is storage: all our vision inspection machines come with calibration storage built into the machine itself. It’s important to store the calibration piece carefully, use it as determined by the validation process, and keep it clean and free from dirt.

Overall, calibration is often automatic, and the user need never know this level of detail about how the calibration procedure operates. But it’s useful to understand the pixel-to-real-world mapping and to know that all systems are calibrated back to traceable national standards.

The complete guide to Artificial Intelligence (AI) and Deep Learning (DL) for machine vision systems.

Where are we at, and what is the future?

We had a break from the blog last month while we worked on this latest piece, as it’s a bit of a beast. This month we discuss artificial intelligence (AI) for machine vision systems. An area with much hype due to the introduction of AI language models such as ChatGPT and AI image models like DALL·E. AI is starting to take hold for vision systems, but what’s it all about? And is it viable?

Where are we at?

Let’s start at the beginning. The term “AI” is a new label for a process that has been around in machine vision for the last twenty years but has only now become more prolific (or should we say, the technology has matured to the point of mass deployment). We provided vision systems in the early 00’s using low-level neural networks for feature differentiation and classification. These were primarily used for simple segmentation and character verification. These networks were basic and had limited capability. Nonetheless, the process we used then to train the network is the same as it is now, just on a much larger scale. Back then, we called it a “neural network”; now it’s termed artificial intelligence.

The term artificial intelligence is used because the network has some level of “intelligence”: it learns by example and expands its knowledge over iterative training cycles. Early research into computer vision AI discovered that human vision has a hierarchical structure in how neurons respond to various stimuli. Simple features, such as edges, are detected by neurons, which then feed into more complex features, such as shapes, which finally feed into more complex visual representations. This builds individual blocks into “images” the brain can perceive. You can think of pixel data in industrial vision systems as the building blocks of synthetic image data. Pixel data is collected on the image sensor and transmitted to the processor as millions of individual pixels of differing greyscale or colour. For example, a line is simply a row of similar-level pixels, with edges marked by a pixel-by-pixel grey-level transition to the adjacent region.
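That grey-level view of an edge can be shown in a few lines: differencing one sensor row of assumed pixel values exposes the transition:

```python
import numpy as np

# One row of grey levels (assumed values): dark background, bright part.
# An edge is simply the large grey-level step between neighbouring pixels.
row = np.array([12, 13, 12, 14, 200, 201, 199, 200], dtype=np.int16)

diff = np.abs(np.diff(row))         # grey-level step between neighbours
edge_at = int(np.argmax(diff))      # column where the transition starts
print(edge_at, int(diff[edge_at]))  # 3 186
```

Sub-pixel edge finders refine this by interpolating across the transition, but the raw signal they work from is exactly this neighbouring-pixel difference.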

Neural-network AI vision attempts to recreate how the human brain learns edges, shapes and structures. Neural networks consist of layers of processing units. The sole function of the first layer is to receive the input signals and transfer them to the next layer, and so on. The following layers are called “hidden layers”, because they are not directly visible to the user; these do the actual processing of the signals. The final layer translates the internal representation of the pattern created by this processing into an output signal, directly encoding the class membership of the input pattern. Neural networks of this type can realise arbitrarily complex relationships between feature values and class designations. The relationship incorporated in the network depends on the weights of the connections between the layers and the number of layers, and can be derived by special training algorithms from a set of training patterns. The end result is a non-linear “web” of connections between the layers. All of this tries to mimic how the brain learns.
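A minimal forward pass through such a layered network can be sketched in NumPy; the layer sizes and random weights below are stand-ins (real weights come from training):

```python
import numpy as np

# Input layer -> hidden layer -> output layer, as described above.
# Weights are random placeholders; training would set their real values.
rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(z, 0.0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

W1, b1 = rng.normal(size=(16, 4)), np.zeros(4)  # input -> hidden
W2, b2 = rng.normal(size=(4, 2)), np.zeros(2)   # hidden -> output

def forward(pixels):
    hidden = relu(pixels @ W1 + b1)   # hidden layer does the processing
    return softmax(hidden @ W2 + b2)  # output encodes class membership

scores = forward(rng.random(16))      # 16 "pixel" inputs, 2 classes
print(round(float(scores.sum()), 6))  # 1.0 -- class scores sum to one
```

Deep learning frameworks add many more layers, convolutions and automatic differentiation for training, but the layered weighted-sum-plus-non-linearity structure is the same.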

With this information in hand, vision engineering developers have been concentrating their efforts on digitally reproducing human neural architecture, which is how the development of AI came into being. When it comes to perceiving and analysing visual stimuli, computer vision systems employ a hierarchical approach, just like their biological counterparts do. Traditional machine vision inspection continued in this vein, but AI requires a holistic view of all the data to compare and develop against.

So, the last five years have seen an explosion in the development of artificial intelligence, deep learning vision systems. These systems are based on neural network development in computer vision in tandem with the development of AI in other software engineering fields. All are generally built on an automatic differentiation library to implement neural networks in a simple, easy-to-configure solution. Deep learning AI vision solutions incorporate artificial intelligence-driven defect finding, combined with visual and statistical correlations, to help pinpoint the core cause of a quality control problem.

To keep up with this development, vision systems have evolved into manufacturing AI and data gathering platforms, as the need for vision data for training becomes an essential element for all AI vision systems compared to deploying traditional machine vision algorithms. Traditional vision algorithms still need image data for development, but not to the same extent as needed for deep learning. This means vision platforms have developed to integrate visual and parametric data into a single digital thread from the development stage of the vision project all the way through production quality control deployment on the shop floor.

In tandem with the proliferation of the potential use of AI in vision systems is the explosion in suppliers of data-gathering “AI platforms”. These systems are more of a housekeeping exercise for image gathering, segmentation and human classification before the submission to the neural network, rather than being a quantum leap in image processing or a clear differential compared to traditional machine vision. Note: most of these companies have “AI” in their titles. These platforms allow for clear presentation of the images and the easy submission to the network for computation of the neural algorithm. Still, all are based on the same overall architecture.

The deep learning frameworks – TensorFlow and PyTorch – what are they?

TensorFlow and PyTorch are the two main deep learning frameworks used by developers of machine vision AI systems. They were developed by Google and Facebook, respectively. They are used as a base for developing AI models at a low level, generally with a graphical user interface (GUI) above them for image sorting.

TensorFlow is a symbolic maths toolkit that is best suited for dataflow programming across a variety of workloads. It provides many abstraction levels for modelling and training. It’s a promising and rapidly growing deep learning solution for machine vision developers, providing a flexible and comprehensive ecosystem of community resources, libraries, and tools for building and deploying machine-learning apps. TensorFlow has also integrated Keras, a high-level neural network API that predates it, directly into the framework.

PyTorch is a highly optimised deep learning tensor library built on Python and Torch. Its primary use is for applications that use graphics processing units (GPUs) and central processing units (CPUs). Vision system vendors favour PyTorch over other Deep Learning frameworks such as TensorFlow and Keras because it employs dynamic computation networks and is entirely written in Python. It gives researchers, software engineers, and neural network debuggers the ability to test and run sections of the code in real-time. Because of this, users do not have to wait for the entirety of the code to be developed before determining whether or not a portion of the code works.

Whichever solution is used for deployment, the same pros and cons apply in general to machine vision solutions utilising deep learning.

What are the pros and cons of using artificial intelligence deep learning in machine vision systems?

Well, there are a few key takeaways when considering using artificial intelligence deep learning in vision systems. These offer pros and cons when considering whether AI is the appropriate tool for industrial vision system deployment.

Cons

Industrial machine vision systems must have high yield, so we are talking 100% correct identification of faults, in exchange for accepting some level of false-failure rate. There is always a trade-off in setting up vision systems: you err on the side of caution so that real failures are guaranteed to be picked up and automatically rejected by the vision system. But with AI inspection, the yields are far from perfect, primarily because the user has no control over the processing functionality that decides what is deemed a failure. A result of pass or fail is simply given, the neural network having been trained from a vast array of both good and bad examples. This “low” yield (though still above 96%) is absolutely fine for most applications of deep learning (e.g. e-commerce, language models, general computer vision), but for industrial machine vision it is not acceptable in most application requirements. This needs to be considered when thinking about deploying deep learning.

It takes a lot of image data. Sometimes thousands, or tens of thousands, of training images are required before the AI machine vision system can start the process. This shouldn’t be underestimated. And think about the implications for deployment in a manufacturing facility – you have to install and run the system to gather data before the learning part can begin. With traditional machine vision solutions, this is not the case; development can be completed before deployment, so the time-to-market is quicker.

Most processing is completed on reduced-resolution images. It should be understood that most (if not all) deep learning vision systems will reduce the image size down to a manageable size to process in a timely manner. Therefore, resolution is immediately lost from a mega-pixel resolution image down to a few hundred pixels. Data is compromised and lost.

There are no existing data sets for the specific vision system task. Unlike AI deep learning vision systems used in other fields, such as driverless cars or crowd scene detection, there are usually no pre-defined data sets to work from. In those fields, if you want to detect a “cat”, “dog”, or “human”, there are data sources available for fast-tracking the development of your AI vision system. Invariably in industrial machine vision, we look at a specific widget or part fault with no pre-determined visual data source to refer to. Therefore, the image data has to be collected, sorted and trained.

You need good, bad and test images. You need to be very careful in selecting the images into the correct category for processing. A “bad” image in the “good” pile of 10,000 images is hard to spot and will train the network to recognise bad as good. So, the deep learning system is only as good as the data provided to it for training and how it is categorised. You must also have a set of reference images to test the network with.

You sometimes don’t get definitive data on the reason for failure. You can think of the AI deep learning algorithm as a black box. So, you feed the data in and it will provide a result. Most of the time you won’t know why it passed or failed a part, or for what reason, only that it did. This makes deploying such AI vision systems into validated industries (such as medical devices, life sciences and pharmaceuticals) problematic.

You need a very decent GPU and PC system for training. A standard PC won’t be sufficient for the processing required in training the neural network. While the PC used for the runtime need be nothing special, your developers will need a PC with plenty of graphics memory and a top-of-the-range processor. Don’t underestimate this.

Pros

They do make good decisions in challenging circumstances. Imagine that the part requiring automated inspection comes from a supplier whose surface texture is constantly changing. This would be a headache for traditional vision systems and is almost unsolvable with them; for AI machine vision, this sort of application is naturally suited.

They are great for detecting anomalies that are out of the ordinary. One of the main benefits of AI-based vision systems is the ability to spot a defect that has never been seen before and is “left field” from what was expected. With traditional machine vision systems, the algorithm is developed to detect a specific condition, be it the grey scale of a feature, the size in pixels which deems a part to fail, or the colour match to confirm a good part. But if the one part in a million that fails has a slight flaw that hasn’t been accounted for, the traditional machine vision system might miss it, whereas the AI deep learning machine vision system might well spot it.

They are useful as a secondary inspection tactic. You’ve exhausted all traditional methods of image processing, but you still have a small percentage of faults which are hard to classify against your known classification database. You have the image data, and the part fails when it should, but you want to drill down to complete a final analysis to improve your yield even more. This is the perfect scenario for AI deep learning deployment. You have the data, understand the fault and can train on lower-resolution segments for more precise classification. This is probably where AI deep learning vision adoption growth will increase over the coming years.

They are helpful when traditional algorithms can’t be used. You’ve tried all conventional segmentation methods, pixel measurement, colour matching, character verification, surface inspection or general analysis – nothing works! This is where AI can step in. The probable cause of the traditional algorithms failing is a lack of consistency in the part itself, which is when deep learning should be tried.

Finally, it might just be the case that deep learning AI is not required for the vision system application. Traditional vision system algorithms are still being developed. They are entirely appropriate to use in many applications where deep learning has recently become the first point of call for solving the machine vision requirement. Think of artificial intelligence in vision systems as a great supplement and a potential tool in the armoury, but to be used wisely and appropriately for the correct machine vision application.

Learn more about Artificial Intelligence (AI) Deep Learning here:


Automated Vision Metrology: The Future of Manufacturing Quality Control

In-line visual quality control is more crucial than ever in today’s competitive medical device production environment. To keep ahead of the competition, manufacturers must ensure that their products fulfil the highest standards. One of the most effective ways to attain this goal is through automated vision metrology using vision technology.

So what do we mean by this? There are several traditional methods that a quality manager uses to keep tabs on production quality. In the past, this would have been a collection of slow CMM metrology machines inspecting products one at a time. CMMs measure the coordinates of points on a part using a touch probe. This information can then be used to calculate the dimensions, surface quality and tolerances of the part. So we are not talking about high-speed vision inspection in this context, but about precise, micron-level metrology inspection for measurements and surface defects.

The use of computer-controlled vision inspection equipment to measure and examine products is known as automated vision metrology. These vision systems offer several advantages over manual CMM-type probing methods. For starters, automated vision metrology is faster, because the machines can perform measurements with great precision and repeatability at speed; there is no need to touch the product, as it is a non-contact vision assessment. Second, real-time inspection of parts is possible, which enables firms to detect and rectify flaws early in the manufacturing process, before they become costly issues.

Advantages of Automated Vision Metrology
The use of automated metrology in manufacturing has many intrinsic benefits. These include:

  • Decreased inspection time: Compared to manual inspection procedures, automated vision metrology systems can inspect parts significantly faster. This can save a significant amount of time during the manufacturing process.
  • Specific CTQ: Vision inspection systems can be set to measure specific critical-to-quality (CTQ) elements with great precision and reproducibility.
  • Non-contact vision inspection: As the vision camera system has no need to touch the product, there is no chance of scratching or adding to surface quality problems, a risk a touch probe inevitably carries. This is especially important in industries such as orthopaedic joint and medical device tooling manufacturing.
  • Enhanced productivity: Automated vision metrology systems can be equipped with autoloaders, allowing for the fast throughput of products without the need for an operator to manually load a product into a costly fixture. Manufacturers can focus on other tasks that demand human judgement and inventiveness by freeing up human workers from manual inspection tasks.

Modern production relies heavily on automated vision metrology. It has several advantages, including enhanced accuracy, less inspection time, improved quality control, increased production, and higher product quality. With automated loading options now readily available, speed of inspection for precision metrology measurements can be completed at real production rates.

Overall, automated vision metrology is a potent tool for improving the quality and efficiency of production processes. It is an excellent addition to any manufacturing organisation wanting to increase quality control at speed, and make inspection more efficient compared to traditional CMM probing methods.

Your ultimate guide to GAMP validation for vision systems and industrial automation

You’re starting a new automation project, and vision inspection is a critical part of the production process. How do you allow for validation of the vision inspection systems and the machine vision hardware and software? This is a question we hear a lot, and in this post we’re going to drill down on the elements of validation in the context of vision systems for the medical devices and pharmaceutical industries. We’ll attempt to answer the questions by providing an overview of GAMP, a guide to key terminology and offering advice on implementing validation for your vision system and machine vision applications.

You’ll hear a lot of terminology and acronyms in industrial automation validation. So we’ll start by explaining some of the terms before drilling down on how the validation process works.

So what is GAMP? “GAMP” stands for Good Automated Manufacturing Practice. It’s a methodology for producing high-quality automated systems based on a life-cycle model and the concept of prospective validation. It was developed to serve the needs of the pharmaceutical industry’s suppliers and customers, and it forms part of any regulated automation project. The current version is GAMP 5, whose full title is “A Risk-Based Approach to Compliant GxP Computerized Systems”.

The GAMP good practices were developed to meet the Food and Drug Administration’s (FDA) and other regulatory authorities’ (RAs’) rising requirements for computerised system compliance and validation in the industrial automation industry, and they are now employed globally by regulated enterprises and their suppliers.

GAMP was developed by experts in the pharmaceutical manufacturing sector to reduce grey areas in the interpretation of regulatory standards, leading to greater conformity, higher quality, more efficiency, and lower production costs. The guidelines tend to be methodological in nature, which in the context of machine vision matches the step-by-step processes of industrial image processing.

So in layman’s terms, GAMP and FDA validation are designed to prove the performance of a system against the fundamentals of what it is designed to do, laying down the parameters and testing procedures to validate the process across design, build, commissioning, test and production. In machine vision for industrial automation, this is heavily weighted towards the hardware and software specifications, along with the performance criteria for the vision system.

What is the GxP that GAMP refers to? Simply, it means good practice, with the “x” standing in for various fields. It refers to a set of guidelines and regulations followed in industries that produce pharmaceuticals, medical devices, and other products that have a direct impact on people’s health and well-being.

GxP guidelines and regulations are designed to ensure the quality, safety, and efficacy of medical device and pharma products. They cover various aspects of product development, manufacturing, testing, and distribution, as well as the training, qualifications, and conduct of the personnel involved in these processes.

GxP guidelines and regulations are enforced by regulatory agencies, such as the US Food and Drug Administration (FDA) and the European Medicines Agency (EMA), and non-compliance with these guidelines can result in fines, product recalls, and other legal consequences.
There are several different types of GxP, including:

  • GMP (Good Manufacturing Practice): This set of guidelines and regulations covers the manufacturing and testing of pharmaceuticals, medical devices, and other products. This is the most relevant to the validation of vision systems, inspection and machine vision in industrial automation.
  • GCP (Good Clinical Practice): This set of guidelines and regulations covers the conduct of clinical trials involving human subjects.
  • GLP (Good Laboratory Practice): This set of guidelines and regulations covers the conduct of laboratory studies involving the testing of chemicals, drugs, and other substances.
  • GPP (Good Pharmacy Practice): This set of guidelines and regulations covers the practice of pharmacy, including the dispensing and distribution of drugs and other products.

By following GxP guidelines and regulations and validation processes, companies can ensure that their products meet the highest standards of quality, safety, and efficacy, and that they are manufactured, tested, and distributed in a way that protects the health and well-being of their customers.

What is the ISPE? This is the “International Society for Pharmaceutical Engineering”. ISPE is an organisation that bridges pharmaceutical knowledge to improve the development, production, and distribution of high-quality medicines for patients around the world through such means as innovation in manufacturing and supply chain management, operational excellence, and insights into regulatory matters. The ISPE develops the GAMP framework, and GAMP 5 was developed by the ISPE GAMP Community of Practice (CoP).

So the above sets out the framework for validation and why it exists. We’ll now look at what processes are needed to fully validate a vision system into medical device and pharmaceutical manufacturing. On a high level the following critical steps are followed when validating a vision system in production:

1. Define the validation plan: This includes identifying the scope of the validation, the criteria that will be used to evaluate the system, and the testing that will be done to ensure that the system meets these criteria.

2. Install and configure the system: This includes setting up the hardware and software necessary for the system to function correctly, as well as configuring the system to meet the specific needs of the production process.

3. Test the system: This includes conducting functional tests to ensure that the system performs as intended, as well as performance and reliability tests to ensure that the system is accurate and reliable.

4. Validate the system: This includes verifying that the system meets all relevant regulatory standards and requirements, as well as any internal quality standards or guidelines.

5. Document the validation process: This includes documenting the testing that was done, the results of the tests, and any issues that were identified and corrected during the validation process.

6. Implement the system: This includes training production staff on how to use the system and integrating the system into the production process.

7. Monitor and maintain the system: This includes ongoing monitoring and maintenance of the system to ensure that it continues to function correctly and meets all relevant standards and requirements.
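Purely as an illustration, the seven steps above can be tracked as a simple evidence-driven checklist. The sketch below is a minimal Python example; the step names mirror the list, but the data structure and functions are hypothetical, not part of any GAMP standard.

```python
from dataclasses import dataclass, field

@dataclass
class ValidationStep:
    """One step in the validation plan (structure is illustrative only)."""
    name: str
    complete: bool = False
    evidence: list = field(default_factory=list)  # e.g. document or report IDs

def record_evidence(step: ValidationStep, doc_id: str) -> None:
    """Attach a piece of documentary evidence and mark the step complete."""
    step.evidence.append(doc_id)
    step.complete = True

def plan_complete(steps: list) -> bool:
    """The system is ready for production only when every step has evidence."""
    return all(s.complete and s.evidence for s in steps)

steps = [ValidationStep(n) for n in (
    "Define the validation plan", "Install and configure the system",
    "Test the system", "Validate the system", "Document the validation",
    "Implement the system", "Monitor and maintain the system")]

record_evidence(steps[0], "VP-001")   # validation plan approved
assert not plan_complete(steps)       # six steps remain open
```

The point of the sketch is that every step carries its own evidence trail, which is exactly what the documentation requirements of GAMP are designed to enforce.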

But each step needs to be broken down and understood in the context of the validation process relating to GAMP. The following lifecycle shows the typical steps required for the validation of the vision system in production.

What is a URS? A URS, or User Requirement Specification, is a document that outlines the requirements for a product, system, or service. It is typically created at the beginning of a project, and it serves as a roadmap for the development team, providing guidance on what the final product should look like and how it should function.

A URS typically includes a detailed description of the product’s intended use and user base, as well as the specific requirements that the product must meet in order to be successful. These requirements may include functional requirements (e.g., what the product should be able to do), performance requirements (e.g., how fast or accurately the product should operate), and quality requirements (e.g., how reliable or durable the product should be).

The URS is an important tool for defining and communicating the project’s scope and objectives, and it is often used as a reference document throughout the development process. It is typically reviewed and approved by stakeholders, including the development team, customers, and regulatory agencies, before the project begins.

In addition to serving as a guide for the development team, the URS can also be used to evaluate the final product to ensure that it meets all of the specified requirements. This process is known as “verification,” and it is an important step in the quality control process to ensure that the product is fit for its intended use.

The URS is the first document required for validation of the vision system (or any other industrial automation system). It is the base document from which the FDS (Functional Design Specification) follows, with the HDS (Hardware Design Specification) and the SDS (Software Design Specification) incorporated as part of that specification. These documents are typically provided by the vision system company as part of the scope of supply of a fully validated vision inspection solution.

What is an FDS? A Functional Design Specification (FDS) is a document that outlines the functional requirements of a product, system, or service. It describes the specific behaviours and capabilities that the machine vision inspection system should have in order to fulfil its intended purpose. The FDS typically includes a detailed description of the vision system’s functional requirements, as well as any technical constraints or requirements that must be considered during the design and development of the inspection process.

The purpose of an FDS is to provide a clear and comprehensive understanding of the functional requirements of the overall inspection system, and to serve as a reference for the design and development team as they bring the product to life. It is an important tool for ensuring that the final vision system meets the needs and expectations of the end users.

An FDS may also include information on the vision user interface design, user experience design, and any other aspects of the product that are relevant to its functional requirements. It is typically developed early in the design process and is used to guide the development of the product throughout the project.

Prior to installation, the manufacturer will undertake a Factory Acceptance Test (FAT) which should be documented and agreed between the vision system supplier and the customer.

As the project progresses the applications team need to think about the three major milestones for test, commissioning and integration of the validated vision system – these include:

Installation Qualification (IQ) – this is the process used to verify that the vision system and associated equipment have been installed correctly and meet all relevant specifications and requirements. It is an important step in ensuring that an inspection system is ready for use in a production environment.

During an IQ, a series of tests and checks are performed to ensure that the machine vision system has been installed properly and is functioning correctly. This may include verifying that all required components are present and properly connected, checking that all necessary documentation is complete and accurate, and verifying that the system or equipment meets all relevant safety and performance standards.

The purpose of IQ is to ensure that the vision system is ready for use and meets all necessary requirements before it is put into operation. It is typically performed by a team of qualified professionals from the customer’s side, and the results of the IQ are documented in a report that can be used to demonstrate that the system or equipment has been installed correctly and is ready for use.
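As a rough sketch of how IQ results might be recorded, consider the snippet below. The check names are assumptions for illustration only, not a standard list; a real IQ derives its checks from the system’s HDS and SDS documents.

```python
# Illustrative IQ record -- check names are hypothetical, not a GAMP-mandated list.
iq_checks = {
    "all components present and connected": True,
    "camera firmware matches HDS revision": True,
    "software version matches SDS revision": False,
    "safety and performance checks signed off": True,
}

def iq_failures(checks):
    """Return the checks that must be resolved before the IQ can be signed off."""
    return [name for name, ok in checks.items() if not ok]

print(iq_failures(iq_checks))  # ['software version matches SDS revision']
```

In practice each entry would reference the evidence (photographs, wiring diagrams, version screenshots) captured in the IQ report.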

Operational Qualification (OQ) – This is the process used to verify that the vision system complies with the URS and FDS. During an OQ, a series of tests and checks are performed to ensure that the system or equipment operates correctly and consistently under normal operating conditions. This may include verifying that the system or equipment performs all of its required functions correctly, that it operates within specified parameters, and that it is capable of handling normal production volumes.

Performance Qualification (PQ) – This is the final step and is performed by the final user of the vision system. It verifies the operation of the system under full operational conditions. It generally produces production batches using either a placebo or live product, dependent on the final device requirements.

All aspects of the validation approach are captured in the Validation Plan (VP), which, combined with the Requirement Traceability Matrix (RTM) remains live during the project’s life to track all documents and ensure compliance with the relevant standards and traceability of documentation.
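To illustrate what the RTM does, here is a minimal sketch in Python. The requirement and test identifiers are hypothetical; the idea is simply that every URS requirement must trace to at least one qualification test, and any untraced requirement is a compliance gap.

```python
# Minimal Requirement Traceability Matrix (RTM) sketch -- IDs are hypothetical.
rtm = {
    "URS-001": ["IQ-01", "OQ-03"],  # e.g. camera installed and measuring correctly
    "URS-002": ["OQ-04", "PQ-01"],  # e.g. inspection accuracy at production rate
    "URS-003": [],                  # not yet covered by any qualification test
}

def untraced(matrix):
    """Requirements with no linked qualification test -- a compliance gap."""
    return sorted(req for req, tests in matrix.items() if not tests)

print(untraced(rtm))  # ['URS-003']
```

Because the RTM stays live throughout the project, a check like this would be re-run whenever requirements or test documents change.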

Only after the completion of the Performance Qualification is the system validated, and ready for production use.

In summary, the process of validating a vision inspection system in an industrial automation system involves several steps to ensure that the system is accurate, reliable, and meets the specified requirements. These steps can be broadly divided into three categories: planning, execution, and evaluation.

Planning: Before beginning the validation process, it is important to carefully plan and prepare for the testing. This includes defining the scope of the validation, identifying the personnel who will be involved, and establishing the criteria that will be used to evaluate the system’s performance. It may also involve obtaining necessary approvals and documents, such as a validation plan or protocol.

Execution: During the execution phase, the vision inspection system is tested under a variety of conditions to ensure that it is functioning properly. This may include testing with different types of objects, lighting conditions, and image resolutions to simulate the range of conditions that the system will encounter in real-world use. The system’s performance is then evaluated using the criteria established in the planning phase.

Evaluation: After the testing is complete, the results of the validation process are analysed and evaluated. This may involve comparing the vision system’s performance to the specified requirements and determining whether the system meets them. If the system fails to meet the requirements, modifications or adjustments may be necessary to improve its performance.

Throughout the validation process, it is important to maintain thorough documentation of all testing and evaluation activities in line with GAMP 5 guidance. This documentation can be used to demonstrate the system’s performance to regulatory agencies or other stakeholders, and it can also serve as a reference for future maintenance or upgrades.

The process of validating a vision inspection system in an industrial automation system is a crucial step in ensuring that the system is accurate, reliable, and meets the specified requirements. By following a structured and thorough validation process, companies can ensure that their vision inspection systems are fit for their intended use and can help to improve the efficiency and quality of their operations.

How to select the correct machine vision lighting for your application

Machine vision lighting is one of the most critical elements in automated visual inspection. Consistent lighting at the correct angle, of the correct wavelength with the right lux level allows automated inspection to be robust and reliable. But how do you select the correct light for your machine vision application?

Where on the production line the inspection takes place has to be set according to the requirements of the quality check. This part of the process may also include what is often referred to as ‘staging’. Imagine a theatre: this is the equivalent of putting the actor centre stage, in the best possible place for the audience to see. Staging in machine vision is often mechanical and is required to:

  • Ensure the correct part surface is facing the camera. This may require rotation if several surfaces need inspecting
  • Hold the part still for the moment that the camera or lens captures the image
  • Consistently put the part in the same place within the overall image ‘scene’ to make it easy for the processor to analyse
  • Fix the machine vision lighting consistently for each inspection process

Lighting is critical because it enables the camera to see the necessary details. In fact, poor lighting is one of the major causes of failure of a machine vision system. For every application, there are common lighting goals:

  • Maximising feature contrast of the part or object to be inspected
  • Minimising contrast on features not of interest
  • Removing distractions and variations to achieve consistency

In this respect, the positioning and type of lighting is key to maximise contrast of features being inspected and minimise everything else. The positioning matrix of machine vision lighting is shown below.

So it’s important to select the machine vision light relative to the position of the light with respect to the part being inspected.

Machine vision lights are used in varying combinations, and choosing them well is key to achieving the optimal lighting solution. However, we also need to consider the immediate inspection environment. In this respect, the choice of effective lighting solutions can be compromised by access to the part or object to be inspected. Ambient lighting such as factory lights or sunlight can also have a significant impact on the quality and reliability of inspection and must be factored into the ultimate solution.

Finally, the interaction between the lighting and the object to be inspected must be considered. The object’s shape, composition, geometry, reflectivity, topography and colour will all help determine how light is reflected to the camera and the subsequent impact on image acquisition, processing, and measurement.

Due to the obvious complexities, there is often no substitute for testing various techniques and solutions. It is imperative to get the best machine vision lighting solution in place to improve the yield, robustness and long-term effectiveness of the automated inspection solution.