How robotic vision systems and industrial automation can solve your manufacturing problems

In this post we’re going to drill down on robotic vision systems and how, in the context of industrial automation, they can help you boost productivity, increase yield and reduce quality concerns in your manufacturing process.

The question of how to increase productivity in manufacturing is an ongoing debate, especially in the UK, where productivity has lagged behind other industrialised nations in recent years. The key to unlocking productivity in manufacturing (output per hour worked) is the increased use of automation to produce more with less labour overhead. This takes the form of general industrial automation and robotic vision systems; by combining these two disciplines, manufacturers can boost productivity and become less reliant on a transient, temporary workforce that requires continuous training and replacement.

So where do you start? A thorough review of the current manufacturing process is a good place to begin. Where are the bottlenecks in flow? What tasks are labour-intensive? Where do processes slow down and start up again? Are there processes where temporary labour might not build to your required quality level? What processes require a lot of training? Which stations could easily lend themselves to automation?

When manufacturers deploy robot production cells, there can be a worry about losing the human element in the manufacturing process. How will a defect be spotted when no human is involved in the production process? We’ve always had the quality supervisor and their team checking quality during the build – who’s going to do that now? The robot will never see what they saw. And they keep a tally chart of failures – we’re going to lose all our data!

All these questions go on in the mind of the quality director and engineering manager. The answer is to integrate vision systems at the same time as deploying robotic automation, either as robotic vision or as a separate stand-alone vision inspection process. The vision system verifies that the process has been completed and becomes the eyes of the robot. Quality concerns relating to build level, presence of components, correct fitment of components and measurement tolerances can all be addressed.

For example, a medical device company may automate the packing of syringe products, with the syringes pre-filled, labelled and placed into trays by six-axis robots and automation. A machine vision system can then check that the pre-fill level of each syringe is correct and that all components in the plastic device are fitted in the correct position, and finally an automated metrology check can confirm the device measures to tolerance. And remember the tally chart of failures the quality supervisor and their team used to keep? With modern vision systems all that data is stored automatically – batch information, failure rates, yield, and images of every product before it went out the door, for traceability and warranty protection.

So how can robots replace operators? It normally comes down to moving some of the simpler tasks over to robots, including:

  • Robot Inspection Cells – robotics and vision combine to provide fully automated inspection of raw parts, assemblies or sub-assemblies, replacing tedious human inspection.
  • Robot Pick and Place – the robot picks from a conveyor, at speed, replacing the human element in the process.
  • Automated Bin Picking – by combining 3D vision with robot control feedback, a robot can autonomously pick from a random bin so that parts can be presented to the next process, a machine tool loaded, or parts handed to a human operator, helping the line meet its takt time.
  • Robot Construction and Assembly – a task performed by robots for many years, but the advent of vision systems allows quality inspection to be combined with the assembly process.
  • Industrial Automation – using vision systems to poka-yoke processes and deliver zero defects forward in the manufacturing process, critical in a kanban, Just-in-Time (JIT) environment.
  • Robot and Vision Packing – picking and packing into boxes, containers and pallets using a combination of robots and vision for precise feedback, reducing the human element in packing.

So, maybe you can deploy a robotic vision system and industrial automation in your process to help solve your manufacturing problems. It’s a matter of capital investment in the first place in order to streamline production, increase productivity and reduce reliance on untrained or casual workers in the manufacturing process. Robotic vision systems and industrial automation really can solve your manufacturing problems.

3 simple steps to improve your machine vision system

What’s the secret to a perfect vision system installation? That’s what all industrial automation and process control engineers want to know. The engineering and quality managers have identified a critical process requiring automated inspection, so how do you guarantee that the machine vision system has been set up to the best possible engineering standard? And how can you apply this knowledge to refine the vision systems already on the line? Let’s look at the critical parts of the vision inspection chain and see which vital elements need to be addressed. IVS has been commissioning machine vision systems for over two decades, so we understand the core elements of vision system installs inside and out. Use some of our tricks of the trade to improve your vision system – these 3 simple steps will help you understand the immediate changes you can make.

Step 1 – Lighting and filters
What wavelength of light are you using, and how does this compare to the sensitivity of the camera and the colours of the part requiring inspection? Segmentation of the scene should be planned around the correct lighting set-up: the angle of the machine vision lighting and how contrast is created by the colour of the part either reflecting light towards or away from the camera. Coloured illumination can intensify or suppress particular colours in monochrome imaging. Used this way, contrast helps the system recognise relevant features, which is decisive for an application-specific, optimally matched vision solution. For example, blue light cast onto a multi-colour surface will only be reflected by the blue content: the more blue content in the object, the more light is reflected and the brighter the object appears. In contrast, red content illuminated in blue appears exceptionally dark.

In combination, a filter mounted on the end of the lens can block unwanted ambient light and pass only the wavelength of light required, increasing contrast and resolution. The choice of filter depends on the result you are trying to achieve, but selecting an appropriate filter may improve your current system’s image quality if ambient light is a problem.

Bandpass filters are frequently one of the simplest methods to significantly improve image quality. They have a Gaussian transmission curve and are ideal for investigating the effects of monochromatic imaging. But what are the different filters available?

Multi Bandpass Filters
Multi-Bandpass Filters use a single filter to transmit two or more distinct wavelength bands. They provide natural colour reproduction during the day and near-infrared illumination at night, resulting in accurate, high-contrast pictures. As a result, these are more suited for applications including intelligent traffic solutions, security monitoring, and agricultural inspection.

Longpass Filters
Longpass Filters enable a smooth transition from reflection to transmission and are available in various wavelengths based on the system’s requirements. Longpass filters are an efficient method for blocking excitation light in fluorescent applications and are best employed in a controlled environment where several wavelengths must be passed.

Shortpass/NIR Cut Filters
Shortpass filters offer an unrivalled shift from transmission to reflection and superb contrast. They work best in colour imaging to ensure natural colour reproduction and prevent infrared saturation.

Polarising Filters
Polarising filters minimise reflection, increase contrast, and discover flaws in transparent materials. They’re ideally suited for tasks requiring a lot of reflection, such as analysing items on glossy or very shiny surfaces.

Neutral Density Filters
Neutral Density Filters are known in the machine vision industry as “sunglasses for your system” because they reduce light saturation. Absorptive and reflective styles are available, ideally suited to situations with excessive light. They offer an excellent way of controlling lens aperture and improving depth of field.

So how do you match the filter to the light? The LED’s wavelength in nanometres (nm) typically relates to the bandpass rating of the filter. So if you work in UV at 365 or 395 nm, you would use an equivalently rated filter – a BP365. If you’re using a green LED at 520 nm, this would be a BP520 bandpass filter, and so forth. Most filters have a transmission curve chart showing where light is cut off relative to the filter rating.
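As a quick illustration of this rule of thumb, here is a minimal Python sketch that picks the nearest bandpass filter for a given LED wavelength. The stocked filter list and the BP naming are illustrative assumptions, not a specific supplier’s catalogue:

```python
# Illustrative stock of bandpass filter centre wavelengths (nm).
AVAILABLE_FILTERS_NM = [365, 395, 470, 520, 590, 625, 660, 850]

def match_bandpass_filter(led_wavelength_nm: float) -> str:
    """Pick the bandpass filter whose centre wavelength is nearest the LED's."""
    nearest = min(AVAILABLE_FILTERS_NM, key=lambda f: abs(f - led_wavelength_nm))
    return f"BP{nearest}"

print(match_bandpass_filter(520))  # green LED -> BP520
print(match_bandpass_filter(365))  # UV LED    -> BP365
```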

Step 2 – Camera resolution and field of view (FOV)
A vision system’s spatial resolution refers to the sensor’s active available pixels. When reviewing an improvement to your set-up and working out where a fault may lie, it’s important to consider how many pixels you have available for image processing relative to the size of the defect or computation you are trying to achieve. For example, if you are trying to detect a 100 micron (0.1 mm) x 100 micron (0.1 mm) inclusion on the surface of a product, but the surface is 4 m x 4 m in size, then with a standard 5 MP camera (say 2456 x 2054 pixels) a single pixel will resolve at best 4000/2054 ≈ 1.9 mm. This is nowhere near the level of detail you need for your imaging system. So to improve your machine vision system, establish what size of defect you need to see, then work out your field of view (FOV) relative to the pixels available. Bear in mind that single-pixel resolution for a measurement is not good enough, as the system will fail Measurement Systems Analysis (MSA) and Gauge R&R studies (tests used to determine the accuracy of measurements).

In this case, with a 0.2 mm measurement tolerance, you need a minimum of 20 pixels across the tolerance – 0.2/20 = 0.01 mm, i.e. 10 microns per pixel for accurate, repeatable measurement. Over 4 m, that is 400,000 pixels. So the 5 MP camera originally specified will never inspect the part in one shot – either the camera or the object will have to be moved through smaller fields of view, or higher-resolution cameras specified, for the system to meet the inspection criteria.
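The arithmetic above is easy to script. Here is a minimal sketch using the same worked numbers (a 4 m field, a 2456 x 2054 pixel sensor and a 0.2 mm tolerance with 20 pixels across it) for checking any camera/FOV combination:

```python
def mm_per_pixel(fov_mm: float, pixels: int) -> float:
    """Spatial resolution along one axis of the field of view."""
    return fov_mm / pixels

def required_pixels(fov_mm: float, tolerance_mm: float,
                    pixels_across_tolerance: int = 20) -> float:
    """Pixels needed along one axis to put ~20 pixels across the tolerance."""
    return fov_mm / (tolerance_mm / pixels_across_tolerance)

print(round(mm_per_pixel(4000, 2054), 2))  # ~1.95 mm/pixel: far too coarse
print(required_pixels(4000, 0.2))          # 400,000 pixels needed over 4 m
```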

Step 3 – Vision algorithm optimisation
One of the major steps in improving a machine vision system is a thorough review of the vision system software and the underlying inspection sequence of the image processing system. If the system is a simple, intelligent vision sensor, it might be a matter of reviewing how the logical flow of the system has been set. For more complex PC-based vision applications, there are some fundamental areas which can be reviewed and checked.

1. Thresholding. Thresholding is a simple look-up operation that maps pixel levels to binary 1 or 0 depending on whether they sit above, below or in between two levels. The resulting image is pure black and white, which is then used to segment an area from the background or from another object. Dynamic thresholding is also available, which sets the threshold level automatically based on the distribution of pixel values. But because thresholding uses the underlying pixel grey level to determine the segmentation, it is easily affected by changes in light, which change the pixel values and force changes to the threshold level. This is one of the most common faults in a system that has not been set up with a controlled lighting level. So if you are reviewing a threshold level change, you should also check the lighting and filters used on the system, as discussed in Step 1.
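To make the distinction concrete, here is a minimal OpenCV sketch showing a fixed threshold next to two distribution-based alternatives. It assumes a greyscale image file named part.png exists; the threshold of 128 and the block size are illustrative values to tune per application:

```python
import cv2

# Load the part image as greyscale (assumes "part.png" exists).
img = cv2.imread("part.png", cv2.IMREAD_GRAYSCALE)

# Fixed threshold: pixels above 128 become white (255), the rest black (0).
_, binary = cv2.threshold(img, 128, 255, cv2.THRESH_BINARY)

# Otsu's method picks the level from the grey-level histogram, close to the
# "dynamic thresholding" described above.
_, otsu = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Adaptive threshold: the level is computed from each pixel's neighbourhood,
# so it tolerates moderate lighting drift better than a fixed level.
adaptive = cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                 cv2.THRESH_BINARY, 51, 2)
```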

2. Calibration. Perhaps not calibration in the true sense of the word, but if the camera on your vision system has been knocked out of place (even slightly), it can have a detrimental effect on the inspection outcome and the yield of the vision system, and re-calibration may be needed. This can be checked first: most modern systems have a cross-hair target built into the software that allows a reference image from the original install to be compared with a live image. Check that the camera is in line and the field of view is still the same; focus and light should also be consistent. Some measurement-based machine vision systems ship with a calibration piece or a dot-and-square calibration target. These generally come with a NIST traceability certificate, so you know the artefact is traceable to national standards. Place this calibration target and check the system against it.
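A simple automated version of this check can be scripted. The sketch below, assuming OpenCV, a stored install-time reference image and a 7 x 7 dot target (all illustrative), flags both overall image drift and loss of the calibration target:

```python
import cv2
import numpy as np

# Reference image captured at commissioning and the current live frame
# (assumes both files exist and were taken with identical camera settings).
reference = cv2.imread("install_reference.png", cv2.IMREAD_GRAYSCALE)
live = cv2.imread("live_frame.png", cv2.IMREAD_GRAYSCALE)

# A large mean grey-level difference suggests the camera, focus or lighting
# has shifted since the original install.
drift = np.mean(cv2.absdiff(reference, live))
print(f"Mean grey-level drift: {drift:.1f}")

# Try to locate the dot calibration target; failure to find it usually means
# the camera has moved or lost focus. The 7x7 pattern size is an assumption.
found, centres = cv2.findCirclesGrid(live, (7, 7))
print("Calibration target located" if found else "Target not found - check camera")
```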

So these 3 simple steps to improve your machine vision system may just help you increase your vision system’s yield and stop the engineering manager from breathing down your neck. This will ultimately enhance your production productivity and advance the systems in use. And remember, you should always look at ways to reduce false failures in a vision system; this is another way to boost your vision system performance and improve yield.

How to elevate Statistical Process Control (SPC) in vision system inspection (and how to connect to Industrial IoT!)

It’s the Production Manager’s dream in the era of Industry 4.0: a system that allows them to drill down on the assets in the production chain quickly and have immediate data on throughput, results and SPC. This is all part of a goal to intelligently connect a company’s people, processes, places and assets to drive value across the whole organisation. That’s the goal of the Industrial IoT (IIoT), and it’s what every manufacturer is striving for: immediate, intelligent and critical data available on every process. This increases the efficiency of all manufacturing operations and allows plant-wide measuring and monitoring of machines’ quality, performance and availability. Producers benefit from higher OEE and throughput while maintaining guaranteed levels of quality and cost.

The value of IIoT is that it immediately provides visibility of assets in a plant or in the field, transforming how a firm connects to the internet of things. Converting a factory to IIoT means collecting usage and performance data from existing systems and smart sensors, delivering advanced insights that unlock data-driven intelligence. Vision systems are an essential part of the manufacturing process with a significant impact on output and deliverables, so they are usually among the first components connected to the factory information system. Companies are now establishing collaborative cultures in which engineers can make data-driven decisions that deliver true economic benefit.

A fundamental part of machine vision inspection data is Statistical Process Control (SPC), a method for monitoring and controlling quality during the manufacturing process that is widely used in industry. During production, quality data in the form of product or process measurements are obtained in real time. This information is then (at the most basic level) plotted on a graph with pre-set control limits. Control limits are set by the process’s capabilities, whereas the client’s requirements determine specification limits. Having this data connected to the IIoT system allows for more detailed analysis, with alarms configured on the live data.
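At its simplest, those control limits can be derived directly from the measurement stream. Below is a minimal Python sketch with invented sample data; production SPC software would typically use moving-range or subgroup estimates rather than the plain standard deviation used here:

```python
import statistics

# Invented vision measurements (e.g. a feature dimension in mm).
measurements = [10.02, 10.01, 9.98, 10.00, 10.03, 9.97, 10.01, 9.99]

mean = statistics.mean(measurements)
sigma = statistics.stdev(measurements)
ucl = mean + 3 * sigma  # upper control limit
lcl = mean - 3 * sigma  # lower control limit

for x in measurements:
    if not lcl <= x <= ucl:
        print(f"Out of control: {x}")
print(f"Mean={mean:.3f}  UCL={ucl:.3f}  LCL={lcl:.3f}")
```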

But how can I connect the vision system to the IIoT system?
There are many options, and a large number of ways a vision system can send data to another system, depending on the data required to be collected (images, measurement data, settings, region-of-interest data etc.). Typically, the machine vision system will support a plug-in facility or a format converter which allows outputs to be customised for the IIoT system. But there are several immediate solutions that most vision systems provide “out-of-the-box”, including:
  • Digital I/O (small amounts of data)
  • Fieldbus
  • PLC standard comms (industrial ethernet)
  • SQL databases (SQL Server, SQLite, MySQL)
  • XML output
  • TCP/IP (traditional TCP/IP versus Industrial Ethernet)
  • Reports (Excel, PDF, Word)
  • Web APIs
In addition, many systems now communicate on an industrial basis using:
  • MQTT
  • SignalR
  • WebSockets
  • UDP
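As a flavour of how simple this can be, here is a sketch that publishes one inspection result over MQTT using the paho-mqtt Python library. The broker hostname, topic and payload fields are illustrative placeholders; a real plant would use its own broker and topic scheme:

```python
import json
import paho.mqtt.publish as publish

# One inspection result as a JSON payload (fields are illustrative).
result = {
    "station": "vision-cell-3",
    "batch": "B20230401",
    "pass": False,
    "defect": "missing_component",
    "measurement_mm": 9.87,
}

# One-shot publish to the plant broker (placeholder hostname and topic).
publish.single(
    topic="factory/vision/cell3/results",
    payload=json.dumps(result),
    qos=1,
    hostname="factory-broker.local",
)
```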
The vision system may also have a role to play in closed-loop control of the production process, allowing process parameters to be adjusted based on the information returned from the vision system. The vision system may already connect directly to the PLC or line controller, e.g. via industrial ethernet. In this case, pulling the data into the factory IIoT may be as simple as communicating the machine PLC data to the factory server, where it can be collated with the overall factory statistics into the global IIoT information exchange and display.

So next time you get a call from the production manager asking for a status report on the shop floor vision system and production statistics, just refer them to the company-wide IIoT system – they’ll have all the data they need!

Robot vision trends in medical device production

Medical device manufacturing continues to drive the adoption of robot vision to increase throughput, increase yield and reduce the reliance on manual processes. Many production facilities have now turned to collaborative robots which allow operations in medical device production to be taken to the next level of optimisation.

Cobots with vision tend to work hand in hand with humans, balancing the imperative for safety with the need for flexibility and productivity. Robots no longer need to work alone: collaborative robots are generally designed with inherent safety so they can work alongside humans where possible, and collaborative automation means greater speed and efficiency. Of course, a thorough risk assessment is still required, but the functionality within these robots allows them to be assessed as safe once the surrounding conditions and operational capability are tuned to the safety needs (including any gripper or end-effector design).

Cobots are designed for low-payload applications such as handling small parts and inspection tasks, so they are ideal for medical production processes. Companies continue to look for ways to improve their efficiency by using robots with vision systems, automating current manual processes to produce productivity gains. Cobots offer a lower price point and lower integration cost, providing a faster return on investment that unlocks many more industrial tasks for medical device production automation.

Some examples of the current trends in medical device manufacturing for adopting robot vision solutions include:

Machine Tending

Cobot vision removes the need for many operators to be occupied tending to production machinery, feeding and general assembly operations. The cobot can open and close a machine door, pick and place parts for assembly, and even start a machining process by pushing the start button.

Palletising

The ability to stack and un-stack pallets is another requirement, with the vision system providing real-time offsets and alignment for the pallet stack. A compact robotic cell that accurately loads products onto a pallet reduces the need for rework while also allowing staff to work safely around the robot. Using cobots with vision within a robot cell allows safe interaction with staff members: personnel can enter the robot’s operational area to speed up pallet changes, providing a robotic solution that can work around staff without risking their safety.

Box Erection

Taking flat boxes off a stack and erecting them into a usable format as outer shipper cases is a repetitive and time-consuming task to which cobots with vision are very well suited. Medical device manufacturers need not only to erect the boxes and casings, but to print and verify the serial batch numbers and use-by date on the product, all of which can be completed by the same machine vision inspection system.

Functional Repetitive Testing

Manufacturers need to put their products through thousands of hours of test cycling to prove reliability, for example the continued movement of a syringe body or an asthma inhaler valve. Collaborative robots with vision can perform hundreds of hours of testing, supporting faster test cycles and improving reliability and quality.

Structured Bin Picking

Parts in a structured position, allowing single picks from known datums, can easily be handled with cobots and smart vision sensors. Pick and place into a known fixture or position allows such solutions to be integrated into complex automated production processes.

Unstructured Random Bin Picking

Bin picking from a random box of parts can now be accomplished with cobots and 3D vision. A point cloud identifying individually pickable parts is fed to the robot, allowing parts which would previously have required a human picker to be picked and placed into the next process.

Robots with machine vision will continue to dominate the foreseeable needs for automation of production in medical device manufacturing. With the drive for ever greater flexibility and the need to reduce manpower and increase reliability, this area of automation will continue to be a focus for the medical devices and pharmaceutical industries.

Why the Human Machine Interface (HMI) really does matter for vision systems

Control systems are essential to the operation of any factory. They control how inputs from various sources interact to produce outputs, and they maintain the balance of those interactions so that everything runs smoothly. The best control systems can almost appear magical; with a few adjustments here and there, you can change the output of an entire production line without ever stopping. This is just as true for the vision inspection process and machinery.

The human machine interface (HMI) in a quality control vision system is important for providing feedback to production, monitoring quality at speed and presenting easy-to-read statistics and information. The software interface provides the operator with all the information they need during production, guiding them through the process and providing instructions when needed. It also gives them access to a wide array of tools for adjusting cameras or other settings on the fly, as well as useful data such as machine performance reports.

If a product has been rejected by the inspection machine, it may be some time before an operator or supervisor can review the data. If the machine is fully autonomous and communicates directly with the factory information system, the data might be automatically sent to the factory servers or cloud for later recall – standard practice in most modern production facilities. But the inspection machine HMI can also provide an immediate ability to recall vision image data and detailed information on the quality reject criteria, and can be used to monitor shift statistics. It’s important that the HMI is clearly designed, with ergonomics and ease of use for the shop-floor operator as the main driver. For vision systems and machine vision technology to be adopted, you need buy-in from operators, the maintenance team and line supervisors.

The layout of the operator interface is important for giving immediate data and statistical information to the production team. The interface should be easily read from a distance and display the necessary information on a single screen. During high-speed inspection operations (such as medical device and pharmaceutical inspection) it is not possible to see every product inspected; that’s why a neatly designed vision system display, showing the last failed image, key data and statistical process control (SPC) information, provides a ready interface for the operator to drill down on process specifics which may be affecting the quality of the product.

It’s clear why the HMI in a quality control vision system is so crucial to production operations. It helps operators see important manufacturing inspection data when necessary, recommends adjustments on the fly, monitors production in real time and provides useful data about performance, all at once.

Top 6 medical device manufacturing defects that a vision system must reject

What does every medical device manufacturer crave? Well, the delivery of the perfect part to their pharmaceutical customer, 100% of the time. One of the major stumbling blocks to that perfect score, and to getting the Parts Per Million (PPM) defect rate down to zero, are the scratches, cracks and inclusions which can appear (almost randomly) in injection-moulded parts. Most of the time the body in question is a syringe, vial, pen delivery device, pipette or injectable product which is transparent or opaque in nature, which makes any crack, inclusion or scratch even more apparent to the end customer. Here we drill down on the top six defects you can expect to see in a moulded medical device and how vision systems are used to automatically find and reject the failure before it goes out of the factory door.

1. Shorts, flash and incomplete junctions

Some failures are within the tube surface and are due to a moulding fault in production. For example, an incomplete injection creates a noticeable short shot in the plastic ends of the product. The forming of junctions at the bottom end of the tube creates a deformation that must be analysed and rejected. Flash is characterised as additional material which creates an out-of-tolerance part.

2. Scratches and weld lines

These can be present on the main body, thread area or top shaft. They often have a slight depth to them and can be evident from some angles and less so from others. They can also be confused with weld lines, which can run down the shaft or body of a syringe product and are likewise not acceptable in production.

3. Cracks

Cracks in a medical device can have severe consequences for the patient and consumer: they could allow the pharmaceutical product to leak from the device, change the dosage or create an unclean environment for the drug delivery device. Cracks may appear in the main body, in the thread area, or around the sprue.

4. Bubbles, Black Spots and Foreign Materials

Bubbles and black spots are inclusions and foreign material, which are not acceptable on any product – and on a drug delivery device that is opaque or clear in nature, they stand out all the more. These sorts of particulates are typically introduced through the moulding process and need to be automatically rejected. Often, the cosmetic impact of these inclusions is more of an issue than their effect on the device’s function.

5. Sink marks, depressions and distortions

Sink marks, pits, and depressions can cause distortion in the components, thus putting the device out of tolerance or off-specification from the drawing. These again are caused by the moulding process and should be automatically inspected out from the batch.

6. Chips

Chips on the device have a similar appearance to short shots but could be caused on exit from the moulding or via physical damage through movement and storage. Chips on the body can cause out of tolerance parts and potential failure of the device in use.

But how does this automatic inspection work for all the above faults?

Well, a combination of optics, lighting and filters is the trick to identifying all the defects in medical plastic products, combined with the ability to rotate the product (if it’s cylindrical) or move it in front of multiple camera stations. Some of the critical lighting techniques used for the automated surface inspection of medical devices are:

  • Absorbing – a backlight creates contrast for particulates, bubbles and cosmetic defects.
  • Reflecting – illumination on-axis to the product creates reflections from fragments, oils and crystallisation.
  • Scattering – low-angle lighting highlights defects such as cracks and glass fragments.
  • Polarised – polarised light highlights fibres and impurities.

These, combined with the use of telecentric optics (lenses with constant magnification and no perspective distortion), allow the product to be inspected to its extremities, i.e. all the way to the top and to the bottom. Thus, the whole medical device body can be examined in one shot.

100% automated inspection has come a long way in medical device and pharmaceutical manufacturing. Utilising a precision vision system for production is now a prerequisite for all medical device manufacturers.

How to run a Gauge R & R study for machine vision

When you’re installing a vision system for a measuring task, don’t be caught out by a provider simply stating that pixel calibration is completed by dividing the number of pixels by the measurement value. This does provide a calibration constant, but it’s not the whole story – there is much more to precision gauging using machine vision. Measurement Systems Analysis (MSA), and in particular Gauge R&R studies, are tests used to determine the accuracy of measurements. They are the de facto standard in manufacturing quality control and metrology, and especially relevant for machine-vision-based gauging. Repeated measurements are used to determine variation and bias, and analysis of the measurement results may allow individual components of variation to be quantified. In MSA, accuracy is considered to be the combination of trueness (bias) and precision (variation).

The three most crucial requirements of any vision gauging solution are repeatability, accuracy and precision. Repeatability refers to how near repeated measurements are to each other. Accuracy refers to how close the measurements are to the true value. Precision relates to the finest increment that can be read from the measurement gauge.

Gauge R&R (also known as Gage R&R) – repeatability and reproducibility – is the process used to evaluate a gauging instrument’s accuracy by ensuring its measurements are repeatable and reproducible. The process includes taking a series of measurements to certify that the output is the same value as the input, and that the same measurements are obtained under the same operating conditions over a set duration.

The “repeatability” aspect of the GR&R technique is defined as the variation in measurement obtained:

– With one vision measurement system
– When used several times by the same operator
– When measuring an identical characteristic on the same part

The “reproducibility” aspect of the GR&R technique is the variation in the average of measurements made by different operators:

– Who are using the same vision inspection system
– When measuring the identical characteristic on the same part

Operator variation, or reproducibility, is estimated by determining the overall average for each appraiser and then finding the range by subtracting the smallest operator average from the largest.
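As a simplified illustration of how these two components separate (not the full average-and-range method from the MSA manual), the sketch below pools within-operator variation for repeatability and takes the range of operator averages for reproducibility. The measurement data is invented:

```python
import statistics

# Each operator measures the same characteristic on the same part three times.
trials = {
    "operator_A": [10.01, 10.02, 10.00],
    "operator_B": [10.05, 10.04, 10.06],
    "operator_C": [10.02, 10.03, 10.01],
}

# Repeatability: pooled within-operator variation (equipment variation).
within = [statistics.variance(v) for v in trials.values()]
repeatability = (sum(within) / len(within)) ** 0.5

# Reproducibility: range of the operator averages, as described above.
averages = [statistics.mean(v) for v in trials.values()]
operator_range = max(averages) - min(averages)

print(f"Repeatability (pooled std dev): {repeatability:.4f}")
print(f"Range of operator averages:     {operator_range:.4f}")
```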

So it’s important that vision system measurements are checked against all of these aspects – a bias test, process capability and gauge validation. The overall study should be performed as per the MSA Reference Manual, fourth edition (2010).

More information on IVS gauging and measuring can be found here.

Why using vision systems to capture the cavity ID in injection moulded parts helps you stay ahead of the competition

Machine vision systems are excellent at analysing products, capturing data and providing a large database of useful statistics for your operations and production managers to pore over. For injection-moulded products we are often tasked with measuring key dimensions, gauging a spout or finding the rogue short shot on a medical device. Normally this is at speed, with products running directly out of the Injection Moulding Machine (IMM), up a conveyor and into the vision input (normally a bowl feed, robot or conveyor), with the added bonus of images saved for reference as the automatic inspection takes place. If you’ve had a failure, it’s always a good idea to have a photo of the product to show the quality team and for process control feedback. Let’s face it, the vision system is a goalkeeper, and you need to feed back to the production team to help improve the process.

But what if the failure is intermittent and not always easy to capture? Your quality engineers may be scratching their heads, wondering why there is a product failure every now and then. Is there any way this can be traced back to source? The neat answer is to complete cavity identification (ID) during the inspection process. The tooling for an IMM can include a cavity number so that each individual product has a unique reference. This is used for other forms of quality feedback, such as reviewing tool wear, tooling failures, short-shot imbalance and general troubleshooting for injection-moulded products. So if the cavity ID can be read at speed, saved, and the data tracked against it, you start to build a picture of how your process is running, spot spikes in quality concerns related to a particular cavity, and ultimately achieve full statistical process control of each tool cavity. Utilising precision optical character recognition allows vision systems to read each cavity number (or letter, or combination) and drill the data down to an individual cavity within the tool.
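Once the cavity ID is read, the bookkeeping is straightforward. Here is a minimal sketch of per-cavity pass/fail tracking; the OCR step itself is assumed to be handled by the vision library in use, and the cavity labels and results below are simulated:

```python
from collections import defaultdict

# Running statistics per cavity ID.
cavity_stats = defaultdict(lambda: {"inspected": 0, "failed": 0})

def record_result(cavity_id: str, passed: bool) -> None:
    """Log one inspection result against the cavity that moulded the part."""
    stats = cavity_stats[cavity_id]
    stats["inspected"] += 1
    if not passed:
        stats["failed"] += 1

# Simulated inspection stream: (cavity ID read by OCR, pass/fail result).
for cavity, ok in [("C1", True), ("C2", False), ("C1", True), ("C2", False)]:
    record_result(cavity, ok)

for cavity, s in sorted(cavity_stats.items()):
    rate = 100 * s["failed"] / s["inspected"]
    print(f"Cavity {cavity}: {s['inspected']} inspected, {rate:.0f}% failed")
```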

So next time the quality director comes down onto the shop floor asking what cavities are giving you problems, you’ll have all the data to hand (plus the photos to really wow them!).

Why bin picking is one of the most difficult vision system tasks (and how to overcome it!)

Autonomous bin picking, or the robotic selection of objects with random poses from a bin, is one of the most common robotic tasks, but it also poses some of the most difficult technological challenges. To be able to localise each part in a bin, navigate to it without colliding with its environment or other parts, pick it, and safely place it in another location in an aligned position – a robot must be equipped with exceptional vision and robotic intelligence.

Normally the 3D vision scanner is mounted in a fixed, stationary position in the robotic cell, usually above and in line with the bin. The scanner must not be moved in relation to the robot after the system has been calibrated. As a general rule of thumb, the more room the bin picking application requires – including the space for robot movement, the size of the bin and parts, etc. – the larger the machine vision scanner model required: in simple terms, more resolution and pixels equate to more precision and accuracy.

Calibration is completed with any suitable ball attached to the endpoint of the robotic arm or to the gripper. The ball needs to be made of a material that is appropriate for scanning, which means it needs to be smooth and not too reflective.

One problem with this approach is that the 3D vision system itself can cast a shadow on the bin and inhibit a high-quality acquisition of the scene. This is usually solved by compromising on the optimal position of the scanner in relation to the bin, or by manually rearranging the parts within the bin so that the vision system eventually captures them all. But is there another way?

A way to overcome this is to mount the 3D vision system on the robot itself. Of course, there are certain prerequisites to this approach (i.e. the robot can cope with the additional weight, there is room for mounting, and there is cycle time available for the extra movement), but there are some functional advantages.

For a successful calibration, the scanner must be mounted behind the very last joint (e.g. on the gripper). Any change to the scanner’s position after calibration renders the calibration matrix invalid, and the whole calibration procedure must be carried out again. This sort of calibration is done with a marker pattern – a flat sheet of paper (or another material) with a special pattern recognised by the 3D vision system.
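For readers using OpenCV, this “eye-in-hand” calibration maps onto its calibrateHandEye function. The sketch below is a hedged outline only: it assumes you have already collected, for at least three robot poses, the gripper pose in the robot base frame (from the controller) and the marker-pattern pose in the camera frame (e.g. from solvePnP on the detected pattern):

```python
import cv2

def hand_eye_calibrate(R_gripper2base, t_gripper2base,
                       R_target2cam, t_target2cam):
    """Solve for the scanner's pose relative to the gripper.

    Each argument is a list of rotations/translations, one per robot pose;
    at least three well-spread poses are needed for a usable solution.
    """
    R_cam2gripper, t_cam2gripper = cv2.calibrateHandEye(
        R_gripper2base, t_gripper2base,
        R_target2cam, t_target2cam,
        method=cv2.CALIB_HAND_EYE_TSAI,
    )
    # If the scanner is ever moved on the gripper after this, the result is
    # invalid and the whole procedure must be repeated.
    return R_cam2gripper, t_cam2gripper
```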

So what are the advantages? With a fixed mount, you cannot scan a large bin with a smaller (and so lower-cost) vision scanner, because the scanner sits above the bin and its view is fixed. A small scanner mounted directly on the robotic arm allows you to get closer to the bin and choose which part of it to scan, potentially saving cost and helping with resolution.

Robot-mounted bin picking may also eliminate the need to darken the room where the robotic cell is located. Ambient light coming from a skylight can pose serious challenges to a deployed 3D vision system. A scanner attached to the robot can scan the bin from one side first and then from another, minimising the need for unusual adjustments to the environment.

The shadowing problem described above is also eliminated, as a robot-mounted scanner can “look” at the scene from any angle and from any position.

In conclusion, there are many approaches to automated bin picking using 3D vision systems, each with their own unique approaches dependent on the environment, industrial automation needs, cost and the cycle time available for picking.

“Innovations in Pharmaceutical Technology” – IVS featured in leading international Pharmaceutical magazine

The article covers how traditional image processing techniques are being superseded by vision systems utilising deep learning and artificial intelligence in pharmaceutical manufacturing.

Pharmaceutical and medical device manufacturers must be lean, operate at high speed, and be able to switch product variants quickly and easily, all validated to ‘Good Automated Manufacturing Practice’ (GAMP). Most medical device production processes involve some degree of vision inspection, generally due to either validation requirements or speed constraints (a human operator cannot keep up with the speed of production). It is therefore critical that these systems are robust, easy to understand and seamlessly integrated within the production control and factory information system.

Historically, such vision systems have used traditional machine vision algorithms to complete some everyday tasks: such as device measurement, surface inspection, label reading and component verification. Now, new “deep-learning” algorithms are available to provide an ability for the vision system to “learn”, based on samples shown to the system – thus allowing the quality control process to mirror how an operator learns the process. So, these two systems differ: the traditional system being a descriptive analysis, and the new deep learning systems based on predictive analytics.

Download the magazine from this link (Article Page 30):
https://tinyurl.com/yyj7qvdn

Find more details on IVS solutions for the medical device and pharmaceutical industries here:
https://www.industrialvision.co.uk/vision-systems/vision-sensors/medical-pharma-inspection-solutions

Deep Learning (AI) – Enhancing automated inspection of medical devices?

Integrated quality inspection processes continue to make a significant contribution to medical device manufacturing production, including the provision of automated inspection capabilities as part of real-time quality control procedures. Long before COVID-19, medical device manufacturers were rapidly transforming their factory floors by leveraging technologies such as Artificial Intelligence (AI), machine vision, robotics, and deep learning.

These investments have enabled them to continue to produce critical and high-demand products during these times, even ramping up production to help address the pandemic. Medical device manufacturers must be lean, operate at high speed, and be able to switch product variants quickly and easily, all validated to ‘Good Automated Manufacturing Practice’ (GAMP). Most medical device production processes involve some degree of vision inspection, generally due to either validation requirements or speed constraints (a human operator cannot keep up with the speed of production). It is therefore critical that these systems are robust, easy to understand and seamlessly integrated within the production control and factory information system.

Deep learning

Historically, such vision systems have used traditional machine vision algorithms to complete some everyday tasks: such as device measurement, surface inspection, label reading and component verification. Now, new “deep-learning” algorithms are available to provide an ability for the vision system to “learn”, based on samples shown to the system – thus allowing the quality control process to mirror how an operator learns the process. So, these two systems differ: the traditional system being a descriptive analysis, and the new deep learning systems based on predictive analytics.
Innovative machine and deep learning processes deliver more robust recognition rates, so medical device manufacturers can benefit from enhanced levels of automation. Deep learning algorithms use classifiers, allowing image classification, object detection and segmentation at higher speed. This results in greater productivity and in reliable identification, allocation and handling of a broader range of objects such as blister packs, moulds and seals. By enhancing the quality and precision of deployed machine vision systems, this adds a welcome layer of reassurance for manufacturers operating within this in-demand space.

Deep learning has other uses in medical device manufacturing too. AI relies on a variety of methods, including machine learning and deep learning, to observe patterns found in data; deep learning is a subfield of machine learning that mimics the neural networks of the human brain by creating an artificial neural network (ANN). Like the human brain solving a problem, the software takes inputs, processes them and generates an output. Not only can it help identify defects, it can, for example, help identify missing components from a medical set. Additionally, deep learning can often classify the type of defect, enabling closed-loop process control.

Deep learning can undoubtedly improve quality control in the medical device industry by providing consistent results across lines, shifts and factories. It can reduce labour costs through high-speed automated inspection, help manufacturers avoid costly recalls and resolve product issues, ultimately protecting the health and safety of those at the end of the chain.
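For a sense of what “learning from samples” looks like in practice, here is a minimal transfer-learning sketch in PyTorch: a pretrained network re-headed to classify device images as good or defective. The dataset, class count and single training step are illustrative stand-ins, not a validated inspection model:

```python
import torch
import torch.nn as nn
from torchvision import models

# Pretrained backbone with a new 2-class head (good / defective).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)

# One illustrative training step: random tensors stand in for labelled
# 224x224 RGB sample images (the "good and bad samples shown to the system").
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))

optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(f"training loss: {loss.item():.3f}")
```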

AI Limitations

However, deep learning is not a silver bullet for all medical device and pharmaceutical vision inspection applications. It may be challenging to adopt in some applications due to the Food and Drug Administration (FDA)/GAMP rules relating to validation.

The main issue is the limited ability to validate such systems. Because a vision inspection solution utilising AI algorithms needs sample data – both good and bad samples – validating the process is extremely difficult where quantitative data is required. Traditional machine vision provides specific outputs relating to measurements, grey levels, feature extraction, counts etc., which are generally used to validate a process. With deep learning, the only output is “pass” or “fail”.

This is a limitation of deep-learning-enabled machine vision solutions – the user has to accept the decision provided by the AI tool blindly, with no detailed explanation for the choice. In this context, the vision inspection application should be reviewed in advance to see whether AI is applicable and appropriate for the solution.

Conclusion

In conclusion, deep-learning for machine vision in industrial quality control is now widely available. Nevertheless, each application must be reviewed in detail – to understand if the most appropriate solution is to utilise traditional machine vision with quantifiable metrics or the use of deep-learning with its decision based on the data pool provided. As AI and deep learning systems continue to develop for vision system applications, we will see more novel ways of adapting the solutions to replace traditional image processing techniques.

You can find out more details on IVS deep learning vision systems here:
https://www.industrialvision.co.uk/applications/deep-learning-artificial-intelligence-ai

Industrial Vision Systems launches free ‘Vision To Automate’ guide

Industrial Vision Systems (IVS), a supplier of machine vision systems to industry, has today launched a free downloadable guide explaining and reviewing vision systems and machine vision. The new ‘Vision to Automate’ guide is designed to support and direct UK manufacturers who are looking to adopt smart factory systems such as robotics, vision systems, automation, and machine learning.

With manufacturers desperately keen to resume some normality in production post-COVID-19, this 32-page premium guide reviews the basics of machine vision and vision systems, including components, applications and return on investment. ‘Vision to Automate’ also drills down into how an automated factory floor can increase productivity by improving processes, providing greater flexibility and increasing the volume of parts, which in turn reduces costs through less re-testing and labour.

IVS is already witnessing an increasing number of factory floor managers looking to increase the number of collaborative robots operating side by side with human workers post-lockdown, to ease fears of picking up infections. ‘Vision to Automate’ explains how this crucial human-robot collaboration will support the flexible production of highly complex items in lower quantities.

The guide also dissects automated bin-picking robots, where vision and robotics operate autonomously, picking product from bins and totes to load machines, bag products or produce sub-assemblies. This is an area which IVS believes will become common across factory floors as workplaces evolve post-lockdown.

Earl Yardley, director at Industrial Vision Systems, comments: “The key for us is to outline the basics. What is machine vision? How does it work? What can it be used for? ‘Vision to Automate’ answers those questions. Forward-thinking businesses are leveraging automation to support their organisation, and I believe we will continue to see critical changes to working practices and automation deployment. This will create new opportunities across manufacturing within many industry sectors. This includes cutting edge production ideologies with vision robotics and an increasing ability to reduce human to human contact with the deployment of autonomous robotics. We see a growing demand for vision-guided robot systems to maintain production capacity and reduce dependence on the human workforce which will further drive the adoption of flexible manufacturing for generations to come. Removing operators from some production operations will allow factories to reopen with reduced human to human contact, increasing yield and protecting the rest of what is likely to be an anxious workforce.”

Download for FREE via the IVS homepage.
www.industrialvision.co.uk