How to choose a camera for a machine vision application.

For the new year we thought it would be useful to go back to basics, to understand how to choose the correct camera for a machine vision application. With ever increasing options we want to drill down on the choices available and to understand why we use certain IVS cameras in our machines.

A camera’s ability to capture a correctly illuminated image of the inspected object depends on the direct relationship between the sensor and the lens. The sensor’s job is to convert light (photons) coming through the lens into electrical signals (electrons). Typically it does this using either charge-coupled device (CCD) or complementary metal-oxide-semiconductor (CMOS) technology, which digitises the charge into an image made up of pixels. The image can then be sent to the processor for analysis. Low light creates dark pixels; bright light creates brighter pixels.

Parameters such as part size and inspection tolerances help determine the required sensor resolution, and it is very important that the right one is chosen for the application. Higher measurement accuracy requires higher sensor resolution.
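
As a rough back-of-the-envelope check, the required resolution can be estimated from the field of view and the smallest feature that must be resolved. The sketch below is illustrative only – the function name and the common three-pixels-per-feature rule of thumb are assumptions, not a substitute for a full optical design:

```python
def min_sensor_pixels(fov_mm: float, smallest_feature_mm: float,
                      pixels_per_feature: int = 3) -> int:
    """Estimate the minimum pixel count needed along one sensor axis.

    A common rule of thumb is to allow at least three pixels across the
    smallest feature (or tolerance band) that must be resolved.
    """
    return round(fov_mm / smallest_feature_mm * pixels_per_feature)

# Example: a 100 mm field of view with a 0.2 mm defect to detect needs
# roughly 1500 pixels across, so a 1600 x 1200 sensor would suffice.
print(min_sensor_pixels(fov_mm=100, smallest_feature_mm=0.2))  # -> 1500
```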

Depending on application requirements, machine vision systems often consist of a number of cameras. These can be monochrome and/or colour, capturing and interpreting images and signalling to the control system to deliver the required solution.

Line Scan (1D sensors)

Line scan cameras capture an image a single line at a time and are typically used to inspect moving objects that are transported past the camera. They are also useful for very large moving objects, or for cylindrical objects that can be rotated. Where colour information is important, this too can be achieved with line scan cameras.

Because the camera only captures one line at a time, it comes with certain limitations. Illumination needs to be extremely precise, and because the aperture needs to remain open most of the time, depth of field is reduced, which makes capturing objects at different distances more problematic.
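
Conceptually, a line scan image is just successive one-pixel-high captures stacked up as the object moves past. The sketch below shows the idea; `grab_line()` is a hypothetical stand-in for a real camera API, which in practice would be triggered by a motion encoder so that line spacing matches the transport speed:

```python
import numpy as np

def grab_line(width: int = 2048) -> np.ndarray:
    """Stand-in for a line scan camera call: returns one row of pixels."""
    return np.random.randint(0, 256, size=width, dtype=np.uint8)

# Stack 4096 encoder-triggered lines into one 2D image of the moving object.
image = np.vstack([grab_line() for _ in range(4096)])  # shape (4096, 2048)
```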

Area Scan (2D sensors)

In area scan cameras the sensor has a large matrix of pixels, generating a two-dimensional image in a single exposure cycle. Because area scan cameras capture a rectangular area, it is typically easier to install high-performance lighting with this set-up than with line scan cameras.

Area scan cameras also make it possible to take short-exposure images using strobe lighting, which delivers a large amount of light to the sensor in a short period of time. As such, area scan cameras are used for image acquisition in the majority of applications.

2D & 3D Imaging

3D machine vision helps to overcome complex, high-accuracy, real-time challenges where the practical limitations of 2D vision mean it simply can’t be used. In particular, 2D vision is limited in applications where:
● shape information is critical to performing a task
● there is sensitivity to lighting issues
● it is difficult to achieve high contrast levels
● object movement may impair image accuracy
● imaging three-dimensional shape, form or volume is a necessity
Of course, for many simple applications these issues are of no consequence and therefore 2D vision, using either line scan or area scan cameras, is perfectly acceptable.

However, 3D imaging is growing in importance within the machine vision industry. As outlined in previous posts, although it is more time, processor and software intensive, rapid advances in technology, algorithms and software mean that these systems are now more than capable of keeping up with production line throughput requirements.

Because they reliably capture data in the third dimension, 3D machine vision systems are far less affected by the lighting, contrast and object-distance issues suffered by their 2D counterparts.

As such some of the key applications of 3D imaging include:
● Measurement of volume, height, thickness, holes, curves and angles
● Robot guidance, bin picking for placing, packing or assembly
● Quality control where 3D CAD models have been used
● Where understanding of 3D space and dimensions is required
● Inspecting low contrast objects

There are four commonly used methods for 3D imaging:
1. Scanning-based triangulation exploits the angular relationship between the object being scanned, the laser and the camera. A laser line is projected onto the surface of the object and viewed by the camera from a different, known angle. Deviations of the line represent shape variations, and lines from multiple scans can be assembled into a depth map or point cloud representing the 3D image (a minimal version of this geometry is sketched after this list). Often multiple cameras are used, tracking the laser line from different angles and merging the data sets into a single profile. This helps to overcome ‘shadowing’ – a situation where the laser line is hidden from a single camera’s view by other parts of the same object.
2. Stereo vision, as the name suggests, is based on using two cameras, much like a pair of human eyes. Using the triangulation technique, disparities between the two captured 2D images are used to calculate a 3D image.
3. Time-of-flight 3D cameras measure, for each image point, the time a light pulse takes to reach the object and return. They have limitations in terms of working distance and resolution, which makes them suitable only for specific applications.
4. Structured light projects engineered light patterns that encode 3D information directly into the scene viewed by the camera.
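
To make the triangulation geometry of method 1 concrete, here is a minimal sketch assuming one common arrangement: a vertical laser sheet viewed by a camera at a known angle from vertical, so that a surface height h displaces the imaged line by h·tan(angle) on the reference plane. All names and numbers are illustrative:

```python
import numpy as np

def line_offsets_to_heights(offsets_px: np.ndarray, mm_per_px: float,
                            camera_angle_deg: float) -> np.ndarray:
    """Convert laser-line displacements (pixels) into surface heights (mm).

    Assumes a vertical laser sheet viewed at `camera_angle_deg` from
    vertical: a height h shifts the imaged line by h * tan(angle), so
    h = displacement / tan(angle).
    """
    displacement_mm = offsets_px * mm_per_px
    return displacement_mm / np.tan(np.radians(camera_angle_deg))

# One scan line: displacement of the laser line from its reference
# position in each pixel column. Stacking one height profile per
# transport step builds the depth map / point cloud.
offsets = np.array([0.0, 1.5, 3.1, 3.0, 0.2])
profile = line_offsets_to_heights(offsets, mm_per_px=0.05, camera_angle_deg=30)
```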

Whatever the task, it’s important to understand how all the elements of the machine vision system interact to create a robust and reliable solution. The best approach is to start with the camera and build the system architecture up around it.

Robot vision trends in medical device production.

Medical device manufacturing continues to drive the adoption of robot vision to increase throughput, raise yield and reduce the reliance on manual processes. Many production facilities have now turned to collaborative robots, which allow operations in medical device production to be taken to the next level of optimisation.

Cobots with vision tend to work hand in hand with humans, balancing the imperative for safety with the need for flexibility and productivity. Robots no longer need to work alone: collaborative robots are generally designed with inherent safety so they can work alongside humans where possible, and collaborative automation means greater speed and efficiency. A thorough risk assessment is still required, but the functionality within these robots allows them to be assessed as safe once the surrounding conditions and operational capability are tuned to the safety needs (including any gripper or end-effector design).

Cobots are designed for low-payload applications such as handling small parts and inspection tasks, so they are ideal for medical production processes. Companies continue to look for ways to improve efficiency by using robots with vision systems, automating current manual processes to produce productivity gains. Cobots offer a lower price point and lower integration cost, providing a faster return on investment that unlocks many more industrial tasks for medical device production automation.

Some examples of the current trends in medical device manufacturing for adopting robot vision solutions include:

Machine Tending

Cobot vision removes the need for many operators to be occupied tending to production machinery, feeding and general assembly operations. The cobot can open and close a machine door, pick and place parts for assembly, and even start a machining process by pushing the start button.

Palletising

The ability to stack and un-stack pallets is another requirement, with the vision system providing real-time offsets and alignment for the pallet stack. A compact robotic cell that accurately loads products onto a pallet reduces the need for rework while also allowing staff to work safely around the robot. Using cobots with vision within a robot cell allows safe interaction with staff members: personnel can enter the robot’s operational area to speed up pallet changes, giving a robotic solution that works around staff without risking their safety.

Box Erection

Taking flat boxes off a stack and erecting them into a usable format as outer shipper cases is a repetitive, time-consuming industrial task to which cobots with vision are very well suited. Medical device manufacturers need not only to erect the boxes and casings but also to print and verify the serial batch numbers and use-by date on the product, all of which can be completed by the same machine vision inspection system.

Functional Repetitive Testing

Manufacturers need to put their products through thousands of hours of test cycling to prove reliability – for example, the repeated movement of a syringe body or an asthma inhaler valve. Collaborative robots with vision can perform hundreds of hours of testing unattended, supporting faster test cycles and improving reliability and quality.

Structured Bin Picking

Parts presented in a structured position, allowing single picks from known datums, can easily be handled by cobots with smart vision sensors. Picking and placing into a known fixture or position allows such solutions to be integrated into complex automated production processes.

Unstructured Random Bin Picking

Bin picking from a random box of parts can now be accomplished with cobots and 3D vision. A point cloud identifying individually pickable parts is fed to the robot, allowing parts that would previously have required a human picker to be picked and placed into the next process.

Robots with machine vision will continue to dominate the foreseeable needs for automation of production in medical device manufacturing. With the drive for ever greater flexibility and the need to reduce manpower and increase reliability, this area of automation will continue to be a focus for the medical devices and pharmaceutical industries.

A guide to the different lenses used in industrial vision systems.

Optical lenses are used to view objects in imaging systems. Vision system lenses are imaging components that focus an image of the examined object onto a camera sensor. Depending on the lens, they can be used to remove parallax or perspective error, or to provide adjustable magnifications, fields of view or focal lengths. Imaging lenses allow an object to be viewed in several ways to bring out the features or characteristics that matter for a given application. Alongside lighting, lens selection is paramount to creating the highest level of contrast between the features of interest and the surrounding background – fundamental to achieving the greatest possible image quality from a vision system.

In this respect the lens plays an important role in gathering light from the part being inspected and projecting an image onto the camera sensor; the ratio between the sensor size and the field of view is referred to as the primary magnification (PMAG). The amount of light gathered is controlled by the aperture, which is opened or closed to let more or less light into the lens, and by the exposure time, which determines how long the image is imposed on the sensor.

The precision of the image depends on the relationship between the field of view (FOV) and working distance (WD) of the lens, and the number of physical pixels in the camera’s sensor.

FOV is the size of the area you want to capture.

WD is the approximate distance from the front of the lens to the part being inspected; a more exact definition takes the lens structure into account.

The focal length – the most common way to specify a lens – can then be determined from these measurements and the camera/sensor specifications.
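
As a hedged first pass (the thin-lens approximation with illustrative numbers – real lens selection should use the manufacturer’s data), the focal length can be estimated from the sensor size, field of view and working distance:

```python
def approx_focal_length(sensor_mm: float, fov_mm: float, wd_mm: float) -> float:
    """Thin-lens focal length estimate.

    PMAG = sensor size / field of view, and f ~= WD * PMAG / (1 + PMAG).
    """
    pmag = sensor_mm / fov_mm
    return wd_mm * pmag / (1 + pmag)

# Example: a 2/3" sensor (8.8 mm wide), 100 mm field of view and 300 mm
# working distance suggests roughly a 24 mm lens, so the nearest standard
# focal length (25 mm) would be the starting point.
print(round(approx_focal_length(sensor_mm=8.8, fov_mm=100, wd_mm=300), 1))
```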

The performance of a lens is analysed by reference to its modulation transfer function (MTF), which evaluates resolution and contrast performance across a range of spatial frequencies.

Types of lens

To achieve the highest resolution from any lens/sensor combination, the choice of lens has become an increasingly important factor in machine vision. This is because of the trend towards smaller pixels on larger sensors.

Ideally an individual lens would be designed for each specific field of view, working distance and sensor combination, but such a highly custom approach is impractical from a cost perspective.

As a result, there is a multitude of optical/lens configurations available; some of the key varieties are shown below:

Standard resolution lenses
  • Characteristics: designed for sensor resolutions below one megapixel; fixed focal lengths from 4.5 mm to 100 mm; MTF of 70 – 90 lp/mm; low distortion
  • Applications: general-purpose inspection; the most widely used lens type

High resolution lenses
  • Characteristics: focal lengths up to 75 mm; MTF in excess of 120 lp/mm; very low distortion (<0.1%)
  • Applications: cameras with a small pixel size; precise measurement

Macro lenses
  • Characteristics: small fields of view, approximately equal to the camera’s sensor size; very good MTF characteristics; negligible distortion; lack flexibility – the iris and working distance cannot be changed
  • Applications: optimised for ‘close-up’ focusing

Large format lenses
  • Characteristics: required when the camera sensor exceeds what C-mount lenses can accommodate; often modular in construction, including adapters, helical mounts and spacers
  • Applications: most commonly used in line scan applications

Telecentric lenses
  • Characteristics: collimated light eliminates dimensional and geometric image variations; no distortion; constant magnification and no perspective distortion whatever the object distance; to achieve collimation the front aperture must be at least as large as the field of view, so lenses for large fields of view are comparatively expensive
  • Applications: specialist metrology

Liquid lenses
  • Characteristics: change shape within milliseconds; enable faster, more compact system designs without complex mechanics; MTBF in excess of one billion movements; longer working life due to minimal moving parts
  • Applications: where lens focus must change rapidly because object size or distance varies

360˚ lenses
  • Characteristics: view every surface of an object with as few cameras as possible
  • Applications: complex image shapes

360˚ – pericentric lenses
  • Characteristics: the specific path of light rays through the lens means a single image can show detail from the top and sides of an object simultaneously
  • Applications: cylindrical objects

360˚ – catadioptric lenses
  • Characteristics: the sides of the inspected object are observed over a wide viewing angle, up to a maximum of 45°, making it possible to inspect complex geometries from a convenient perspective
  • Applications: small objects down to 7.5 mm in diameter

360˚ – hole inspection optics
  • Characteristics: a large viewing angle (>82°); compatible with a wide range of object diameters and thicknesses
  • Applications: viewing objects containing holes, cavities and containers; imaging both the bottom of a hole and its vertical walls

360˚ – Polyview lenses
  • Characteristics: provide eight different views of the side and top surfaces of an object
  • Applications: inspection of object features that would otherwise be impossible to acquire with a single camera

You need enough resolution to see the defect or to measure to an appropriate tolerance. Modern vision systems use sub-pixel interpolation, but nothing beats having whole pixels available for the measurement required. Match the right lens to the right camera, and if in doubt, over-specify the camera resolution – most machine vision systems can window down the image from the sensor anyway.

Human Vision v Machine Vision (and what the operational benefits really are).

We get asked this all the time: what are the differences between a human inspector and what can we expect from an intelligent vision system? Advances in artificial intelligence, in particular deep learning, are enabling computers to learn for themselves, so the gap continues to narrow. But it’s still safe to say that vision systems work from logic, don’t get tired and don’t have an “off” day. And of course, some production processes are simply too fast for an operator to inspect (such as in medical device and pharmaceutical manufacturing).

Some of the key characteristics in comparison between human and machine vision can be seen in the table below:

Speed
  • Human vision: the human visual system can process 10 to 12 images per second.
  • Machine vision: high speed – hundreds to thousands of parts per minute (PPM).

Resolution
  • Human vision: high image resolution.
  • Machine vision: high resolution and magnification.

Interpretation
  • Human vision: handles complex information; best for qualitative interpretation of an unstructured scene.
  • Machine vision: follows its program precisely; best for quantitative and numerical analysis of a structured scene.

Light spectrum
  • Human vision: visible light only – human eyes are sensitive to electromagnetic wavelengths ranging from 390 to 770 nanometres (nm).
  • Machine vision: requires additional lighting to highlight the parts being inspected, but can record light beyond the human visible spectrum; some systems operate at infrared (IR), ultraviolet (UV) or X-ray wavelengths.

Consistency, reliability & safety
  • Human vision: impaired by boredom, distraction and fatigue.
  • Machine vision: continuous, repeatable performance 24/7 with 100% accuracy.

As we can see in the table, machine vision offers some real benefits over human vision, and this is what an increasing number of industries are recognising and exploiting.

The operational benefits of machine vision

So, for the forward-thinking companies exploiting this technology, there are a number of operational benefits of machine vision including:

  • Increased competitiveness – higher productivity and output
  • Lower costs – Less downtime and waste, increased speed and error correction. Automatically identify and correct manufacturing problems on-line with machine vision forming part of the factory control network
  • Improved product quality – 100% quality check for maximum product quality
  • Improved brand reputation management – More stringent compliance with industry regulations, leading to fewer product recalls and a reduced number of complaints
  • Customer complaint handling process improvements – Image archiving and recording throughout the whole process
  • Increased safety – machine vision solutions contribute to a positive, safe working environment
  • Improvements to sustainable working practices – machine vision can improve the use of energy and resources, make material flow smoother, prevent system jams, reduce defects/waste and even save on space. It can also help to enable Just In Time processes through tracking and tracing products and components throughout the production process, avoiding component shortages, reducing inventory and shortening delivery time.
  • Higher innovation – Releasing staff from manual, repetitive tasks for higher value work can lead to greater creativity and problem-solving.

Limitations

Whilst it’s hopefully clear that there are many benefits to machine vision, there are also some limitations. Machine vision systems can deliver results equal to and often above human performance, with greater speed, continuity and reliability over longer periods.

However, we should remember that the way the machine ‘sees’ can only be as good as the conditions and rules that are created to enable it to do so. In this respect, it is important that the following basic rules are observed:

  • The inspection task has been described precisely and in detail, in a way appropriate to the special characteristics of machine ‘vision’.
  • All permissible variants of test pieces (with regard to shape, colour, surface etc.) and all types of errors have been taken into account.
  • The environmental conditions (illumination, image capturing, mechanics etc.) have been designed in such a way that the objects or defects to be recognised stand out in an automatically identifiable way.
  • These environmental conditions are kept stable.

In a related point, machine vision can be limited by the relationship between the amount of data required to train it and the processing power needed to process that data efficiently. However, deep learning, which enables machines to learn rather than follow hand-crafted rules, provides an opportunity for progress in this respect.

Why the Human Machine Interface (HMI) really does matter for vision systems.

Control systems are essential to the operation of any factory. They control how inputs from various sources interact to produce outputs, and they maintain the balance of those interactions so that everything runs smoothly. The best control systems can almost appear magical: with a few adjustments here and there, you can change the output of an entire production line without ever stopping it. This is just as true for the vision inspection process and machinery.

The human machine interface (HMI) in a quality control vision system is important: it provides feedback to production, monitors quality at speed and presents easy-to-read statistics and information. The software interface gives the operator all the information they need during production, guiding them through the process and providing instructions when needed. It also gives them access to a wide array of tools for adjusting cameras or other settings on the fly, as well as useful data such as machine performance reports.

If a product has been rejected by the inspection machine, it may be some time before an operator or supervisor can review the data. If the machine is fully autonomous and communicates directly with the factory information system, the data might be sent automatically to the factory servers or cloud for later recall – standard practice in most modern production facilities. But the inspection machine HMI can also provide an immediate way to recall the vision image data and detailed information on the quality reject criteria, and can be used to monitor shift statistics. It’s important that the HMI is clearly designed, with ergonomics and ease of use for the shop-floor operator as the main driver: for vision systems and machine vision technology to be adopted, you need buy-in from operators, the maintenance team and line supervisors.

The layout of the operator interface matters because it gives immediate data and statistical information to the production team. The interface should be easily readable from a distance and display the necessary information on a single screen. During high-speed inspection operations (such as medical device and pharmaceutical inspection) it is not possible to see every product inspected; that is why a neatly designed vision system display, showing the last failed image, key data and statistical process control (SPC) information, provides a ready interface for the operator to drill down into the process specifics that may be affecting product quality.

It’s clear why the HMI in a quality control vision system is so crucial to production operations: it helps operators see important manufacturing inspection data when necessary, recommends adjustments on the fly, monitors production in real time and provides useful performance data, all at once.

What are false failures in machine vision (and the art of reducing them).

Over the last twenty-plus years in the machine vision industry we’ve learnt a lot about customer expectations and the need to provide solutions that are robust, traceable and good value for money. One of the key elements of any installation or inspection machine is the somewhat mystical area of false failures of the vision system, and making sure these are reduced to the point of virtual elimination. A small improvement in the false failure rate can have a massive impact on yield, especially when the inspection takes place at 500 parts per minute!

For customers new to machine vision, it’s a concept that takes some getting used to: a few false reject events have to be accepted in order to ensure that no false accepts take place. During the design phase of the vision set-up and inspection routine, we review a collection of “Good” parts as well as a pool of “Bad” parts. Statistically, an infinite number of each would be needed to calculate the exact set points for every measured feature; in practice, it is impossible to create an effective vision solution without such a collection of components. We use various vision software methods to analyse all of the parts and features, creating a collection of data sets from these good and bad parts.

This is where the system’s true feasibility is determined. Once a vision tool has been created for any given feature, the population of data points will usually show some degree of separation between the good and bad parts. If the data does not form two mutually exclusive distributions, some good parts must be rejected to ensure that no faulty parts are passed and allowed to continue to the customer. In other words, there is a cost to rejecting some good parts in order to guarantee that a failed part is never sent to the customer.
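
A small numerical sketch makes the trade-off concrete. Assuming, purely for illustration, normally distributed measurements for good and bad parts, placing the accept threshold beyond essentially all bad parts guarantees no false accepts – and the overlap between the distributions then dictates the false-reject rate:

```python
from statistics import NormalDist

# Illustrative, assumed distributions of one measured feature.
good = NormalDist(mu=10.0, sigma=0.10)   # good parts
bad = NormalDist(mu=9.3, sigma=0.15)     # defective parts

# Put the accept threshold above virtually all bad parts (mean + 4 sigma
# of the bad distribution) so that no faulty part is ever passed.
threshold = bad.mean + 4 * bad.stdev     # = 9.9

# The cost: every good part measuring below the threshold is rejected.
false_reject_rate = good.cdf(threshold)
print(f"{false_reject_rate:.1%} of good parts rejected")  # ~15.9%
```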

This distribution overlap explains why false rejections occur. More importantly, it means that a system which never rejects good parts may well be passing bad ones.

Of course, with all our years of experience in machine vision we understand this issue, so all our systems and machines are designed so that there is enough difference between good and bad for the distributions to be as distinct as possible – that’s the art. This is not so difficult to achieve in presence/absence applications. It is harder in gauging and metrology vision, where there is some measurement uncertainty, and extremely difficult in surface flaw detection. Experience in the field of machine vision is everything when it comes to false failure reduction, and that is what our long-standing knowledge of vision systems provides.

How to run a Gauge R & R study for machine vision.

When you’re installing a vision system for a measuring task, don’t be caught out by a provider simply stating that pixel calibration is completed by dividing the number of pixels by the measurement value. This does provide a calibration constant, but it’s not the whole story – there is much more to precision gauging using machine vision. Measurement Systems Analysis (MSA), and in particular Gauge R&R studies, are tests used to determine the accuracy of measurements. They are the de facto standard in manufacturing quality control and metrology, and especially relevant for machine vision-based checking. Repeated measurements are used to determine variation and bias, and analysis of the measurement results may allow individual components of variation to be quantified. In MSA, accuracy is considered to be the combination of trueness (bias) and precision (variation).

The three most crucial requirements of any vision gauging solution are repeatability, accuracy and precision. Repeatability refers to how close repeated measurements are to each other; accuracy refers to how close the measurements are to the true value; and precision refers to the number of digits that can be read from the measurement gauge.

Gauge R&R (also written Gage R&R) – repeatability and reproducibility – is the process used to evaluate a gauging instrument’s accuracy by ensuring that its measurements are repeatable and reproducible. The process includes taking a series of measurements to certify that the output is the same value as the input, and that the same measurements are obtained under the same operating conditions over a set duration.

The “repeatability” aspect of the GR&R technique is defined as the variation in measurement obtained:

– With one vision measurement system
– When used several times by the same operator
– When measuring an identical characteristic on the same part

The “reproducibility” aspect of the GR&R technique is the variation in the average of measurements made by different operators:

– Who are using the same vision inspection system
– When measuring the identical characteristic on the same part

Operator variation, or reproducibility, is estimated by determining the overall average for each appraiser and then finding the range by subtracting the smallest operator average from the largest.
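
The sketch below implements this classic average-and-range calculation. The constants K1 and K2 are the usual AIAG MSA values for three trials and three operators (use the table values matching your own study design), and the measurement array is simulated purely for illustration:

```python
import numpy as np

def gauge_rr_avg_range(data: np.ndarray) -> dict:
    """Average-and-range Gauge R&R; `data` is (operators, parts, trials)."""
    K1, K2 = 0.5908, 0.5231          # AIAG constants: 3 trials, 3 operators
    _, n_parts, n_trials = data.shape

    # Repeatability (equipment variation): mean of the per-part ranges.
    r_bar = (data.max(axis=2) - data.min(axis=2)).mean()
    ev = r_bar * K1

    # Reproducibility (appraiser variation): range of the operator
    # averages, corrected for the repeatability it already contains.
    op_avgs = data.mean(axis=(1, 2))
    x_diff = op_avgs.max() - op_avgs.min()
    av = np.sqrt(max((x_diff * K2) ** 2 - ev**2 / (n_parts * n_trials), 0.0))

    return {"EV": ev, "AV": av, "GRR": float(np.hypot(ev, av))}

# 3 operators x 5 parts x 3 trials of simulated vision measurements (mm).
rng = np.random.default_rng(0)
measurements = 10 + rng.normal(0, 0.02, size=(3, 5, 3))
print(gauge_rr_avg_range(measurements))
```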

So it’s important that vision system measurements are checked against all of these aspects – a bias test, a process capability study and gauge validation. The overall study should be performed as per the MSA Reference Manual (fourth edition, 2010).

More information on IVS gauging and measuring can be found here.

Why using vision systems to capture the cavity ID in injection moulded parts helps you stay ahead of the competition.

Machine vision systems are excellent at analysing products, capturing data and providing a large database of useful statistics for your operations and production managers to pore over. For injection moulded products we are often tasked with measuring key dimensions, gauging a spout or finding a rogue short shot on a medical device. Normally this is at speed, with products running directly out of the injection moulding machine (IMM), up a conveyor and into the vision input (normally a bowl feed, robot or conveyor), with the added bonus of images saved for reference as the automatic inspection takes place. If you’ve had a failure, it’s always a good idea to have a photo of the product to show the quality team and for process control feedback. Let’s face it, the vision system is a goalkeeper, and you need to feed back to the production team to help improve the process.

But what if the failure is intermittent and not always easy to capture? Your quality engineers may be scratching their heads, wondering why there is a product failure every now and then. Is there any way this can be traced back to source? The neat answer is to complete cavity identification (ID) during the inspection process. The tooling of an IMM can include a cavity number so that each individual product has a unique reference; this is used for other forms of quality feedback, such as reviewing tool wear, tooling failures, short-shot imbalance and general troubleshooting of injection moulded products. So, if the cavity ID can be read at speed, saved and tracked against the inspection data, you start to see a picture of how your process is running, spikes in quality concerns related to a particular cavity, and ultimately full statistical process control of each tool cavity. Precision optical character recognition allows the vision system to read each cavity number (or letter, or combination), drilling the data down to an individual cavity within the tool, as sketched below.
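
As a hedged illustration of the bookkeeping involved (the `read_cavity_id()` call below is a hypothetical stand-in for the real OCR tool), per-cavity quality tracking reduces to a simple tally keyed on the cavity ID read from each part:

```python
from collections import Counter

def read_cavity_id(image) -> str:
    # Placeholder: a real system would run OCR on the cavity-number
    # region of the part image here.
    return "07"

inspected: Counter = Counter()
rejected: Counter = Counter()

def record_result(image, passed: bool) -> None:
    """Tally pass/fail statistics per tool cavity."""
    cavity = read_cavity_id(image)
    inspected[cavity] += 1
    if not passed:
        rejected[cavity] += 1

def reject_rate(cavity: str) -> float:
    """Per-cavity reject rate - a spike points straight at one cavity."""
    return rejected[cavity] / max(inspected[cavity], 1)
```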

So next time the quality director comes down onto the shop floor asking what cavities are giving you problems, you’ll have all the data to hand (plus the photos to really wow them!).

Why bin picking is one of the most difficult vision system tasks (and how to overcome it!).

Autonomous bin picking – the robotic selection of objects with random poses from a bin – is one of the most common robotic tasks, but it also poses some of the most difficult technological challenges. To localise each part in a bin, navigate to it without colliding with the environment or other parts, pick it, and safely place it in another location in an aligned position, a robot must be equipped with exceptional vision and robotic intelligence.

Normally the 3D vision system scanner is mounted in a fixed, stationary position in the robotic cell, usually above and in line with the bin, and it must not be moved in relation to the robot after the system has been calibrated. As a general rule of thumb, the more room the bin picking application requires – including the space for robot movement and the size of the bin and parts – the larger the machine vision scanner needed: in simple terms, more resolution and more pixels equate to more precision and accuracy.

Calibration is completed with a suitable ball attached to the endpoint of the robotic arm or to the gripper. The ball needs to be made of a material appropriate for scanning: smooth, but not too reflective.

One of the problems with this approach is that the 3D vision system itself could cast a shadow on the bin and inhibit a high-quality acquisition of the scene. This problem is usually solved by making a compromise and finding the most optimal position for the scanner in relation to the bin or by manually rearranging the parts within the bin so that the vision system captures them all in the end. But is there another way?

A way to overcome this is to mount the 3D vision system on the robot itself. There are certain prerequisites to this approach (the robot must cope with the additional weight, there must be room for mounting, and cycle time must be available for the extra movement), but it offers some functional advantages.

For a successful calibration, the scanner must be mounted behind the very last joint (e.g. on the gripper). Any change to the scanner’s position after calibration renders the calibration matrix invalid, and the whole calibration procedure must be carried out again. This sort of calibration is done with a marker pattern – a flat sheet of paper (or another material) carrying a special pattern recognised by the 3D vision system.
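
For illustration, OpenCV’s `calibrateHandEye` solves exactly this eye-in-hand problem. The sketch below keeps itself runnable by generating synthetic poses; in a real cell the inputs would be recorded gripper poses from the robot controller and marker-pattern poses from the scanner:

```python
import cv2
import numpy as np

def random_pose(rng) -> np.ndarray:
    """Random rigid transform as a 4x4 homogeneous matrix."""
    R, _ = cv2.Rodrigues(rng.uniform(-0.5, 0.5, 3))
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, rng.uniform(-0.1, 0.1, 3)
    return T

rng = np.random.default_rng(1)
T_cam2gripper = random_pose(rng)     # the unknown we want to recover
T_target2base = random_pose(rng)     # marker pattern fixed in the cell

# Simulate ten calibration poses; a real run would record these instead.
R_g2b, t_g2b, R_t2c, t_t2c = [], [], [], []
for _ in range(10):
    T_g2b = random_pose(rng)         # gripper pose from robot controller
    T_t2c = np.linalg.inv(T_g2b @ T_cam2gripper) @ T_target2base
    R_g2b.append(T_g2b[:3, :3]); t_g2b.append(T_g2b[:3, 3])
    R_t2c.append(T_t2c[:3, :3]); t_t2c.append(T_t2c[:3, 3])

R_est, t_est = cv2.calibrateHandEye(R_g2b, t_g2b, R_t2c, t_t2c,
                                    method=cv2.CALIB_HAND_EYE_TSAI)
# R_est/t_est recover the camera-to-gripper transform; remounting the
# scanner invalidates it and the poses must be re-recorded.
```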

So what are the advantages? With a fixed mounting you cannot scan a large bin with a smaller (and therefore lower-cost) scanner, because the scanner’s view is fixed from above. A small scanner mounted directly on the robotic arm allows you to get closer to the bin and choose which part of it to scan, potentially saving costs and helping with resolution.

Robot-mounted bin picking may also remove the need to darken the room where the robotic cell is located. Ambient light coming from a skylight can pose serious challenges to the deployed 3D vision system; a scanner attached to the robot can scan the bin first from one side and then from another, minimising the need for unusual adjustments to the environment.

Finally, as noted above, a fixed scanner can itself cast shadows on the bin and inhibit a high-quality acquisition of the scene. Robot-mounted picking eliminates this problem, since the scanner can “look” at the scene from any angle and any position.

In conclusion, there are many approaches to automated bin picking using 3D vision systems, each suited to a particular environment, industrial automation need, cost constraint and picking cycle time.

Industrial Vision Systems launches optical sorting machines to drive efficiency and minimise waste

Industrial Vision Systems (IVS), a supplier of inspection machines to industry, has launched a range of new optical sorting machines specifically for the high-speed sorting of small components such as fasteners, rings, plastic parts, washers, nuts, munitions and micro components. The devices provide automatic inspection, sorting, grading and classification of products at up to 600 parts per minute. The systems intercept and reject failed parts at high speed, discovering shifts in quality, and providing quality assurance through the production cycle.

The new Optical Sorting Machines from IVS utilise the latest vision inspection algorithms allowing manufacturers to focus on other activities while the fully automated sorting machines root out rogue products and make decisions on quality automatically. For classification checks, the systems use Artificial Intelligence (AI) and Deep Learning, providing the machines with an ability to “learn by example” and improve as more data is captured.

The machine’s glass disc provides 360-degree inspection, enabling the system to act as the ‘eyes’ on the factory floor and record production trends and data. By intercepting and rejecting failed parts at high speed, it gives manufacturers the ability to supply 100% automatically inspected product to their customers, without human intervention.

Real-time data and comprehensive defect-rate reporting enable engineers to respond immediately to problems and take corrective action before products are delivered to a customer.

Andrew Waller, director at Industrial Vision Systems, said: “Our machines allow manufacturers to stay ahead of their competitors. These new systems are designed for manufacturers of mass-produced, small products which previously would have struggled to sort quality concerns. We can perceive and detect defects others miss at high-speed. Our optical sorting technology takes vision inspection to the next level. Clear, ultra-high-definition images allow our new generation of systems to recognise even the hardest to spot flaws and to sort wrong batch parts. This allows our customers to achieve continuous yield improvements, categorise failures based on their attributes, and build better products.”

“Innovations in Pharmaceutical Technology” – IVS featured in leading international Pharmaceutical magazine.

The article covers how traditional image processing techniques are being superseded by vision systems utilising deep learning and artificial intelligence in pharmaceutical manufacturing.

Pharmaceutical and medical device manufacturers must be lean, with high speeds and an ability to switch product variants quickly and easily, all validated to ‘Good Automated Manufacturing Practice’ (GAMP). Most medical device production processes involve some degree of vision inspection, generally due to either validation requirements or speed constraints (a human operator cannot keep up with the speed of production). It is therefore critical that these systems are robust, easy to understand and integrate seamlessly with the production control and factory information systems.

Historically, such vision systems have used traditional machine vision algorithms to complete everyday tasks such as device measurement, surface inspection, label reading and component verification. Now, new “deep learning” algorithms give the vision system the ability to “learn” from samples shown to it, allowing the quality control process to mirror how an operator learns. The two approaches differ: the traditional system performs descriptive analysis, while the new deep learning systems are based on predictive analytics.

Download the magazine from this link (Article Page 30):
https://tinyurl.com/yyj7qvdn

Find more details on IVS solutions for the medical device and pharmaceutical industries here:
https://www.industrialvision.co.uk/vision-systems/vision-sensors/medical-pharma-inspection-solutions

Industrial Vision Systems launches smart AI vision sensors for high-speed inspection

Industrial Vision Systems (IVS®), a supplier of machine vision systems to industry, has launched the IVS-COMMAND-Ai™ in-line inspection solution designed for high-speed automated visual inspection, helping reduce manufacturer fines and protecting brand reputations. The IVS-COMMAND-Ai Vision Sensors integrate directly with all factory information and control systems, allowing complete part inspection, guidance, tracking and traceability with additional built-in image and data saving.

For those applications requiring complex classification, the IVS-COMMAND-Ai system utilises the latest deep learning artificial intelligence (AI) vision inspection algorithms. New multi-layered “bio-inspired” deep neural networks allow the latest IVS® machine vision solutions to mimic human brain activity in learning a task, allowing vision systems to recognise images, perceive trends and understand subtle changes in images that represent defects.

Designed for complex manufacturing industries such as medical devices, pharmaceuticals, food & drink and automotive, the IVS-COMMAND-Ai Vision Systems are fitted with adaptable HD smart cameras to provide inspection from all angles at high precision. This allows production lines to review and flag any flaws and defects in real time, providing instant factory information on compatible devices. The system also operates at speeds of up to 60 frames per second and can quickly be integrated on-line to inspect both high-speed and static products.

By achieving robust inspection performance, the new IVS-COMMAND-Ai Vision Systems oversee complex vision inspections – from presence verification, OCR and gauging through to surface, defect and quality inspection – in one solution. Comprehensive Statistical Process Control (SPC) data also provides closed-loop control to further safeguard production.

All IVS vision sensors can be integrated onto production lines, assembly cells, workbenches, robots and linear slides. Their robust design allows vision sensor integration into any industrial production process for seamless inspection, identification or guidance.

Earl Yardley, director at Industrial Vision Systems, comments: “Our vision systems are very easy to program, are highly accurate, offer easy maintenance and provide peace of mind in final quality acceptance. However, the IVS-COMMAND-Ai vision systems take it a step further. It is the complete, robust quality control inspection vision sensor solution, and it is ready to be deployed in all manufacturing environments. It will improve yield and deliver immediate improvements to product quality; and at these critical times, reliability and consistency are vital.”
