IVS® utilise deep learning and artificial intelligence for the classification of image and object characteristics in automated inspection. For over 20 years, we have provided vision systems that benefit from advancements in neural network image processing to give more precise and repeatable solutions that “learn” based on image data.
Machine vision AI is a powerful tool capable of resolving issues that traditional vision sensors and smart cameras sometimes encounter – such as changes in ambient light, product variances, and shifts in part position. IVS artificial intelligence (AI) can recognise discrepancies between recorded images of approved and undesirable items to automatically quantify and reject defects from the normal production process.
The system learns the normal condition through image collection and identifies areas that stray from normal appearance.
Stable detection from trained and tested deep learning data. Train with good and bad parts to teach the system.
Vision result outputs to your requirements. See trend data and anomalies. Validation is simple when testing images are combined with training.
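As an illustration, the train-and-validate workflow described above can be sketched in a few lines of Python. Everything here is hypothetical – the one-number "brightness" feature, the 80/20 split and the simple threshold model merely stand in for a real deep learning pipeline:

```python
import random

def split_dataset(samples, train_fraction=0.8, seed=0):
    """Shuffle labelled samples and split into training and validation sets."""
    rng = random.Random(seed)
    shuffled = samples[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_fraction)
    return shuffled[:cut], shuffled[cut:]

def accuracy(model, validation_set):
    """Fraction of held-out samples the model classifies correctly."""
    correct = sum(1 for features, label in validation_set
                  if model(features) == label)
    return correct / len(validation_set)

# Toy data: one brightness feature per image, labelled "good" or "bad".
samples = [((0.9,), "good"), ((0.85,), "good"), ((0.2,), "bad"),
           ((0.15,), "bad"), ((0.88,), "good"), ((0.1,), "bad")]
train, validate = split_dataset(samples)

# Stand-in "model": a threshold derived from the training set mean.
threshold = sum(f[0] for f, _ in train) / len(train)
model = lambda f: "good" if f[0] >= threshold else "bad"

print(accuracy(model, validate))
```

The key point is the separation: the model only ever sees the training set, so the validation score measures how well the learned behaviour generalises to unseen parts.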
Deep learning in the context of industrial machine vision teaches robots and machines to do what comes naturally to humans, i.e. to learn by example. New multi-layered “bio-inspired” deep neural networks allow the latest IVS® machine vision solutions to mimic human brain activity in learning a task, allowing vision systems to recognise images, perceive trends and understand subtle changes in images that represent defects.
Machine vision performs well at the quantitative measurement of a highly structured scene with a consistent camera resolution, optics and lighting. Deep learning can handle defect variations that require an understanding of the tolerable deviations from the control medium; for example, where there are changes in texture, lighting, shading or distortion in the image. Our deep learning vision systems can be used in surface inspection, object recognition, component detection and part identification. AI deep learning helps in situations where traditional machine vision may struggle, such as parts with varying size, shape, contrast and brightness due to production and process constraints.
Traditional machine vision systems excel at repetitive tasks such as quality control, inspection, and measuring. They struggle, however, when presented with variations and complicated circumstances. Artificial intelligence, with its capacity to learn from data patterns and adjust in real time, helps overcome some of these limitations. This collaboration enables machine vision systems not only to recognise established patterns, but also to comprehend complex deviations and changing patterns, similar to human cognitive processes.
Defect detection, which was previously confined to preset faults, now includes the identification of subtle anomalies, decreasing waste and improving product quality. Machine vision will become more robust as AI improves and data sets grow, revolutionising automated inspection across the board. The era of machines that see and comprehend is here, promising a future in which precision and intelligence meet for extraordinary outcomes.
When you compare AI to rule-based vision systems requiring substantial programming and significant technical skill, machine vision solutions embedded with AI technology employ natural language processing to read and comprehend image labelling defined by a human. This allows a broader range of users to benefit from AI and tackle machine vision tasks that are too hard and time-consuming to program using rule-based algorithms. The image data is labelled and confirmed by an operator initially, but the system will then learn by example based on the class definitions.
Artificial Intelligence vision systems supplement rule-based machine vision. When a machine vision system receives a captured image, the AI software compares it to a library of “good” and “bad” reference images and returns a result. The outcome might be as simple as good/bad or accept/reject, or it can be more sophisticated according to the needs. This method of teaching machine vision to recognise patterns and draw conclusions from annotated reference images enables the system to distinguish between acceptable and unacceptable irregularities in items under examination; this is the basis of AI vision inspection.
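A minimal sketch of this compare-to-reference idea, assuming tiny flattened grey-level images and a mean-absolute-difference metric – a real AI system learns a far richer comparison than this illustrative nearest-reference lookup:

```python
def pixel_distance(image_a, image_b):
    """Mean absolute grey-level difference between two equally sized images."""
    return sum(abs(a - b) for a, b in zip(image_a, image_b)) / len(image_a)

def classify(captured, references):
    """Return the label of the closest reference image ("good" or "bad")."""
    label, _ = min(references, key=lambda ref: pixel_distance(captured, ref[1]))
    return label

# Hypothetical reference library: flattened grey-level images with labels.
references = [
    ("good", [200, 200, 198, 201]),   # approved part
    ("good", [199, 202, 200, 200]),
    ("bad",  [200, 90, 95, 201]),     # part with a dark surface defect
]

captured = [201, 95, 92, 199]          # new capture from the camera
print(classify(captured, references))  # closest reference decides accept/reject
```

The good/bad outcome falls straight out of which annotated reference the capture most resembles, which is the essence of the accept/reject decision described above.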
The term “AI” is a new label for a process that has been around in machine vision for the last twenty years but has only now become more prolific (or should we say, the technology has matured to the point of mass deployment). We provided vision systems in the early 00’s using low-level neural networks for feature differentiation and classification. These were primarily used for simple segmentation and character verification. These networks were basic and had limited capability. Nonetheless, the process we used then to train the network is the same as it is now, but on a much larger scale. Back then, we called it a “neural network”; now it’s termed artificial intelligence.
The term artificial intelligence applies because the network has some level of “intelligence”: it learns by example and expands its knowledge over iterative training cycles. Early research into computer vision AI discovered that human vision has a hierarchical structure in how neurons respond to various stimuli. Simple features, such as edges, are detected by neurons, which then feed into more complex features, such as shapes, which finally feed into more complex visual representations. This builds individual blocks into “images” the brain can perceive. You can think of pixel data in industrial vision systems as the building blocks of synthetic image data. Pixel data is collected on the image sensor and transmitted to the processor as millions of individual pixels of differing greyscale or colour. For example, a line is simply a collection of like-level pixels in a row, with edges that transition pixel by pixel in grey-level difference to an adjacent location.
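The edge example can be made concrete. The hypothetical function below scans one row of grey-level pixels and reports where adjacent pixels differ by more than a chosen threshold – exactly the pixel-by-pixel transitions described above:

```python
def find_edges(row, threshold=50):
    """Indices where adjacent pixels differ by more than `threshold` grey levels."""
    return [i for i in range(1, len(row)) if abs(row[i] - row[i - 1]) > threshold]

# A row of like-level pixels with two sharp grey-level transitions (edges):
# a dark region, a bright feature, then dark again.
scanline = [30, 32, 31, 200, 202, 201, 200, 35, 33]
print(find_edges(scanline))  # → [3, 7]
```

Real edge detectors are more sophisticated, but the principle is the same: features such as lines emerge from runs of like-level pixels bounded by sharp grey-level differences.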
AI vision neural networks attempt to recreate how the human brain learns edges, shapes and structures. Neural networks consist of layers of processing units. The sole function of the first layer is to receive the input signals and transfer them to the next layer, and so on. The following layers are called “hidden layers”, because they are not directly visible to the user; these do the actual processing of the signals. The final layer translates the internal representation of the pattern created by this processing into an output signal, directly encoding the class membership of the input pattern. Neural networks of this type can realise arbitrarily complex relationships between feature values and class designations. The relationship incorporated in the network depends on the weights of the connections between the layers and the number of layers – and can be derived by special training algorithms from a set of training patterns. The end result is a non-linear “web” of connections between the neurons. All of this mimics how the brain operates in learning.
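To make the layer structure concrete, here is a toy forward pass in plain Python. The sigmoid activation and the hand-set weights are illustrative assumptions – in practice the training algorithms mentioned above derive the weights from a set of training patterns:

```python
import math

def forward(inputs, layers):
    """Propagate an input vector through weighted layers with a sigmoid activation."""
    activations = inputs
    for weights in layers:  # each layer is a list of per-neuron weight rows
        activations = [
            1.0 / (1.0 + math.exp(-sum(w * a for w, a in zip(row, activations))))
            for row in weights
        ]
    return activations

# Hypothetical hand-set weights: 2 inputs -> 2 hidden neurons -> 1 output
# whose value encodes class membership (near 1 = "good", near 0 = "bad").
hidden = [[ 6.0, -6.0],
          [-6.0,  6.0]]
output = [[ 8.0, -8.0]]
layers = [hidden, output]

score = forward([1.0, 0.0], layers)[0]
print("good" if score > 0.5 else "bad")
```

The hidden layer transforms the raw inputs into an internal representation, and the final layer maps that representation onto the class decision – the same flow the paragraph above describes, just at miniature scale.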
Learn more about Artificial Intelligence (AI) deep learning here: