
Vision award shortlist announced

Five companies have been shortlisted for the Vision Award at the upcoming Vision trade fair, Messe Stuttgart has announced.

The entrants nominated were: Machine Vision Lighting from Japan; Austrian hyperspectral company Perception Park; Princeton Infrared Technologies from the US; American lens firm Tag Optics; and Swiss AI company ViDi Systems.

Imaging and Machine Vision Europe is proud to sponsor the award, the winner of which will be announced at the Vision show in Stuttgart, Germany, held from 8 to 10 November.

The five companies were chosen from a long list of 41 entries. A jury of machine vision experts – made up of Jens Michael Carstensen from Videometer, Michael Engel from Vision Components, Gabriele Jansen from Vision Ventures, Ronald Mueller from Vision Markets, Dr Christian Ripperda from Isra Vision, Martin Wäny from Awaiba, and Dieter-Josef Walter from Daimler – drew up the shortlist and will select the winner.

Odos Imaging won the award in 2014 for its 3D time-of-flight cameras for machine vision. ‘It was a great honour to win the Vision Award, particularly for a young company operating in an industrial marketplace,’ commented Dr Chris Yates, CEO of Odos Imaging, speaking recently to Imaging and Machine Vision Europe.

Warren Clark, managing director of Europa Science, which publishes Imaging and Machine Vision Europe, will present the winner with the award and €5,000 prize money at a ceremony at the trade fair.

Machine Vision Lighting

VISA-Method Lighting (Variable Irradiation Solid Angle) – overturning conventional wisdom about lighting, Shigeki Masumura

VISA-Method Lighting is able to provide uniform illuminance irrespective of the distance from the object. The light source produces irradiation conditions that are the same at every point on the surface of an object.

VISA stands for ‘Variable Irradiation Solid Angle’. The irradiation solid angle expresses the angular range of light irradiated at a point in terms of a cone whose vertex is at that point. The VISA lighting method is able to vary the irradiation solid angle.
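
For context, the irradiation solid angle described above is the ordinary geometric solid angle of that cone. A standard result (not part of the VISA description itself) gives it in terms of the cone's half-angle θ:

```latex
% Solid angle of a right circular cone with half-angle \theta (standard geometry,
% quoted here only as background for the irradiation solid angle defined above)
\Omega = 2\pi\,(1 - \cos\theta) \quad \text{steradians}
```

A small θ therefore corresponds to a narrow, near-parallel cone of light, while θ = 90° gives the full hemisphere of 2π steradians.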

The main feature of this light source is the ability to control the irradiation of light so that all points within the field of view on the surface of an object are irradiated identically. This cannot be achieved with conventional lighting methods, or even with parallel light.

In contrast, the VISA-Method Lighting system produces an irradiation solid angle with the same shape and inclination at every point on the surface of an object. In other words, the irradiation conditions are exactly the same at every point. In addition, the user can adjust the size and shape of the irradiation solid angle to any value within its range.

For all points on the object, the irradiation light conditions are determined according to the direction from which light is irradiated and the angular range. This ability to irradiate light at all points under precisely identical conditions is the cornerstone feature of VISA-Method Lighting.

The lighting method can expose features that would otherwise be difficult to detect, such as faint dents and scratches on sheets that are hard to see with the naked eye, or slopes on non-flat metallic and shiny surfaces. These types of feature can also be shown on rough surfaces.

Furthermore, the irradiation solid angle can be given a multilayered structure by dividing it up according to other attributes of the light, such as wavelength band. In this way, the attributes of the light captured within the observation solid angle vary continuously according to the inclination of the direct light.

It is possible to convert the surface conditions of warped or rough surfaces into a colour gradation, and thereby to capture 3D information from bent metallic and shiny surfaces – something that has been difficult or impossible until now.

The units and terminology used in this report are based on the lighting standard JIIA LI-001-2013 from the Japan Industrial Imaging Association (JIIA).

http://www.mvl-inc.com/

Perception Park

Chemical colour imaging – the evolution of machine vision, Markus Burgstaller

 

Chemical colour imaging (CCI) takes complex hyperspectral data and turns it into a format that can be used by the machine vision community. Hyperspectral imaging systems based on a generic, intuitively configurable data processing platform make the scientific methods of hyperspectral analysis accessible to everyone and open up new application areas.

Machine vision technology has gone through a constant process of development over the past decades. CCI takes this technological evolution to the next level by merging the advantages of spectroscopy with the benefits of machine vision in a holistic approach.

Hyperspectral imaging is able to identify the chemical properties of materials. Each substance has a unique chemical fingerprint that shows up in the spectral information.

Chemical colour imaging offers – for the first time – analysis of chemical properties by means of real-time image processing. By fitting a hyperspectral camera with a real-time processing core, CCI turns the camera system into an easy-to-understand and intuitively configurable chemical colour camera. The chemical colours reflect the molecular properties of the scanned objects.
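
As a rough illustration of the general idea – not Perception Park's proprietary processing core – the sketch below collapses a hyperspectral cube into a three-channel 'chemical colour' image by averaging user-chosen band windows. The function name, band windows and synthetic data are all hypothetical.

```python
import numpy as np

def chemical_colour_map(cube, band_ranges):
    """Collapse a hyperspectral cube (height x width x bands) into a
    pseudo-colour image by averaging chosen spectral band windows.
    Purely illustrative; the real CCI core is configurable and proprietary."""
    channels = []
    for lo, hi in band_ranges:
        # Each band window becomes one 'chemical colour' channel
        channels.append(cube[:, :, lo:hi].mean(axis=2))
    img = np.stack(channels, axis=2)
    # Normalise each channel to [0, 1] for display
    mn = img.min(axis=(0, 1), keepdims=True)
    mx = img.max(axis=(0, 1), keepdims=True)
    return (img - mn) / (mx - mn + 1e-9)

# Synthetic 100 x 100 cube with 224 spectral bands mapped to three channels
cube = np.random.rand(100, 100, 224)
rgb = chemical_colour_map(cube, [(10, 40), (80, 120), (160, 200)])
print(rgb.shape)  # (100, 100, 3)
```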

Perception Park has developed a CCI hardware adapter called the Perception System. Besides its main purpose – streaming molecular information – the hardware fully abstracts the interfaced camera and makes it accessible through standard interfaces such as GigE Vision or Camera Link. Individual electrical and optical irregularities are corrected by means of a calibration package for each camera. Through an abstraction layer, every camera, regardless of type, is abstracted and standardised by the Perception System. This means the user can select camera technology independently of the vendor, enabling standardised integration and application of CCI.

Chemical colour imaging enables machine vision-related companies to benefit from new application fields using spectroscopic information. Since CCI is not limited to a specific spectral range or technology, the machine vision community has access to a wide range of spectroscopic techniques. For example, near-infrared spectroscopy is well established for material characterisation based on molecular vibrations; spectroscopy in the visible domain gives precise colour measurements; interferometry can be applied to quantify coating thickness; gas detection is feasible by means of mid- and longwave-infrared spectroscopy; and UV spectroscopy can identify genetic information – the list seems endless.

http://www.perception-park.com/

Princeton Infrared Technologies

LineCam12 SWIR and visible line scan camera, Dr Martin Ettenberg

The LineCam12 is an advanced line scan camera with 14-bit digital data at 37k lines per second. It has USB3 and Camera Link outputs and operates in the shortwave infrared (SWIR) and visible spectrum, from 0.4µm to 1.7µm.

The indium gallium arsenide (InGaAs) camera currently comes in two varieties: 250µm tall pixels for spectroscopy, and 12.5µm square pixels for machine vision applications. The camera is highly versatile, offering full-well capacities from 75ke- to 100Me- in 128 steps, as well as integration times from 10µs to 10s.

The SWIR/visible imager achieves low noise operation at 80e- even at the highest line rate, while still providing 6000:1 dynamic range. On-chip optical pixel binning is available by command to trade spectral resolution for increased signal level and greater speed. This feature is achieved by disconnecting every other detector pixel from the readout integrated circuit's (ROIC) amplifiers, so that the signal is instead captured by neighbouring pixels. The optical binning also enables 48k lines per second at 512-pixel resolution in the same camera platform, and can be activated by a simple command to the camera.
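
To put those figures in context, the quoted noise and dynamic range can be converted with the usual back-of-envelope relations. The sketch below uses only numbers quoted above; the 'implied signal level' is simply the product of the two figures, not a value stated by the manufacturer.

```python
import math

# Figures quoted above for the LineCam12 at its highest line rate
read_noise_e = 80        # read noise in electrons
dynamic_range = 6000     # quoted dynamic range (max signal : noise)

dr_db = 20 * math.log10(dynamic_range)           # dynamic range in decibels
dr_bits = math.log2(dynamic_range)               # dynamic range in bits
implied_signal_e = dynamic_range * read_noise_e  # signal level implied by the ratio

print(f"{dr_db:.1f} dB, {dr_bits:.1f} bits, ~{implied_signal_e / 1e3:.0f} ke-")
# -> 75.6 dB, 12.6 bits, ~480 ke-
```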

The TEC-stabilised camera has 18 non-uniformity correction (NUC) tables, with 12 factory set and six user-defined tables to enable flexibility for a given environment (line rate, integration time and capacitor size). An input trigger is available to control integration time length and/or start time, as well as line rate, which further enables imaging flexibility.

The lattice-matched InGaAs array is backside illuminated and is the only SWIR line scan camera that can detect from 0.4µm to 1.7µm. The InGaAs array has 75 per cent quantum efficiency from 1.1µm to 1.6µm. The backside-illuminated device removes the need for bond pads or wires connecting the pixel to the readout circuit. This minimises the opportunity for stray reflections or blocked signals found in frontside-illuminated arrays with many wire bonds near the active imaging area.

The array can be customised so that optical filters can be placed on the active detector area; this is a feature that is nearly impossible to achieve in frontside-illuminated devices.

The camera, manufactured by Princeton Infrared Technologies, can be used with C- and F-mount lenses, as well as custom lenses with an M42 mount. The camera can be powered by the USB3 connector or an optional wall-mount plug.

http://www.princetonirtech.com/

Tag Optics

Tag Zip: ultrafast 3D z-inspection photography for machine vision applications, Christian Theriault

 

Tag Optics, creator of the world's fastest focusing lens, has raised the bar for 3D machine vision applications with its z-inspection photography platform, the Tag Zip. This new line of modular systems uses the company’s ultra-fast focusing technology, the Tag Lens, and pairs it with the latest developments in high-speed imaging sensors to give users excellent control over their imaging, machine vision, or inspection needs.

The Tag Zip captures full resolution images with x, y, and z information natively encoded within each camera frame, allowing for inspection of arbitrary user-defined regions of interest within 3D space and providing real-time positioning and measurement information. The Tag Zip breaks traditional barriers in 3D mapping and imaging and paves the way for new opportunities in machine vision.

Tag Optics’ Zip software combines continuous volumetric imaging with graphical-based computing to integrate high-speed x-y-z measurements and 3D shape recognition for vision applications. By combining the Tag Lens with the latest generation of high-speed imaging sensors, Tag Optics was able to create an ultra-fast computer vision system in which the focal plane of each individual frame can be positioned at a precise location, up to 10,000 times per second or as fast as the camera will permit.
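
The sketch below illustrates the general pattern of per-frame focal-plane selection described above: a varifocal lens is stepped to a new z-plane for each camera frame, so every frame carries its own depth coordinate. The class, method and parameter names, and the dummy frame grabber, are invented for illustration and are not part of the Tag Optics API.

```python
from dataclasses import dataclass
import numpy as np

# Hypothetical sketch of per-frame focal-plane stepping: the lens is moved to a
# new z-plane for every camera frame, so each frame is tagged with its depth.

@dataclass
class VarifocalSweep:
    z_planes_mm: list        # focal planes to visit, one per camera frame
    frame_rate_hz: float     # camera frame rate; one refocus per frame

    def acquire_stack(self, grab_frame):
        """Return {z position: frame captured at that plane}."""
        stack = {}
        for z in self.z_planes_mm:
            # A real system would command the lens driver to 'z' here and
            # trigger the camera once the focus has settled.
            stack[z] = grab_frame(z)
        return stack

    def sweep_time_s(self):
        """Time to visit every plane once at the given frame rate."""
        return len(self.z_planes_mm) / self.frame_rate_hz

# Usage with a dummy frame grabber that returns noise instead of real images
sweep = VarifocalSweep(z_planes_mm=[0.0, 0.5, 1.0, 1.5], frame_rate_hz=10_000)
stack = sweep.acquire_stack(lambda z: np.random.rand(480, 640))
print(sorted(stack), f"{sweep.sweep_time_s() * 1e3:.2f} ms per volume")
```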

Thanks to a user-friendly, intuitive interface with advanced imaging options, the Tag Zip technology can meet the simplest to the most complex user needs in imaging applications. Unlike other 3D mapping and imaging methods, which are limited to slow speeds or low image resolution, or which require the system to stop moving so that structured light or holographic imaging can generate the x, y, z coordinates, the Tag Zip represents a significant advance in imaging optics. This novel high-precision focal length selection, with microsecond timing and full frame resolution, allows users to generate an accurate 3D volumetric map and quantitative 3D data in real time.

Applying existing or newly developed algorithms, this technology can be used for 3D object recognition and spatial localisation within the 3D volume of the scene. Examples of future applications range from detecting pedestrians and obstacles in the autonomous vehicle and robotics industries, to localising objects for robot-aided manufacturing and high-speed part inspection.

http://www.tagoptics.com/

ViDi Systems

Artificial intelligence-based visual analytics for machine vision: ViDi Suite 2.0, Reto Wyss

 

ViDi Systems provides the first industrial image analysis software library that uses state-of-the-art deep learning algorithms to enable computers, machines and robots to understand images that they encounter in the real world.

Traditional computer vision solutions are limited in performance and struggle to cope with changing or unpredictable environments. In contrast, solutions relying on machine learning require extensive supervised training and huge computational resources, thereby limiting the success of machine learning-based products to only a few applications. The gap between what can be done with artificial intelligence in the lab and what is actually done in real-world applications is huge. ViDi Systems bridges that gap by allowing machine vision companies across multiple domains – such as medical imaging, security, autonomous vehicles, and many more – to develop and market cutting-edge products for real people to use.

Drawing on a decade of machine learning and computational neuroscience research, ViDi Systems has developed a novel way to analyse images by understanding the well-hidden tricks nature uses to process and reason about visual stimuli. The software’s active deep learning architecture analyses images with exceptional performance. Using a single high-end Nvidia GPU, it takes only a couple of minutes for the system to learn a suitable model, and a millisecond to perform inference in production mode. To train the system, the user needs to provide a representative set of images and label them.

ViDi Suite consists of three different tools. ViDi Blue is used to find and locate single or multiple features within an image, regardless of the position, size and orientation of those features. ViDi Red learns the normal appearance of an image and from that point on is able to detect anomalies ranging from missing components to aesthetic defects. ViDi Green classifies previously unseen images based on a collection of training images.
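
To make the classification workflow concrete, here is a generic supervised-learning sketch in the spirit of ViDi Green: a small set of labelled images trains a model, which then classifies new images in a single forward pass. The tiny CNN, the synthetic data and the use of PyTorch are illustrative assumptions only; ViDi's actual architecture is proprietary.

```python
import torch
import torch.nn as nn

class TinyClassifier(nn.Module):
    """Minimal CNN classifier, standing in for a generic image classification tool."""
    def __init__(self, n_classes=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(4),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(), nn.Linear(16, n_classes),
        )

    def forward(self, x):
        return self.net(x)

# Synthetic "labelled representative set": 64 greyscale images in 3 classes
images = torch.rand(64, 1, 64, 64)
labels = torch.randint(0, 3, (64,))

model = TinyClassifier()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):              # a few passes over the labelled set
    optimiser.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimiser.step()

with torch.no_grad():               # fast inference on a previously unseen image
    prediction = model(torch.rand(1, 1, 64, 64)).argmax(dim=1)
print(prediction.item())
```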

Nowadays, factories make use of machine vision solutions to extend the capabilities of manufacturing machines through image processing and analytics. ViDi Suite makes it easy for them to tackle a much wider range of challenging inspection and classification problems that are beyond the capabilities of conventional image analysis tools. They can now deploy in production visual inspection processes that were previously impossible to automate.

Thanks to ViDi Suite, users can drastically speed up their time to market; they can now do in a single day what would previously have taken them several weeks of intense development, the company stated.

www.vidi-systems.com
