
Vision Award shortlist announced

Messe Stuttgart has announced the shortlist for this year’s Vision Award, the prize for applied machine vision, which will be presented at the Vision trade fair in Stuttgart from 4 to 6 November.

Six companies have been shortlisted by the jury from a record 44 initial entries. They are: the Austrian Institute of Technology for its depth camera; Aphesa for its oil well inspection system; Gardasoft Vision for its lighting solution; Odos Imaging for its time-of-flight camera; Tag Optics for its varifocal lens; and Xapt for its multi-image sensor solution.

The winner will receive €5,000, to be presented at a ceremony during the trade fair. Imaging and Machine Vision Europe is proud to sponsor the award, and Warren Clark, the title’s publishing director, will present it at the show.

Details of the six shortlisted entries can be found below:

AIT Austrian Institute of Technology

A novel HDR depth camera for real-time 3D 360° panoramic vision of autonomous vehicles, Ahmed Nabil Belbachir


Mobile platforms are becoming increasingly attractive for firefighting and search-and-rescue operations due to the risk of such incidents and their ever-growing challenges. The problem with current remote-operated robots is the difficulty of controlling them and their lack of situational awareness. Imagine an average car driver who has to reverse a heavy truck into a parking space: it is very hard to estimate distances and clearances from the perspective of the driver’s cabin, despite a natural 3D view. The same problem applies to controlling a robot remotely in an environment littered with obstacles, like those found in a typical fire rescue mission. Providing features such as 3D views of the robot’s surroundings would significantly raise situational awareness and improve manoeuvring efficiency and orientation.

Available state-of-the-art technologies in this domain are mostly based on bulky and costly laser range sensors or on conventional cameras. Downstream navigation is overwhelmed by the volume of information and needs complex algorithms and high processing power. Conversely, bio-inspired visual systems, like the human perception system, select only the relevant information from the environment, making them more efficient.

Against this backdrop, experts at the Austrian Institute of Technology (AIT) developed Tuco-3D in cooperation with Thales France. The strength of this camera is its innovative bio-inspired dynamic vision sensor for vision-based autonomous navigation, which offers a high ratio of performance to power consumption, making it suitable for mobile platforms.

The technology is a patented, CE-marked camera for real-time 3D 360° panoramic vision. It is designed to improve the capabilities of autonomous systems (unmanned vehicles, intelligent cars, robots, etc.) in visual environmental sensing, navigation and localisation. The centrepiece of the Tuco-3D camera is an innovative sensor head comprising a stereo arrangement of two dynamic vision line sensors that rotate continuously at 10 revolutions per second, generating distortion-free 3D 360° panoramic views in real time. The core innovation is a fast line camera (1µs temporal resolution) based on bio-inspired dynamic visual sensing, using pixel-level analogue pre-processing of the visual information.
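
As a rough consistency check derived from the figures quoted above (not a specification supplied by AIT), the rotation rate and temporal resolution bound the angular sampling available per panorama:

\[
T_{\text{rev}} = \frac{1}{10\ \text{rev/s}} = 100\ \text{ms}, \qquad
N = \frac{100\ \text{ms}}{1\ \mu\text{s}} = 10^{5}\ \text{columns/rev}, \qquad
\Delta\theta = \frac{360^{\circ}}{10^{5}} \approx 0.0036^{\circ}
\]

The angular resolution achieved in practice will also depend on the optics and on the event rate the sensor and read-out can sustain.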

Thanks to the high temporal resolution and to the wide dynamic range of the pixels, the sensor can rotate quickly even in difficult lighting conditions. Exploiting the on-chip processing of the dynamic vision sensor, it provides panoramic edge depth maps, suitable for low-cost transmission in natural environments. The three main fundamentals of the camera are frame-free dynamic vision, panoramic scanning, and event-driven 3D reconstruction.

http://www.ait.ac.at/

Aphesa

High-temperature, high-pressure colour dual camera with live JPEG compression and HDR capability for oil well inspection, Arnaud Darmont

The oil and gas industry increasingly requires the inspection of oil wells. Such wells can be up to 40,000 feet deep and contain dirty fluid at high pressure and temperature due to the depth. Until a few years ago there were no cameras dedicated to this market; the existing cameras were low-performance monochrome analogue cameras and could not meet the requirements of today’s technical challenges.

The typical defects to be inspected include cracks, pipe rusting, water injection, broken pipes, objects blocking the flow, broken valves and fallen tools to be recovered, among many others. The video must be transmitted to an operator at the surface, and evidence must be recorded as still images or video for further investigation. For certain applications the video is not transmitted but is instead recorded inside the camera.

The main challenges for cameras operating in these environments include: temperatures that can exceed 125°C and pressures that can reach 15,000 psi (around 1,000 bar); no ambient light, so all lighting must be provided by the camera; limited power, as the current has to travel along a 40,000ft conductor; limited bandwidth; narrow pipes with bends and section changes; and fail-operational and fail-safe approaches for some design elements.

Due to the limited memory storage and transmission bandwidth, the images must be compressed. Live JPEG compression with an adjustable compression level and programmable quantisation tables is implemented.
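
To illustrate what an adjustable compression level with programmable quantisation tables typically involves, the sketch below shows the common IJG-style quality scaling of the standard JPEG luminance table; this is a generic illustration, not Aphesa’s actual firmware.

    import numpy as np

    # Standard JPEG luminance quantisation table (ITU-T T.81, Annex K).
    BASE_LUMA = np.array([
        [16, 11, 10, 16, 24, 40, 51, 61],
        [12, 12, 14, 19, 26, 58, 60, 55],
        [14, 13, 16, 24, 40, 57, 69, 56],
        [14, 17, 22, 29, 51, 87, 80, 62],
        [18, 22, 37, 56, 68, 109, 103, 77],
        [24, 35, 55, 64, 81, 104, 113, 92],
        [49, 64, 78, 87, 103, 121, 120, 101],
        [72, 92, 95, 98, 112, 100, 103, 99],
    ])

    def scaled_table(quality: int) -> np.ndarray:
        """Derive a quantisation table from a 1-100 quality setting (IJG convention)."""
        quality = max(1, min(100, quality))
        scale = 5000 // quality if quality < 50 else 200 - 2 * quality
        table = (BASE_LUMA * scale + 50) // 100
        # Larger divisors mean coarser quantisation and smaller compressed images.
        return np.clip(table, 1, 255)

    print(scaled_table(25))  # aggressive setting for a low-bandwidth downhole link

Programming the tables directly, rather than only a scalar quality level, allows the compression to be biased towards the spatial frequencies that matter for a given defect type.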

The camera is equipped with two automotive-grade colour image sensors and LED lighting. One sensor looks down the hole; the other looks out of the side of the camera to inspect the pipe’s surface. The camera can rotate through 360 degrees to scan the pipe’s entire inner surface.

Several power optimisation, stand-by, and cooling techniques are used in order to reach the required temperature range.

The whole camera measures less than five centimetres in diameter and is more than three metres long (excluding weight bars). The camera head itself, developed by Aphesa, is only a fraction of that length.

The project is the result of two years of collaboration between Aphesa and several other companies, one being a top 10 oil and gas service company that specialises in well interventions. Aphesa is an image sensor, camera and HDR (high dynamic range) company, and develops cameras and lighting systems for customer specific applications.

http://www.aphesa.com/

Gardasoft Vision

Triniti – Expert control of machine vision lighting... made easy, Peter Bhagat


Responding to a demand in the machine vision market for intelligent and integrated LED lighting, Triniti is a new enabling technology from Gardasoft which provides expert control, operational performance data and full networking of LED lighting for vision systems – all within a plug-and-play environment.

Vision systems with Triniti-enabled LED lighting are easier to create, configure and commission, and offer increased functionality because complex control techniques are now available within the image processing software environment – and have been made very easy to implement.

As a system-enabling technology, Triniti embraces a collaborative approach with leading manufacturers of LED lighting and providers of machine vision software; Gardasoft has therefore adopted a licensing model for this new technology. Two of the world’s leading machine vision product manufacturers – CCS (http://www.ccs-grp.com/) and Smart Vision Lights (http://www.smartvisionlights.com/) – are the initial LED lighting manufacturers to have adopted Triniti, and the Triniti software API is already proven with image processing software from Cognex, Stemmer Imaging and National Instruments.

Triniti delivers the following user benefits: it enables non-expert users to apply expert machine vision lighting techniques; it revolutionises the integration of lighting parameters right through to application-level software; it addresses applications in the rapidly growing plug-and-play sector of the market; and it provides long-term brightness stability that helps enhance the reliability of machine vision systems over many years.

Triniti comprises three key technology elements:

Integration of lighting - Triniti-enabled LED lights are integrated into machine vision networks and provide diagnostic and configuration benefits through imaging and application processing software.

Light identification and operational data - As part of a collaborative programme with leading machine vision LED light manufacturers, Triniti chips are mounted in partner lights or light cabling, giving the system knowledge of light parameters, easy light connectivity and access to operational data.

Expert light control - The expert control functionality required for Triniti is provided by Gardasoft’s LED light controller technology which is incorporated within the core of Triniti systems.

Vision 2014 marks the official launch of Triniti-enabled LED lighting from CCS and Smart Vision Lights, along with the Triniti API extensions developed to work with leading image processing software. There will be live demonstrations of these market-ready products on the Gardasoft booth at the show.

http://www.gardasoft.com/

Odos Imaging

High-resolution time-of-flight 3D imaging: machine vision with depth, Dr Chris Yates, Chris Softley, Ritchie Logan


Odos Imaging is building a family of high-resolution 3D imaging systems based on pulsed time-of-flight technology. The Real.iZ-1K (1.3 megapixel) was the first system, released in 2014, with the Real.iZ-4K (4.2 megapixel) following in 2015. Both systems combine simple pixel designs with high peak-power pulsed illumination and proprietary digital processing. Every pixel can measure both ambient light and range, allowing the systems to generate separate images of the scene in both range and intensity modes.

The systems offer unrivalled flexibility, including all the features of a conventional machine vision camera, with the additional benefit of individual pixel range measurements. Odos Imaging expects to grow the family of Real.iZ time-of-flight systems, supporting a spectrum of price and performance points to meet the needs of both high-end and lower cost applications.

All Real.iZ systems include a GenICam-compliant interface operating over a GigE Vision transport layer, ensuring simple system integration, and compatibility with industry standard image processing libraries. A software development kit is provided for C++ and .NET languages on Windows and Linux operating systems.

User flexibility is central to the operation and integration of the Real.iZ systems. Each system is capable of providing at least three data points per pixel, corresponding to an intensity value, a range value, and a validity value (indicating validity of the range measurement). Each of the data components can be enabled or disabled as required by a specific application. Regions of interest can be specified in arbitrary locations within the imaging array such that only data corresponding to the region are output, thus providing a means to decrease data bandwidth and increase speed of acquisition. Onboard filters are available to threshold range images based on specific parameters; for example a range threshold could be set to 2m, such that all points closer than 2m would be marked as invalid and set to zero.
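
The per-pixel outputs and the onboard threshold filter described above can be pictured with a short sketch; the array names here are hypothetical and this is not the Real.iZ SDK API.

    import numpy as np

    # Hypothetical single frame: per-pixel intensity, range (metres) and validity.
    intensity = np.random.rand(1024, 1280).astype(np.float32)
    range_m = np.random.uniform(0.5, 10.0, size=(1024, 1280)).astype(np.float32)
    valid = np.ones((1024, 1280), dtype=bool)

    # Emulate the onboard filter: points closer than 2m become invalid and zero.
    too_close = range_m < 2.0
    valid &= ~too_close
    range_m[too_close] = 0.0

    # Emulate a region of interest: output only a sub-window to cut bandwidth.
    roi = (slice(200, 600), slice(300, 900))
    intensity_roi, range_roi, valid_roi = intensity[roi], range_m[roi], valid[roi]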

Class 1 laser illumination modules provide the active optical signal for the system and can be configured independently of the main sensor unit. The intensity of the pulse ensures that indoor, outdoor and night-time operation is possible. Example applications are found in many sectors, but most commonly in logistics (mixed-mode palletising, depalletising, parcel sorting), precision agriculture and traffic.

http://www.odos-imaging.com/

Tag Optics

The Tag lens, Christian Theriault


The Tag lens uses sound to shape light, making it the world’s fastest varifocal device. This novel mechanism of action enables scanning speeds more than 1,000 times faster than other variable-focus technologies, offering a new dimension for emerging applications in machine vision, imaging and laser processing where controlling the focal position or depth of field is of paramount importance. In one simple turnkey solution, the Tag lens provides a computer-controlled platform that works with existing optical assemblies to add ultra-fast, high-precision focal length selection. At the same time, the lens offers the option of continuous high-speed scanning that effectively extends the depth of field of any optical system without sacrificing resolution or wavefront quality.

Most adjustable optical elements on the market are constrained by the notion that a physical change in a surface or interface is needed to redirect light. They redirect light either by shifting the location of a fixed lens (e.g. camera autofocus) or by changing the curvature of the lens, similar to the human eye. Such mechanisms are slow, as they require material to be moved or reshaped, fundamentally limiting the speed at which these systems can operate.

Tag Optics takes a fundamentally different approach, employing the principle that changes in refractive index can also shape light. When sound travels through a material, it causes small, coordinated density fluctuations at well-defined locations. Since the refractive index of a material is related to its density, these sound-induced fluctuations create a well-defined, user-controllable refractive index profile, resulting in a high-quality tunable optical lens.
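
In the published literature on acoustic gradient-index lenses, the refractive index profile produced by a standing acoustic wave in a cylindrical resonator is commonly written as

\[
n(r, t) = n_0 + n_A \, J_0(k r)\,\sin(\omega t)
\]

where $n_0$ is the static index of the fluid, $n_A$ the amplitude of the acoustically driven modulation, $J_0$ the zeroth-order Bessel function, $k$ the acoustic wavenumber and $\omega$ the drive frequency. Near the axis the $J_0$ profile approximates the parabolic phase of an ideal lens, so the optical power oscillates at the drive frequency; this is a standard textbook form rather than a specification of the Tag lens itself.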

Tag Optics’ Tag lenses are particularly well suited to the machine vision industry because they provide user-defined depth-of-field control, especially when compared with fixed-element technologies or those that employ diffractive methods such as light-field or phase-mask approaches. While these technologies can bring particular benefits to imaging applications, they require extensive back-end software computation to recreate the images and in some cases even reduce image resolution. Moreover, they offer only a fixed extension of the depth of field and can require modified CCD elements to accommodate the required pixel density.

Tag Optics’ novel technology provides tunable depth-of-field control and, as such, can act as a plain window in the system (when off) or provide more than a 30-fold increase in the original depth of field of the imaging assembly. When used in applications where speed is critical, the Tag lens can create an instantaneous z-projection by combining information from all foci within its range, such that a single exposure is all that is needed to capture the full multidimensional data.

http://www.tag-optics.com/

Xapt

Xapt Eye-sect XL: viewing into nature’s eyes, Prof. Dr Lothar Howah


Industrial vision is mainly based on the functional principle of the human visual system. Instead of an eyeball, visual nerve, central nervous system and the visual cortex, vision systems use a lens, camera, cable and an evaluation unit with application specific algorithms to evaluate the scene and derive actions based on the results – the more pixels, the higher the resolution of the scene.

A human uses two eyes to record a scene and gain 3D information. The brain analyses the overlapping area of the two fields of view and generates 3D information based on the correlation of the two 2D scenes.

An industrial vision system uses multiple cameras to reach the same goal. Based on the same functional principle, the evaluation unit generates 3D information by analysing the overlapping areas of the multiple fields of view. A human increases their field of view simply by moving their eyes or head; similarly, cameras in industrial vision are moved by mounting them on robot arms. In both cases, time is required to move the ‘image sensors’.
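
For a calibrated, rectified camera pair, the correlation of the two 2D views reduces to the standard depth-from-disparity relation

\[
Z = \frac{f\,B}{d}
\]

where $f$ is the focal length in pixels, $B$ the baseline between the cameras and $d$ the disparity in pixels; for example, $f = 1000$ px, $B = 0.1$ m and $d = 20$ px give $Z = 5$ m.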

In nature there exists another sensor system that is partially superior, in reaction speed and 3D viewing, both to the human sensor system and to traditional technical sensor systems. An insect has many thousands of small eyes (ommatidia) arranged on a hemispherical shell, letting it view multiple directions simultaneously without moving its head. Vision systems can be arranged in the same way, by replacing the ommatidia with small cameras and using the known system of data transmission and an evaluation unit.

However, the technical problems begin when several hundred, let alone thousands, of cameras are connected to an evaluation unit. A star-layout data connection would lead to mechanical problems because of the hundreds of wires and connectors required, while a linear bus system such as GigE could not transfer the high volumes of data and does not support the direct communication needed for correlation between neighbouring image sensors.

The Xapt solution is a large number of relatively low-resolution image sensors (500 x 700 pixels). Each sensor is complemented with a video processor and interconnected as an image-processing crossroad in an unlimited network – like a huge compound eye. The internal data transfer between neighbouring crossroads is handled by a special Xapt-developed data highway. A camera based on this principle, with more than 100 image sensors arranged in a row and packed in a small housing, is probably the longest camera in the world. However, this kind of ‘line scan camera’ does not work like a traditional line scan camera: instead of a line sensor, the Xapt sensor works with 100 matrix sensors whose neighbouring fields of view overlap, enabling a stereoscopic perspective for each sensor pair.

Through a single interface the user receives either the raw data from the sensors or processed measurement values, such as strip position, width or stereoscopic edge pairs of an image scene, for further processing. Alternatively, the camera can be used in a special line scan mode, with the Xapt sensor bar operating like a line scan camera delivering about 35,000 pixels per metre – from one camera, one line! Because of the unique design, the minimum distance to the strip is 150mm, which is comparable to multiple line scan camera systems. The maximum length of the sensor bar is 6,000mm, making it a great sensor for machines and equipment producing sheets of material where installation space is limited.
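
As a rough plausibility check on the quoted line scan figure (an estimate from the numbers above, assuming the 700-pixel sensor axis lies along the bar and ignoring the stereo overlap):

\[
\frac{35{,}000\ \text{px/m}}{700\ \text{px per sensor}} = 50\ \text{sensors per metre}, \qquad
35{,}000\ \text{px/m} \times 6\ \text{m} = 210{,}000\ \text{px per line}
\]

so a full-length 6,000mm bar would carry on the order of 300 sensors; since neighbouring fields of view must overlap to form the stereoscopic pairs, the actual sensor density would be somewhat higher.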

http://www.xapt-gmbh.de/
