
Imago releases camera based on neuromorphic sensor

Machine vision camera maker Imago Technologies has built an industrial camera based on a vision sensor that operates along similar principles to human vision, rather than capturing frames like a standard CMOS sensor.

The VisionCam incorporates a neuromorphic vision sensor from Paris-based Prophesee, formerly known as Chronocam. Prophesee’s technology has attracted interest from automotive OEMs, including Renault, as a sensor for driver assistance systems.

The technology was also shortlisted for the 2018 Vision Award at the Vision Stuttgart trade fair in November.

Prophesee released Onboard, its embedded reference system for project evaluations, last year; it is this Linux-based software that runs on the new VisionCam.

Philippe Berger, director of business development, industrial automation at Prophesee, commented: ‘Our preliminary discussions have already yielded numerous application possibilities in and around machines, like high-speed counting, vibration monitoring, man-machine teaming, and kinematic monitoring for predictive maintenance.’

The output of Prophesee’s sensor is not a sequence of images, but a time-continuous stream of individual pixel data, generated and transmitted conditionally, based on what is happening in the scene.

Every pixel in the sensor optimises its own sampling depending on the visual information it sees. If there are rapid changes in the scene, the pixel samples at a high rate; if nothing happens, the pixel stops acquiring redundant data and goes idle.
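To make that event-based output concrete, here is a minimal sketch in Python of how such a per-pixel event stream might be represented and consumed. The field names and the count_activity function are illustrative assumptions for this article, not Prophesee's actual data format or API.

```python
from dataclasses import dataclass
from typing import Iterable, Dict, Tuple

@dataclass
class Event:
    """One change event from a single pixel (illustrative layout)."""
    x: int             # pixel column
    y: int             # pixel row
    timestamp_us: int  # microsecond timestamp of the change
    polarity: int      # +1 for a brightness increase, -1 for a decrease

def count_activity(events: Iterable[Event], window_us: int = 1000) -> Dict[Tuple[int, int], int]:
    """Count events per pixel within a time window.

    Unlike frame processing, there is no fixed frame rate: pixels that
    see no change contribute no data, so quiet regions cost nothing.
    """
    counts: Dict[Tuple[int, int], int] = {}
    start = None
    for ev in events:
        if start is None:
            start = ev.timestamp_us
        if ev.timestamp_us - start > window_us:
            break
        counts[(ev.x, ev.y)] = counts.get((ev.x, ev.y), 0) + 1
    return counts
```

Working with such a stream is closer to handling time-stamped signals per pixel than to iterating over whole images, which is the point Berger makes below.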

Speaking to Imaging and Machine Vision Europe at the Embedded World trade fair at the end of February in Nuremberg, Germany, where the camera was on display, Berger explained that working with the output of the camera is more like signal processing than image processing. He added that, for high-speed imaging, the cost of the Imago camera is much lower than that of expensive high frame rate cameras. 

Prophesee’s sensor achieves pixel acquisition and readout times of milliseconds to microseconds, resulting in temporal resolutions equivalent to conventional sensors running at tens to hundreds of thousands of frames per second.
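As a rough back-of-the-envelope check of that equivalence (my arithmetic, not a figure quoted by Imago or Prophesee), a pixel that can be acquired and read out in about 10 microseconds updates as often as a conventional sensor running at 100,000 frames per second:

```latex
t_{\text{readout}} \approx 10\,\mu\text{s}
\;\Rightarrow\;
f_{\text{equivalent}} = \frac{1}{t_{\text{readout}}}
= \frac{1}{10\times10^{-6}\,\text{s}}
= 100{,}000\ \text{fps}
```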

Thanks to the time-based encoding of illumination information, the sensor achieves an intra-scene dynamic range of 143dB static, and 125dB at a temporal resolution equivalent to 30fps.
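For context, dynamic range in decibels for image sensors is conventionally 20·log10 of the ratio between the brightest and darkest usable signal, so the quoted 143dB corresponds to a contrast ratio of roughly 1.4 × 10^7 (my conversion, not a figure stated by Prophesee):

```latex
\text{DR}_{\text{dB}} = 20\log_{10}\!\left(\frac{I_{\max}}{I_{\min}}\right)
\quad\Rightarrow\quad
\frac{I_{\max}}{I_{\min}} = 10^{143/20} \approx 1.4\times10^{7}
```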

Imago’s VisionCam prototype features a dual-core Arm Cortex-A15 processor, as well as typical machine vision interfaces.

Functional samples of the prototype will be available at the beginning of 2019, followed by pre-series units in the first half of 2019 and series production in the second half of 2019.
