Vision sensing tech that reacts to events in a scene launched


A Paris-based startup, Chronocam, has unveiled a prototype of its CMOS vision sensing and processing technology that it says ‘eliminates the traditional speed versus data rate trade-off in vision systems’.

CCAM (Eye)oT was shown at Semicon Europa in Dresden, Germany from 6-8 October. By being able to sense in real time the relevant dynamic scene context and acquire only what is necessary, the solution represents a dramatic advancement over conventional fixed frame rate camera technology, the company says.

The company was launched in 2014. Last month it closed a round of seed funding from Robert Bosch Venture Capital and CEA Investissement worth €750k.

The technology can achieve scene-dependent data compression that can be optimised in real-time according to varying application requirements. Image information is not acquired and transmitted frame-wise but continuously, and conditionally only from parts of the scene where there is new visual information. The result is an almost time-continuous but very sparse stream of events carrying the most useful full visual information.
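The principle of transmitting events only where the scene changes can be sketched in a few lines. The snippet below is purely illustrative (it is not Chronocam's implementation, and the function name and threshold are my own assumptions): each pixel emits a timestamped event only when its intensity has changed by more than a threshold since its last event, producing a sparse stream instead of full frames.

```python
# Illustrative sketch only, not Chronocam's actual algorithm: emit an
# event (timestamp, x, y, polarity) whenever a pixel's intensity moves
# beyond a threshold relative to its last-event level, so static regions
# of the scene contribute no data at all.

def events_from_frames(frames, threshold=0.2):
    """Yield (t, x, y, polarity) tuples for pixels whose intensity
    changed by more than `threshold` since their last event."""
    last = [row[:] for row in frames[0]]          # per-pixel reference level
    events = []
    for t, frame in enumerate(frames[1:], start=1):
        for y, row in enumerate(frame):
            for x, value in enumerate(row):
                delta = value - last[y][x]
                if abs(delta) > threshold:
                    events.append((t, x, y, 1 if delta > 0 else -1))
                    last[y][x] = value            # reset reference on event
    return events

# A mostly static 3x3 scene: only one pixel brightens between frames,
# so a single event is produced instead of a second full frame.
frame0 = [[0.5] * 3 for _ in range(3)]
frame1 = [[0.5] * 3 for _ in range(3)]
frame1[1][2] = 0.9
print(events_from_frames([frame0, frame1]))  # → [(1, 2, 1, 1)]
```

In a real event sensor this comparison happens asynchronously in analogue circuitry at each pixel; the frame-differencing above only mimics the resulting sparse output.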

With its innovative way to process vision information, CCAM (Eye)oT simultaneously offers speeds equivalent to 100 kfps, a dynamic range of 120dB, sensor-level video compression of 100x, and power consumption of 10mW, according to the company.

The benefits of the Chronocam approach directly address the need for improvements in power efficiency and bandwidth in a range of vision-enabled application areas, as well as better integration in mobile platforms and sensor networks (IoT), and higher speed and safer control in autonomous vehicles, drones and robots.

Potential uses for Chronocam’s highly efficient approach to demanding vision tasks include real-time 3D mapping, complex multi-object tracking, inspection, recognition and tracking devices, advanced driver assistance technology, and low-power ‘always-on’ visual input for smart devices, user interaction and situational awareness products.

Similar to the way the human eye interprets a scene, the Chronocam sensors are only driven by the relevant events happening in the scene. Each pixel is independent and asynchronous and optimises its own acquisition time according to the dynamics of the scene: the pixel does not sample when nothing happens and samples fast when something moves fast.

This contrasts with conventional image sensors, which acquire visual information as a series of snapshots recorded at a fixed frame rate. That rate bears no relation to the dynamics of the scene (under-sampling) and is maintained regardless of whether the information has changed over time (over-sampling).

In a dynamic scene with change or motion, this acquisition method causes information loss and leads to redundancy in the recorded image data, requiring more and more resources and power-hungry processing.
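The scale of that redundancy is easy to see with a back-of-envelope calculation. The figures below are my own illustration, not company data: assume a VGA sensor at 30 frames per second watching a mostly static scene in which only 1% of pixels change per frame interval.

```python
# Back-of-envelope illustration (assumed figures, not Chronocam data):
# data volume of a fixed-frame-rate camera vs an event-driven sensor
# observing a mostly static scene.

width, height = 640, 480
frame_rate = 30            # frames per second
active_fraction = 0.01     # assume 1% of pixels change per frame interval

frame_samples = width * height * frame_rate          # every pixel, every frame
event_samples = int(frame_samples * active_fraction)  # only changed pixels

print(frame_samples)   # 9216000 samples/s regardless of scene activity
print(event_samples)   # 92160 samples/s at 1% activity
```

At 1% scene activity the event-driven approach transmits 100x less data, which is consistent in spirit with the sensor-level compression figure the company quotes, though the assumed activity level here is arbitrary.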

‘Conventional vision technology is rooted in the still camera, frame acquisition era which doesn’t adequately address the needs of today’s dynamic distributed sensor networks in security and surveillance scenarios or IoT. These applications require real-time video streaming using extremely limited bandwidth and power resources,’ said Chronocam CEO Luca Verre.

‘With the CCAM (Eye)oT technology, we use a dynamic data-driven redundancy suppression technique, which means the video acquisition of the sensor is not controlled by a common frame clock driving all pixels in the sensing array, but by each pixel continuously adapting its own individual sampling rate in response to the visual input it receives. The result is a resolution to the long-standing speed versus data rate trade-off in vision technology, with an approach that offers optimal speed, dynamic range and real-time processing in one solution.’

Further information

Chronocam
