
CEA institute to demonstrate multi-task neural net for smart cities at CES 2018


A new multi-task deep neural network algorithm capable of performing advanced and efficient real-time analysis of video streams will be demonstrated at the consumer technology show CES 2018 in Las Vegas, USA on 9-12 January.

The algorithm, known as DeepManta, falls into a new category of artificial intelligence, called multi-task deep learning, and targets visual object recognition in smart cities, for example identifying and counting vehicles on roads. It also has potential applications in guiding visually impaired people, video surveillance, and visual inspection of products on manufacturing lines. The algorithm could also be used to support applications in autonomous driving.

DeepManta will be exhibited by List, a research institute of France-based CEA Tech focused on smart digital systems. The flexible algorithm comprises a native multi-task architecture combined with enhancements to conventional deep learning algorithms, and is capable of extracting different types and levels of information simultaneously in real time.

‘DeepManta delivers one of the promises of AI: providing assistance to users by automating and parallelising tasks that normally would require their full attention,’ said Stéphane David, industrial partnership manager at List. ‘It excels at each individual task, but requires much less overall memory and processing power than parallel architectures that use one algorithm per task.’
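The efficiency claim above follows from parameter sharing: a multi-task network runs one shared backbone and attaches a small head per task, instead of duplicating a full network for every task. The following is a minimal NumPy sketch of that idea; the layer sizes and task names are illustrative assumptions, not details of DeepManta itself.

```python
import numpy as np

rng = np.random.default_rng(0)

D_IN, D_SHARED = 512, 256  # illustrative feature sizes (assumption)
TASK_DIMS = {"detect": 4, "classify": 10, "pose": 3}  # hypothetical tasks

# Shared backbone: one weight matrix reused by every task.
W_shared = rng.standard_normal((D_IN, D_SHARED))

# One small head per task, sitting on top of the shared features.
heads = {t: rng.standard_normal((D_SHARED, d)) for t, d in TASK_DIMS.items()}

def multi_task_forward(x):
    """Compute shared features once, then evaluate every task head."""
    features = np.maximum(x @ W_shared, 0.0)  # shared ReLU features
    return {t: features @ W for t, W in heads.items()}

x = rng.standard_normal(D_IN)
outputs = multi_task_forward(x)

# Parameter cost: the backbone is counted once for all tasks...
multi_task_params = W_shared.size + sum(W.size for W in heads.values())
# ...versus one full network per task in a parallel architecture.
separate_params = sum(D_IN * D_SHARED + D_SHARED * d for d in TASK_DIMS.values())
print(multi_task_params, separate_params)  # the shared design is far smaller
```

With three tasks, the separate-network design roughly triples the backbone parameters, while the multi-task design pays for the backbone only once.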

The system comprises a standard video camera connected to a laptop equipped with a powerful GPU. The camera's video feed is processed by the algorithm and the result is displayed on screen with very low latency.

At CES 2018, List's demonstration will feature different objects, such as miniature cars, moving into the camera's field of view, where DeepManta will recognise them. When a car is identified, the algorithm generates a visual annotation in real time, labelling the car with the brand logo and model information, and enclosing it in 2D and 3D bounding boxes to locate it spatially in the video.
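The per-vehicle annotation described above can be pictured as a small record combining a classification result with 2D and 3D localisation. This is a hypothetical sketch of such a record; the field names and coordinate conventions are assumptions for illustration, not DeepManta's actual output format.

```python
from dataclasses import dataclass

@dataclass
class VehicleAnnotation:
    # Hypothetical record; fields are illustrative, not from DeepManta.
    brand: str
    model: str
    box_2d: tuple  # (x, y, width, height) in image pixels
    box_3d: tuple  # (x, y, z, length, width, height, yaw) in scene coordinates

def render_label(a: VehicleAnnotation) -> str:
    """Text overlay a renderer might draw next to the 2D box."""
    return f"{a.brand} {a.model}"

# One detected car in a frame, with both spatial representations.
ann = VehicleAnnotation(
    brand="ExampleBrand", model="ModelX",
    box_2d=(120, 80, 64, 40),
    box_3d=(2.0, 0.5, 15.0, 4.5, 1.8, 1.5, 0.3),
)
print(render_label(ann))
```

Producing the label, the 2D box, and the 3D box from one forward pass is exactly the kind of simultaneous multi-level output the multi-task architecture is designed for.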

The autonomous driving implications of DeepManta will also be demonstrated by Valeo, a global automotive supplier and a partner of List, at the show in Las Vegas.

