CEA institute to demonstrate multi-task neural net for smart cities at CES 2018


A new multi-task deep neural network algorithm capable of performing advanced and efficient real-time analysis of video streams will be demonstrated at the consumer technology show CES 2018 in Las Vegas, USA, on 9-12 January.

The algorithm, known as DeepManta, falls into a new category of artificial intelligence, called multi-task deep learning, and targets visual object recognition in smart cities, for example identifying and counting vehicles on roads. It also has potential in guiding blind people, video surveillance and visual inspection of products on manufacturing lines. The algorithm could also be used to support applications in autonomous driving.

DeepManta will be exhibited by List, a research institute of France-based CEA Tech focused on smart digital systems. The flexible algorithm comprises a native multi-task architecture combined with enhancements to conventional deep learning algorithms, and is capable of extracting different types and levels of information simultaneously in real time.

‘DeepManta delivers one of the promises of AI: providing assistance to users by automatising and parallelising tasks that normally would require their full attention,’ said Stéphane David, industrial partnership manager at List. ‘It excels at each individual task, but requires much less overall memory and processing power than parallel architectures that use one algorithm per task.’
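The memory saving David describes comes from sharing one feature-extraction backbone across all tasks, with only a small task-specific head added per task, rather than running a full network per task. The toy calculation below illustrates the idea on fully connected layers; the layer sizes and task heads are hypothetical and do not reflect DeepManta's actual architecture.

```python
# Illustrative sketch (not DeepManta's actual design): compare the parameter
# count of a multi-task network that shares one backbone across tasks with
# that of one independent network per task. All layer sizes are assumptions.

def dense_params(sizes):
    """Parameters of a fully connected stack: weights plus biases per layer."""
    return sum(i * o + o for i, o in zip(sizes, sizes[1:]))

BACKBONE = [4096, 1024, 512]       # shared feature extractor (hypothetical)
HEADS = {
    "detection": [512, 256, 4],    # 2D bounding-box regression head
    "pose":      [512, 256, 6],    # 3D orientation/location head
    "class":     [512, 256, 20],   # vehicle make/model classification head
}

# Multi-task: one backbone plus one lightweight head per task.
shared = dense_params(BACKBONE) + sum(dense_params(h) for h in HEADS.values())

# One-algorithm-per-task: each task duplicates the full backbone.
separate = sum(dense_params(BACKBONE + h[1:]) for h in HEADS.values())

print(f"multi-task (shared backbone): {shared:,} parameters")
print(f"one network per task:         {separate:,} parameters")
print(f"saving: {1 - shared / separate:.0%}")
```

With these assumed sizes, the shared design needs roughly a third of the parameters of three independent networks, and the gap widens as tasks are added, since each new task costs only a head rather than another backbone.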

The system comprises a standard video camera connected to a laptop equipped with a powerful GPU. The camera's video feed is processed by the algorithm and the result is displayed on screen with very low latency.

At CES 2018, List’s demonstration will feature different objects, such as miniature cars, moving into the camera’s field of view, where DeepManta will recognise them. When a car is identified, the algorithm generates a visual annotation, labelling the car with the logo of the brand and model information, and enclosing it in 2D and 3D bounding boxes that locate it spatially in the video in real time.

The autonomous driving implications of DeepManta will also be demonstrated by Valeo, a global automotive supplier and a partner of List, at the show in Las Vegas.

