CEA institute to demonstrate multi-task neural net for smart cities at CES 2018


A new multi-task deep neural network algorithm capable of performing advanced and efficient real-time analysis of video streams will be demonstrated at the consumer technology show CES 2018 in Las Vegas, USA, on 9-12 January.

The algorithm, known as DeepManta, falls into a new category of artificial intelligence, called multi-task deep learning, and targets visual object recognition in smart cities, for example identifying and counting vehicles on roads. It also has potential in guiding blind people, in video surveillance, and in visual inspection of products on manufacturing lines. The algorithm could also be used to support applications in autonomous driving.

DeepManta will be exhibited by List, a research institute of France-based CEA Tech focused on smart digital systems. The flexible algorithm comprises a native multi-task architecture combined with enhancements to conventional deep learning algorithms, and is capable of extracting different types and levels of information simultaneously in real time.

‘DeepManta delivers one of the promises of AI: providing assistance to users by automating and parallelising tasks that normally would require their full attention,’ said Stéphane David, industrial partnership manager at List. ‘It excels at each individual task, but requires much less overall memory and processing power than parallel architectures that use one algorithm per task.’
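The memory and compute saving described above comes from sharing one feature-extraction backbone across several lightweight task heads, rather than running a full network per task. The sketch below illustrates that idea in plain numpy; it is not List's actual architecture, and the layer sizes and task names are made-up values for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

D_IN, D_FEAT = 512, 128  # hypothetical input and shared-feature sizes
W_backbone = rng.standard_normal((D_IN, D_FEAT))

# One small head per task (e.g. box regression, brand classification,
# 3D orientation) -- task names are illustrative assumptions.
heads = {
    "detect": rng.standard_normal((D_FEAT, 4)),   # 2D box coordinates
    "brand":  rng.standard_normal((D_FEAT, 10)),  # brand logits
    "pose":   rng.standard_normal((D_FEAT, 3)),   # 3D orientation
}

def multi_task_forward(x):
    """Run the shared backbone once, then every head on the same features."""
    feat = np.tanh(x @ W_backbone)  # expensive shared computation, done once
    return {task: feat @ W for task, W in heads.items()}

x = rng.standard_normal(D_IN)
out = multi_task_forward(x)

# Compare parameter counts: shared backbone + small heads, versus one
# full backbone duplicated per task.
shared_params = W_backbone.size + sum(W.size for W in heads.values())
separate_params = len(heads) * W_backbone.size + sum(W.size for W in heads.values())
print(shared_params, separate_params)
```

With these toy sizes the shared design uses roughly a third of the parameters of three separate networks, which is the kind of saving the quote alludes to.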

The system comprises a standard video camera connected to a laptop equipped with a powerful GPU. The camera's video feed is processed by the algorithm, and the result is displayed on screen with very low latency.

At CES 2018, List's demonstration will feature different objects, such as miniature cars, moving into the camera's field of view, where DeepManta will recognise them. When a car is identified, the algorithm generates a visual annotation, labelling the car with the logo of the brand and model information, and enclosing it in 2D and 3D bounding boxes that locate it spatially in the video, in real time.
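The on-screen 3D box annotation described above implies projecting an estimated 3D bounding box into the 2D image. The sketch below shows how that projection works with a simple pinhole camera model; the intrinsics, box dimensions, and car position are all assumed values, not details from the demonstration.

```python
import numpy as np

# Assumed pinhole intrinsics: focal length f, principal point (cx, cy).
f, cx, cy = 800.0, 640.0, 360.0
K = np.array([[f, 0.0, cx],
              [0.0, f, cy],
              [0.0, 0.0, 1.0]])

# A car-sized 3D box (width, height, length in metres), centred 10 m
# in front of the camera -- illustrative numbers only.
w, h, l = 1.8, 1.5, 4.2
centre = np.array([0.0, 0.0, 10.0])
corners = np.array([[sx * w / 2, sy * h / 2, sz * l / 2]
                    for sx in (-1, 1)
                    for sy in (-1, 1)
                    for sz in (-1, 1)]) + centre

def project(points_3d):
    """Project 3D points to 2D pixel coordinates with the pinhole model."""
    p = points_3d @ K.T
    return p[:, :2] / p[:, 2:3]  # divide by depth

px = project(corners)

# The enclosing 2D box is just the min/max of the projected corners.
x0, y0 = px.min(axis=0)
x1, y1 = px.max(axis=0)
print(f"2D box: ({x0:.0f}, {y0:.0f}) - ({x1:.0f}, {y1:.0f})")
```

Drawing the eight projected corners (connected along the box edges) gives the wireframe 3D box, while the min/max rectangle gives the flat 2D box around the car.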

The autonomous driving implications of DeepManta will also be demonstrated by Valeo, a global automotive supplier and a partner of List, at the show in Las Vegas.
