Driver assistance with time-of-flight at IS2013

Greg Blackman reports from the Image Sensors 2013 conference, including the latest from Toyota Central R&D Labs on its time-of-flight sensor for driver assistance

Toyota Central R&D Labs in Japan has developed a prototype high-definition imaging lidar sensor for advanced driver assistance systems (ADAS). The time-of-flight 3D sensor, which has a 100m range, could potentially be used in future Toyota vehicles for tasks such as obstacle avoidance, pedestrian detection and lane detection.

Speaking at the Image Sensors conference (IS2013) in London (19-21 March), Dr Cristiano Niclass of Toyota Central R&D Labs said that the imaging lidar sensor provides more comprehensive coverage of the road for driver assistance than millimetre-wave radar or stereovision, both of which have drawbacks: millimetre-wave radar offers excellent range but low angular resolution and a narrow field of view, while stereovision has a wider field of view but only operates over shorter distances.

The sensor achieves its 100m range in bright sunlight, as well as in rain and other adverse weather conditions, Dr Niclass said. The system uses a polygon mirror to steer the laser beam in six directions and a time-of-flight sensor with a resolution of 202 x 96 pixels running at 10fps. It is eye-safe, with a laser output power of 21mW.

The system was tested under a background illuminance of 70klux and a sky illuminance of greater than 100klux, and recorded an error of less than 15cm at a distance of 100m while running at 10fps.
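Those figures imply very fine timing. As a rough illustration, using only the standard direct time-of-flight relation (distance = speed of light x round-trip time / 2) rather than anything disclosed about Toyota's actual processing, the sketch below shows the round-trip delay for a 100m target, the timing uncertainty a 15cm error bound corresponds to, and the measurement rate implied by a 202 x 96 pixel sensor at 10fps:

```python
# Back-of-envelope direct time-of-flight arithmetic, illustrative only;
# Toyota's actual signal processing was not disclosed at IS2013.
C = 299_792_458.0  # speed of light, m/s

def round_trip_delay(distance_m: float) -> float:
    """Time for a laser pulse to reach a target and return, in seconds."""
    return 2.0 * distance_m / C

# A target at the sensor's 100m limit returns after roughly 667ns.
print(f"100m round trip: {round_trip_delay(100.0) * 1e9:.0f} ns")

# A 15cm range error corresponds to about 1ns of timing uncertainty.
print(f"15cm error budget: {round_trip_delay(0.15) * 1e9:.2f} ns")

# 202 x 96 pixels at 10fps implies roughly 194,000 range samples per second.
print(f"range samples per second: {202 * 96 * 10:,}")
```

The roughly 1ns error budget illustrates why long-range automotive lidar of this kind depends on sub-nanosecond timing electronics.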

Dr Niclass said the sensor would recognise most objects based on the 3D data. He said that at 10fps there would be some motion artefacts when driving at speed on a motorway, for instance, but added that the system is designed for driver assistance rather than autonomous driving, which is a lot more challenging.
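The presentation did not quantify those motion artefacts, but the frame rate makes their scale easy to estimate: at motorway speeds a vehicle covers several metres between successive 10fps frames, so a moving target shifts noticeably from one depth frame to the next. A back-of-envelope sketch (the speeds below are assumptions for illustration, not figures from the talk):

```python
# Illustrative only: how far a vehicle travels between successive frames
# at 10fps. The speeds are assumptions, not figures from the presentation.
FRAME_RATE_HZ = 10.0

def metres_per_frame(speed_kmh: float) -> float:
    """Distance covered during one frame interval at the given speed."""
    return (speed_kmh / 3.6) / FRAME_RATE_HZ  # km/h -> m/s, then per frame

for speed_kmh in (50, 100, 130):
    print(f"{speed_kmh} km/h -> {metres_per_frame(speed_kmh):.1f} m per frame")
```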

The IS2013 conference covers all aspects of image sensor technology for consumer, broadcast, industrial and scientific imagers. In his presentation, Jim Lewis, CEO of Swiss 3D imaging company Mesa Imaging, challenged some of what he considered to be myths surrounding the capabilities of time-of-flight (TOF) imaging. He suggested that TOF technology needed improvement for outdoor operation, where direct sunlight, longer operating ranges, variable scenes and bad weather all make acquiring accurate 3D data difficult. He also said there was more to building an industrial-grade TOF camera than perfecting the pixel: the illumination LEDs degrade over time and the camera has an inherent temperature variation, both of which complicate engineering a robust product.

Lewis said that to widen the adoption of TOF imaging in industry, depth map stability has to improve, the technology has to track fast-moving objects better, and dynamic range has to increase. Mesa Imaging's latest TOF sensor is the Onyx, a 25k-pixel analogue sensor.

As well as 3D imaging, the conference had presentations on multi-aperture imaging, broadcast image sensor technology, CMOS RGB Clear technology, digital cameras, and medical applications. The latest market data was also presented by Paul Danini, a market analyst at the French market research firm Yole Développement. He commented that the CMOS image sensor market was worth $6.6 billion in 2012, 60 per cent of which came from the mobile phone sector, and said that tablet computers and automotive were two areas expected to grow significantly.

