
Intelligent vehicle uses vision for perception capabilities

Researchers at VisLab, a spin-off company of the University of Parma, Italy, have developed a prototype vehicle called BRAiVE (short for BRAin-drIVE), equipped with various sensing technologies for perception, navigation and control. Its intelligent capabilities include crossing assistance, obstacle and pedestrian detection, parking assistance, road sign detection and lane marking detection, as well as 'stop and go' and automatic cruise control.

The perception system is based mainly on vision, together with four laser scanners, 16 laser beams, GPS, an Inertial Measurement Unit (IMU) and full X-by-wire for autonomous driving. Ten cameras from Point Grey detect information about the vehicle's surroundings. The images are processed in real time, together with information from the navigation system, to produce the necessary steering and throttle signals.
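The article does not detail VisLab's control software, but the loop it describes (perception data in, steering and throttle commands out) can be sketched in a few lines. All names and gains below are hypothetical, purely to illustrate the structure:

```python
# Hypothetical perception-to-actuation loop of the kind the article
# describes. The class, inputs, and gains are illustrative assumptions,
# not VisLab's actual code.

class VehicleController:
    def __init__(self):
        self.steering = 0.0   # normalised command in [-1, 1]
        self.throttle = 0.0   # normalised command in [0, 1]

    def update(self, lane_offset_m, obstacle_dist_m):
        # Proportional steering correction back toward the lane centre.
        self.steering = max(-1.0, min(1.0, -0.5 * lane_offset_m))
        # Reduce throttle as obstacles get closer; stop inside 5 m.
        if obstacle_dist_m < 5.0:
            self.throttle = 0.0
        else:
            self.throttle = min(1.0, obstacle_dist_m / 50.0)
        return self.steering, self.throttle

ctrl = VehicleController()
print(ctrl.update(0.4, 20.0))  # vehicle 0.4 m right of centre, obstacle 20 m ahead
```

A real system would of course fuse many sensor streams and run far more sophisticated planning; the point is only that perception outputs are continuously mapped to actuation commands.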

Images are acquired from the cameras via external trigger or in free-running mode, depending on the situation, using Format 7 Mode 0 (region-of-interest mode). The raw Bayer data is colour processed on board the cameras, then streamed at S400 speed over the FireWire interface to the vision system. Stereo algorithms then reconstruct the 3D environment and provide information about the immediate surroundings. All sensors, actuation and control devices are fully integrated, giving passengers the feeling of riding in a normal car.
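The article does not specify which stereo algorithms VisLab uses, but the underlying principle is standard triangulation: for a calibrated camera pair, depth is recovered from pixel disparity as Z = f·B/d, where f is the focal length in pixels, B the baseline between the cameras, and d the disparity. A minimal sketch, with illustrative (not BRAiVE's) calibration values:

```python
# Stereo triangulation: depth Z = f * B / d.
# The focal length, baseline, and disparity values are illustrative
# assumptions, not BRAiVE's actual calibration.

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Return depth in metres for a given pixel disparity."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# e.g. 700 px focal length, 0.30 m baseline, 10 px disparity:
print(depth_from_disparity(10, 700.0, 0.30))  # 21.0 metres
```

In practice, a dense stereo matcher computes a disparity for every pixel, and this relation converts the disparity map into a 3D point cloud of the vehicle's surroundings.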

Four Point Grey Dragonfly2 cameras are mounted behind the upper part of the windshield, two with colour sensors and two with monochrome sensors. These cameras are used for forward obstacle/vehicle detection, lane detection and traffic sign recognition. Two Dragonfly2 cameras are mounted over the front mudguards, behind the body of the car looking sideways, and are used for parking and traffic intersection detection. An additional two Firefly MV cameras are integrated into the rear-view mirror to detect overtaking vehicles, and another two Dragonfly2 cameras monitor nearby obstacles during driving.

'We selected Point Grey cameras for a number of reasons, most notably their outstanding image quality, the ability to control parameters such as shutter, gain, and white balance with custom algorithms, and the compatibility with ultra-compact M12 micro-lens mounts, which have played a key role,' commented Dr Alberto Broggi, director of VisLab. Paolo Grisleri, responsible for vehicle integration, added: 'The camera size for integration in the rear-view mirrors, availability of third-party software for Linux, as well as good price/performance were other significant factors in our decision.'

'Artificial vision is a promising technology for cars, trucks, road construction, mining vehicles and indeed military vehicles, thanks to the low cost and great capabilities that vision sensors are currently demonstrating,' concluded Paolo Grisleri. He is confident that the technology embedded in this vehicle will constitute the basis for developing innovative concepts for the car industry and full vehicle autonomy in the next few years.
