Layered visual system gives vehicle 'brains'

An innovative vision system that mimics the structure of the human brain could be the winning strategy for one of the automated vehicles in this year’s DARPA Urban Challenge. The challenge will see vehicles navigating a mock city on their own initiative, without a human driver to guide them, while adhering to strict Californian driving laws.

One of the competitors, Caltech, believes a vision system involving eight different cameras could be the answer to preventing its vehicle, Alice, from crashing into obstacles and other vehicles. Six of the cameras will face forward, one will face backwards, and one will be mounted on top with the ability to look left and right.

The cameras will feed into a computer system organised into layers of processing. Lower layers will perform calculations on the raw camera data, and the results will then be fed up the hierarchy to higher-level computers that make the final driving decisions.
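As a rough illustration of how such a layered hierarchy might be structured (the article does not describe Caltech's actual software, so the class names, fusion logic, and distance thresholds below are all hypothetical), here is a minimal Python sketch in which each camera is processed by a low-level layer, the results are fused by a middle layer, and a top layer issues the driving decision:

```python
# Illustrative sketch only: all names and thresholds are hypothetical,
# not Caltech's actual implementation.

from dataclasses import dataclass
from typing import Dict, List


@dataclass
class CameraFrame:
    """Raw input from one of the eight cameras (placeholder data)."""
    camera_id: str
    obstacle_distances_m: List[float]  # distances to detected obstacles, in metres


def low_level_layer(frame: CameraFrame) -> float:
    """Lower layer: per-camera processing, here just the nearest obstacle distance."""
    return min(frame.obstacle_distances_m, default=float("inf"))


def mid_level_layer(nearest_by_camera: Dict[str, float]) -> Dict[str, float]:
    """Middle layer: fuse per-camera results into a simple world summary."""
    return {
        "nearest_ahead_m": min(
            d for cam, d in nearest_by_camera.items() if cam.startswith("front")
        ),
        "nearest_behind_m": nearest_by_camera.get("rear", float("inf")),
    }


def top_level_layer(world: Dict[str, float]) -> str:
    """Top layer: makes the final driving decision from the fused summary."""
    if world["nearest_ahead_m"] < 10.0:
        return "brake"
    if world["nearest_ahead_m"] < 30.0:
        return "slow down"
    return "proceed"


# Example run: two of the six forward cameras plus the rear-facing camera.
frames = [
    CameraFrame("front_1", [42.0, 55.0]),
    CameraFrame("front_2", [8.5]),
    CameraFrame("rear", [20.0]),
]
nearest = {f.camera_id: low_level_layer(f) for f in frames}
decision = top_level_layer(mid_level_layer(nearest))
print(decision)  # "brake", because front_2 reports an obstacle 8.5 m ahead
```

The point of the layering is that no single layer sees everything: each lower layer handles its own slice of the sensor data, and only the distilled summary reaches the layer that chooses an action.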

‘It’s like having multiple parts of your brain telling you what to do,’ says Richard Murray, the leader of the project. The technique will provide the vehicle with greater flexibility by allowing it to evaluate problems for itself, he says.

The approach builds on a similar technique developed by the Jet Propulsion Laboratory in the US for rovers that explore the surfaces of other planets.
