
SLAM: the main event

Greg Blackman reports from a KTN-organised image processing conference, where event cameras and the future of robotic vision were discussed

Autonomous cars, drones delivering packages, and virtual reality headsets might all be viable technologies that have been shown to work, but they are not yet part of everyday life. Part of the reason for this, according to Owen Nicholson, CEO and co-founder of Imperial College London spin-off Slamcore, relates to the simultaneous localisation and mapping (SLAM) algorithms used in much of this technology. ‘We need to get SLAM algorithms working with affordable hardware, and we’re still not there yet,’ he commented during an intelligent imaging event, jointly organised by the UK Knowledge Transfer Network (KTN) and the Institution of Engineering and Technology (IET), which took place in London on 1 March.

Nicholson pointed out that self-driving cars are not ready for mass deployment, that drones crash when they are not under manual control, and that half of VR users suffer from motion sickness because of latency issues.

Within the field of robotics, SLAM algorithms have been developed and refined since the early 1990s. They are designed to construct a map of an unknown environment while simultaneously pinpointing where the robot is within its surroundings. Sensors identify features in the scene that can be recognised from different positions and used to triangulate the robot’s location.
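To make the localisation half of that idea concrete, the toy Python sketch below estimates a robot's 2D position by least squares from its measured distances to a few landmarks whose map positions are already known. It is an illustration only, not Slamcore's method or a full SLAM system, which would estimate the map and the pose jointly, typically with a filter or graph optimisation; all names and numbers in it are invented for the example.

```python
# Toy sketch: recover a 2D robot position from ranges to known landmarks
# by linearising the range equations and solving with least squares.
import numpy as np

def trilaterate(landmarks, ranges):
    """Least-squares position estimate from ranges to known 2D landmarks."""
    landmarks = np.asarray(landmarks, dtype=float)
    ranges = np.asarray(ranges, dtype=float)
    x1, y1 = landmarks[0]
    r1 = ranges[0]
    # Subtracting the first range equation from the others removes the
    # quadratic terms, leaving a linear system A @ [x, y] = b.
    A = 2.0 * (landmarks[1:] - landmarks[0])
    b = (r1**2 - ranges[1:]**2
         + landmarks[1:, 0]**2 - x1**2
         + landmarks[1:, 1]**2 - y1**2)
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Example: the robot is actually at (2, 3); recover it from noisy ranges.
true_pos = np.array([2.0, 3.0])
landmarks = [(0.0, 0.0), (5.0, 0.0), (0.0, 5.0), (6.0, 6.0)]
ranges = [np.linalg.norm(true_pos - np.array(l)) + np.random.normal(0, 0.01)
          for l in landmarks]
print(trilaterate(landmarks, ranges))   # approximately [2. 3.]
```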

In 2003, SLAM was shown to work with a single camera, and since then other sensor data, including that from depth sensors, has been used for robot guidance. Slamcore, which has had investment from Amadeus Capital among other investors, is developing SLAM solutions that fuse data from different sensors.

The company is also writing algorithms for event cameras, a technology that has been around for 10 years but has not yet made it out of the laboratory, and one that Nicholson feels could offer real benefits for robotics.

Event cameras have no concept of frames; instead they record a stream of events, generated only when something changes in the scene. Because there are no frames, most vision algorithms won’t work with the data.
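The sketch below shows what such a stream might look like and one simple way to bridge it to frame-based processing. Event cameras commonly report each brightness change as an (x, y, timestamp, polarity) tuple; that convention, and everything else in the code, is an assumption for illustration rather than anything specified in the article.

```python
# Toy illustration of event-camera data: a stream of per-pixel change events
# rather than frames, plus a helper that accumulates a time slice of the
# stream into an image so a conventional frame-based algorithm could use it.
from typing import NamedTuple, Iterable
import numpy as np

class Event(NamedTuple):
    x: int          # pixel column
    y: int          # pixel row
    t: float        # timestamp in seconds
    polarity: int   # +1 for a brightness increase, -1 for a decrease

def accumulate(events: Iterable[Event], width: int, height: int,
               t_start: float, t_end: float) -> np.ndarray:
    """Sum event polarities per pixel over a time window into a 2D array."""
    img = np.zeros((height, width), dtype=np.int32)
    for e in events:
        if t_start <= e.t < t_end:
            img[e.y, e.x] += e.polarity
    return img

# Example: three events within a 10 ms window on a 4x4 sensor.
stream = [Event(1, 2, 0.001, +1), Event(1, 2, 0.004, +1), Event(3, 0, 0.007, -1)]
print(accumulate(stream, width=4, height=4, t_start=0.0, t_end=0.010))
```

Note that the accumulated image throws away the fine timing that makes event data attractive in the first place, which is why, as the article goes on to say, algorithms generally need to be designed for the event stream itself.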

Chronocam is one firm that has raised investment, most notably from Renault, for its event camera-based vision sensors; the technology won the company best start-up at the 2016 Inpho Venture Summit, an investment conference in Bordeaux, France.

On the software side, Slamcore co-founder Hanme Kim and colleagues at Imperial College London won the best paper award at the 2014 British Machine Vision Conference for work on simultaneous mosaicing and tracking with an event camera.

The benefits of event cameras are that they give high dynamic range and are able to cope with fast movement in the scene, but Nicholson said that the ‘real future of event cameras lies in their low power consumption’. He said there is an order of magnitude improvement in the data rate and power consumption of event cameras compared to standard cameras, because event sensors only report information when something in the scene changes.

Nicholson commented during his presentation that there is ‘still lots to do on event camera hardware’, and that ‘algorithms and hardware need to be built hand in hand’.

Chronocam describes its event camera-based sensors as 'bio-inspired vision technology', and biological vision was also the subject of a talk by Andrew Schofield, a senior lecturer in the school of psychology at the University of Birmingham. Schofield described work undertaken by the Visual Image Interpretation in Humans and Machines (ViiHM) computer vision network, which aims to transfer understanding of biological vision to help solve problems in computer vision. ViiHM, funded by the UK Engineering and Physical Sciences Research Council (EPSRC), has presented grand challenges - a theoretical, a technical and an application challenge - for the computer vision and biological vision communities, with the aim of developing a general-purpose vision system for robotics.

The Intelligent Imaging event brought together academia and industry, with presentations on image processing in art investigation, defence applications, super-resolution microscopy, and space imaging. In his introduction, Nigel Rix, head of enabling technologies at KTN, commented that the UK has a good science and innovation base, but is less good at commercialising those innovations. The KTN aims to act as a bridge between academia and industry, providing funding at technology readiness levels four to six.

Related article:

What can drones learn from bees? - Dr Andrew Schofield, who leads the Visual Image Interpretation in Humans and Machines network in the UK, asks what computer vision can learn from biological vision, and how the two disciplines can collaborate better
