
SLAM: the main event

Greg Blackman reports from a KTN-organised image processing conference, where event cameras and the future of robotic vision were discussed

Autonomous cars, package-delivery drones, and virtual reality headsets have all been shown to work, but none has yet become part of everyday life. Part of the reason, according to Owen Nicholson, CEO and co-founder of Imperial College London spin-off Slamcore, lies in the simultaneous localisation and mapping (SLAM) algorithms underpinning much of this technology. ‘We need to get SLAM algorithms working with affordable hardware, and we’re still not there yet,’ he commented during the Intelligent Imaging event, jointly organised by the UK Knowledge Transfer Network (KTN) and the Institution of Engineering and Technology (IET), which took place in London on 1 March.

Nicholson pointed out that self-driving cars are not ready for mass deployment, drones crash when they are not under manual control, and that half of VR users suffer from motion sickness because of latency issues.

Within the field of robotics, SLAM algorithms have been developed and refined since the early 1990s. They are designed to construct a map of an unknown environment while simultaneously pinpointing where the robot is within its surroundings. Sensors identify features in the scene that can be recognised from different positions and used to triangulate the robot’s location.
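To make that interplay concrete, the sketch below shows the skeleton of a SLAM loop in Python: predict a pose from odometry, correct it against landmarks already in the map, then add newly observed landmarks. All names are illustrative, and the simple averaging correction stands in for the probabilistic estimators (extended Kalman filters, factor graphs) a real system would use.

```python
# A deliberately simplified 2D SLAM loop: the robot alternates between
# estimating its own pose from odometry, correcting that estimate against
# landmarks it has already mapped, and adding newly observed landmarks.
# Illustrative only; a production system would use a probabilistic backend.
import numpy as np

def slam_step(pose, landmark_map, odometry, observations):
    """One iteration of a (greatly simplified) 2D SLAM loop.

    pose         -- np.array([x, y]) current position estimate
    landmark_map -- dict: feature_id -> np.array([x, y]) in world frame
    odometry     -- np.array([dx, dy]) motion estimate since last step
    observations -- dict: feature_id -> np.array([x, y]) relative to robot
    """
    # 1. Predict: dead-reckon the new pose from odometry (drifts over time).
    pose = pose + odometry

    # 2. Correct: each re-observed landmark implies where the robot must be;
    #    average those position fixes to cancel odometry drift.
    fixes = [landmark_map[fid] - rel for fid, rel in observations.items()
             if fid in landmark_map]
    if fixes:
        pose = np.mean(fixes, axis=0)

    # 3. Map: landmarks seen for the first time are placed in the world
    #    frame using the corrected pose.
    for fid, rel in observations.items():
        if fid not in landmark_map:
            landmark_map[fid] = pose + rel

    return pose, landmark_map

# Example: noisy odometry is corrected by re-observing landmark "a".
pose, world = np.array([0.0, 0.0]), {}
pose, world = slam_step(pose, world, np.array([0.0, 0.0]),
                        {"a": np.array([2.0, 1.0])})        # map "a" at (2, 1)
pose, world = slam_step(pose, world, np.array([1.1, 0.1]),  # drifty odometry
                        {"a": np.array([1.0, 1.0])})        # true pose ~(1, 0)
print(pose, world)  # pose pulled back to (1.0, 0.0) by the landmark fix
```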

In 2003, SLAM was shown to work with a single camera, and since then other sensor data, including depth information, has been used for robot guidance. Slamcore, which has received investment from Amadeus Capital among others, is developing SLAM solutions that fuse data from different sensors.

The company is also writing algorithms for event cameras, a technology that has been around for a decade but has yet to make it out of the laboratory, and one that Nicholson believes could offer real benefits for robotics.

Event cameras have no concept of frames; instead, each pixel independently reports an event whenever the brightness it sees changes. Because there are no frames, most conventional vision algorithms won’t work with the data.
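A hedged illustration of what such a stream looks like, and of the crude workaround of binning events into a pseudo-frame so frame-based algorithms can be applied; the field names and time window below are assumptions for the sake of the example, not any particular camera's format.

```python
# Hypothetical event-stream format, illustrating why frame-based algorithms
# don't apply directly: each event reports a single pixel whose brightness
# changed, with a timestamp and a polarity (+1 brighter, -1 darker).
from collections import namedtuple
import numpy as np

Event = namedtuple("Event", ["x", "y", "t_us", "polarity"])

def accumulate(events, width, height, window_us):
    """Naive bridge to frame-based processing: sum event polarities per
    pixel over a fixed time window to build a pseudo-frame. This discards
    the fine timing that makes event data valuable, which is why dedicated
    event-based algorithms are an active research area."""
    frame = np.zeros((height, width), dtype=np.int32)
    t0 = events[0].t_us
    for e in events:
        if e.t_us - t0 > window_us:
            break
        frame[e.y, e.x] += e.polarity
    return frame

# A moving edge produces a sparse trickle of events, not dense frames.
stream = [Event(10, 5, 0, +1), Event(11, 5, 120, +1), Event(10, 5, 150, -1)]
print(accumulate(stream, width=32, height=16, window_us=1000))
```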

Chronocam is one firm that has raised investment, most notably from Renault, for its event camera-based vision sensors, for which it won best start-up at the 2016 Inpho Venture Summit, an investment conference in Bordeaux, France.

On the software side, Slamcore co-founder Hanme Kim and colleagues at Imperial College London won the best paper award at the 2014 British Machine Vision Conference for work on simultaneous mosaicing and tracking with an event camera.

Event cameras offer high dynamic range and can cope with fast movement in the scene, but Nicholson said the ‘real future of event cameras lies in their low power consumption’. He said event cameras give an order of magnitude improvement in data rate and power consumption over standard cameras, because event sensors only report information when something in the scene changes.

Nicholson commented during his presentation that there is ‘still lots to do on event camera hardware’, and that ‘algorithms and hardware need to be built hand in hand’.

Chronocam describes its event camera-based sensors as ‘bio-inspired vision technology’, and biological inspiration was a theme elsewhere at the event. Andrew Schofield, a senior lecturer in the school of psychology at the University of Birmingham, described work undertaken at the Visual Image Interpretation in Humans and Machines (ViiHM) computer vision network, which aims to transfer understanding of biological vision to help solve problems in computer vision. ViiHM, funded by the UK Engineering and Physical Sciences Research Council (EPSRC), has set three grand challenges - theoretical, technical and application - for the computer vision and biological vision communities, with the goal of developing a general-purpose vision system for robotics.

The Intelligent Imaging event brought together academia and industry, with presentations on image processing in art investigation, defence applications, super-resolution microscopy, and space imaging. In his introduction, Nigel Rix, head of enabling technologies at KTN, commented that the UK has a strong science and innovation base but is less good at commercialising its innovations. The KTN aims to act as a bridge between academia and industry, providing funding at technology readiness levels four to six.

Related article:

What can drones learn from bees? - Dr Andrew Schofield, who leads the Visual Image Interpretation in Humans and Machines network in the UK, asks what computer vision can learn from biological vision, and how the two disciplines can collaborate better
