Vision cleans up: Dyson robot vacuum navigates with imaging


Dyson's 360 Eye robot vacuum cleaner relies on vision to navigate its way around a home. Mike Aldred from Dyson spoke about the product's development at the UKIVA machine vision conference in Milton Keynes, and Greg Blackman was there to hear the presentation

Dyson's 360 Eye robot vacuum cleaner relies on vision to navigate its way around a home. (Credit: Dyson)

Dyson’s 360 Eye robot vacuum cleaner, released in 2016, has been 17 years in the making. Five man-years’ worth of work went into developing the tools – simulation and the like – with which the engineers could then develop the product, and around 75,000 in-home trials were conducted over the eight years before the robot vacuum was launched. Even after launch, Dyson is still trialling the product, Mike Aldred, lead robotics engineer at Dyson, said during a presentation he gave at the UK Industrial Vision Association’s (UKIVA) machine vision conference at the end of April in Milton Keynes, UK.

The 360 Eye is a robotic vacuum cleaner that moves around a room autonomously, vacuuming as it goes. It navigates using a 360-degree panoramic camera and simultaneous localisation and mapping (SLAM) algorithms.
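
Dyson has not published the details of its navigation algorithm, but the general shape of a SLAM loop – predict the robot’s position from odometry, correct it against landmarks already in the map, then extend the map with newly seen landmarks – can be sketched as below. This is a minimal, illustrative 2D version with made-up function names; it ignores orientation and uncertainty handling entirely.

```python
import numpy as np

def slam_step(pose, landmark_map, odometry, observations, alpha=0.5):
    """One crude SLAM iteration: predict the pose from odometry, correct it
    with vision, then update the landmark map.

    pose         : np.array([x, y]) current estimate of the robot position
    landmark_map : dict {landmark_id: np.array([x, y])} world positions
    odometry     : np.array([dx, dy]) motion reported by the wheel encoders
    observations : dict {landmark_id: np.array([dx, dy])} landmark positions
                   measured by the camera, relative to the robot
    alpha        : how much to trust the vision correction over odometry
    """
    # 1. Predict: dead-reckon from the wheel odometry (drifts over time).
    predicted = pose + odometry

    # 2. Correct: every previously mapped landmark seen again implies a
    #    robot position (map position minus the relative measurement).
    implied = [landmark_map[i] - obs for i, obs in observations.items()
               if i in landmark_map]
    if implied:
        vision_pose = np.mean(implied, axis=0)
        predicted = (1 - alpha) * predicted + alpha * vision_pose

    # 3. Map: landmarks seen for the first time are added relative to the
    #    corrected pose, so mapping and localisation feed each other.
    for i, obs in observations.items():
        if i not in landmark_map:
            landmark_map[i] = predicted + obs
    return predicted, landmark_map
```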

‘We realised very early that vision is where we wanted to go [in developing the robot],’ Aldred commented during his talk. ‘Vision for us was not just about solving the problems on this product [360 Eye], but it gave us so much more functionality in products going forward. There’s a richness of information we cannot achieve from other sensing technologies.’

A panoramic camera was a necessity, according to Aldred, to avoid the robot being blinded when cleaning up against furniture and objects on the floor. The camera also has a field of view spanning 0 to 45 degrees so that it can image the walls. This gives the robot’s SLAM algorithm features it can use to locate itself in its environment – there is a lot of clutter and occlusion at floor level, so imaging the walls provides the robot with more useful information. It is also important for the robot to know where it is in the room because, with its powerful vacuum, the 360 Eye only has a runtime of 40 minutes, so it can’t afford to clean areas multiple times.

The sensor in the camera is VGA resolution, and only a 480 x 480 pixel segment of that is used. The panoramic optics project a toroidal, ring-shaped image onto that segment, so the usable image amounts to around 128k pixels. ‘We can navigate more than effectively using just that image,’ Aldred said. ‘If we had a larger sensor we would be wasting processing time, either sub-sampling or throwing away information. We need everything we can in terms of processing power; the processor is a 10-year-old processor.’ The engineers also needed headroom on the processor to add functionality. ‘You should do what you need and no more,’ he advised.
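
Dyson has not described its unwarping step publicly, but turning a ring-shaped panoramic image into a rectangular strip is a standard polar-to-Cartesian resampling. The sketch below is illustrative only – the function name, radii and output size are assumptions, not the 360 Eye’s actual parameters.

```python
import numpy as np

def unwrap_toroid(img, centre, r_inner, r_outer, out_w=720, out_h=64):
    """Unwrap an annular (toroidal) panoramic image into a rectangular strip.

    img      : 2D greyscale array containing the ring-shaped image
    centre   : (cx, cy) pixel coordinates of the ring centre
    r_inner  : inner radius of the usable ring, in pixels
    r_outer  : outer radius of the usable ring, in pixels
    out_w    : number of azimuth (horizontal) samples in the output
    out_h    : number of radial (vertical) samples in the output
    """
    cx, cy = centre
    thetas = np.linspace(0.0, 2.0 * np.pi, out_w, endpoint=False)
    radii = np.linspace(r_inner, r_outer, out_h)
    # Build a polar-to-Cartesian lookup grid; nearest-neighbour sampling
    # keeps the sketch short (a real system would interpolate).
    xs = np.clip((cx + radii[:, None] * np.cos(thetas[None, :])).round().astype(int),
                 0, img.shape[1] - 1)
    ys = np.clip((cy + radii[:, None] * np.sin(thetas[None, :])).round().astype(int),
                 0, img.shape[0] - 1)
    return img[ys, xs]

# Example on a 480 x 480 segment like the one mentioned above (radii are guesses):
frame = np.zeros((480, 480), dtype=np.uint8)
panorama = unwrap_toroid(frame, centre=(240, 240), r_inner=100, r_outer=230)
```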

During product development, the Dyson team built simulation tools to test the robot in silico. The team then fitted a trolley with a representative camera and optics, and drove it around hundreds of homes with a PlayStation controller. Those image sequences were fed into the simulation model to see whether they could be used to work out where the robot was in a room. In this way, the engineers could work on the image capture, image control and navigation systems without physically having to build the product.

Aldred noted that simulation has its place, if only to get the bugs out of the system, but that it doesn’t replace real-world trials. He said that Dyson spent 70 to 80 per cent of development time testing the product. During the 75,000 in-home trials, the team had to contend with people covering the robot to hide it, as well as pets and children running through the scene and confusing the robot.

Image quality was a challenge, Aldred added, saying the team spent a long time looking at exposure control to handle both light and dark rooms. Part of the solution was to slow the robot down in dark rooms, where longer exposures are needed, to avoid motion blur.
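
The trade-off Aldred describes can be illustrated with a back-of-the-envelope calculation: for a given exposure time, the robot’s speed has to be capped so that scene features smear by less than about a pixel on the sensor. The numbers and the pixels-per-metre mapping below are made up for illustration and are not Dyson’s figures.

```python
def max_speed_for_sharp_image(exposure_s, pixels_per_metre, max_blur_px=1.0):
    """Fastest robot speed (m/s) at which a feature smears by no more than
    max_blur_px pixels during one exposure.

    pixels_per_metre is a crude mapping from robot motion to image motion;
    in reality it depends on scene depth and the optics.
    """
    return max_blur_px / (pixels_per_metre * exposure_s)

# A bright room might allow a 5 ms exposure...
print(max_speed_for_sharp_image(0.005, pixels_per_metre=200))   # 1.0 m/s
# ...while a dark room needing 40 ms forces the robot to move 8x slower.
print(max_speed_for_sharp_image(0.040, pixels_per_metre=200))   # 0.125 m/s
```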

Aldred said that Dyson has plans to increase the functionality of the 360 Eye. This includes object recognition, to distinguish between a ball of fluff and a wedding ring, for example. ‘We’re looking at everything from some of the neural network solutions to the more basic algorithms, but there is a lot of complexity in object recognition,’ he commented.

There is also work on contextual understanding; if the machine knows whether it’s in a kitchen or a bedroom, it can change its behaviour accordingly. In addition, the team eventually want the robot to interact with its environment – to pick up or move objects as it vacuums around the room. Vision is a key enabler for that, Aldred concluded.
