Time for tea: robot pours perfect cuppa

Researchers in the Department of Perception and Cognition at the German Aerospace Centre's (DLR) Institute of Robotics and Mechatronics have been developing a robot designed to overcome uncertainty. One of the tasks 'Rollin' Justin' is being programmed to complete is making a standard cup of tea or coffee, a job that turns out to be surprisingly complex.

Rollin' Justin is a pan-European study supported by EUnited Robotics, the European Robotics Association. The interdisciplinary team of mechanical and electrical engineers and software developers has been working on the robot for two decades. Dr Michael Suppa, head of the department at the DLR, commented: 'What began as a mechatronics development has progressed to one almost entirely focused on the software that will deliver genuine perception and manipulation skills to robots.

'Looking back, the lightweight robotic arms, the robotic hands, even the stereo vision system and the mobile base, were all relatively straightforward to put together. Now, almost all our research is about software, so that Justin can bring its perception into play in a dynamic environment and deal with change. It must be able to assess risk and decide what to do next by using its sensory system to validate data. This is why we've set it the tea and coffee making challenge. Currently, it is being programmed to follow a set series of tasks to make those drinks, yet we are challenging it to make them in much the same way you or I would.'
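
The 'set series of tasks' approach Dr Suppa describes can be pictured as a fixed sequence in which each step is only accepted once a sensor reading confirms it actually succeeded. The Python sketch below is purely illustrative; every step name and sensor check in it is an invented assumption, not part of DLR's real task software.

```python
# Hypothetical sketch: a fixed drink-making sequence in which each step is
# validated against a sensor observation before the next step is attempted.
TEA_SEQUENCE = [
    ("locate_mug",  lambda obs: obs.get("mug_visible", False)),
    ("grasp_mug",   lambda obs: obs.get("grip_force_ok", False)),
    ("pour_water",  lambda obs: obs.get("water_level_ok", False)),
    ("add_tea_bag", lambda obs: obs.get("tea_bag_in_mug", False)),
]

def run_sequence(sequence, execute_step):
    for name, check in sequence:
        observation = execute_step(name)   # act, then read the sensors
        if not check(observation):
            raise RuntimeError(f"Step '{name}' failed sensor validation")
        print(f"step '{name}' validated")

# Dummy executor that pretends every step succeeds.
def fake_execute(name):
    return {"mug_visible": True, "grip_force_ok": True,
            "water_level_ok": True, "tea_bag_in_mug": True}

run_sequence(TEA_SEQUENCE, fake_execute)
```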

The first thing Justin must do is interpret the objects it can see with its stereo camera system and understand their significance to the task. Here Dr Suppa is working with researchers across Europe to further develop Justin's abilities. Dr Andrew Davison at Imperial College London is co-operating with DLR's researchers in the sphere of navigation. 'Once the robot starts moving and the scene is updated, the map it builds just gets better and better, but it is very complicated and very hungry for graphics card processing power,' said Dr Davison.
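
Dr Davison's point that the map 'just gets better and better' as the robot keeps moving can be illustrated with a minimal landmark-fusion sketch: each repeated sighting of an object is averaged into its estimate, shrinking the uncertainty. This toy Python example shows the general idea only; the class and parameter names are assumptions for illustration, and the real DLR and Imperial systems are far more sophisticated (and GPU-hungry).

```python
import numpy as np

# Each landmark keeps a position estimate and a scalar uncertainty (variance).
# Every new stereo observation is fused with the current estimate, so the map
# improves as the robot moves around and re-observes the same objects.
class LandmarkMap:
    def __init__(self, initial_variance=1.0, measurement_variance=0.05):
        self.landmarks = {}                      # id -> (mean xyz, variance)
        self.initial_variance = initial_variance
        self.measurement_variance = measurement_variance

    def observe(self, landmark_id, measured_xyz):
        z = np.asarray(measured_xyz, dtype=float)
        if landmark_id not in self.landmarks:
            self.landmarks[landmark_id] = (z, self.initial_variance)
            return
        mean, var = self.landmarks[landmark_id]
        # Kalman-style update: weight the new measurement by how uncertain
        # the current estimate is.
        gain = var / (var + self.measurement_variance)
        new_mean = mean + gain * (z - mean)
        new_var = (1.0 - gain) * var
        self.landmarks[landmark_id] = (new_mean, new_var)

# Usage: repeated, slightly noisy sightings of the same mug tighten its estimate.
np.random.seed(0)
true_mug = np.array([0.6, 0.1, 0.9])
m = LandmarkMap()
for _ in range(10):
    m.observe("mug", true_mug + np.random.normal(0, 0.05, 3))
mean, var = m.landmarks["mug"]
print("estimated mug position:", mean.round(3), "remaining variance:", round(var, 4))
```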

Dr Suppa agreed: 'Once we made Justin mobile, it got a whole lot more complicated and we had to go back to the drawing board, algorithm-wise. We are constantly challenging Justin and making its life a little bit harder. First it had to find the objects required to make iced tea and understand which tasks to do in what order. Just to be really mean, we introduced transparent mugs, which flummoxed it for a while. Then we introduced a filter coffee machine and it had to use its sensors to work out how hard to push. We just keep introducing more and more variants and making our scenario more complicated as a result.'
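
The coffee-machine example, where Justin has to feel how hard to push, comes down to acting in small increments and reading the force sensors after each one rather than commanding a fixed displacement. Below is a hedged Python sketch of that pattern with a simulated button; the function names, thresholds and the sensor model are all assumptions for illustration, not DLR code.

```python
# Force-guided pushing: advance in small steps and stop once the fingertip
# force sensor reports that the button offers enough resistance.
def press_button(read_force, step_mm=0.5, target_force_n=4.0, max_travel_mm=20.0):
    travel = 0.0
    while travel < max_travel_mm:
        travel += step_mm
        force = read_force(travel)      # read the sensor after each small advance
        if force >= target_force_n:
            return travel, force        # stop: enough resistance felt
    raise RuntimeError("No button resistance felt within travel limit")

# Simulated coffee-machine button: free travel for 6 mm, then a stiff spring.
def simulated_force(travel_mm, free_travel=6.0, stiffness=1.2):
    return max(0.0, (travel_mm - free_travel) * stiffness)

travel, force = press_button(simulated_force)
print(f"stopped after {travel:.1f} mm at {force:.1f} N")
```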

Even when Justin knows what objects it is working with and what it should be doing with them, it still has to decide where to grip each object and how tightly. The sensors in its fingers allow it to make small adjustments as its initial touch turns into a grip. As research partner Dr Jeremy Wyatt of the University of Birmingham explained: 'We aren't as visually guided as you might imagine. Trivial tasks are hard. Humans are very good at estimating the weight of an object before they interact with it, so they can answer the question, "how will this object move if I apply these forces?" Bit by bit, Justin is using learning as a predictor of both the mass of an object and the forces it needs to apply over distance. The ultimate goal would be a physics engine that mimics the real world, something all the scientists working on its development aspire to. The complexity involved is still mind-numbing, but computer processing power has progressed so rapidly that this is now seen as an achievable goal.'
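
Dr Wyatt's 'learning as a predictor' of mass and force can be reduced, in its simplest possible form, to fitting F = m·a from a few exploratory pushes and then using the fitted mass to predict the force a desired motion needs. The Python sketch below does exactly that and nothing more; the numbers and function names are invented for illustration and stand in for the much richer learned models the researchers describe.

```python
# Estimate an object's mass from a few exploratory pushes, then predict the
# force needed for a desired motion. This is just F = m * a, nothing more.
def estimate_mass(trials):
    """trials: list of (applied_force_N, observed_acceleration_m_s2)."""
    # Least-squares fit of m through the origin: m = sum(F*a) / sum(a*a)
    num = sum(f * a for f, a in trials)
    den = sum(a * a for _, a in trials)
    return num / den

def predict_force(mass_kg, desired_acceleration_m_s2):
    return mass_kg * desired_acceleration_m_s2

# Three noisy exploratory pushes on a mug of unknown mass (about 0.3 kg).
trials = [(0.31, 1.00), (0.62, 2.05), (0.90, 3.10)]
m_hat = estimate_mass(trials)
print(f"estimated mass: {m_hat:.2f} kg,"
      f" force for 2 m/s^2: {predict_force(m_hat, 2.0):.2f} N")
```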
