Robot bin picking has been worked on for many years, and while it has long been shown to be possible, it is only now that the technology is coming to fruition. Greg Blackman looks at what was on display at Automatica
Visitors to Automatica in Munich this week were presented with numerous demonstrations of robot bin picking in action, a task that is only now starting to be deployed in factories.
Dr Olaf Munkelt, managing director of MVTec, drew attention to the progress made in bin picking during a panel discussion at the show. Four years ago, he said, bin picking wasn't feasible; it was too complex and needed too many sensors. Now, judging by conversations with exhibitors at the show, the problem has been solved to some extent.
Both Kuka and Fanuc have built their own vision sensors for bin picking and pick-and-place applications, Kuka with its 3D Perception sensor and Fanuc with its iRVision range of sensors.
Speaking to Imaging and Machine Vision Europe at Automatica, Sirko Prüfer, product manager for vision, perception and sensitivity at Kuka, noted that only four per cent of Kuka’s small robots – those with a payload of 1-10kg – are equipped with vision; Prüfer wants to increase this to 20 per cent.
He said that vision has grown in importance since Kuka launched its first vision solution at Automatica in 2010, and that there is a demand for greater flexibility in robotics. In China, Prüfer noted, a lot of industrial robot applications are only semi-automated where they could be fully automated by incorporating vision.
Vision has to be easier to set up and use to improve its uptake, Prüfer said. The advantage with Kuka’s solution is that it simplifies the engineering effort needed to install vision in a robot cell, because the sensor has been built specifically for Kuka robots.
Kuka’s 3D Perception sensor is a stereo camera system. It has a resolution of 1,280 x 960 pixels, operates at a frame rate of 200Hz, and has an accuracy of 200mm at a focal length of 65mm. It doesn’t use structured illumination, relying on ambient light alone. Depth images are calculated directly in the sensor by an onboard Nvidia Tegra processor.
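A stereo system like this recovers depth by triangulation: matching features between the left and right images gives a per-pixel disparity, which relates to distance through the focal length and the camera baseline. The sketch below shows that conversion; the focal length, baseline and disparity values are illustrative assumptions, not Kuka's actual calibration.

```python
import numpy as np

def disparity_to_depth(disparity_px, focal_px, baseline_m):
    """Convert a stereo disparity map (pixels) to a depth map (metres).

    Standard pinhole stereo triangulation: Z = f * B / d, where f is the
    focal length in pixels and B the baseline between the two cameras.
    """
    disparity_px = np.asarray(disparity_px, dtype=float)
    depth = np.full_like(disparity_px, np.inf)
    valid = disparity_px > 0  # zero disparity means no match / point at infinity
    depth[valid] = focal_px * baseline_m / disparity_px[valid]
    return depth

# Illustrative example: with a 1,000-pixel focal length and a 6.5cm baseline,
# a 65-pixel disparity corresponds to a point 1 metre from the camera.
d = np.array([[65.0, 130.0],
              [0.0, 32.5]])
print(disparity_to_depth(d, focal_px=1000.0, baseline_m=0.065))
```

On the real sensor this computation runs per pixel on the embedded Tegra, which is what allows depth images to be produced in the sensor itself rather than on an external PC.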
Prüfer put the cost of the vision solution – the sensor, the software, the cabling, etc – at less than €5,000, excluding the robot.
Kuka is positioning the sensor for the logistics market, such as in automated pharmaceutical distribution warehouses. Here, the camera doesn’t have to be extremely accurate, as the robot would typically use a vacuum gripper, which only needs to be able to latch onto a reasonably flat surface.
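For a vacuum gripper, the perception task reduces to finding a patch of surface flat enough for the suction cup to seal against. A minimal sketch of such a flatness check is below, assuming the candidate patch is available as an N x 3 array of points from the depth image; the 2mm tolerance is an illustrative figure, not a Kuka specification.

```python
import numpy as np

def patch_is_flat(points, tol_m=0.002):
    """Check whether a candidate suction patch is flat enough to grip.

    Fits a least-squares plane to the points (via SVD: the singular vector
    with the smallest singular value is the plane normal) and accepts the
    patch if the RMS residual from that plane is below tol_m.
    """
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    normal = vt[-1]                       # direction of least variance
    residuals = (pts - centroid) @ normal # signed distance of each point to the plane
    return float(np.sqrt((residuals ** 2).mean())) < tol_m
```

A production system would also check the patch size against the cup diameter and the surface normal against the robot's approach direction, but the plane fit captures the core idea.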
Kuka's 3D Perception sensor creating a depth image
Michael Keller, an application engineer at Fanuc, said that around 20 per cent of Fanuc's robots in Germany use vision, and the figure could be as high as 30 per cent in Japan. Fanuc’s iRVision offers 2D, 3D laser scanner, and 3D area sensors, among other types. Scanning rates can reach 60fps, resolutions can be up to 1,280 x 1,024 pixels, and up to 16 cameras can be connected to one robot controller. Fanuc’s software package also supports more than 20 different vision processing functions.
On Fanuc’s bin picking demo at Automatica, the vision sensor was attached to the robot arm. This has the advantage of requiring only a small field of view, as the camera is positioned just above the bin before each image is taken; setups with a camera mounted statically above the bin, in contrast, have to cover the entire bin in a single, larger field of view.
The downside of mounting the camera on the robot arm is that the sensor cable has to be routed along the arm, and the robot has to stop to take each image.
Isra Vision launched two bin picking sensors at the show, both new additions to its IntelliPick3D product range. The IntelliPick3D-Pro uses laser illumination to capture detailed images under varying lighting conditions; it can detect parts from 15mm up to 2,000mm in size. The PowerPick3D sensor, meanwhile, uses four integrated cameras to capture a depth image from multiple angles, avoiding occlusions. It uses structured light and captures an image every 500ms.
Imaging company Photoneo was showing an early version of its new 3D camera, based on what it calls a mosaic shutter CMOS sensor. The camera has a resolution of 0.6 megapixels and can capture a 3D point cloud at 30fps, making it one of the fastest on the market. The company supplies its PhoXi 3D scanners for bin picking applications; the largest version can scan a 2 x 2-metre area.
Both Isra Vision’s and Photoneo’s imaging solutions for bin picking rely on CAD models of the part and the gripper to pick random components from a bin successfully. The application has to be modelled to be reliable; picking then becomes a case of matching the pose of an observed object to the model of the part so that the robot arm can grasp it.
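A toy sketch of that matching step is below, assuming both the CAD model and the scene are available as numpy point clouds. It scores candidate poses by the mean nearest-neighbour distance from the transformed model to the scene and keeps the best one; real systems search the full 6-DoF pose space with smarter strategies (feature matching, ICP refinement, k-d trees) rather than this brute-force grid.

```python
import numpy as np

def rot_z(theta):
    """Rotation about the z-axis; a real matcher searches full 6-DoF poses."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def pose_score(model_pts, scene_pts, R, t):
    """Mean nearest-neighbour distance from the transformed model to the scene."""
    transformed = model_pts @ R.T + t
    # Brute-force nearest neighbour: fine for a sketch, too slow in production.
    dists = np.linalg.norm(transformed[:, None, :] - scene_pts[None, :, :], axis=2)
    return dists.min(axis=1).mean()

def best_pose(model_pts, scene_pts, angles, translations):
    """Exhaustive search over a coarse pose grid; returns (R, t, score)."""
    best = (None, None, np.inf)
    for theta in angles:
        R = rot_z(theta)
        for t in translations:
            s = pose_score(model_pts, scene_pts, R, np.asarray(t, dtype=float))
            if s < best[2]:
                best = (R, t, s)
    return best
```

Once the best-scoring pose is found, the known gripper model is used to plan a collision-free grasp on the part in that pose.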
Speaking in a panel discussion at Automatica on the use of artificial intelligence, Martin Hägele, head of the robotics department at Fraunhofer IPA, said AI could make bin picking more reliable. Dr Ulrich Eberl of Siemens pointed to studies by Google that taught robots to grip unknown objects using AI. The robot required 800,000 attempts to learn the task, but the approach potentially removes the need for CAD models. And once one robot has learnt how to grip a part, it can pass that knowledge on to other robots through cloud computing.
While AI is still in its infancy, giving robots the power to pick up unknown objects could be the next stage of random bin picking.
Image credit: Messe München