Extending reach of robotics

In recent years, a growing number of companies around the world have begun to realise the merits of using machine and robotic vision technology for bin picking tasks. According to Allan Anderson, managing director of Clearview Imaging and chairman of the UK Industrial Vision Association, many UKIVA members are currently involved in bin picking applications using 3D vision technology. Applications are many and varied, ranging from packing veterinary catheters and transferring delicate bags of mozzarella into secondary packaging, to picking engine blocks in the automotive industry.

As Anderson explained, the technology used for random bin picking typically relies either on a pair of cameras (stereo vision) or on 3D scanners that combine high-resolution cameras with pattern projection. The output is a point cloud and depth map, with 3D matching algorithms then used to locate parts and determine their pose.
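To make that last step concrete, here is a minimal sketch of CAD-model-to-scan matching using the open-source Open3D library; the file names, millimetre units and the assumption of a rough initial alignment are illustrative, and this is not code from any of the vendors mentioned.

```python
import numpy as np
import open3d as o3d

# Load a point cloud sampled from the part's CAD model and the scan of the bin
# (hypothetical file names).
model = o3d.io.read_point_cloud("part_model.ply")
scene = o3d.io.read_point_cloud("bin_scan.ply")

# Downsample both clouds to speed up matching (units assumed to be millimetres).
model_ds = model.voxel_down_sample(voxel_size=2.0)
scene_ds = scene.voxel_down_sample(voxel_size=2.0)

# Refine the part pose with point-to-point ICP, assuming a rough initial guess
# (identity here; in practice this would come from a global matching step).
result = o3d.pipelines.registration.registration_icp(
    model_ds, scene_ds,
    max_correspondence_distance=5.0,
    init=np.eye(4),
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint(),
)

# The result is a 4x4 homogeneous transform: the 6-DoF pose of the part in the
# scanner's coordinate frame, plus a fitness score for how well it matched.
print(result.transformation)
print("fitness:", result.fitness)
```

A production system would add global registration, multiple part candidates and outlier rejection, but the principle is the same: match a known model into the measured point cloud to recover a 6-DoF pose.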

One of the leading companies providing machine and robot vision technology for bin picking is Slovakia-based Photoneo, which manufactures two types of 3D scanning device: the PhoXi 3D scanner family for scanning static objects and the MotionCam-3D for scanning moving objects. Both devices use structured light – laser pattern projection – while the MotionCam-3D also features an innovative method of sensing moving objects in 3D, which the company calls parallel structured light. To use structured-light projection on moving objects, Photoneo developed a custom CMOS image sensor.

Branislav Pulis, VP of sales and marketing at Photoneo, said that the company is focused on smart robot handling applications, including bin picking. In addition to providing a wide range of scanning devices, the firm offers a software package for robot integrators which, Pulis said, enables them to install bin picking without any knowledge of 3D vision. The Bin Picking Studio includes all major bin picking components, such as robot-camera calibration, CAD-based object localisation, path planner for many robotic brands and collision avoidance systems.
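For readers unfamiliar with how those components fit together, the outline below sketches one bin-picking cycle; the function names are invented stand-ins for the stages Pulis lists and do not reflect Photoneo's actual software interface.

```python
# Hypothetical outline of one bin-picking cycle. Every function is a placeholder
# standing in for a component named above; the returned values are dummies.

def calibrate_robot_camera():
    """Robot-camera calibration: relates scanner coordinates to robot coordinates."""
    return "T_base_camera"

def localise_part(scan, cad_model):
    """CAD-based object localisation: find a pickable part and its 6-DoF pose."""
    return "T_camera_part"

def plan_path(part_pose, calibration, bin_model):
    """Brand-specific path planning with collision avoidance against the bin."""
    return ["approach", "grasp", "retract"]

calibration = calibrate_robot_camera()                  # performed once, offline
scan = "point cloud from the 3D scanner"                # acquired each cycle
part_pose = localise_part(scan, cad_model="part.step")
path = plan_path(part_pose, calibration, bin_model="bin.step")
print(path)  # waypoints that would be sent to the robot controller
```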

The company’s robot vision technology is currently widely exploited in bin picking applications, with the firm also working as an integrator of such solutions in Central Europe and in Germany, Austria and Switzerland. ‘Predominantly, our systems are picking various parts in automotive production, from metallic A-pillars to plastic buttons,’ Pulis said. ‘In order to get hands-on experience, we participate in many bin picking installations, and all the gained insight is then transferred to our product.’

Pulis stated that hundreds of PhoXi 3D scanners are deployed in various automation applications, with bin picking one of the main ones. In Slovakia, the company’s technology is employed to pick automotive parts from eight boxes. ‘There are five layers of boxes and operators take one when it is empty. The ABB IRB1600 robot is used for picking small metal parts in a six-second cycle time. Considering that an operator takes boxes away manually, we also had to define a dynamic collision environment,’ Pulis explained. In his view, 3D vision will be a new cornerstone of the next industrial revolution, one to which Photoneo will actively contribute by providing what he described as eyes and a brain for robots.

‘We are changing the capabilities of robots: from machines performing predefined trajectories, to robots that are flexible and adaptive,’ he said.

Photoneo is working on a new visual cortex for robots, Pulis said, so that the robot will be able to understand its task and where it is in its environment.

Artificial intelligence

Elsewhere, German start-up Robominds has developed a vision system that combines a 3D camera and a powerful industrial processor with its Robobrain.Vision software based on artificial intelligence. Mark Stevens, head of marketing and business development at Robominds, explained that, when operating in tandem with Robobrain.Vision, robots are capable of grasping random, cluttered items, without the need to scan or teach the robot in advance.

‘Applications with overlapping objects and varying surfaces and geometries can also be managed using the vision system,’ he said.

Since its establishment, Robominds has focused on applications in the logistics sector. One example Stevens cited is a solution based on a Universal Robots UR5 robot working with Robobrain.Vision software for a mid-sized logistics company. ‘The application previously necessitated manual picking with a lot of down-time,’ he said.

Although he believes that collaborative robotics has come a long way in making the automation process easier and more cost-efficient, Stevens said that challenges still exist, and that projects still require consultation and specialist know-how to execute the integration task. There are also challenges relating to the development of the ideal gripper.

‘We have solved many of the vision challenges,’ Stevens said. ‘But an ideally optimised gripper is also essential in ensuring that the items can be picked. We continue to experiment regularly with suction and parallel grippers with hundreds of different items.’

Based on Photoneo’s experience to date, Pulis agreed that one challenge currently facing the sector is to design the right gripper for handling objects. ‘Very often our customers think that the vision part is the most complicated, but usually 3D vision is not an issue. What is more challenging is a good design of the gripper. That is why we [Photoneo] also offer consultations for the gripper design and the entire application set-up,’ he said.

Endless possibilities

‘The emergence of 3D scanning technologies has provided robotic systems with the ability to identify and locate objects based on shape, enabling the reliable detection of objects with low contrast or complex geometries, especially in poor lighting conditions,’ commented Christian Benderoth, regional development manager for EMEA at LMI Technologies. The Vancouver, Canada-based outfit manufactures 3D machine vision sensors using laser triangulation and structured light or fringe projection.

‘When a robot can see discrete objects in 3D at production speed, it allows the vision-guided robot system to perform its task without the need for custom tooling,’ Benderoth added.

Generally speaking, Benderoth said, this means that generic bins, racks and conveyor systems can be used, with the robot adjusting dynamically to any variation in size or location. A 3D smart sensor’s built-in measurement tools also enable a robot to detect and manipulate objects of different geometries and sizes, contrasts and colours, and even touching and overlapping items.

As a result, he believes that robot work cells are smarter and can support different products, short runs and quick changeovers.

Benderoth observed that the most important challenge in this application area is definitely random bin picking.

‘In the past, random bin picking was separated into stages. Parts were first isolated, then detected and retrieved, with dedicated systems for each stage,’ he said.

‘Now, 3D smart sensor technologies allow a robot to be self-aware and therefore react to its environment, dramatically speeding up the bin picking process.

‘Robot self-awareness gives the sensor knowledge of kinematics, tool position, how to engage with the part, where the bin walls are located in relation to the sensor, and where the part is with six degrees of freedom. The most advanced 3D sensors can calculate all of these features and control robot motion directly,’ he added.
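As a minimal numerical illustration of that description, the sketch below chains a part's 6-DoF pose reported in the sensor frame through a robot-camera calibration into the robot base frame, then makes a crude clearance check against the bin walls; all matrices and dimensions are invented for the example rather than taken from any particular sensor.

```python
import numpy as np

# 6-DoF part pose in the sensor frame (4x4 homogeneous transform):
# a part 0.45 m in front of the camera, rotated 30 degrees about the optical axis.
theta = np.deg2rad(30)
T_cam_part = np.array([
    [np.cos(theta), -np.sin(theta), 0.0,  0.02],
    [np.sin(theta),  np.cos(theta), 0.0, -0.05],
    [0.0,            0.0,           1.0,  0.45],
    [0.0,            0.0,           0.0,  1.00],
])

# Sensor pose in the robot base frame, from robot-camera (hand-eye) calibration:
# here the camera sits 0.8 m above the base, looking straight down (made-up value).
T_base_cam = np.array([
    [1.0,  0.0,  0.0, 0.40],
    [0.0, -1.0,  0.0, 0.00],
    [0.0,  0.0, -1.0, 0.80],
    [0.0,  0.0,  0.0, 1.00],
])

# Chain the transforms to express the part pose in robot base coordinates.
T_base_part = T_base_cam @ T_cam_part
pick_position = T_base_part[:3, 3]

# Crude check that the pick point keeps clear of the bin walls (axis-aligned box
# in base coordinates, dimensions invented). A real planner would also check the
# gripper and robot links along the entire approach path.
bin_min = np.array([0.20, -0.25, 0.05])
bin_max = np.array([0.60,  0.25, 0.45])
clearance = 0.03  # metres
safe = bool(np.all(pick_position > bin_min + clearance) and
            np.all(pick_position < bin_max - clearance))

print("Pick position in base frame:", np.round(pick_position, 3))
print("Clear of bin walls:", safe)
```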

LMI has released the Gocator URCap plugin for mapping its Gocator snapshot sensors directly to a Universal Robots robot. ‘This complete robotic solution is capable of a wide variety of tasks, including pick-and-place to stack and unstack, similar to a palletising or de-palletising application, as well as random placement and picking from a conveyor and placing into structured bins according to the height of parts,’ Benderoth explained. ‘It also uses 3D information to put them in the appropriate bin, and set them at the appropriate clock angle using a part matching algorithm inside Gocator.’
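A toy version of that sorting logic is sketched below, assuming the matching step reports a height and a clock angle for each part; the thresholds and function names are invented and do not represent the Gocator interface.

```python
# Hypothetical height thresholds (mm): parts are routed to a bin by measured height.
BIN_EDGES_MM = [10.0, 20.0, 30.0]

def choose_bin(height_mm: float) -> int:
    """Return the index of the destination bin for a part of the given height."""
    for index, edge in enumerate(BIN_EDGES_MM):
        if height_mm <= edge:
            return index
    return len(BIN_EDGES_MM)  # anything taller goes to the last bin

def correction_angle(measured_deg: float, target_deg: float = 0.0) -> float:
    """Rotation the robot must apply so the part lands at the target clock angle."""
    return (target_deg - measured_deg) % 360.0

# Example: a 17 mm-tall part matched at a 130-degree clock angle.
print(choose_bin(17.0))          # -> 1 (second bin)
print(correction_angle(130.0))   # -> 230.0 degrees (equivalently -130 degrees)
```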

As vision-guided robotic systems become more reliable and easier to use, Benderoth predicted that such solutions will become an increasingly cost-effective replacement for manual inspection, sorting and assembly. He pointed out that many end-users are now finding the increased productivity and cost savings are substantial enough to warrant complete system upgrades.

‘As demand for mass production grows globally, fast, flexible, error-proof assembly is the goal of many manufacturers. Applications where new product models are introduced frequently, where production runs are shorter, or where changeover is more common, will benefit the most from advanced vision-guided robot systems,’ Benderoth said.

‘We expect many new markets to emerge for vision-guided robots,’ he continued. ‘From agricultural applications with robots working in the fields, harvesting, feeding, weeding and transporting produce and grain, to collaborative scenarios where robots are working alongside humans in manufacturing plants, or in logistics and packaging operations – the possibilities are endless.’


