Bosch to cut engineering cost with AI

Wolfgang Pomrehn, collaborative robotics product manager at Bosch Rexroth, says deep learning could reduce the engineering effort of introducing new products to a manufacturing line by three quarters

Bosch is building an image database of typical parts made by the firm in order to train neural networks. The idea is to use artificial intelligence to lower the engineering effort needed to run inspection systems.

Bosch has about 270 production plants. The company manufactures many different parts, some of which have similar features. In the automotive sector, for example, Bosch makes a lot of parts used in combustion engines that look alike. We’re trying to create a database of images of those parts, so that if a new component is introduced that falls in a certain product category with slight variations, you immediately have a quality inspection system able to recognise the defects usually found on these types of part.

A sensor element designed to measure the amount of oxygen in the air sucked into an engine, for example, will have a complex form consisting of four or five different components assembled together – it has a complicated surface made up of metal and plastic parts. Inspecting this sensor completely would mean checking 20 different characteristics, which would need a set of 10 cameras with 20 different illumination settings, for instance. Integrating this into a running production line involves implementation costs of between €50,000 and €250,000, depending on the complexity. Engineering would account for between €25,000 and €150,000 of this cost. We plan to use deep learning to cut the engineering cost to a quarter of this figure, which would put the range at roughly €7,000 to €40,000.
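
As a rough back-of-the-envelope check of those figures (a minimal sketch using only the euro values quoted above, not Bosch's costing model):

    # Engineering currently accounts for 25,000-150,000 euros of the
    # integration cost; deep learning is expected to cut that share to a quarter.
    low, high = 25_000, 150_000
    reduced = (low / 4, high / 4)
    print(reduced)  # (6250.0, 37500.0) -> roughly 7,000 to 40,000 euros, as quoted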

An inspection system looking at the quality of QR codes and printed text, for example, should have quality criteria built into the analysis, rather than having to tune a code reader to find defects in dot codes for each batch. Bosch is collecting image data to train inspection systems to do this, along with other inspection tasks such as text reading and detecting defects like scratches on metal, bubbles in plastic, and incorrectly mounted O-rings.
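
As an illustration of what a built-in quality criterion could look like, the sketch below decodes a QR code with OpenCV's QRCodeDetector and applies a crude contrast check. It is a simplified stand-in for an industrial code reader, not Bosch's inspection system; the function name and threshold are assumptions.

    # Illustrative sketch only: decode a QR code and apply a simple
    # print-quality proxy (contrast in the code region). The function name
    # and the min_contrast threshold are assumptions, not Bosch criteria.
    import cv2
    import numpy as np

    def check_code_quality(image_path, min_contrast=40.0):
        img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
        if img is None:
            raise FileNotFoundError(image_path)
        data, points, _ = cv2.QRCodeDetector().detectAndDecode(img)
        if points is None or not data:
            return {"decoded": False}
        # Bounding box of the detected code, used to measure local contrast.
        x, y, w, h = cv2.boundingRect(points.reshape(-1, 2).astype(np.int32))
        roi = img[y:y + h, x:x + w]
        contrast = float(roi.max()) - float(roi.min())
        return {"decoded": True, "data": data,
                "contrast_ok": contrast >= min_contrast}

    # Example usage:
    # print(check_code_quality("part_with_qr_code.png"))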

Bosch’s APAS assistant robot is a mobile imaging station being used to collect image data. Credit: Bosch Rexroth

New products appear at ever shorter intervals, and introducing each one on an existing production line involves a lot of costly engineering. Bosch’s effort to make machine vision systems flexible enough to cope with many different product variations will not only reduce the cost of re-engineering a machine vision system for each variant, but also improve the time-to-market of new products.

To get a robust system, we need many hundreds of terabytes of images – the more the better. A lot of Bosch systems are already equipped with cameras, so in one respect we can collect image data easily. On the other hand, deep learning algorithms need data from different scenarios – you can’t only look at the data code itself, for example; you also need images of the entire part so that the system can locate the code on a new part from previous knowledge, and not only because the camera has been focused on the right spot. Therefore, many more views of the object are needed, which involves additional effort in collecting the data.

Another challenge is that in highly sophisticated production lines like those found at Bosch, a defect might occur only once in a million parts. If you want to detect defects, you need data on those defects, but very few defective parts are produced. Without sufficient defect data, we’re focusing on another kind of deep learning algorithm that is able to find deviations from images of the good parts. This is part of our research: collecting images of good parts and asking the vision system to detect defects.
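
A common way to implement this good-parts-only approach is an autoencoder that learns to reconstruct defect-free images, so that defective parts stand out through high reconstruction error. The sketch below illustrates the idea in PyTorch; it is not Bosch's algorithm, and the image size, architecture, training loop and threshold rule are all assumptions.

    # Minimal anomaly-detection sketch: train a convolutional autoencoder on
    # images of good parts only, then flag parts whose reconstruction error
    # exceeds a threshold derived from the good-part data.
    import torch
    import torch.nn as nn

    class ConvAutoencoder(nn.Module):
        def __init__(self):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # 128 -> 64
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 64 -> 32
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            )
            self.decoder = nn.Sequential(
                nn.ConvTranspose2d(64, 32, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
                nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
                nn.ConvTranspose2d(16, 1, 3, stride=2, padding=1, output_padding=1), nn.Sigmoid(),
            )

        def forward(self, x):
            return self.decoder(self.encoder(x))

    def train(model, good_images, epochs=20, lr=1e-3):
        """good_images: tensor of shape (N, 1, 128, 128), values in [0, 1]."""
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        loss_fn = nn.MSELoss()
        model.train()
        for _ in range(epochs):
            opt.zero_grad()
            loss = loss_fn(model(good_images), good_images)
            loss.backward()
            opt.step()
        return model

    def reconstruction_error(model, images):
        """Per-image mean squared reconstruction error."""
        model.eval()
        with torch.no_grad():
            recon = model(images)
            return ((recon - images) ** 2).mean(dim=(1, 2, 3))

    if __name__ == "__main__":
        # Stand-in data: random "good part" images; in practice these would
        # come from the imaging station described in the article.
        good = torch.rand(32, 1, 128, 128)
        model = train(ConvAutoencoder(), good)
        errors = reconstruction_error(model, good)
        # Assumed decision rule: flag anything well above the good-part errors.
        threshold = errors.mean() + 3 * errors.std()
        print(f"defect threshold (assumed rule): {threshold.item():.6f}")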

We’re focusing on getting the images, labelling them, and optimising the system by widening the area of inspection of the parts – different parts need different kinds of imaging. The data is collected by a mobile imaging station consisting of a robot that presents the part to a set of cameras. Labelling the images is done offline, as is training the AI system. There are also prototype implementations of the AI system running in practice on production lines.
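
Because labelling and training happen offline while the prototypes run on the line, the trained model has to be packaged for line-side use. The sketch below shows one possible route using TorchScript; this is an assumed workflow rather than a description of Bosch's deployment, and the small model here is just a stand-in for an offline-trained inspection network such as the autoencoder above.

    # Deployment sketch: save an offline-trained model with TorchScript so a
    # line-side process can load and run it without the training code.
    import torch
    import torch.nn as nn

    # Placeholder for the offline-trained inspection model; any nn.Module
    # works the same way.
    model = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(8, 1, 3, padding=1), nn.Sigmoid())
    model.eval()

    # Offline step: trace and save the model after training.
    example = torch.rand(1, 1, 128, 128)
    traced = torch.jit.trace(model, example)
    traced.save("inspection_model.pt")

    # Line-side step: load the saved model and score an incoming frame.
    deployed = torch.jit.load("inspection_model.pt")
    with torch.no_grad():
        frame = torch.rand(1, 1, 128, 128)   # placeholder for a camera image
        recon = deployed(frame)
        error = ((recon - frame) ** 2).mean()
        print("reconstruction error:", error.item())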

This work is part of an internal Bosch research project on AI and deep learning. Our group is focused on quality inspection and how to implement these algorithms in practice. We’re not yet at a point where we can use deep learning algorithms completely, but we are already applying part of that knowledge to complicated inspections in practice.

Interview by Greg Blackman

Are you working with imaging and neural networks in your manufacturing plant? Would you like to share your experience of using AI with the readers of Imaging and Machine Vision Europe? Email: greg.blackman@europascience.com
