Bosch to cut engineering cost with AI

Wolfgang Pomrehn, collaborative robotics product manager at Bosch Rexroth, says deep learning could reduce the engineering effort of introducing new products to a manufacturing line by three quarters

Bosch is building an image database of typical parts made by the firm in order to train neural networks. The idea is to use artificial intelligence to lower the engineering effort needed to run inspection systems.

Bosch has about 270 production plants. The company manufactures many different parts, some of which have similar features. In the automotive sector, for example, Bosch makes a lot of parts used in combustion engines that look alike. We’re trying to create a database of images of those parts, so that if a new component is introduced that falls into a certain product category with slight variations, you immediately have a quality inspection system able to recognise the defects usually found on this type of part.
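
As a rough illustration of how such a database could cut per-variant engineering effort – not Bosch’s actual implementation – the sketch below pre-trains a classifier on a category-wide image set and then retrains only the final layer for a new variant. All paths, class folders and hyperparameters are hypothetical.

```python
# Hypothetical sketch: pre-train a defect classifier on a category-wide parts
# database, then adapt it to a new variant by retraining only the final layer.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Image database of existing parts in the category, organised as one folder
# per label (e.g. 'ok', 'scratch', 'missing_oring') - paths are illustrative.
base_data = datasets.ImageFolder("parts_db/combustion_sensors", transform=preprocess)
base_loader = DataLoader(base_data, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(base_data.classes))

criterion = nn.CrossEntropyLoss()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)

# Pre-train on the many similar-looking parts already in the database.
for epoch in range(5):
    for images, labels in base_loader:
        optimiser.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimiser.step()

# For a new variant, freeze the backbone and retrain only the classifier head
# on a small image set of the new part - this is where engineering effort and
# data collection are saved. The training loop is the same as above, with the
# optimiser built over model.fc.parameters() only.
new_variant = datasets.ImageFolder("parts_db/new_sensor_variant", transform=preprocess)
for param in model.parameters():
    param.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, len(new_variant.classes))
```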

A sensor element designed to measure the amount of oxygen in the air drawn into an engine, for example, has a complex form consisting of four or five components assembled together – a complicated surface made up of metal and plastic parts. Inspecting this sensor completely would mean checking 20 different characteristics, which might need a set of 10 cameras with 20 different illumination settings. Integrating this into a running production line involves implementation costs of between €50,000 and €250,000, depending on the complexity. Engineering accounts for between €25,000 and €150,000 of this. We plan to reduce the engineering cost to a quarter of this figure by using deep learning, which would bring the range down to roughly €7,000 to €40,000.
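
As a quick check on the figures quoted, the snippet below works through the arithmetic behind the ‘quarter of the engineering cost’ target; the values are taken directly from the ranges above.

```python
# The figures quoted above, worked through: engineering is roughly half of the
# €50,000-€250,000 implementation cost, and the target is a quarter of that
# engineering share.
engineering_low, engineering_high = 25_000, 150_000   # € per inspection system

reduced_low, reduced_high = engineering_low / 4, engineering_high / 4
print(reduced_low, reduced_high)   # 6250.0 37500.0, quoted as roughly €7,000-€40,000
```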

An inspection system looking at the quality of QR codes and printed text, for example, should have quality criteria built into the analysis, rather than having to tune a code reader to find defects in dot codes for each batch. Bosch is collecting image data to train inspection systems to do this, along with data for tasks such as text reading and for defects such as scratches on metal, bubbles in plastic and incorrectly mounted O-rings.
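
A minimal sketch of what quality criteria built into the analysis could look like, assuming OpenCV’s QR detector and a simple contrast measure as a stand-in for a formal grading standard such as ISO/IEC 15415; the threshold and file name are illustrative, not Bosch’s criteria.

```python
# Illustrative only: decode a printed code with OpenCV and apply a simple
# contrast criterion as a stand-in for a formal grading standard.
import cv2
import numpy as np

img = cv2.imread("printed_label.png", cv2.IMREAD_GRAYSCALE)

detector = cv2.QRCodeDetector()
data, points, _ = detector.detectAndDecode(img)

if not data:
    print("Code not readable - reject")
else:
    # Crop the detected code region and measure its contrast as a crude
    # proxy for print quality.
    corners = points.reshape(-1, 2).astype(np.int32)
    x, y, w, h = cv2.boundingRect(corners)
    region = img[y:y + h, x:x + w]
    contrast = (int(region.max()) - int(region.min())) / 255.0
    print("decoded:", data, "contrast:", round(contrast, 2))
    if contrast < 0.4:   # threshold chosen purely for illustration
        print("Readable but low print quality - flag for inspection")
```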

Bosch’s APAS assistant robot is a mobile imaging station being used to collect image data. Credit: Bosch Rexroth

New products appear in ever shorter timeframes, which means a lot of costly engineering to introduce them on existing production lines. Bosch’s effort to make machine vision systems flexible enough to cope with many different product variations will not only reduce the cost of re-engineering a machine vision system for each variant, but also improve the time-to-market of new products.

To get a robust system, we need many hundreds of terabytes of images – the more the better. A lot of Bosch systems are already equipped with cameras, so in one respect we can collect image data easily. On the other hand, deep learning algorithms need data from different scenarios – you can’t only look at the code itself, for example; you also need images of the entire part, so that the system can locate the code on a new part from previous knowledge and not only because the camera happens to be pointed at the right spot. A lot more views of the object are therefore needed, which involves additional effort to collect the data.
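
The sketch below illustrates that idea of locating the code on a whole-part image with a learned detector rather than a fixed camera position; the detector checkpoint, class count and file names are assumptions for illustration, not Bosch’s system.

```python
# Hypothetical sketch: find the code region on a whole-part image with a
# learned detector instead of relying on a fixed camera position.
import torch
from torchvision import models
from torchvision.io import read_image
from torchvision.transforms.functional import convert_image_dtype

# Detector assumed to have been fine-tuned on whole-part images in which the
# code region was annotated as class 1; the checkpoint name is made up.
model = models.detection.fasterrcnn_resnet50_fpn(weights=None, num_classes=2)
model.load_state_dict(torch.load("code_locator.pt"))
model.eval()

image = convert_image_dtype(read_image("whole_part_view.png"), torch.float)
with torch.no_grad():
    prediction = model([image])[0]

if len(prediction["boxes"]) > 0:
    # Keep the highest-scoring detection; the crop would then go to the
    # code reader rather than the whole image.
    best = prediction["scores"].argmax()
    x1, y1, x2, y2 = prediction["boxes"][best].int().tolist()
    code_crop = image[:, y1:y2, x1:x2]
```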

Another challenge is that in highly sophisticated production lines like those at Bosch, a defect might occur only once in a million parts. If you want to detect defects, you need data on those defects, but very few defective parts are produced. Without sufficient defect data, we’re focusing on another kind of deep learning algorithm, one able to find deviations from images of good parts. This is part of our research: collecting images of good parts and asking the vision system to detect defects as departures from them.
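
A minimal sketch of this ‘good parts only’ idea, using a small convolutional autoencoder whose reconstruction error flags deviations; the architecture, data layout and threshold logic are assumptions for illustration rather than Bosch’s research code.

```python
# Illustrative sketch: train an autoencoder to reconstruct defect-free images,
# then treat a high reconstruction error at inference time as a possible defect.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

preprocess = transforms.Compose([transforms.Grayscale(),
                                 transforms.Resize((128, 128)),
                                 transforms.ToTensor()])

# Folder of good-part images only, one subfolder per part type (hypothetical path).
good_parts = datasets.ImageFolder("images/good_parts", transform=preprocess)
loader = DataLoader(good_parts, batch_size=16, shuffle=True)

autoencoder = nn.Sequential(
    # encoder
    nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    # decoder
    nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
    nn.ConvTranspose2d(16, 1, 3, stride=2, padding=1, output_padding=1), nn.Sigmoid(),
)

optimiser = torch.optim.Adam(autoencoder.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Train on good parts only - no defect images are needed.
for epoch in range(10):
    for images, _ in loader:
        optimiser.zero_grad()
        loss = loss_fn(autoencoder(images), images)
        loss.backward()
        optimiser.step()

def reconstruction_error(image: torch.Tensor) -> float:
    """Mean squared error between a part image and its reconstruction."""
    with torch.no_grad():
        return loss_fn(autoencoder(image.unsqueeze(0)), image.unsqueeze(0)).item()

# A part whose error exceeds a threshold learned from good-part statistics
# would be flagged as a potential defect.
```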

We’re focusing on getting the images, labelling them, and optimising the system by widening the area of inspection of the parts – different parts need different kinds of imaging. The data is collected by a mobile imaging station consisting of a robot that presents the part to a set of cameras. Labelling the images is done offline, as is training the AI system. There are also prototype implementations of the AI system running in practice on production lines.
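
One way to picture the offline labelling step is a simple per-image manifest like the sketch below; the field names and values are illustrative assumptions, not Bosch’s data format.

```python
# Hypothetical labelling record for images captured by the mobile imaging
# station: one entry per image, with the part, view and any defect labels
# applied offline afterwards.
import json
from dataclasses import dataclass, asdict

@dataclass
class LabelledImage:
    image_path: str
    part_number: str
    camera_id: int
    illumination: str
    defect_labels: list   # empty list means a good part

records = [
    LabelledImage("captures/0001.png", "sensor-A", camera_id=3,
                  illumination="dome", defect_labels=[]),
    LabelledImage("captures/0002.png", "sensor-A", camera_id=7,
                  illumination="low-angle", defect_labels=["scratch"]),
]

# Written once offline; training jobs later read the manifest rather than
# touching the production line.
with open("labels.jsonl", "w") as f:
    for record in records:
        f.write(json.dumps(asdict(record)) + "\n")
```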

This work is part of an internal Bosch research project based on AI and deep learning. Our group is focused on quality inspection and how to implement these algorithms in practice. We’re not at a point where we can use deep learning algorithms completely, but we are already applying part of this knowledge to complicated inspections in practice.

Interview by Greg Blackman

Are you working with imaging and neural networks in your manufacturing plant? Would you like to share your experience of using AI with the readers of Imaging and Machine Vision Europe? Email: greg.blackman@europascience.com
