
Changing the face of machine vision?

What will tomorrow’s factories look like? One point of view from Jeff Bier, founder of the Embedded Vision Alliance, is that – whatever else happens – imaging will become ubiquitous in manufacturing. There will no longer be dedicated inspection stations – indeed cameras might not even be sold as separate pieces of hardware; rather vision will be a function of the production machines and will be found everywhere.

This might sound like both an encouraging and a scary proposition for the machine vision industry – and one that probably is still fairly distant – but with the cost of hardware coming down it is a very real possibility.

‘Vision has been primarily a technology used in factory settings, military applications and a few other areas. That’s now changing radically,’ Bier commented. ‘[Vision is] very versatile, which has always been true, but it’s been impractical to deploy because it’s been too expensive. That’s changed because of improvements in digital chips.’

The Raspberry Pi 3, which sells for $35, is often quoted as an inexpensive platform with plenty of processing power for computer vision. By tuning such a board specifically for vision and mass producing it for a volume application, Bier noted, the cost could be brought down further – to around $17 a unit. ‘Where can’t you put something that costs $17 a unit?’ he remarked.
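To give a rough sense of what ‘plenty of processing power for computer vision’ means on Raspberry Pi-class hardware, the sketch below runs a simple pass/fail edge check with OpenCV in Python. It is an illustration only, not something Bier described: the camera index, the edge-pixel threshold and the availability of OpenCV on the board are all assumptions.

```python
# Minimal sketch: a simple presence/edge check running on a low-cost board.
# Assumes OpenCV (cv2) is installed and a camera is available as device 0.
import cv2

EDGE_PIXEL_THRESHOLD = 5000  # assumed pass/fail threshold for this illustration

def inspect_frame(frame):
    """Return True if the frame contains enough edge structure to pass."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)   # suppress sensor noise
    edges = cv2.Canny(blurred, 50, 150)           # cheap edge detector
    return cv2.countNonZero(edges) > EDGE_PIXEL_THRESHOLD

def main():
    cap = cv2.VideoCapture(0)
    if not cap.isOpened():
        raise RuntimeError("No camera found on device 0")
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            print("PASS" if inspect_frame(frame) else "FAIL")
    finally:
        cap.release()

if __name__ == "__main__":
    main()
```

An unoptimised loop of this kind runs comfortably at modest resolutions on boards in this price class, which is the practical point behind the cost argument.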

‘This is the fundamental thing that’s changing,’ he continued, going on to compare vision technology with digital wireless, which has gone from being expensive and confined to a few areas 20 years ago to a technology that most people now carry around with them. ‘Just like with digital wireless, as [imaging] technology reduces in price, all kinds of new applications in warehouses and factories become feasible that weren’t before. So, not only do you have traditional machine vision systems that are doing inspection, you’re starting to see things like vision-guided robots that can work alongside humans on the factory floor.’

Amazon deploys robots guided by machine vision in its warehouses. Car manufacturers are also now building vision technology into vehicles. ‘Just 10 or 20 years ago, the idea that an automobile would have computer vision embedded in it was ridiculous. Today, the idea that within 10 years any new automobile won’t have computer vision in it is almost equally unthinkable, considering that more than a million people a year die in road traffic accidents, most of which are caused by driver error,’ Bier said. This proliferation of vision in the automotive sector is mainly down to advances in electronics and especially integrated circuits.

In the machine vision world, the G3 standardisation initiative, which develops and maintains industrial vision standards, has established a study group investigating possible standards for embedded systems, thereby recognising the importance of the system-on-chip (SoC) as a potential future platform for industrial vision.

Building embedded systems accounts for around half of Active Silicon’s revenue, according to Colin Pearce, managing director of the machine vision company – an area he expects to grow in the future.

‘You can now make a small system with enough processing power to do something useful with vision – and that doesn’t cost too much – which you couldn’t do five years ago,’ he commented. Active Silicon has developed embedded systems for medical devices in the field of vision-aided surgery and ophthalmology, both using multiple USB 3.0 cameras. ‘A recent medical embedded system we’ve developed, you just couldn’t do it five years ago – the computer would have been too large and consumed too much power. Whereas now you can get a pretty powerful system that can do something useful while using only 35W.’

The majority of the embedded systems that Active Silicon develops are based on the higher-end Intel mobile processors. The company works with a standard called COM Express, a mezzanine format used heavily in industrial embedded systems. It has a plug-in processor module, for which Active Silicon designs a custom carrier card with all the peripherals and connectors.

One reason why companies change to embedded technology is the lower hardware costs. ‘Industrial computers typically provide an overhead of hardware and functionality that is not necessary for [all] applications,’ explained Dr Thomas Rademacher, product manager at camera maker Basler. ‘However, a compact processing board – which is also available in industrial standards and with long-term availability – focuses more on what is really needed and saves money without making concessions on performance. Furthermore, the whole vision setup can be optimised for the application, much more than is possible with a standard vision system.’

Other advantages are that embedded boards are smaller, lighter and consume less energy. In addition, most embedded boards with an operating system (OS) are based on Arm processors using an open Linux version, which – according to Rademacher – can be the preferred environment for software development because of advantages in stability, speed and size of the OS.

The small size, weight and energy consumption opens up the potential to build new portable equipment, such as medical handheld devices. There are also traffic applications like automatic number plate recognition, or security and retail uses for capturing images of faces or movement. Code reading in industrial applications is another area where embedded technology can play a role – smart cameras can be thought of as self-contained embedded devices, carrying out the image processing onboard. Embedded systems can also be used to acquire images and run pre-processing algorithms, before sending the cleaned up data to a PC for more comprehensive analysis.
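As a rough illustration of that split between on-device pre-processing and PC-side analysis, the sketch below crops and denoises frames on the embedded board and streams the compressed result to a host over TCP. The host address, port and region of interest are assumptions made for the example, not details of any system mentioned in this article.

```python
# Sketch: pre-process images on an embedded board, then send them to a PC.
# Assumes OpenCV is available and a host PC is listening on HOST:PORT.
import socket
import struct
import cv2

HOST, PORT = "192.168.1.50", 5000   # assumed address of the analysis PC
ROI = (100, 100, 640, 480)          # assumed region of interest: x, y, w, h

def preprocess(frame):
    """Crop to the region of interest and remove noise before transmission."""
    x, y, w, h = ROI
    roi = frame[y:y + h, x:x + w]
    gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
    return cv2.medianBlur(gray, 3)

def main():
    cap = cv2.VideoCapture(0)
    with socket.create_connection((HOST, PORT)) as sock:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            cleaned = preprocess(frame)
            ok, buf = cv2.imencode(".png", cleaned)   # lossless and compact
            if not ok:
                continue
            data = buf.tobytes()
            # Length-prefixed framing so the PC knows where each image ends.
            sock.sendall(struct.pack(">I", len(data)) + data)
    cap.release()

if __name__ == "__main__":
    main()
```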

‘In the future, more vision work will be designed using system-on-chip processors, especially hybrid FPGA and Arm combinations,’ commented Patrick Schick, product manager at camera producer IDS Imaging Development Systems. The Xilinx Zynq series is one such SoC with an Arm processor and FPGA. ‘Nevertheless, in most cases, there will still be a machine vision computer carrying out some of the processing. It clearly depends on the application,’ he continued.

With more complex analysis, or with many images from different cameras, Schick feels there will be a PC or more powerful machine doing the processing.

Drivers for IDS’ cameras run on Arm processors supporting the Armv7 and Armv8 instruction sets. ‘Our goal is that all the software we develop has to run on all of these systems, independent of hardware,’ Schick said.

The downside of embedded boards, according to Schick, is that they lack bandwidth, which is particularly important for industrial vision. Also, ‘using a direct interface to embedded boards seldom helps, as here there are limitations in cable length,’ he added – cable runs of several metres, which industrial vision often requires, generally can’t be reached.

Pearce noted that designing a custom industrial embedded system of the kind that Active Silicon produces is generally not considered for factory automation, where such a system would not warrant the investment, although he added that intelligent cameras are now used extensively in factories.

Nevertheless, the trend for embedded processing is changing the way machine vision is engineered and applied. ‘Companies selling these rather expensive inspection systems for manufacturing have to be mindful of the fact that increasingly – not in all cases, but in some – the same job is going to be done by much less expensive hardware,’ commented Bier. ‘You can either run from it and try to protect your niche, which rarely works, or you can embrace it and say it’s going to be less about hardware and making money from expensive cameras, and more about the software and the enterprise IT aspect of the application. For the companies that do embrace this, it will open up all kinds of new opportunities to use vision where we never could afford to before.’

Value in software

As the cost of hardware comes down, Bier feels the value in industrial applications will shift to the software, because industrial volumes are lower than those in the consumer sector.

‘Hardware is not the challenge, because the consumer sector is driving the cost of hardware down. The algorithms are where the challenge lies,’ he said.

Most machine vision software packages are able – or have versions that are able – to run on embedded platforms. ‘You need a flexible architecture of the software, like MVTec’s Halcon Embedded, to port it to various hardware platforms and operating systems,’ commented Johannes Hiltner, product manager of Halcon at MVTec Software.

Hiltner said there are differences between running machine vision software on embedded devices and on standard systems, the main ones being less memory and lower computing performance – although he added that the hardware is improving.

But in the wider context of embedded computer vision outside the factory, Bier believes that algorithm development will be the bottleneck in the proliferation of vision technology. ‘As we broaden our horizons from traditional quality inspection, the problems will get harder from an algorithmic point of view, and they were already pretty hard,’ he said. ‘It takes expertise to develop these algorithms and these experts are in short supply. That’s going to be the limiting factor in how many of these applications can be developed and deployed – much more so than the availability of hardware.’

Designing new vision algorithms is an iterative process and one that is highly specialised. This is especially true for those that operate in real-world conditions, such as sensors installed in cars where lighting and the arrangement of objects in space are not controlled.

Bier points to the emergence of a technique called deep learning, which is gaining traction in computer vision, along with other domains such as speech recognition and financial fraud detection. Here, the algorithms are not specialised for a certain task, explained Bier, but are generalised learning machines: given enough training data, the algorithm can be trained to discriminate between almost anything – fraudulent from non-fraudulent behaviour, a potato from a tomato, a 28-year-old’s face from a 32-year-old’s face.
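The potato-versus-tomato example can be made concrete with a small convolutional network. The sketch below uses PyTorch and assumes, purely for illustration, that the training images are sorted into one folder per class under data/train – one common way of supplying the ‘enough data’ Bier refers to.

```python
# Sketch: a small convolutional classifier trained from folders of images.
# Assumes PyTorch and torchvision are installed, and that data/train/ holds
# one sub-folder per class (e.g. data/train/potato, data/train/tomato).
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

transform = transforms.Compose([
    transforms.Resize((64, 64)),
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("data/train", transform=transform)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

model = nn.Sequential(                       # deliberately tiny network
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, len(train_set.classes)),
)
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(10):
    for images, labels in train_loader:
        optimiser.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimiser.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```

Nothing in the network is specific to produce sorting; retargeting it to a different discrimination task is largely a matter of swapping the folders of training data, which is the generality Bier describes.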

Those at the forefront of deep learning technology are the internet companies – Google recently acquired French startup Moodstocks, developers of deep learning algorithms for image recognition, while a week earlier Twitter purchased London-based Magic Pony for a reported $150 million. Magic Pony’s technology uses machine learning to enhance images and video.

Bier commented: ‘This [deep learning] is a promising, but also a somewhat threatening development. It’s promising because it means that rather than every problem needing to have a hand-crafted algorithm that could take years of effort, now there is the potential to use a much more generalised structure, as long as there is enough data. This is a big part of how the algorithm development bottleneck will be broken, and it will enable proliferation of vision into many new applications, including many new factory applications.

‘The threatening part,’ he continued, ‘is that if you are one of those experts who have built a career around developing algorithms, it’s pretty disconcerting to find there’s a more general way of solving the problem. Instead of algorithm expertise, we need people who can collect data and marshal that data through the training process, which is itself a bit of a black art.’

He added that, realistically, it’s not an all-or-nothing proposition and the traditional kinds of computer vision will continue to be used, in many cases in conjunction with deep learning.

‘When people put their minds to it, and when they have enough data, we’ve yet to see many problems that deep learning is unable to solve,’ Bier said. ‘The smart thing to do is to wade in and understand how deep learning works, and use it where it fits best. There will probably be some hybrid approaches where an application is solved by a combination of deep learning and traditional algorithms.’
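One way such a hybrid might look in practice is sketched below: a classical OpenCV stage finds candidate regions cheaply, and only those crops are handed to a trained network such as the one sketched earlier. The thresholds and the classify placeholder are illustrative assumptions, not a method taken from any of the companies quoted.

```python
# Sketch of a hybrid pipeline: classical segmentation followed by a learned
# classifier. classify() stands in for any trained model (e.g. the CNN above).
# Assumes OpenCV 4.x, where findContours returns (contours, hierarchy).
import cv2

MIN_AREA = 500  # assumed minimum blob size worth classifying, in pixels

def find_candidates(frame):
    """Use traditional thresholding and contours to locate objects of interest."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for contour in contours:
        if cv2.contourArea(contour) >= MIN_AREA:
            x, y, w, h = cv2.boundingRect(contour)
            yield frame[y:y + h, x:x + w]

def classify(crop):
    """Placeholder for a trained deep-learning model; returns a label string."""
    raise NotImplementedError("plug in a trained model here")

def inspect(frame):
    return [classify(crop) for crop in find_candidates(frame)]
```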

The Embedded Vision Alliance’s website www.embedded-vision.com contains more information about deep learning techniques, including a keynote presentation on convolutional networks given by Yann LeCun, director of AI research at Facebook, at the 2014 Embedded Vision Summit, which the alliance holds every year.

‘There are nowhere near enough skilled engineers who understand computer vision to build all of these fantastic applications,’ Bier commented. ‘Deep learning is a huge help with that. It’s not a magic bullet, but it’s a huge help.’

In the meantime, both Hiltner at MVTec and Rademacher at Basler point to standards as a necessary step to further the use of embedded vision. Rademacher commented: ‘As GigE Vision and USB3 Vision have shown, an interface standard optimised for easy integration, supported by many manufacturers, brings an enormous benefit for the customer during the development and camera integration phases, and might be the key factor for the further growth of embedded vision.’

Hiltner also noted that MVTec is taking a leading role in developing standards that are relevant for the embedded realm.

While more intelligence is being integrated onboard cameras or embedded boards, Schick at IDS feels that there will still be a central station for managing the systems in a factory. ‘There is a question about how to manage all these devices [in a factory], which still has to be solved,’ he said. ‘Factories will get smarter, but there still has to be a central managing unit that ties all the intelligent devices together. For many assembly lines there will still be a PC that is doing machine vision, even if there are more intelligent cameras running some vision algorithms.’ 


