
Multicore image processing

The move to multicore computing is under way across most industries, and the imaging and machine vision industry is taking advantage of the technology too. It promises to increase throughput and reduce the response times of camera systems that have to deal with ever-increasing amounts of data.

Multicore image processing uses two or more computer cores to process images. In other words, multiple cores share the processing of a task from an imaging system, and this can be done in one of two ways.

Firstly, there is multi-threading, where the software launches different threads to do different tasks. For example, one core might process the results from one camera, another core processes the results from a second camera, and so on. The other, more advanced method is to map single tool algorithms across multiple cores.
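As a minimal sketch of the first approach, the C++ fragment below services each camera from its own worker thread and leaves it to the operating system scheduler to place those threads on separate cores. The grab_frame and process_frame functions are hypothetical stand-ins for a real camera SDK and vision library; here they simply simulate work.

// Task-level multi-threading: one worker thread per camera.
#include <chrono>
#include <cstdio>
#include <thread>
#include <vector>

struct Frame { int camera_id; int sequence; };

// Hypothetical stand-in for a camera SDK call; simulates exposure/transfer.
Frame grab_frame(int camera_id, int sequence) {
    std::this_thread::sleep_for(std::chrono::milliseconds(5));
    return Frame{camera_id, sequence};
}

// Hypothetical stand-in for a vision library call; simulates inspection work.
void process_frame(const Frame& f) {
    std::this_thread::sleep_for(std::chrono::milliseconds(10));
    std::printf("camera %d: processed frame %d\n", f.camera_id, f.sequence);
}

// Each camera is serviced by its own thread, so two cameras can be handled
// on two cores concurrently.
void camera_worker(int camera_id, int frames_to_process) {
    for (int i = 0; i < frames_to_process; ++i)
        process_frame(grab_frame(camera_id, i));
}

int main() {
    const int num_cameras = 2;
    std::vector<std::thread> workers;
    for (int cam = 0; cam < num_cameras; ++cam)
        workers.emplace_back(camera_worker, cam, 3);
    for (auto& w : workers) w.join();   // wait for all cameras to finish
}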

Either way, the best approach is for the software to use multiple cores automatically, and many packages already have this functionality. For example, a piece of machine vision software might provide Automatic Operator Parallelisation (AOP), which automatically detects the number of available CPU cores, splits an image into a corresponding number of logical parts, passes these to the processing threads and, after processing, automatically combines them into the resulting image. This happens without extra copying of image data.
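To illustrate what such automatic parallelisation does behind the scenes, the generic sketch below (not taken from any particular library) queries the number of hardware threads at run time, splits the image buffer into one horizontal strip per core, and processes each strip in place, so no extra copy is needed to recombine the result. The invert operation is just a placeholder for a real point operator.

// Data-parallel sketch in the spirit of AOP: one strip of the image per core,
// processed in place, so the "combine" step is free.
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <functional>
#include <thread>
#include <vector>

// Placeholder operator: invert the pixels of rows [row_begin, row_end).
void invert_rows(std::vector<std::uint8_t>& image, int width,
                 int row_begin, int row_end) {
    for (int r = row_begin; r < row_end; ++r)
        for (int c = 0; c < width; ++c)
            image[static_cast<std::size_t>(r) * width + c] ^= 0xFF;
}

void parallel_invert(std::vector<std::uint8_t>& image, int width, int height) {
    // Ask the runtime how many hardware threads are available, so the code
    // scales automatically on machines with more cores.
    unsigned cores = std::thread::hardware_concurrency();
    if (cores == 0) cores = 1;   // the value may be unknown on some systems
    int strip = (height + static_cast<int>(cores) - 1) / static_cast<int>(cores);

    std::vector<std::thread> workers;
    for (int row = 0; row < height; row += strip) {
        int row_end = std::min(row + strip, height);
        workers.emplace_back(invert_rows, std::ref(image), width, row, row_end);
    }
    for (auto& w : workers) w.join();   // strips were processed in place
}

int main() {
    const int width = 1024, height = 768;
    std::vector<std::uint8_t> image(static_cast<std::size_t>(width) * height, 42);
    parallel_invert(image, width, height);
}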

Automatically counting and using the number of cores available is an important trait within multicore image processing software, as Mark Williamson, UK sales and marketing director of Stemmer Imaging, explains. ‘Intel is already talking about building eight- and 16-core processors,’ he says. ‘An intelligent piece of code should ask how many cores there are and then start its work across that number of cores, so it automatically scales up as the hardware changes and more cores might be added in future processors.’

Benefits and challenges

The overall benefits of moving to a multicore system are to minimise the response time and to increase the throughput of an imaging system, says Williamson. ‘Multicore enables the user to take advantage of the latest PC processor architectures, i.e. it enables the algorithms and applications to run faster or do more.’

Pierantonio Boriero, product line manager at Matrox Imaging, says: ‘Within the imaging and machine vision industry, going faster means being able to inspect more objects per minute, and/or have the application perform more analysis operations.’

But making the switch to multicore is not always as simple as one might expect, as Inder Kohli, product manager at Dalsa, explains: ‘Most of the image processing libraries and applications on the market were not designed with multiple cores in mind. Moreover, not all image processing tasks are well suited for partitioning on multicore processors, and thus performance doesn’t scale with the available number of processing cores.’

Portability across different processor architectures also poses potential problems. ‘The image processing tasks that involve large amounts of data movement also achieve limited improvements in processing speeds,’ says Kohli. ‘Processor vendors employ different micro-architectures to implement multiple cores on the same die. As a result, multicore imaging libraries are developed and optimised for specific processor architectures. This limits the portability of the libraries across different processor architectures.’

Reining in the code

Developers need to make sure that their code is set up to run on multiple cores too, as Williamson explains: ‘If a developer has written their application as a single thread, or the vision software is not multicore compatible, it can only use a single core of the processor. In fact, we had a customer using a 3.0GHz Pentium 4 who had written his own code and had moved to a dual core processor running at 2.6GHz. His application ran slower even though the processor capacity was 2 x 2.6GHz = 5.2GHz. Half the processor was doing nothing. He was using software that was multicore compatible, but had written his code to do things one after the other. Once we showed him what to do he was able to get the speed improvements.’
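The arithmetic behind that anecdote is worth spelling out: a single-threaded application only ever occupies one core, so, ignoring per-clock architectural differences between the two chips, its run time follows the clock of that single core rather than the 5.2GHz aggregate. A back-of-the-envelope check using Amdahl's law, with the parallelisable fraction of the work as the only assumption:

// Amdahl's law: speedup = 1 / ((1 - p) + p / n), where p is the parallelisable
// fraction of the work and n the number of cores.
#include <cstdio>

double amdahl_speedup(double parallel_fraction, int cores) {
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores);
}

int main() {
    const double clock_ratio = 2.6 / 3.0;   // new 2.6GHz core vs old 3.0GHz core

    // Purely serial code (p = 0) gains nothing from the second core, so the
    // move works out at roughly 0.87x, i.e. about 13 per cent slower.
    std::printf("serial code on 2 cores:   %.2fx\n",
                amdahl_speedup(0.0, 2) * clock_ratio);

    // Well-parallelised code (say p = 0.95) comes out around 1.65x faster.
    std::printf("95%% parallel on 2 cores:  %.2fx\n",
                amdahl_speedup(0.95, 2) * clock_ratio);
}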

Industrial Vision Systems’ NeuroCheck software. The image processing algorithms automatically use the functionality of the latest dual or quad core processors without any input from the end user of the system.

Whether a move to multicore is beneficial also depends on the complexity of the algorithms, as Boriero says: ‘Within basic image processing, there are advantages to adopting a multicore processing approach. If you can avoid algorithms with complex heuristics that rely on the frequent synchronisation of results from different sources and apply brute-force number crunching instead across many cores, you can get quicker results. But if an application relies on higher level algorithms, for example when reading a character string using a feature-based approach, the effort versus benefit of parallelisation might be counter-productive.’

And for those middle-of-the-road algorithms, Boriero adds: ‘There are also in-between algorithms where it is not clear whether multicore processing will give significant benefits, compared to the amount of effort required to change that algorithm to make it work over multiple cores. But developers are moving up the chain and the higher level algorithms are slowly being moved to multicore processing.’

Dr Lutz Kreutzer, manager of PR and marketing at MVTec Software, adds: ‘When comparing vision libraries on their potential for speeding up with parallel processing, it is important for customers to also consider the absolute time of an operator without parallelisation. This is because the overall performance comes from the speed of the algorithm itself, plus the effectiveness of the parallel programming.’

But there are alternatives where users can benefit from multicore technology, without going through reams of code, as Boriero explains: ‘There are other ways to use multicore systems, without recoding algorithms. For example, before parallelisation of image processing functions, people could benefit from multiple cores at the application level if there was a set of concurrent and unrelated tasks to pass through the CPU. The user can assign one task to one core and another task to another core.’
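A sketch of that application-level approach is shown below, assuming two unrelated and entirely hypothetical jobs that share no data and can therefore run concurrently without any change to the processing code inside them. Pinning a task to a particular core would additionally need OS-specific affinity calls (SetThreadAffinityMask on Windows, pthread_setaffinity_np on Linux); the example leaves placement to the scheduler to stay portable.

// Application-level parallelism: unrelated tasks run concurrently while the
// processing code inside each task stays single-threaded.
#include <chrono>
#include <cstdio>
#include <future>
#include <thread>

int read_barcode_task() {        // hypothetical task, simulated with a sleep
    std::this_thread::sleep_for(std::chrono::milliseconds(20));
    return 1;
}

int measure_part_task() {        // hypothetical task, simulated with a sleep
    std::this_thread::sleep_for(std::chrono::milliseconds(20));
    return 2;
}

int main() {
    // Launch both tasks at once; the OS distributes them across the cores.
    auto barcode = std::async(std::launch::async, read_barcode_task);
    auto measure = std::async(std::launch::async, measure_part_task);
    std::printf("results: %d %d\n", barcode.get(), measure.get());
}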

So, algorithm rejigging aside, what are the other challenges when changing to multicore? Workloads also need to be carefully managed and some software makes the job simple. Earl Yardley, director of Industrial Vision Systems, explains: ‘In the case of our NeuroCheck software... the implementation has already been completed, and the image processing algorithms automatically use the functionality of the latest dual or quad core processors – without any input from the end user of the system. As far as possible, an intelligently implemented multi-threading architecture ensures a symmetrical utilisation of hardware resources in the autonomous automatic inspection. A significantly enhanced performance for the end user is the result of the chosen architecture.’

Boriero says: ‘If the application was designed to be used across multiple cores, the user should be careful of the system’s workload. There might be competition between the system tasks so the workload needs to be redistributed evenly. MIL has functions that control how multiple cores are used. If a system has four cores, for example, a MIL application can run on three cores and leave the fourth core available for other system tasks.’
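The same idea can be expressed generically, without reference to any particular library's API: size the processing pool to the number of hardware threads minus one, so one core stays free for other system tasks.

// Generic illustration (not the MIL API) of leaving one core free for the
// rest of the system: the worker pool is sized to hardware threads minus one.
#include <cstdio>
#include <thread>
#include <vector>

int main() {
    unsigned hw = std::thread::hardware_concurrency();
    if (hw == 0) hw = 2;                       // fall back if the value is unknown
    unsigned workers = (hw > 1) ? hw - 1 : 1;  // reserve one core

    std::vector<std::thread> pool;
    for (unsigned i = 0; i < workers; ++i)
        pool.emplace_back([i] { std::printf("worker %u running\n", i); });
    for (auto& t : pool) t.join();

    std::printf("%u of %u hardware threads used for processing\n", workers, hw);
}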

Thor Vollset, CEO at Tordivel, also warns of potential workload problems. ‘The primary challenge is to organise the image processing tasks so that they can be distributed,’ he says. ‘Depending on the software environment, this can be very challenging. In a point-and-click framework like Scorpion Vision, it is easy – just tell the software to utilise multiple CPUs.’

But the hardware, too, is responsible for managing the workloads, which is a challenge for the computer makers, as Williamson explains. ‘In the future, the architecture of the PC has to change so that the memory is accessible to multiple cores without a bottleneck. For example, if you have one memory store attached to one processor, the bandwidth between those devices will halve if you add another processor, because the route over which the data has to travel has not doubled in size.’

Applications

The application arena for multicore image processing is ubiquitous, according to Kohli: ‘Virtually all areas of the imaging and machine vision industry will benefit from multicore processing. However, the extent to which benefits can be derived depends on the specific function being used.’

Boriero adds: ‘Multicore image processing is application specific; it depends on the algorithms being used. Algorithms that require the result of the previous step before continuing cannot be parallelised. Of course, those algorithms will run on a system of multiple cores, but they can’t be accelerated.’

But there will be an increasing need for multicore image processing in the future, as Williamson says: ‘With algorithms getting more complex and camera speeds and resolutions increasing, there is always a demand for more processing power, and more applications become viable as that processing power becomes available. A good example is multi-camera inspection solutions, but the biggest opportunity is with high-speed web inspection, which has always been limited by processing capacity.’

Web inspection is a high-throughput form of image processing, where anything that is produced continuously is imaged, such as within printing runs. Such processes are usually imaged with line scan cameras and can produce around 800MB/s of data. Williamson adds: ‘The challenge with web inspection is that the PCs tend not to be fast enough, so multiple PCs are used.’
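To put that figure in context, a rough data-rate calculation with assumed camera parameters (illustrative numbers, not taken from the article) shows how quickly such systems reach hundreds of megabytes per second:

// Back-of-the-envelope data rate for a web inspection line scan setup.
#include <cstdio>

int main() {
    const double pixels_per_line  = 8192.0;     // assumed sensor width
    const double lines_per_second = 100000.0;   // assumed 100kHz line rate
    const double bytes_per_pixel  = 1.0;        // 8-bit monochrome

    double bytes_per_second = pixels_per_line * lines_per_second * bytes_per_pixel;
    std::printf("data rate: about %.0f MB/s\n", bytes_per_second / 1.0e6);
}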

Vollset adds: ‘Multi-thread image processing is just another step in providing more computing power to machine vision applications. This is a continuous step-by-step process – it will be especially important in 3D robot vision and 3D gauging.’

Where to buy

There are several vendors offering multicore image processing, including: MVTec’s machine vision software Halcon, with version 9.0 coming out in January 2009; Industrial Vision Systems’ NeuroCheck 6.0 software, based on the Microsoft .NET architecture; Stemmer’s Common Vision Blox, which is multithread compatible; the Matrox Imaging Library (MIL), with some functions recoded or optimised for multicore image processing; and TemplateFinder3, Scorpion Vision’s built-in pattern matching tool. According to Kohli, Dalsa plans to unveil a multicore-optimised version of its flagship imaging library, Sapera Essential, in the early part of 2009.

MVTec’s machine vision software Halcon 8.0. Version 9.0 is scheduled for release in January 2009.

Future thinking

This effort by the vendors is one they believe will pay off, with many predicting a rise in multicore image processing in the future, mainly because of the increase in multicore computers. Dr Wolfgang Eckstein, managing director of MVTec Software, says: ‘The number of cores in a computer will definitely increase in the future. This will give the opportunity to approach more challenging tasks, which have been too time-consuming up to now. To allow this, a continuous development of the software is necessary to minimise the overhead and to support all operators for AOP. This will require a huge amount of resources which only a few of the leading companies will be able to provide.’

Williamson also sees the architectures of these multicore PCs altering. ‘We will be seeing new chip architectures that remove memory bottlenecks so the data can get in and out of all these processor engines,’ he says.

So it looks like there is still a long way to go for multicore processing. Not only do programmers need to change their way of thinking from single core to multicore, but the hardware also needs to change to cope with the increasing amounts of data churning through these systems. And this has further consequences, as Williamson adds: ‘With these architectures, we may see a blurring between the GPU (graphics card) and the CPU. Already we are seeing GPUs used in machine vision, and the current GPUs have 128 or more processors. I think with operating system developments, the line between GPU and CPU will become less clear.’


