Multicore next big thing for vision

According to Moore's Law, the number of transistors on a chip doubles roughly every two years. Transistor counts are still climbing, but clock speeds have plateaued, so processors are no longer getting faster. Speaking at National Instruments' Automation and Vision Forum on 4 December 2008, held at NI's UK and Ireland office in Newbury, Robert Morton, the office's managing director, identified multicore processing and FPGA technology as key to increasing image processing speeds.

Morton noted that, whereas in the past a PC upgrade would make imaging equipment run faster, this can no longer be relied upon; FPGA and multicore technology aim to address that. For example, an algorithm identifying pixel defects will run 1.9 times faster on a dual-core processor than on a single core.
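The gap between that 1.9x figure and a perfect 2x is what Amdahl's law predicts when a small part of an algorithm remains serial. A minimal Python sketch, using only the 1.9x figure quoted at the event (the parallel fraction and the four-core projection are derived from it, not from the talk):

```python
# Amdahl's law: speedup on n cores when a fraction p of the work
# parallelises perfectly. Illustrative only; 1.9x is the number
# quoted at the event, everything else is derived from it.
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

# Solving 1.9 = 1 / ((1 - p) + p/2) for p shows the defect-detection
# algorithm must be roughly 95% parallel to gain 1.9x on two cores.
speedup, cores = 1.9, 2
p = (1.0 - 1.0 / speedup) * cores / (cores - 1)
print(f"parallel fraction: {p:.3f}")                        # -> 0.947
print(f"speedup on 4 cores: {amdahl_speedup(p, 4):.2f}x")   # -> ~3.45x
```

The four-core projection also illustrates the diminishing returns that come up later in the discussion: even a 95 per cent parallel algorithm falls well short of a 4x gain on four cores.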

Giving the keynote presentation at the event, Don Braggins, director of the UK Industrial Vision Association (UKIVA), charted the progress machine vision has made over the past 25 years. Braggins cited the development of pattern matching algorithms as one of the major breakthroughs for machine vision, as it allowed changes in ambient lighting to be handled much more effectively. The economics of machine vision have also improved greatly, making it viable to place cameras at every stage of a process.

Multicore technology looks set to increase the processing speed of image algorithms. However, Mike Bailey, senior systems engineer for vision and automation at NI, warned that deciding which data can be split across cores, and where to split it, is difficult, and the code that does so must be thread safe, i.e. it must behave correctly when executed simultaneously by multiple threads. 'Not everything works well with multicore processing,' Bailey said. Pattern matching, for instance, is difficult to split across cores: if the image is divided in two across the pattern, the algorithm won't be able to find it.
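Bailey's distinction can be made concrete. A per-pixel defect check is easy to split because each band of image rows can be processed independently, with no shared state between workers. A minimal Python sketch, not NI's implementation, with a hypothetical intensity threshold standing in for a real defect test:

```python
from concurrent.futures import ProcessPoolExecutor

THRESHOLD = 10  # hypothetical: pixels darker than this count as defects

def count_defects(band):
    """Count defective pixels in one horizontal band of the image."""
    return sum(1 for row in band for pixel in row if pixel < THRESHOLD)

def count_defects_parallel(image, cores=2):
    """Split the image row-wise into one band per core, sum partial counts."""
    chunk = (len(image) + cores - 1) // cores
    bands = [image[i:i + chunk] for i in range(0, len(image), chunk)]
    with ProcessPoolExecutor(max_workers=cores) as pool:
        return sum(pool.map(count_defects, bands))

if __name__ == "__main__":
    # Toy 4x4 greyscale image; real frames would come from a camera.
    image = [[200,   5, 200, 200],
             [200, 200,   3, 200],
             [200, 200, 200, 200],
             [  7, 200, 200, 200]]
    print(count_defects_parallel(image))  # -> 3 defective pixels
```

The per-band counts combine with a simple sum, which is what makes the split safe. Pattern matching has no such property: a pattern lying across the boundary between two bands is visible to neither worker.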

Bailey suggested pipelining as one way to exploit multicore technology. This involves assigning each step of the image processing chain (e.g. image acquisition, filtering, analysis and logging) to a different core, so that successive images overlap in the pipeline rather than each being processed from start to finish before the next begins. 'This won't speed up the individual steps,' he said, but it will speed up the process as a whole.
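A minimal Python sketch of that pipeline shape, assuming hypothetical filter, analyse and log stages connected by queues. Threads are used here only to show the structure; because of CPython's global interpreter lock, a production system would use processes, an FPGA, or a compiled language for true per-core parallelism:

```python
import queue
import threading

def run_stage(work, inbox, outbox):
    """Pull frames from inbox, process them, pass results downstream."""
    while True:
        frame = inbox.get()
        if frame is None:          # sentinel: propagate shutdown downstream
            if outbox is not None:
                outbox.put(None)
            return
        result = work(frame)
        if outbox is not None:
            outbox.put(result)

# Hypothetical stage bodies; a real system would grab camera frames,
# apply a filter kernel, measure features and write results to disk.
def filter_frame(frame):
    return f"filtered({frame})"

def analyse_frame(frame):
    return f"analysed({frame})"

def log_frame(frame):
    print(f"logged {frame}")

if __name__ == "__main__":
    q_filter, q_analyse, q_log = (queue.Queue(maxsize=4) for _ in range(3))
    workers = [
        threading.Thread(target=run_stage, args=(filter_frame, q_filter, q_analyse)),
        threading.Thread(target=run_stage, args=(analyse_frame, q_analyse, q_log)),
        threading.Thread(target=run_stage, args=(log_frame, q_log, None)),
    ]
    for w in workers:
        w.start()
    for n in range(5):             # 'acquire' five frames
        q_filter.put(f"frame{n}")
    q_filter.put(None)             # end of stream: drain the pipeline
    for w in workers:
        w.join()
```

While frame 0 is being analysed, frame 1 can be filtered and frame 2 acquired, which is why throughput rises even though no single step gets faster.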

However, Bailey noted that pipelining will only take image processing so far. 'At some point, vision integrators are going to have to work with multicore graphical programs that take the burden away from the end user,' he said. Programming for dual core may be realistic, but as the number of cores increases, so does the programming complexity: Intel is developing an 80-core processor, and splitting an image across 80 cores and then recombining the results is not going to be easy. NI's LabVIEW 8.6 supports multicore technology, and the way it operates is inherently parallel. 'Using LabVIEW makes it much more obvious to see how the functions are split compared to programming in C,' Bailey explained.

Speaking at the event, Mark Williamson, head of product strategy at Stemmer Imaging, noted that line scan cameras now reach data transfer rates of 800Mb/s, yet processing is still limited by the PC, which is why multicore processing is so exciting for machine vision applications. Williamson also identified GenICam as important to the future of camera interfacing and imaging. 'It's [GenICam] the thing that's going to allow us to have interoperability between cameras and other systems, such as lighting,' he stated.

The day also featured presentations from independent vision experts, such as Alrad Imaging, as well as talks on using NI's frame grabbers and software packages, including Vision Builder for Automated Inspection and Vision Assistant. Hands-on sessions gave practical software demonstrations.
