Timing is everything

The achievement of Messrs Armstrong, Aldrin and Collins on 20 July 1969 is one of the defining moments of the 20th century. The moon landing, watched by millions, was a feat of engineering that saw America claim victory over the Soviet Union in the space race, and a landmark in the exploration of space.

The computing power required to send man to the moon seems archaic by today’s standards, and it’s difficult to make direct comparisons with current computing technology. The Apollo Guidance Computer (AGC) was developed for the Apollo space programme by MIT’s Instrumentation Laboratory and was installed in both the command and lunar modules to collect flight information and control the navigation functions. The system used on Apollo 11 had 2,048 words of RAM (3,840 bytes), 36,864 words of read-only memory (69,120 bytes), and executed a maximum of about 85,000 CPU instructions per second. To put in context how little computing power this is, the original IBM PC, released just over a decade later in 1981, had eight times more memory and ran at four times the clock speed.

In terms of software, the AGC ran a real-time operating system that could multitask eight functions at a time. It is also one of the earliest examples of an embedded system: a computer designed to carry out a set of dedicated functions, as opposed to a general-purpose computer built for many different tasks.

Today, embedded systems are found throughout modern society, from MP3 players to guidance systems in aircraft to medical equipment. They have also been used for a number of years within machine vision – PCI cards with embedded processors appeared 10 to 12 years ago, according to Mike Bailey, senior systems engineer at National Instruments (NI). These early boards ran an embedded version of the Windows operating system (OS) or a real-time OS, included a frame grabber on the board, and ran application-specific software. Embedded systems have developed considerably since then, with many now running a real-time OS and programmed purely to carry out, in this context, a specific vision application.

What makes a system embedded?

‘There are three main characteristics that define an embedded vision system,’ states Tristan Jones, UK technical marketing team leader at NI: ‘Determinism and real-time performance; communication and integration with external systems, such as other parts of a manufacturing line; and performance matching the vision requirements.’

Determinism describes how predictably a system responds. The speaking clock, for example, is highly deterministic; it will tell the exact time every set number of seconds without fail. A deterministic system is one where the response to a particular event is bounded, meaning the response will always happen within a set period of time. If a system isn’t deterministic, such as one running a Windows OS, the time taken to respond to an event may vary (this variation is termed jitter).
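Jitter is easy to observe in practice. The sketch below – a minimal C example assuming a POSIX system, not any particular vendor’s API – repeatedly requests a fixed 1ms sleep and records how far each wake-up drifts from the request. On a general-purpose desktop OS the worst-case drift is large and unpredictable; on a real-time OS it stays tightly bounded.

```c
/* Minimal jitter measurement sketch (assumes a POSIX system).
   Requests a fixed 1 ms sleep repeatedly and records how far each
   wake-up drifts from the request; the worst-case drift is the jitter. */
#include <stdio.h>
#include <time.h>

#define PERIOD_NS 1000000L  /* requested 1 ms period */
#define SAMPLES   1000

static long ns_between(struct timespec a, struct timespec b)
{
    return (b.tv_sec - a.tv_sec) * 1000000000L + (b.tv_nsec - a.tv_nsec);
}

int main(void)
{
    struct timespec before, after;
    struct timespec req = { 0, PERIOD_NS };
    long worst = 0;

    for (int i = 0; i < SAMPLES; i++) {
        clock_gettime(CLOCK_MONOTONIC, &before);
        nanosleep(&req, NULL);
        clock_gettime(CLOCK_MONOTONIC, &after);

        long drift = ns_between(before, after) - PERIOD_NS;
        if (drift > worst)
            worst = drift;
    }
    printf("worst-case wake-up jitter: %ld us\n", worst / 1000);
    return 0;
}
```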

Having a deterministic system is very important for a lot of vision applications, especially for systems installed on a production line that rely on carrying out set tasks within a certain timeframe. NI’s PXI hardware, vision software and LabVIEW have been used in the development of an embedded vision system for inspecting plastic bottles for recycling. Bailey argues that Windows OS is not suitable for this type of application, as the operating system will be carrying out other functions in addition to those controlling the vision system. A real-time system is much better suited, as it is not there to respond to the user – it is there to carry out a given task.

The recycling system inspects bottles as they drop off a conveyor in front of a camera. The bottles are backlit and sorted, with air nozzles directing colourless PETE plastic down different chutes while the remainder drops through unaffected. The bottles fall at roughly nine feet per second; if the system registers a bottle too late, it isn’t sorted correctly and has to be sorted by hand further down the line. Running a real-time OS as part of the embedded system allows the timing specifications for the application to be met, and NI uses real-time operating systems throughout its range of embedded products, from its smart cameras to its compact and embedded vision systems. The OS can also be run on a PXI system, or even on a standard PC engineered to run a real-time OS.
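The timing budget is unforgiving. At roughly nine feet per second (about 2.7m/s), a bottle crosses a 10mm nozzle window in around 3.6ms, so the whole capture-classify-actuate cycle must complete within that bound for every bottle. The C sketch below illustrates the idea of a deadline-checked sorting loop; the 10mm window is an assumption made for illustration, and capture_frame, classify and fire_nozzle are hypothetical names, not functions from NI’s system.

```c
/* Sketch of a deadline-checked sort loop. The 3.6 ms budget assumes
   a hypothetical 10 mm nozzle window at ~2.7 m/s; capture_frame,
   classify and fire_nozzle are illustrative stand-ins, not NI APIs. */
#include <stdio.h>
#include <time.h>

#define DEADLINE_NS 3600000L  /* ~3.6 ms budget */

extern int  capture_frame(unsigned char *buf);   /* hypothetical */
extern int  classify(const unsigned char *buf);  /* hypothetical */
extern void fire_nozzle(int chute);              /* hypothetical */

void sort_one_bottle(unsigned char *frame)
{
    struct timespec start, end;
    clock_gettime(CLOCK_MONOTONIC, &start);

    capture_frame(frame);
    int chute = classify(frame);  /* e.g. colourless PETE vs the rest */
    fire_nozzle(chute);

    clock_gettime(CLOCK_MONOTONIC, &end);
    long elapsed = (end.tv_sec - start.tv_sec) * 1000000000L
                 + (end.tv_nsec - start.tv_nsec);
    if (elapsed > DEADLINE_NS)
        fprintf(stderr, "deadline miss: %ld us\n", elapsed / 1000);
}
```

On a real-time OS the point is not just that this loop is fast, but that its worst case is known and bounded; a desktop OS can often meet the deadline on average while still missing it occasionally.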

A rugged solution

Defence contractor Raytheon, which was tasked with manufacturing the Apollo Guidance Computer hardware, was subject to various constraints when designing the system. The fuel budget of the mission was tight, so the equipment had to be light and consume as little power as possible (it weighed around 70 pounds and drew around 2.5A at 28V DC – roughly 70W). It also had to withstand the tough conditions encountered during space flight, including extremes of temperature and heavy vibration.

Similar constraints are found in many vision applications and some of the advantages of embedded vision systems are that, with the correct hardware, they can be designed to be small, rugged and mobile. The power input can be customised to run off a 12V battery, for instance, as opposed to a PC that requires a mains power supply. Andrew Buglass, product manager at Active Silicon, comments: ‘The [embedded vision] system is generally designed to be small and rugged, in that it can withstand environments with which standard desktop computers wouldn’t be able to cope.’

Active Silicon, a manufacturer of specialist digital imaging products and technologies, has designed embedded vision systems for various applications, including a machine vision system for sorting casino chips using a colour recognition algorithm; surveillance systems, installed in settings ranging from passenger aircraft to street monitoring; and medical systems for conducting eye examinations.

‘The criteria for the embedded system for casino chip sorting were that it had to be reliable, portable and easy to service,’ comments Buglass. Also, the design had to be such that it didn’t look like a PC – there is no monitor or keyboard, just a small LCD and several keys to control the user interface. ‘The system had a clear and well-defined purpose, which was to control a lighting system and camera to sort casino chips,’ he says.

The 4Sight X embedded system from Matrox Imaging.

Active Silicon has recently released an update to the system, and, according to Buglass, as hardware and embedded options have become more powerful and more readily available, the embedded system now looks completely different to the original system: ‘It’s a quarter of the size with the components more densely packed on dedicated hardware. It’s now much more of an embedded system, because of the nature of the hardware inside.’

According to Fabio Perelli, product manager at Montreal-based Matrox Imaging, one of the main reasons why customers choose an embedded system is due to space constraints: ‘A PC can’t be engineered into a wire bonder [used in electronics manufacture],’ he says. A wire bonder is a small machine designed to take up as little room as possible on the factory floor. Power dissipation is also reduced – the 4Sight X, part of the Matrox 4Sight family of embedded systems, dissipates 40 to 50W compared to a PC that requires a 300W power supply. ‘If the system has to be cooled, it is beneficial to use a system that dissipates the least amount of power,’ comments Perelli.

Although a PC-based solution is potentially cheaper, Perelli says that the form factor and longevity are two benefits of an embedded vision solution over using a PC. Matrox guarantees the same configuration in its embedded systems for five years, meaning customers don’t need to continuously validate new hardware versions and there are no compatibility issues.

Matrox works with Intel to provide processors for its embedded vision systems, which are usually released a number of months after the processors for mainstream PCs. ‘These processors are supported for a number of years, which is important for the vision industry, as customers prefer to remain with the same setup for prolonged periods,’ notes Perelli. He gives inspection in the pharmaceutical industry as an example, in which vision systems have to be certified with the Food and Drug Administration (FDA) in the US. ‘Every change to the system involves recertifying it, which could take up to a year in each case, and so for these customers it’s not practical to keep updating the system.’

Welding applications

A vision system using a field-programmable gate array (FPGA) to carry out preprocessing and other simple tasks can also be characterised as an embedded system. An FPGA is a chip whose internal logic can be reconfigured through software after manufacture. ‘FPGA technology could be classified as the ultimate embedded system,’ comments Bailey of National Instruments, which supplies FPGA hardware through its FlexRIO product line.

Meta Vision Systems, a British manufacturer of laser vision systems for welding applications, has developed its Smart Laser Sensor (SLS) consisting of a CMOS image sensor, an FPGA and a DSP (digital signal processor), all housed within the sensor head. The SLS is comparable to smart cameras, also a form of embedded system, in which the image processing is integrated within the camera thereby simplifying installation and operation for the end user.

‘Meta Vision Systems was approached by a customer that wanted to use laser seam tracking for welding applications, but couldn’t afford it at the price of existing systems,’ comments Dr Robert Beattie, managing director at Meta Vision Systems. Traditional laser seam tracking systems are made up of a laser head containing a laser stripe projector and a camera, with images relayed back to a PC, which analyses them and uses the results to control the welding machine. The SLS instead carries out the image processing inside the sensor head, generating results and sending them to the welding machine via Ethernet cable.

‘The original motivation for the project was for weld guidance and adaptive fill,’ says Beattie. Guidance involves measuring the shape of the weld joint and establishing where to position the welding torch. In addition, the sensor allows aspects of the weld to be measured, such as the cross-sectional area, which can then be used to control welding parameters.

The same sensor technology can also be used for weld inspection, such as pipe bevel inspection. When two pieces of piping are joined, the ends are machined to a certain shape to produce a close fit, which is called bevelling the pipe. The sensor can be used to inspect the end of the pipe prior to welding to ensure the bevel is the right shape. The same sensor can then be used to inspect for defects in the completed weld.

‘The classic architecture for this kind of application is an FPGA carrying out the front-end number crunching and a DSP using the data for high-level image processing,’ explains Beattie. The SLS is no different, with the FPGA extracting the laser stripe from the image, which is then analysed by the DSP.
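In this division of labour, the front-end number crunching typically reduces each image column to the row where the projected laser line is brightest, so the DSP receives a compact one-dimensional profile rather than a full image. On the SLS that stage runs in FPGA logic written in a hardware description language; the C sketch below only illustrates the kind of per-column peak detection involved, and is not Meta Vision Systems’ actual implementation.

```c
/* Illustrative laser-stripe extraction: for each image column, find
   the row of peak intensity. In a real sensor this stage would run
   in FPGA logic; plain C is used here only to show the algorithm. */
#include <stdint.h>

/* image: 8-bit greyscale, row-major, width x height.
   profile: receives one row index per column; the DSP then fits the
   weld-joint geometry to this one-dimensional profile. */
void extract_stripe(const uint8_t *image, int width, int height,
                    int *profile)
{
    for (int x = 0; x < width; x++) {
        uint8_t best = 0;
        int best_row = 0;
        for (int y = 0; y < height; y++) {
            uint8_t px = image[y * width + x];
            if (px > best) {
                best = px;
                best_row = y;
            }
        }
        profile[x] = best_row;
    }
}
```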

According to Beattie, it is marginally harder to program an embedded sensor compared to setting up a system with a PC. However, the reasons for using an embedded sensor come back to real-time performance, which can’t really be achieved using a Windows operating system.

Meta Vision Systems splits its product range into standard products, into which the SLS falls, and custom systems. The standardised approach is suited to applications that are relatively well defined. However, the company installs larger systems as well, in which some details of the application may be unknown before installation. In these circumstances, the company sticks to the traditional PC-based approach, which allows customisation of the system if necessary.

Embedded vision is here to stay

‘Most vision systems are now embedded systems,’ states Bailey of NI. ‘There is a definite move away from having a desktop computer running vision tasks along with other tasks, certainly within industry and time-driven applications, like a production line.’ Image processing is less time-critical in a research environment, where vision systems will often be run on a desktop PC. However, Bailey feels that, now that real-time systems can cope with things like multicore processing, the reasons for staying with Windows are fewer and fewer. ‘The only advantage of using a PC is being able to view the image, but even that is diminishing with technologies like Hypervisor software,’ he says. NI’s Hypervisor technology, for dual or multicore systems, allows multiple operating systems to run on the same machine, so a real-time embedded system can run in combination with Windows for visualising the results.

Active Silicon’s Buglass feels that advances in embedded hardware have provided solutions to a number of applications that, in the past, wouldn’t have been feasible. For example, Active Silicon produces a security system with eight simultaneous video channels for installation on lampposts or the sides of buildings. ‘Two years ago the system, composed of a standard single board computer with PCI cards, was very large, weighing 30kg, which made it difficult to mount on poles and walls,’ Buglass says. ‘The software was very good, but the hardware needed to run the system wasn’t suitable or readily available.’ Through its embedded system, Active Silicon could offer a box less than a quarter of the original size, which was lighter, more reliable and far more rugged than the original solution.

‘The advent of modern, more powerful embedded processors and architectures has made a lot of these applications much more viable,’ comments Buglass. ‘People are moving toward using embedded systems as the hardware becomes available.’


