The ABC of PCBs

Everyone is fallible – you, me and the guy who checks the electronics in your car. But few mistakes could be as potentially fatal as a failure to spot a short circuit in the ABS (anti-lock braking system) of a motor vehicle.

Human inspection is notoriously unreliable – especially when detecting errors in such small components. The human attention span lasts roughly 20 minutes, and at most 40, after which accuracy takes a rapid dive. And it is a job no one would envy: spending hour after hour in front of a projection of the magnified circuit, looking for the tiniest of faults.

Thankfully, most companies are now using imaging systems to carry out inspection. They are better than the perfect employee – working at one hundred per cent concentration day after day, with no distractions. In addition, they are faster, and their results are repeatable: unlike humans, different processing software is unlikely to disagree about what counts as a defect.

A Flir camera used to inspect electrical components

In the high-precision, sterile environment of printed circuit board (PCB) manufacturing, they also prove to be much cleaner than humans: according to Gunnar Jonson, director of product marketing at JAI, human operators are the ‘biggest pollutant’ on the factory floor, carrying dirt and dust on their shoes and clothes.

Within PCB inspection, possibly the largest application area in this industry, imaging systems are used throughout the entire process. Before production, vision systems check that leads and vias are intact on the unpopulated board, and during manufacturing, cameras guide robotic arms to manipulate components and place them in the correct position on the circuit board. Finally, a last inspection checks for short circuits due to soldering, and that components are still in place.

Scaling down to even smaller sizes, similar processes are also used in the semiconductor industry to produce microchips, where cameras determine the quality of the wafer before production, by checking for impurities.

The cameras are also used to guide robotic arms in the high-precision processes during production. Large wafer sheets are diced into small chips, and cameras control the alignment to make sure they are cut in the right places. After production, cameras place the chip into the final package, and attach wires as thin as human hairs to provide communication with the other components.

A number of technological advances have allowed these applications to flourish, not least the availability of affordable, high-resolution cameras, which can detect defects that would otherwise be invisible to the human eye. Often, one development leans on another, and in this case higher lens quality was necessary to zoom in on components. This, in turn, has allowed smaller microchips to be developed.

‘PCB boards and microchips are a lot more compact, with smaller details,’ says Jonson. ‘By having finer details, you can have a finer envelope on the PCB, which improves yield for the manufacturer.’ Obviously there is a price, and in this case it is the speed at which the images can be captured and processed.

Jonson also believes that three-chip colour cameras and GigE Vision will have a big impact in the near future. GigE Vision could increase the already-high throughput of images and eliminate the need for costly frame grabbers.

This is not a view shared by everybody: GigE Vision is a relatively new technology, and is not in wide use. Mike Phillips, CEO of Envisage Systems, says: ‘It has not had an impact yet. We use FireWire, with the advantage of real-time control of the cameras.’

One of the advantages of FireWire is that it allows easy manipulation of different lighting schemes for the detection of different types of defects. Illumination itself has developed in recent years, with LED lighting proving to be the most cost-effective and reliable solution available.

Lighting was particularly important when Envisage took on a contract with BI Technologies to inspect glass-covered components that suffered from a large amount of reflection. To combat this, the solution used up to 12 cameras associated with different lighting schemes to cover every possible angle.

Phillips believes that the development of smart cameras is much more relevant than a change of interface – a view that is far from universal. Envisage traditionally used PC-based systems, but smart cameras have everything on board the camera itself, giving a much more compact setup.

‘It certainly simplifies jobs, but has not necessarily made them cheaper,’ says Phillips. ‘The systems literally consist of an IO box connected to a camera, with no need for a computer or rack.’ Again, there is a price: although smart cameras still offer very high throughput, they are slower than PC-based systems.

However, rather than the components, it seems to be the software and processing side of applications that have undergone the greatest development. Robert Ringe, vice president of OSC USA, believes pattern recognition has increased massively in speed, greatly increasing its importance in many applications.

Pattern recognition is normally used in more aesthetic applications than the electronics industry, to check the patterns on ceramic tiles, for example. However, with this increase in power and speed, OSC has created a system that automates the placement of disk drive heads, determining their position and orientation by analysing ‘strange-looking, asymmetric patterns’.

Automation is a lot faster than manual methods, which has led to a lower price for the disk heads. In addition, it has meant they can be made much smaller than was otherwise possible, resulting in greater memory density.

OSC has also taken advantage of improvements in optical character recognition (OCR). Once the heads are placed and glued they are checked and given a serial number, determined by their quality. A machine will then pick up the heads, read the number and sort them into 16 trays based on this. ‘OCR has improved tremendously – we’ve come to rely on it,’ said Ringe.

The software used to program automated systems is vitally important to their success: a computer can only detect a defect if it is told how to do so. As Earl Yardley, director of Industrial Vision Systems, says: ‘The accuracy is only as good as the limits that are applied.’ Most suppliers would expect to fine-tune the code after it has been put in use, to cover defects that occur in such small quantities that they had been previously overlooked.
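Yardley's point about programmed limits can be sketched in a few lines. The check below is purely illustrative (a hypothetical rule, not any vendor's actual software): a greyscale board image is flagged as defective when too many pixels fall below a tuned intensity limit – and both thresholds are exactly the kind of limits an engineer would refine after deployment.

```python
import numpy as np

def flag_defect(image: np.ndarray, dark_threshold: int = 60,
                max_defect_pixels: int = 25) -> bool:
    """Flag an 8-bit greyscale patch as defective if too many pixels
    fall below the programmed intensity limit (e.g. a solder void).
    Both parameters are the 'limits' that must be tuned in the field."""
    defect_pixels = int(np.sum(image < dark_threshold))
    return defect_pixels > max_defect_pixels

# A uniform bright patch passes...
clean = np.full((100, 100), 200, dtype=np.uint8)
print(flag_defect(clean))   # False

# ...while the same patch with a dark blob is flagged.
flawed = clean.copy()
flawed[40:50, 40:50] = 10   # a 10x10 dark region: 100 pixels
print(flag_defect(flawed))  # True
```

A rare defect type that darkens only a handful of pixels would slip through until `max_defect_pixels` is lowered – which is why suppliers expect to fine-tune after deployment.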

A modular approach to programming, such as that provided by LabView from National Instruments, is proving to be increasingly popular, providing an easy, graphical interface to integrate parts and program them to communicate. Ian Bell, technical marketing manager at National Instruments, says that an automated system for scribing wafers took just three man-years to develop using LabView, as opposed to 20 man-years by more traditional methods.

A vast increase in processing power has made many things possible. Almost everyone has benefited from the battle between Intel and AMD to produce greater processing speeds. Yet, potentially, the biggest development comes in the form of parallel computing. Image-processing algorithms are numerically intensive and require a lot of processing power; parallel processing would allow different tasks to be run simultaneously on different processors.
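The idea can be sketched roughly as follows (using Python's standard thread pool for portability; a real inspection system would farm the strips out to separate processor cores): split the image into strips and let each worker run the numerically intensive step on its own strip at the same time.

```python
from concurrent.futures import ThreadPoolExecutor
import numpy as np

def strip_score(strip: np.ndarray) -> float:
    # Stand-in for a numerically intensive per-strip algorithm,
    # e.g. a filter pass or a defect-scoring routine.
    return float(strip.mean())

def process_in_parallel(image: np.ndarray, workers: int = 4) -> list:
    strips = np.array_split(image, workers)         # divide the image
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(strip_score, strips))  # run concurrently

image = np.arange(64, dtype=float).reshape(8, 8)
print(process_in_parallel(image))   # [7.5, 23.5, 39.5, 55.5]
```

The appeal for vision suppliers is that the per-strip routine does not change; only the orchestration around it does.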

It is something of which both National Instruments and Envisage are keen to take advantage. ‘Using LabView, a graphical interface, [parallel programming] is easy, almost natural,’ says Bell. ‘It is a challenge for suppliers to apply this and use it, but it is a challenge NI is well set up to address. It used to be that the processor wasn’t fast enough – now the problem is building the software to take advantage of the fast processors.’

If it seems obvious to you that the biggest benefit of these improvements will be faster, more efficient inspection, you would be mistaken. Instead, the increased power will be used to provide multiple checks using different techniques, further improving the repeatability and accuracy of the inspection – something that any driver will be grateful for the next time they put on the brakes on an icy road.

X-ray imaging is now being used to find breaks in silicon chips as well as bones. The BedeScan, from Bede X-ray Metrology, provides images of large silicon wafers to highlight defects that would cause shattering in the high temperatures of the furnace during manufacturing.

According to David Jacques, BedeScan product manager: ‘The output is an image that requires very, very simple interpretation – like looking at a photograph of a broken bone.’

Defects under the surface of the wafer cannot be detected with the usual optical techniques, so x-ray diffraction is used instead. The cost of the defects is not just the wasted silicon: once a wafer has shattered, the furnace has to be shut down and cleaned, significantly slowing down the process.

The requirements for x-ray diffraction are very different from visible imaging. The detected light levels are much lower – sometimes the system is literally counting photons – so the signal-to-noise ratio must be very high, and the readout must be as fast as possible.
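Why photon-starved imaging is so demanding follows from Poisson statistics – a standard result, not something specific to the BedeScan: if a pixel collects N photons, the shot noise is √N, so the best achievable signal-to-noise ratio is √N. Doubling the SNR therefore costs four times the photons, which is why low detector noise and efficient readout matter so much at low counts.

```python
import math

def shot_noise_snr(photon_count: int) -> float:
    # Photon arrivals follow Poisson statistics, so the noise on a
    # count of N is sqrt(N); the shot-noise-limited SNR is N/sqrt(N).
    return photon_count / math.sqrt(photon_count)

print(shot_noise_snr(100))     # 10.0  -- 100 photons give an SNR of only 10
print(shot_noise_snr(10_000))  # 100.0 -- 100x the photons for 10x the SNR
```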

X-ray cameras have not developed as quickly as visible cameras, so there is a much smaller range of resolutions available. Consequently, scanners can’t adapt the resolution to the area of wafer that needs to be scanned, and the level of detection necessary.

For example, to simply locate defects in a 30cm-wide wafer, a resolution of 100µm would be ample, in around 3,000 separate steps. However, once defects are detected, it would be preferable to concentrate on the small areas around them, in higher resolution, to provide more detail.
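The figure of around 3,000 steps follows directly from the geometry: the 30cm scan width divided by the 100µm sampling step.

```python
wafer_width_um = 30 * 10_000   # 30 cm expressed in micrometres
resolution_um = 100            # sampling step of the scanner
steps = wafer_width_um // resolution_um
print(steps)   # 3000
```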

Advances are being made, with the expectation that these cameras will be ready for sale in around two years.

The BedeScan uses x-ray diffraction to detect defects in wafers

