
You are the passport

‘Ticketless travel’, whereby individuals could board a plane without showing any form of identification, may become a reality as airports turn to imaging and machine vision technologies to automate passenger identification and improve security. However, the biggest stumbling block to ticketless travel may not be the technology (although what is currently installed is certainly inadequate) but government regulation.

In January, London Gatwick airport in the UK updated its end-to-end biometric solution to MFlow Track v3.0 from Human Recognition Systems, which uses iris recognition to identify individuals from a distance, from the moment they check in until they board their flight.

The system uses machine vision technology to take a photograph of the passenger’s iris, which is then stored on the system. ‘The iris, out of all the commercially available biometrics, is the most individual. So it is a very strong authentication of a person’s identity,’ said Jim Slevin, aviation managing director for Human Recognition Systems in the UK. Separate images are then taken of the passenger from a distance, while they are in the departure lounge and as they are about to board the flight, to ensure that it is the same individual. ‘It is very passenger-friendly. The camera finds the eye, instead of the eye finding the camera,’ explained Slevin. ‘For an environment where you are putting very high numbers of passengers through, you haven’t got time to explain to them how to use a system. It has to be completely automated, and it has to be fast,’ he added. ‘You also can’t have somebody trying to tilt a camera to find the reflection in order to be able to capture the eye, as in previous generations of technology. It just would not work in that circumstance.’

In addition to the iris, airports commonly use the face or the hand as features for recognition. To identify a person using biometric information, a picture, whether it is of the hand, face, or iris, is first converted into a template. ‘Once the image is captured, through either an open standard or proprietary algorithm, it is converted into some sort of template,’ Slevin outlined. When a second image is taken, the templates of both images are then compared. ‘When you re-present whatever biometric you’ve used, it will again capture and convert the image into the template, and then it will compare template to template,’ added Slevin.
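In outline, the template-to-template comparison Slevin describes might look something like the following Python sketch. The extract_template function, its bit length and the matching threshold are illustrative assumptions, not Human Recognition Systems’ actual algorithm; real systems use proprietary or standardised iris-encoding methods.

```python
# Minimal sketch of template-to-template biometric matching: convert each
# captured image into a fixed-length template, then compare templates rather
# than images. The feature extractor below is a toy stand-in.
import numpy as np

def extract_template(image: np.ndarray, bits: int = 256) -> np.ndarray:
    """Convert a greyscale image into a fixed-length binary template."""
    flat = image.astype(float).ravel()
    regions = np.array_split(flat, bits)
    means = np.array([r.mean() for r in regions])
    # Threshold each region against the global mean -- a stand-in for a real iris code.
    return (means > flat.mean()).astype(np.uint8)

def match(template_a: np.ndarray, template_b: np.ndarray,
          threshold: float = 0.32) -> bool:
    """Compare two templates by normalised Hamming distance."""
    distance = np.count_nonzero(template_a != template_b) / template_a.size
    return distance < threshold

# Enrolment: capture at check-in and store only the template, not the image.
rng = np.random.default_rng(0)
eye = rng.integers(0, 255, (64, 64))
enrolled = extract_template(eye)

# Verification: a second, slightly noisy capture of the same eye at the gate
# is converted to a template and compared template to template.
second_capture = np.clip(eye + rng.normal(0, 5, eye.shape), 0, 255)
print('match:', match(enrolled, extract_template(second_capture)))
```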

eGates, which are automated imaging systems used at border control to verify passengers against their passports, have been appearing in more and more airports over recent years. However, many airports are opting for cheaper systems based on CCTV technology rather than machine vision, which is leading to longer queuing times, according to Mark Williamson, director of corporate market development at Stemmer Imaging and chairman of the UK Industrial Vision Association (UKIVA): ‘We [UKIVA] wrote to the UK Border Agency to say “you are using the wrong technology here; you’re using CCTV, and in this case it should be machine vision.”’

CCTV technology is not as fast as machine vision for identification, because CCTV cameras need to automatically adjust to the changing environment in order to provide a good quality picture, explained Williamson: ‘You’ll have auto-iris lenses: so as it gets dark, the iris opens up and gets brighter; if the sun suddenly gets bright, the image will not be of a very good quality for a few seconds as it adjusts itself to get it right.’ This is not practical when checking passports, as queuing times increase because the system has to constantly re-adjust for variations between the appearance of passengers and the lighting. ‘It adjusts when you get people with different coloured skin, or at different times of day, for example; it takes time. The person has to wait for, say, 15 seconds while it keeps trying to adjust itself.’

With machine vision, a series of different settings can be pre-set and images captured in quick succession. ‘We can pre-set five different image levels, grab them all immediately one after another – so five images in 0.1 of a second – and then the computer can go through and pick the best image,’ said Williamson.
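A bracketed-capture step of the kind Williamson describes could be sketched as follows; the camera.capture call and the quality metric are hypothetical placeholders for whatever camera API and scoring a real system would use.

```python
# Sketch of bracketed capture: grab one frame per pre-set exposure level back
# to back, then let software pick the best frame rather than waiting for an
# auto-iris lens to settle.
import numpy as np

PRESET_EXPOSURES_US = [500, 1000, 2000, 4000, 8000]  # five pre-set levels

def frame_quality(frame: np.ndarray) -> float:
    """Score a frame: reward contrast, penalise clipped highlights and shadows."""
    frame = frame.astype(float)
    clipped = np.mean((frame < 5) | (frame > 250))   # fraction of clipped pixels
    return frame.std() * (1.0 - clipped)

def capture_best_frame(camera) -> np.ndarray:
    """Capture one frame per preset exposure and return the highest-scoring one."""
    frames = [camera.capture(exposure_us=e) for e in PRESET_EXPOSURES_US]
    return max(frames, key=frame_quality)
```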

It is a lack of understanding of the technology that is causing CCTV to be used at eGates, according to Williamson. ‘The people that are building these systems are coming from the security end of the market, and they do not understand the challenges and the available technology.’ Cost is a factor influencing many decisions, he pointed out: ‘They will look at a machine vision camera and say: “That’s way too expensive, I’m going to use a CCTV camera.” And, a CCTV camera might be £100 to £200, and a machine vision camera might cost £1,000. But how much is the exterior − £10,000 or £15,000? With machine vision you could double the throughput.’

However, because passport images are taken using CCTV cameras, there is an argument as to the actual benefit that machine vision adds. ‘If a solution is comparing against an image that came from relatively poor capture in the first place, the value that [machine vision] adds at that stage is lower,’ according to Human Recognition Systems’ Slevin. However, if both images are taken using machine vision, the verification process will be much faster. ‘In eGates that we have deployed, which are not for e-passport verification but for identification through the airport departure process, it does add real significant value [to use machine vision], because the only thing we compare it to is the image we captured with machine vision in the first place,’ Slevin pointed out.

The MFlow Track system upgraded at London Gatwick airport in January was part of a self-service bag drop and automated boarding trial, in which passengers check in and deposit hold luggage through self-service bag drops. The trial illustrates how airports are moving ever further towards automating the verification process, and was a step towards Human Recognition Systems’ vision of ‘ticketless travel’. The goal is that passengers will no longer need to carry travel documents because their identities will be verified by machine vision from the moment the ticket is bought to when the passenger boards the aircraft.

‘One of our visions is the ability for an individual to travel without any forms of identification other than themselves − so ultimately no passports, no more barcodes or other types of tickets,’ said Slevin.

For this to happen, the imaging technology and its enabling communication infrastructure need both to fall in price and improve in performance, explained Slevin: ‘All of those technologies, the platform that it’s on, and the capturing technology, has to become reduced in cost and increased in performance.’ And it is the consumer market that will drive down cost and allow for improved recognition technologies. ‘The biggest difference that’s happened in the video world over the last ten years or so has been the move by consumers to digital cameras − because what we’ve seen is that the quality of digital images has increased enormously,’ said Slevin. ‘There doesn’t seem to be any slowdown in that pace − it’s still improving all the time. It is that that helps to drive us to pervasive captures of identities.’

However, it is not always issues of cost or the state of the technology, but standards and regulations that have to be addressed in order to move forwards. Currently, airports follow a threat-based model for security, meaning that every passenger is given the same level of screening. ‘The security that is being deployed [in airports] is universal − everybody gets the same treatment,’ explained Slevin.

‘There is a limited degree of randomness, but essentially everybody’s bag is screened in the same way, the passenger goes through the same metal detector and also goes through the same body scanner.’ However, according to Slevin, there is a push to move towards a risk-based model, whereby the technology would differentiate passengers based on the risks they pose: ‘There is an industry desire to move from a threat-based model to a risk-based model for security. Imaging technology could then be deployed to identify individuals and then base a risk assessment associated with that individual. Then, [the airport] can modify the screening for you as an individual.’

To change to this approach, regulations have to be understood and changed, according to Slevin: ‘The trend towards the risk model is not necessarily technology-led at this point in time. In order to move to a risk-based screening approach, the hurdles you have to overcome to begin with are more security and political.’

The security models of both the Transportation Security Administration (TSA) in the USA and the European Commission are primarily threat-based, and switching to a risk-based approach will be a long process, according to Slevin: ‘Both of those models are predominantly based on the notion of threat detection. From a political point of view, you’ve got 28 European states that would have to agree to move away from the threat-based model to a risk-based one,’ he said. ‘So there is a huge political piece that’s got to happen prior to anybody doing any more thinking about, or trialling, the deployment mechanisms of this.’

Furthermore, the risk-based model would need to be properly defined in order to decide the right technologies to use. ‘It would have to be agreed what the risk-based model actually is, before we can then get down into the logistics of how it would work, and therefore what the concepts of operation are,’ Slevin pointed out. ‘The regulatory aspects have to be understood before you can get down to finalising decisions on enabling technologies.’



In restricted areas in airports, analysis of behaviour is commonly used to differentiate between ordinary and suspicious activity.

On airfields, vehicles are commonplace, whether for maintenance or for collecting luggage; however, it is important to be able to recognise when a vehicle deviates from its usual activity, explained Jim Slevin, aviation managing director for Human Recognition Systems. ‘It may be legitimate for [the vehicle] to be travelling in one direction along a line, but in cases where it is travelling in the wrong direction, the [MFlow Track] system will alert security,’ he said.

‘The clever part is to try and work out, “is that behaviour matching a specific pattern?” The analytics have to be able to tell the difference, and they have to be directional.’ Another example is to be able to identify and differentiate between normal and suspicious objects. ‘If you have a person walking or crawling over an area you would want detection. But if it is wildlife, you don’t want the system to alarm all the time because it becomes a nuisance and people will just turn the system off,’ Slevin said.
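A directional rule with class filtering, along the lines Slevin describes, might be sketched as below. The track format, class labels and thresholds are illustrative assumptions, not the vendor’s actual analytics.

```python
# Sketch of a directional tripwire with class filtering: crossing the line is
# legitimate in one direction only, and detections classified as wildlife are
# suppressed so the system does not alarm constantly.
import numpy as np

ALLOWED_DIRECTION = np.array([1.0, 0.0])      # permitted travel along the line
IGNORED_CLASSES = {"bird", "fox", "rabbit"}   # wildlife: suppress alarms

def should_alert(track_points: list[tuple[float, float]], label: str) -> bool:
    """Alert when a non-wildlife object moves against the allowed direction."""
    if label in IGNORED_CLASSES or len(track_points) < 2:
        return False
    start, end = np.array(track_points[0]), np.array(track_points[-1])
    motion = end - start
    if np.linalg.norm(motion) < 1e-6:          # stationary object: no alert
        return False
    heading = motion / np.linalg.norm(motion)
    # A negative dot product means travel against the permitted direction.
    return float(heading @ ALLOWED_DIRECTION) < 0.0

# A vehicle driving the wrong way along the service lane triggers an alert;
# a bird moving along the same path does not.
print(should_alert([(50.0, 10.0), (30.0, 10.0)], "vehicle"))  # True
print(should_alert([(50.0, 10.0), (30.0, 10.0)], "bird"))     # False
```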

‘The advanced nature of it means that [the system] has to learn and understand what is going on in the environment and be able to dial those factors out.’

 

Robotic surveillance

An airport is a huge area that needs to be constantly monitored to maintain security. A new optical pointing system, the RobotEye from Ocular Robotics, has the potential to replace several camera systems and still cover the same area. The robotic system, which uses optical components from Edmund Optics, is able to move a camera’s field of vision at high speed, while the camera itself remains stationary.

‘We’ve been able to remove the bulk of the weight you need to move to redirect the view of the camera,’ said Mark Bishop, CEO of Ocular Robotics. ‘That means we can point to six to 10 separate locations within a second − so from a single camera system you can create multiple camera feeds.’

Once the positions are set, the network operator can then manage the scenes depending on what is going on in each area. ‘You can add positions, reduce the number of positions, change the priority − so if something interesting happens in one of the views, the system is then able to visit that location more often to increase the rate at which you’re getting information about that scene,’ added Bishop.

By using analytical software with the system, it is possible to track several suspicious objects in separate scenes at the same time. ‘You’re free to use the capabilities of the analytics on the back end of the system to move the views around and potentially monitor several moving targets at the same time.’
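One way to picture the priority-driven scheduling Bishop describes is the sketch below: a single fast pointing head shares its time across several preset views, and views where the analytics have flagged something interesting are visited more often. The ViewScheduler class and the commented-out hardware calls are hypothetical, not Ocular Robotics’ API.

```python
# Sketch of priority-weighted view scheduling for one fast pointing head:
# several preset views share a single camera, and higher-priority views are
# revisited more often.
import random

class ViewScheduler:
    """Share one pointing head across several named preset views."""

    def __init__(self):
        self.priorities = {}   # view name -> relative weight

    def add_view(self, name, priority=1):
        self.priorities[name] = priority

    def set_priority(self, name, priority):
        self.priorities[name] = priority

    def next_view(self):
        """Pick the next view to visit, weighted by priority."""
        names = list(self.priorities)
        weights = [self.priorities[n] for n in names]
        return random.choices(names, weights=weights, k=1)[0]

scheduler = ViewScheduler()
for view in ('taxiway', 'fence_north', 'stand_23'):
    scheduler.add_view(view)

# If analytics flag activity on the north fence, visit that view more often.
scheduler.set_priority('fence_north', 5)

for _ in range(10):
    view = scheduler.next_view()
    # point_head(view); frame = grab_frame()   # hypothetical hardware calls
    print('visiting', view)
```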

Bishop referred to an incident that occurred at Sydney airport in 2009, in which a group of motorcycle gang members invaded the airport and killed a male passenger. ‘In the end, they couldn’t convict anyone because they moved out of the view of the cameras,’ explained Bishop. ‘With our system, you can continue to move the view and follow; the view would be continually moved to follow that area of interest, rather than disappearing out of view as soon as it left that camera.’
