The "hacking" of iris recognition has caused a flurry of news stories in recent days, ranging from the technology-focused media, all the way up to the BBC. The stories are based on a recent Black Hat conference paper that claims iris images have been reconstructed from iris templates (IrisCodes) and used to carry out an attack on a commercial iris recognition system, with a success rate of around 80%. The inference is that iris recognition is no longer as secure as once believed.
This development, according to the paper's authors (the Biometric Recognition Group-ATVS at the Universidad Autonoma de Madrid, and researchers at West Virginia University), is significant because it had been assumed that the IrisCode did not contain enough information to allow the reconstruction of a workable iris.
"Not so," says John Daugman, Professor of Computer Vision and Pattern Recognition at Cambridge, who developed and patented the first algorithm for iris recognition, which remains in widespread use worldwide. (Although he does believe this news will be a wake-up call to some manufacturers whose literature may claim that an iris image cannot be reconstructed from an IrisCode…)
Daugman says the vulnerability in question, which uses an iterative process to reconstruct a workable iris image from an iris template relatively quickly, is a classic "hill-climbing" attack, a known vulnerability of all biometrics.
Daugman told Planet Biometrics: "I think that the primary vulnerability is the disclosure of an IrisCode template, which this attack depends upon completely. Of course if such an IrisCode template can be obtained, then it could be used directly in a digital attack. There would be no advantage in first converting it back into an image, and then launching an analogue attack using that image."
Daugman continued: "This attack also depends on having the ability to generate an IrisCode template from an image, and to do so repeatedly and iteratively. This is only possible with access to the encoding algorithm or to a device which implements it."
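The iterative process Daugman describes can be illustrated with a toy sketch. This is not the researchers' method: the function names are hypothetical, the 256-bit template is far shorter than a real 2048-bit IrisCode, and the `encode` step here is just the identity on a bit vector, standing in for the black-box encoder (such as a vendor SDK) that a real attack would need to call repeatedly. The core loop, however, shows the hill-climbing idea: mutate a candidate, keep the mutation if it moves the generated code closer to the target template, and revert it otherwise.

```python
import random

random.seed(0)

CODE_LEN = 256  # toy length; real IrisCodes are 2048 bits

def hamming(a, b):
    """Fraction of disagreeing bits between two bit vectors."""
    return sum(x != y for x, y in zip(a, b)) / len(a)

def encode(image):
    """Stand-in for a black-box IrisCode encoder.
    Here it is simply the identity, purely for illustration;
    a real attack would call the actual encoding algorithm."""
    return image

def hill_climb(target, threshold=0.32, max_iters=20000):
    """Greedy hill-climb toward a target template.
    0.32 is a commonly cited IrisCode decision threshold."""
    candidate = [random.randint(0, 1) for _ in range(CODE_LEN)]
    best = hamming(encode(candidate), target)
    for _ in range(max_iters):
        if best < threshold:
            break  # candidate now matches the template
        i = random.randrange(CODE_LEN)
        candidate[i] ^= 1  # flip one bit of the candidate
        score = hamming(encode(candidate), target)
        if score < best:
            best = score       # keep an improving mutation
        else:
            candidate[i] ^= 1  # revert a worsening mutation
    return candidate, best

# A leaked template is the attack's starting point, as Daugman notes.
target = [random.randint(0, 1) for _ in range(CODE_LEN)]
reconstructed, dist = hill_climb(target)
print(f"final Hamming distance: {dist:.3f}")
```

Because the loop only ever needs a match score, not the encoder's internals, any system that lets an attacker submit candidates and observe scores repeatedly is exposed; this is also why, as Daugman notes, the result is specific to the algorithm being queried.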
This is what the researchers did, using the VeriEye algorithm from Neurotechnology. However, most iris recognition algorithm developers do not openly give access to the SDK required to perform such a task, and as Daugman notes: "The result will be specific to that algorithm."
Perhaps, then, this will be an interesting dilemma for Neurotechnology, which has of course made its successful algorithm publicly available for several years.
So if a hill-climbing attack is possible, and the attack doesn't really surprise industry experts, then what does this mean for iris recognition?
According to Daugman: "I think the key is to maintain cryptographic security on IrisCode templates."
Of course, as Daugman told Planet Biometrics, it is important to remember that the analogue image of a person's eye is not really a secret in the first place, albeit quite difficult to obtain. He commented: "In countries whose populations tend to have very darkly pigmented irises (such as India), it is somewhat difficult to capture a good iris image surreptitiously using conventional cameras; rather, NIR (near-infrared) illumination and NIR cameras are required."
Artificial or Alive?
Of course, on top of cryptographic security there is the major issue of artifice detection. Most higher-quality iris recognition systems employ countermeasures against spoof attacks to detect whether they are being presented with a live eye, or, in this case, a piece of paper with an image on it.
The industry freely admits that the business of countermeasures against "spoofing" represents the classic arms race, so often played out by security system manufacturers and hackers.
At least, it seems, a well-designed modern system would be unlikely to accept the sort of image described in the research presented.