About The Blog

Debate at the intersection of business, technology and culture in the world of digital identity, both commercial and government, a blog born from the Digital Identity Forum in London and sponsored by Consult Hyperion


Recognising the problem

By Dave Birch posted Nov 1 2010 at 11:15 PM
[Dave Birch] An interesting series of talks at Biometrics 2010 reminded me how quickly face recognition software is improving. The current state of the art can be illustrated with some of the examples given by NIST in their presentation on testing.
  • A 1:1.6m search on a 16-core, 192 GB blade (roughly a $40k machine) takes less than one second, and search speeds continue to improve. So if you have a database of over a million people and you're checking a picture against that database, you can do it in less than a second.
  • The best-performance false non-match rate (in other words, the proportion of searches that fail to find the right person) is falling fast: in 2002 it was 20%, by 2006 it was 3% and by 2010 it had fallen to 0.3%. That's roughly an order of magnitude fall every four years, and there's no reason to suspect that it will not continue.
  • The results seem to degrade by the log of population size (so that a 10 times bigger database delivers only twice the miss rate). Rather fascinatingly, no-one seems to know why, but I suppose it must be some inherent property of the algorithms used.
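The NIST trend in those figures can be put on a back-of-the-envelope footing. The sketch below simply extrapolates "an order of magnitude every four years" from the 2010 figure; it is my illustration, not a NIST projection, and the fit to the earlier data points is only rough.

```python
# Back-of-the-envelope extrapolation of the quoted NIST best-performance
# false non-match rates: 20% (2002), 3% (2006), 0.3% (2010) -- roughly a
# tenfold improvement every four years. Illustrative only.

def projected_fnmr(year, base_year=2010, base_rate=0.003):
    """False non-match rate assuming a tenfold improvement every four years."""
    return base_rate * 10 ** (-(year - base_year) / 4)

for year in (2002, 2006, 2010, 2014):
    print(year, projected_fnmr(year))
```

Run the trend backwards and it gives 30% for 2002 against the actual 20%, which is about as close as an eyeballed exponential deserves; run it forwards and the 2014 best performance would be around 0.03%.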

We're still some way from Hollywood-style biometrics where the FBI security camera can spot the assassin in the Super Bowl crowd.

What is often overlooked is that biometric systems used to regulate access of one form or another do not provide binary yes/no answers like conventional data systems. Instead, by their very nature, they generate results that are “probabilistic”. That is what makes them inherently fallible. The chance of producing an error can be made small but never eliminated. Therefore, confidence in the results has to be tempered by a proper appreciation of the uncertainties in the system.

[From Biometrics: The Difference Engine: Dubious security | The Economist]
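The Economist's point about probabilistic answers is worth making concrete. A matcher produces a similarity score, and the system picks a threshold that trades one kind of error against the other. The toy below is my own illustration with made-up numbers, not any real matcher's API.

```python
# Toy illustration of why biometric matching is probabilistic rather than
# yes/no: the matcher returns a similarity score, and a threshold converts
# it into a decision. All scores and the threshold are made up.

def decide(score, threshold=0.8):
    """Accept or reject based on a similarity score in [0, 1]."""
    return "match" if score >= threshold else "no match"

# A genuine user on a bad day might score 0.75 (a false non-match);
# an impostor might occasionally score 0.82 (a false match).
print(decide(0.75))  # no match
print(decide(0.82))  # match
```

Moving the threshold shifts errors between false matches and false non-matches, but no threshold eliminates both, which is exactly the fallibility the Economist describes.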

So when you put all of this together, you can see that we are heading into some new territory. Even consumer software such as iPhoto has this stuff built into it.

[Image: face-rec — face recognition in iPhoto]

It's not perfect, but it's pretty good. Consumers (and suppliers) do, though, have an unrealistic idea about what biometrics can do as components of a bigger system.

But Microsoft's new gaming weapon uses "facial and biometric recognition" that creates a 3D model of a player. "It recognises a 3D model that has walked into the room and automatically logs that player in," Mr Hinton said... "It knows when they are sneakily trying to log into their older brother's account and trying to cheat the system... You can't do it. Your face is the ultimate detection for the device."

[From Game console 'rejects' under-age players | Herald Sun]

This sounds sort of fun. Why doesn't my bank build this into its branches, so that I'm recognised the moment I walk in?

There's about to be some fallout, if you ask me. Here's why: you have no control over what pictures of you other people post on the Internet. Suppose there's a picture of me in a mosque somewhere and I don't want other people to know that I am Muslim. I can resolve not to mention it on my blog and not to post pictures of myself in mosques; perhaps I might even be able to persuade my friends not to post any pictures of me at prayer. But someone I don't know, and who doesn't know me, takes a picture that has me in it and posts it on the web somewhere.

Meanwhile, someone has set their spider off crawling the Internet. My face is one of the faces loaded from LinkedIn, or our corporate web site, or a conference site, or wherever. The spider finds my face in the mosque picture and adds it to the catalogue. Now, the "secret" is out, and catalogued, and there's nothing that can be done about it. Nothing.
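The cataloguing step that spider performs is not exotic. A typical approach reduces each face to an embedding vector and compares vectors by cosine similarity. The sketch below is hypothetical: the vectors, the threshold and the "scraped from LinkedIn" framing are all illustrative, not taken from any real crawler.

```python
# Hypothetical sketch of the cataloguing step: the crawler already holds a
# face embedding for me (say, scraped from a profile photo) and compares it
# against embeddings extracted from every photo it finds. The 3-dimensional
# vectors and the 0.99 threshold are illustrative; real embeddings have
# hundreds of dimensions.
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

known_face = [0.1, 0.9, 0.3]       # embedding scraped from a profile photo
faces_in_crawled_photo = [
    [0.8, 0.1, 0.2],               # a stranger in the same picture
    [0.11, 0.88, 0.31],            # very close to the known face
]

for face in faces_in_crawled_photo:
    if cosine_similarity(known_face, face) > 0.99:
        print("catalogue hit: add this photo to the person's record")
```

Once a match clears the threshold, the photo joins the catalogue entry, and the "secret" is indexed for good.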

Leon, Mexico, a city with more than 1 million citizens, is being equipped with iris scanners to provide an increased level of biometric security in public and private areas.

The iris and face scanners being installed can analyze approximately 50 people per minute while they walk past devices. This means that it can monitor an entire room and keep a constant watch over who is present, sending identification information to relevant authorities, Singularity Hub reports.

[From Mexican city being equipped with iris scanners ID News Canada]

We should bear in mind that this technology is available to the drug cartels as well, so if they're not getting the feed from this system, they'll soon make their own. In fact, pretty much anyone will be able to have their own system like this, and they won't even have to install the cameras themselves.

A website which pays the public to monitor live commercial CCTV footage online has been launched in Devon. Internet Eyes will pay up to £1,000 to subscribers who regularly report suspicious activity such as shoplifting

[From BBC News - CCTV site Internet Eyes hopes to help catch criminals]

Remember those distributed tasks that we used to download as screensavers? Any day now we'll be able to download a Crimewatch screensaver that scans the CCTV feeds while we're not using our computers and looks for the top 10 most wanted. And debt management companies will be able to look for defaulters and the DWP will be able to look for deadbeat dads. And so on.

Unless we introduce a firm plan for online anonymity pretty soon, we're not going to have any anonymity at all. What I mean is that I cannot see any plausible roadmap that delivers offline privacy, other than wandering around all day in comedy disguises (cf. Dubai), and even those will only get us so far before voice analysis, gait analysis and the rest take their toll. The falling cost of biometrics and the exponential power of Big Brother (ie, us) not only remove privacy as a possibility, they do so in fairly short order. This may have the unexpected consequence of driving more interpersonal and corporate interaction into virtual worlds, because it is only in virtual worlds that the technology available in any reasonable timescale can deliver individual privacy.

One could imagine a flight to virtual communities, where mathematics (in the form of cryptography) provides a defence against crime and disorder that the metal barriers of a gated community cannot.

[From Digital Identity: Why virtual identities are real to some of us]

Which makes it all the more of a priority that any framework that we develop to manage identities in cyberspace is centred on privacy.

These opinions are my own (I think) and are presented solely in my capacity as an interested member of the general public [posted with ecto]
