About The Blog

Debate at the intersection of business, technology and culture in the world of digital identity, both commercial and government, a blog born from the Digital Identity Forum in London and sponsored by Consult Hyperion



Theoretically private

By Dave Birch posted Feb 10 2011 at 9:21 PM

The Institute for Advanced Legal Studies hosted an excellent seminar by Professor Michael Birnhack from the Faculty of Law at Tel Aviv University who was talking about "A Quest for a Theory of Privacy".

He pointed out that while we're all very worried about privacy, we're not really sure what should be done. It might be better to pause and review the legal "mess" around privacy and then try to find an intellectually-consistent way forward. This seems like a reasonable course of action to me, so I listened with interest as Michael explained that for most people, privacy issues are becoming more noticeable with Facebook, Google Buzz, Airport "nudatrons", Street View, CCTV everywhere (particularly in the UK) and so on. (I'm particularly curious about the intersection between new technologies -- such as RFID tags and biometrics -- and public perceptions of those technologies, so I found some of the discussion very interesting indeed.)

Michael is part of the EU PRACTIS research group, which has been forecasting technologies that will have an impact on privacy (good and bad: PETs and threats, so to speak). They use a roadmapping technique similar to the one we use at Consult Hyperion to help our clients plan their strategies for exploiting new transaction technologies, and it is reasonably accurate within a 20-year horizon. Note that for our work for commercial clients, we use a 1-2 year, 2-5 year, and 5+ year roadmap. No-one in a bank or a telco cares about the 20-year view, even if we could predict it with any accuracy -- and given that I've just read the BBC correspondents' informed predictions for 2011 and they don't mention, for example, what's been going on in Tunisia and Egypt, I'd say that's pretty difficult.

One key focus that Michael rather scarily picked out is omnipresent surveillance, particularly of the body (data about ourselves, that is, rather than data about our activities), with that data acted upon immediately -- but perhaps it's best not to go into that sort of thing right now!

He struck a definite chord when he said that it might be the new business models enabled by new technologies that are the real threat to privacy, not the technologies themselves. These models mean that we need to approach a number of balances in new ways: privacy versus law enforcement, privacy versus efficiency, privacy versus freedom of expression. Trying to set these balances via the courts, without first trying to understand what privacy is, may take us in the wrong direction.

His idea for working towards a solution was plausible and understandable. Noting that privacy is a vague, elusive and contingent concept, but nevertheless a fundamental human right, he said that we need a useful model to start with. We can make a simple model by bounding a triangle with technology, law and values: this gives three sets of tensions to explore.

Law-Technology. It isn't as simple as saying that law lags technology. In some cases, law attempts to regulate technology directly; in others, indirectly. Sometimes technology responds against the law (eg, anonymity tools) and sometimes it co-operates (eg, PETs -- a point on which I thought I might disagree with Michael until I realised that he doesn't quite mean the same thing as I do by PETs).

Technology-Values. Technological determinism is wrong, because technology embodies certain values (with reference to the Social Construction of Technology, SCOT). Thus, as I think repressive regimes around the world are showing, it's not enough just to have a network.

Law-Values, or in other words jurisprudence, is where courts choose between different interpretations. This is where Michael got into the interesting stuff from my point of view, because I'm not a lawyer and so I don't know the background of previous efforts to resolve tensions on this line.

Focusing on that third set of tensions, then, in summary: since Warren and Brandeis' 1890 definition of privacy as the right to be let alone, there have been many further attempts to pick out a particular bundle of rights and call them privacy. Alan Westin's 1967 definition was privacy as control: the claims of individuals, groups or institutions to determine for themselves when, how and to what extent information about them is communicated to others.

This is a much better approach than the property-right approach, in which "private" and "public", disclosed and undisclosed, are states of the data. Think about the example of smart meters, where data collected outside the home provides information about how many people are in the home, what time they are there and so on. This shows that the public/private, in/out, home/work boundaries are not useful for formulating a theory. The alternative that he put forward considers the person, their relationships, their community and their state. I'm not a lawyer so I probably didn't understand the nuances, but this didn't seem quite right to me, because there are other dimensions around context, persona, transaction and so on.

The idea of managing the decontextualisation of self seemed solid to my untrained ear and eye, and I could see how it fitted with the Westin definition of control, taking on board the point that privacy isn't property and isn't static (because it is technology-dependent). I do think that choices about identity ought, in principle, to be made on a transaction-by-transaction basis, even if we set defaults and delegate some of the decisions to our technology; the idea that different personae, or avatars, might bundle some of these choices seems practical.

Michael's essential point is, then, that a theory of privacy formulated by examining definitions, classifications, threats, descriptions, justifications and concepts around privacy from scratch will be based on the central notion of privacy as control rather than secrecy or obscurity. As a technologist, I'm used to the idea that privacy isn't about hiding data or not hiding it, but about controlling who can use it. Therefore Michael's conclusions from jurisprudence connect nicely with my observations from technology.

An argument that I introduced in support of his position during the questions draws on previous discussions around the real/virtual boundary, noting that the lack of control in physical space means the end of privacy there, whereas in virtual space it may thrive. If I'm walking down the street, I have no control over whether I am captured by CCTV or not. But in virtual space, I can choose which persona to launch into which environment, which set of relationships and which business deals. I found Michael's thoughts on the theory behind this fascinating, and I'm sure I'll be returning to them in the future.

These opinions are my own (I think) and presented solely in my capacity as an interested member of the general public.
