About The Blog

Debate at the intersection of business, technology and culture in the world of digital identity, both commercial and government, a blog born from the Digital Identity Forum in London and sponsored by Consult Hyperion

License

  • Creative Commons

    Attribution Non-Commercial Share Alike

    This work is licensed under a Creative Commons Attribution - Noncommercial - Share Alike 2.0 UK: England & Wales License.

    Please note that by replying in this Forum you agree to license your comments in the same way. Your comments may be edited and used but will always be attributed.

91 posts categorized "Privacy and Security"

What do they want us to do?

By Dave Birch posted May 26 2011 at 8:08 AM

What do the politicians, regulators, police and the rest of them want us (technologists) to do about the interweb tubes? It might be easier to work out what to do if we had a clear set of requirements from them. Then, when confronted with a problem such as, for example, identity theft, we could build systems to make things better. In that particular case, things are currently getting worse.

Mr Bowron told the MPs this week that although recovery rates were relatively low, the police detection rate was 80 per cent. However, the number of cases is rising sharply with nearly 2m people affected by identity fraud every year.

[From FT.com / UK / Politics & policy - MP calls cybercrime Moriarty v PC Plod]

So, again, to pick on this particular case, what should be done?

Mr Head also clarified his position on the safety of internet banking, insisting that while traditional face-to-face banking was a better guarantee against fraud, he accepted that society had moved on. “If you take precautions, it’s safe,” he said.

[From FT.com / UK / Politics & policy - MP calls cybercrime Moriarty v PC Plod]

Yet I remember reading in The Daily Telegraph (just googled it: 20th November 2010) a story about an eBay fraud perpetrated by fraudsters who set up bank accounts using forged identity documents, so face-to-face (FTF) account opening does not, as far as I can see, mean any improvement in security at all. In fact, I'm pretty sure that it is worse than nothing, because people are easier to fool than computers. I would argue that Mr. Head has things exactly wrong here, because an integrated identity infrastructure should not discriminate between FTF and remote transactions.

I think this sort of thing is actually representative of a much bigger problem around the online world. Here's another example. Bob Gourley, the former CTO of the U.S. Defense Intelligence Agency, poses a fundamental and important question about the future identity infrastructure.

We must have ways to protect anonymity of good people, but not allow anonymity of bad people. This is going to be much harder to do than it is to say. I believe a structure could be put in place, with massive engineering, where all people are given some means to stay anonymous, but when a certain key is applied, their cloak can be peeled back. Hmmm. Who wants to keep those keys

[From A CTO analysis: Hillary Clinton's speech on Internet freedom | IT Leadership | TechRepublic.com]

So, just to recap, Hillary says that we need an infrastructure that stops crime but allows free assembly. I have no idea how to square that circle, except to say that prevention and detection of crime ought to be feasible even with anonymity, which is the most obvious and basic way to protect free speech, free assembly and whistleblowers: it means doing more police work, naturally, but it can be done. By comparison, "knee jerk" reactions, attempting to force the physical world's limited and simplistic identity model into cyberspace, will certainly have unintended consequences.
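Just to make concrete what Gourley is describing, here is a minimal sketch, in Python using the pyca/cryptography library, of a pseudonym with a "cloak that can be peeled back": the real identity travels only as ciphertext under a key held by some escrow authority, so ordinary relying parties see the pseudonym and nothing else. The structure and names are my own illustrative assumptions, not anyone's actual design, and the sketch makes Gourley's closing question rather vivid, since everything hinges on who holds that escrow key.

    # Sketch of "revocable" pseudonymity: the pseudonym circulates in the clear,
    # the real identity only as ciphertext under an escrow key. Illustrative only.
    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives import hashes

    escrow_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    escrow_public = escrow_private.public_key()    # held by the courts, say

    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    def issue_credential(real_name: str, pseudonym: str) -> dict:
        """Bind a pseudonym to a sealed (encrypted) copy of the real identity."""
        return {"pseudonym": pseudonym,
                "sealed_identity": escrow_public.encrypt(real_name.encode(), oaep)}

    def peel_back_cloak(credential: dict) -> str:
        """Only the holder of the escrow private key can do this."""
        return escrow_private.decrypt(credential["sealed_identity"], oaep).decode()

    cred = issue_credential("Dave Birch", "John Q. Doe")
    # Relying parties see only cred["pseudonym"]; an authority with a warrant
    # (and the escrow key) can call peel_back_cloak(cred) to reveal the rest.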

Facebook's real-name-only approach is non-negotiable – despite claims that it puts political activists at risk, one of its senior policy execs said this morning.

[From Facebook's position on real names not negotiable for dissidents • The Register]

I've had a Facebook account for quite a while, and it's not in my "real" name. My friends know that John Q. Doe is me, so we're linked and can happily communicate, but no-one else does. Which suits me fine. If my real name is actually Dave bin Laden, Hammer of the Infidel, but I register as John Smith, how on Earth are Facebook supposed to know whether "John Smith" is a "real" name or not? Ludicrous, and just another example of how broken the whole identity realm actually is.

For Facebook to actually check the real names, and then to accept the liabilities that will inevitably result, would be expensive and pointless even if it could be achieved. A much better solution is for Facebook to help with the construction and adoption of a proper digital identity infrastructure (such as NSTIC, for example) and then use it.

The implementation of NSTIC could force some companies, like Facebook, to change the way it does business.

[From Wave of the Future: Trusted Identities In Cyberspace]

That's true, but it's a good thing, and it's good for Facebook as well as for other businesses and society as a whole. So, for example, I might use a persistent pseudonymous identity given to me by a mobile operator, say Vodafone UK. If I use that identity to obtain a Facebook identity, that's fine by Facebook: they have a certificate from Vodafone UK to say that I'm a UK citizen or whatever. I use the Vodafone example advisedly, because it seems to me that mobile operators would be the natural providers of these kinds of credentials, having both the mechanism to interact FTF (shops) and remotely, as well as access to the SIM for key storage and authentication. Authentication is part of the story too.

But perhaps the US government’s four convenient “levels of assurance” (LOAs), which tie strong authentication to strong identity proofing, don’t apply to every use case under the sun. On the recent teleconference where I discussed these findings, we ended up looking at the example of World of Warcraft, which offers strong authentication but had to back off strong proofing.

[From Identity Assurance Means Never Having To Say “Who Are You, Again?” | Forrester Blogs]

Eve is, naturally, absolutely right to highlight this. There is no need for Facebook to know who I really am if I can prove that Vodafone know who I am (and, importantly, that I'm over 13, although they may not be for much longer given Mr. Zuckerberg's recent comments on age limits).
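As a sketch of what "proving that Vodafone know who I am" might look like in practice, the fragment below (Python, using the pyca/cryptography library) has a hypothetical operator sign an attribute assertion, a pseudonym plus "over 13", which a relying party such as Facebook can verify against the operator's public key without ever learning the real name behind the pseudonym. The field names and the JSON structure are illustrative assumptions on my part, not any operator's actual interface.

    # Sketch: an operator attests to attributes of a pseudonym; a relying party
    # verifies the attestation. Hypothetical structure, for illustration only.
    import json
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    operator_key = Ed25519PrivateKey.generate()    # held by "Vodafone UK"
    operator_public = operator_key.public_key()    # published to relying parties

    assertion = json.dumps({"pseudonym": "John Q. Doe",
                            "over_13": True,
                            "resident": "UK"}, sort_keys=True).encode()
    signature = operator_key.sign(assertion)       # the credential

    # The relying party ("Facebook") checks the operator's signature and learns
    # the attributes -- but not who John Q. Doe really is.
    try:
        operator_public.verify(signature, assertion)
        print("attributes accepted:", json.loads(assertion))
    except InvalidSignature:
        print("credential rejected")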

These opinions are my own (I think) and presented solely in my capacity as an interested member of the general public.

Tough choices

By Dave Birch posted Apr 2 2011 at 12:20 AM

The relationship between identity and privacy is deep: privacy (in the sense of control over data associated with an identity) ought to be facilitated by the identity infrastructure. But that control cannot be absolute: society needs a balance in order to function, so the infrastructure ought to include a mechanism for making that balance explicit. It is very easy to set the balance in the wrong place even with the best of intentions. And once the balance is set in the wrong place, it may have most undesirable consequences.

An obsession with child protection in the UK and throughout the EU is encouraging a cavalier approach to law-making, which less democratic regimes are using to justify much broader repression on any speech seen as extreme or dangerous.... "The UK and EU are supporting measures that allow for websites to be censored on the basis of purely administrative processes, without need for judicial oversight."

[From Net censors use UK's kid-safety frenzy to justify clampdown • The Register]

So a politician in one country decides, say, that we should all be able to read our neighbours' emails just in case our neighbour is a pervert or serial killer or terrorist, and the next thing we know, Iranian government supporters in the UK are reading their neighbours' emails and passing on their details to a hit squad if the emails contain any anti-regime comments.

By requiring law enforcement backdoors, we open ourselves to surveillance by hackers and foreign intelligence agencies

[From slight paranoia: Web 2.0 FBI backdoors are bad for national security]

This is, of course, absolutely correct, and it was thrown into relief today when I read that...

Some day soon, when pro-democracy campaigners have their cellphones confiscated by police, they'll be able to hit the "panic button" -- a special app that will both wipe out the phone's address book and emit emergency alerts to other activists... one of the new technologies the U.S. State Department is promoting to equip pro-democracy activists in countries ranging from the Middle East to China with the tools to fight back against repressive governments.

[From U.S. develops panic button for democracy activists | Reuters]

Surely this also means that terrorists about to execute a dastardly plot in the US will be able to wipe their mobile phones and alert their co-conspirators when the FBI knock on the door and, to use the emotive example, that child pornographers will be able to wipe their phones and alert fellow abusers when the police come calling. Tough choices indeed. We want to protect individual freedom so we must create private space. And yet we still need some kind of "smash the glass" option, because criminals do use the interweb tubes and there are legitimate law enforcement and national security interests here. Perhaps, however, the way forward is to move away from the idea of balance completely.

In my own area of study, the familiar trope of “balancing privacy and security” is a source of constant frustration to privacy advocates, because while there are clearly sometimes tradeoffs between the two, it often seems that the zero-sum rhetoric of “balancing” leads people to view them as always in conflict. This is, I suspect, the source of much of the psychological appeal of “security theater”: If we implicitly think of privacy and security as balanced on a scale, a loss of privacy is ipso facto a gain in security. It sounds silly when stated explicitly, but the power of frames is precisely that they shape our thinking without being stated explicitly.

[From The Trouble With “Balance” Metaphors]

This is a great point, and when I read it, it immediately helped me to think more clearly. There is no evidence that taking away privacy improves security, so it's purely a matter of security theatre.

Retaining telecommunications data is no help in fighting crime, according to a study of German police statistics, released Thursday. Indeed, it could even make matters worse... This is because users began to employ avoidance techniques, says AK Vorrat.

[From Retaining Data Does Not Help Fight Crime, Says Group - PCWorld]

This is precisely the trajectory that we will all be following. The twin pressures from Big Content and law enforcement mean that the monitoring, recording and analysis of internet traffic is inevitable. But it will also be largely pointless, as my own recent experiences have proven. When I was in China, I wanted to use Twitter but it was blocked. So I logged in to a VPN back in the UK and twittered away. When I wanted to listen to the football on Radio 5 while in Spain, the BBC told me that I couldn't, so I logged back in to my VPN and cheered the Blues. When I want to watch "The Daily Show" from the UK or when I want to watch "The Killing" via iPlayer in the US, I just go via VPN.

I'm surprised more ISPs don't offer this as a value-added service themselves. I already pay £100 per month for my Virgin triple-play (50Mb/s broadband, digital TV and telephone), so another £5 per month for OpenVPN would suit me fine.

Two-faced, at the least

By Dave Birch posted Mar 22 2011 at 12:23 PM

The end of privacy is in sight, isn't it? After all, we are part of a generation that twitters and updates its path through the world, telling everyone everything. Not because Big Brother demands it, but because we want to. We have, essentially, become one huge distributed Big Brother. We give away everything about ourselves. And I do mean everything.

Mr. Brooks, a 38-year-old consultant for online dating Web sites, seems to be a perfect customer. He publishes his travel schedule on Dopplr. His DNA profile is available on 23andMe. And on Blippy, he makes public everything he spends with his Chase Mastercard, along with his spending at Netflix, iTunes and Amazon.com.

“It’s very important to me to push out my character and hopefully my good reputation as far as possible, and that means being open,” he said, dismissing any privacy concerns by adding, “I simply have nothing to hide.”

[From T.M.I? Not for Sites Focused on Sharing - NYTimes.com]

We'll come back to the reputation thing later on, but the point I wanted to make is that I think this is dangerous thinking, the rather lazy "nothing to hide" meme. Apart from anything else, how do you know whether you have anything to hide if you don't know what someone else is looking for?

To Silicon Valley’s deep thinkers, this is all part of one big trend: People are becoming more relaxed about privacy, having come to recognize that publicizing little pieces of information about themselves can result in serendipitous conversations — and little jolts of ego gratification.

[From T.M.I? Not for Sites Focused on Sharing - NYTimes.com]

We haven't had the Chernobyl yet, so I don't privilege the views of the "deep thinkers" on this. In fact, I share the suspicion that these views are unrepresentative, because they come from such a narrow stratum of society.

“No matter how many times a privileged straight white male tech executive tells you privacy is dead, don’t believe it,” she told upwards of 1,000 attendees during the opening address. “It’s not true.”

[From Privacy still matters at SXSW | Tech Blog | FT.com]

So what can we actually do? Well, I think that the fragmentation of identity and the support of multiple personas is one good way to ensure that the privacy that escapes us in the physical world will be inbuilt in the virtual world. Not everyone agrees. If you are a rich white guy living in California, it's pretty easy to say that multiple identities are wrong, that you have no privacy, get over it, that if you have nothing to hide you have nothing to fear, and such like. But I disagree. So let's examine a prosaic example to see where it takes us: not political activists trying to tweet in Iran or Algerian pro-democracy Facebook groups or whatever, but the example we touched on a few weeks ago when discussing comments on newspaper stories: blog comments.

There's an undeniable problem with people using the sort-of-anonymity of the web, the cyber-equivalent of the urban anonymity that began with the industrial revolution, to post crap, spam, abuse and downright disgusting comments on blog posts. And there is no doubt that people can use that sort-of-anonymity to do stupid, misleading and downright fraudulent things.

Sarah Palin has apparently created a second Facebook account with her Gmail address so that this fake “Lou Sarah” person can praise the other Sarah Palin on Facebook. The Gmail address is available for anyone to see in this leaked manuscript about Sarah Palin, and the Facebook page for “Lou Sarah” — Sarah Palin’s middle name is “Louise” — is just a bunch of praise and “Likes” for the things Sarah Palin likes and writes on her other Sarah Palin Facebook page

[From Sarah Palin Has Secret ‘Lou Sarah’ Facebook Account To Praise Other Sarah Palin Facebook Account]

Now, that's pretty funny. But does it really matter? If Lou Sarah started posting death threats or child pornography then, yeah, I suppose it would, but I'm pretty sure there are laws about that already. But astroturfing with Facebook and posting dumb comments on tedious blogs, well, who cares? If Lou Sarah were to develop a reputation for incisive and informed comment, and I found myself looking forward to her views on key issues of the day, would it matter to me that she is an alter ego? I wonder.

I agree with websites such as LinkedIn and Quora that enforce real names, because there is a strong "reputation" angle to their businesses.

[From Dean Bubley's Disruptive Wireless: Insistence on a single, real-name identity will kill Facebook - gives telcos a chance for differentiation]

Surely, the point here is that on LinkedIn and Quora (to be honest, I got a bit bored with Quora and don't go there much now), I want the reputation for work-related skills, knowledge, experience and connections, so I post with my real name. When I'm commenting at my favourite newspaper site, I still want reputation - I want people to read my comments - but I don't always want them connected either with each other or with the physical me (I learned this lesson after posting in a discussion about credit card interest rates and then getting some unpleasant e-mails from someone ranting on about how interest is against Allah's law and so on).

My identity should play ZERO part in the arguments being made. Otherwise, it's just an appeal to authority.

[From The Real “Authenticity Killer” (and an aside about how bad the Yahoo brand has gotten) — Scobleizer]

To be honest, I think I pretty much agree with this. A comment thread on a discussion site about politics or football should be about the ideas, the argument, not "who says". I seem to remember, from when I used to teach an MBA course on IT Management a long time ago, that one of the first lessons of moving to what was then called computer-mediated communication (CMC) for decision-making was that it led to better results precisely because of this. (I also remember that women would often create male pseudonyms for these online communications because research showed that their ideas were discounted when they posted as women.)

It isn't just about blog comments. Having a single identity, particularly the Facebook identity, it seems to me, is fraught with risk. It's not the right solution. It's almost as if it was built in a different age, where no-one had considered what would happen when the primitive privacy model around Facebook met commercial interests with the power of the web at their disposal.

that’s the approach taken by two provocateurs who launched LovelyFaces.com this week, with profiles — names, locations and photos — scraped from publicly accessible Facebook pages. The site categorizes these unwitting volunteers into personality types, using a facial recognition algorithm, so you can search for someone in your general area who is “easy going,” “smug” or “sly.”

[From ‘Dating’ Site Imports 250,000 Facebook Profiles, Without Permission | Epicenter | Wired.com]

Nothing to hide? None of my Facebook profiles is in my real name. My youngest son has great fun in World of Warcraft and is very attached to his guilds, and so on, but I would never let him do this in his real name. There's no need for it and every reason to believe that it would make identity problems of one form or another far worse (and, in fact, the WoW rebellion over "real names" was led by the players themselves, not privacy nuts). But you have to hand it to Facebook. They've been out there building stuff while people like me have been blogging about identity infrastructure.

Although it's not apparent to many, Facebook is in the process of transforming itself from the world's most popular social-media website into a critical part of the Internet's identity infrastructure

[From Facebook Wants to Supply Your Internet Driver's License - Technology Review]

Now Facebook may very well be an essential part of the future identity infrastructure, but I hope that people will learn how to use it properly.

George Bronk used snippets of personal information gleaned from the women’s Facebook profiles, such as dates of birth, home addresses, names of pets and mother’s maiden names to then pass the security questions to reset the passwords on their email accounts.

[From garlik - The online identity experts]

I don't know if we should expect the public, many of whom are pretty dim, to take more care over their personal data, or if we, as responsible professionals, should design an infrastructure that at least makes it difficult for them to do dumb things with their personal data, but I do know that without some effort, design and vision, it's only going to get worse for the time being.

"We are now making a user's address and mobile phone number accessible as part of the User Graph object,"

[From The Next Facebook Privacy Scandal: Sharing Phone Numbers, Addresses - Nicholas Jackson - Technology - The Atlantic]

Let's say, then, for the sake of argument, that I want to mitigate the dangers inherent in allowing any one organisation to gather too much data about me, so I want to engage online using multiple personas to at least partition the problem of online privacy. Who might provide these multiple identities? In an excellent post on this, Forum friend Dean Bubley aggressively asserts

I also believe that this gives the telcos a chance to fight back against the all-conquering Facebook - if, and only if, they have the courage to stand up for some beliefs, and possibly even push back against political pressure in some cases. They will also need to consider de-coupling identity from network-access services.

[From Dean Bubley's Disruptive Wireless: Insistence on a single, real-name identity will kill Facebook - gives telcos a chance for differentiation]

The critical architecture here is pseudonymity, and an obvious way to implement it is by using multiple public-private key pairs and then binding them to credentials to form personas that can be selected from the handset, making the mobile phone into an identity remote control and allowing you to select which identity you want to assert on a per-transaction basis if so desired. I'm sure Dean is right about the potential. Now, I don't want to sound like the grumpy old man of Digital Identity, but this is precisely the idea that Stuart Fiske and I put forward to BT Cellnet back in the days of Genie - the idea was the "Genie Passport" to online services. But over the last decade, the idea has never gone anywhere with any of the MNOs that we have worked for. Well, now is the right time to start thinking about this seriously in MNO-land.

But mark my words, we WILL have a selector-based identity layer for the Internet in the future. All Internet devices will have a selector or a selector proxy for digital identity purposes.

[From Aftershocks of an untimely death announcement | IdentitySpace]

The most logical place for this selector is in the handset, managing multiple identities in the UICC, accessible OTA or via NFC. The use case is very appealing: I select 'Dave Birch' on my handset, tap it to my laptop and there is all of the 'Dave Birch' stuff. Change the handset selector to 'David G.W. Birch', tap the handset to the laptop again and all of the 'Dave Birch' stuff is gone and all of the 'David G.W. Birch' stuff is there. It's a very appealing implementation of a general-purpose identity infrastructure and it would be a means for MNOs to move to smart pipe services. But is it too late? Perhaps the arrival of non-UICC secure elements (SEs) means that more agile organisations will move to exploit the identity opportunity.
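For what it's worth, here is a minimal sketch in Python of the shape of that "identity remote control": a wallet holds several personas, each with its own key pair and attached credentials, and whichever persona is currently selected is the one that responds when the laptop (or the NFC reader) issues a challenge. The class and field names are mine, purely to illustrate the idea, not a description of any UICC or secure element implementation.

    # Sketch of a handset-resident persona selector: one wallet, many key pairs,
    # one "current" identity asserted per transaction. Hypothetical structure.
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    class PersonaWallet:
        def __init__(self):
            self._personas = {}
            self._selected = None

        def create(self, name, credentials):
            self._personas[name] = {"key": Ed25519PrivateKey.generate(),
                                    "credentials": credentials}

        def select(self, name):
            self._selected = name           # the "remote control" button

        def assert_identity(self, challenge):
            persona = self._personas[self._selected]
            return {"persona": self._selected,
                    "credentials": persona["credentials"],
                    "public_key": persona["key"].public_key(),
                    "signature": persona["key"].sign(challenge)}

    wallet = PersonaWallet()
    wallet.create("Dave Birch", {"context": "blogging"})
    wallet.create("David G.W. Birch", {"context": "work"})

    wallet.select("Dave Birch")             # tap the laptop: the 'Dave Birch' stuff
    print(wallet.assert_identity(b"login-challenge-1")["persona"])
    wallet.select("David G.W. Birch")       # change the selector, tap again
    print(wallet.assert_identity(b"login-challenge-2")["persona"])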

How smart?

By Dave Birch posted Feb 17 2011 at 12:37 PM

I had an interesting conversation with the CTO of a multi-billion company at the Mobile World Congress in Barcelona. He, like me, felt that something has been going wrong in the world of identity, authentication, credentials and reputation as we try to create electronic versions of physical world legacy constructs instead of starting from a new set of requirements for the virtual world and working back. He was talking about machines, though, not people.

Robots could soon have an equivalent of the internet and Wikipedia. European scientists have embarked on a project to let robots share and store what they discover about the world. Called RoboEarth it will be a place that robots can upload data to when they master a task, and ask for help in carrying out new ones.

[From BBC News - Robots to get their own internet]

RoboEarth? No! Skynet, please. And Skynet needs to share an identity infrastructure with the interweb tubes, because of the rich interaction between personal identity and machine identity that will be integral to future living. The internet of things infrastructure needs an identity of things infrastructure to work properly. Our good friend Rob Bratby from Olswang wrote, accurately, that

The deployment of smart meters is one of the most significant deployments of what is often described as ‘the internet of things’, but its linkage to subscriber accounts and individual homes, and the increasing prevalence of data ‘mash-ups’ (cross-referencing of multiple databases) will require these issues to be thought about in a more sophisticated and nuanced way.

[From Watching the connectives | A lawyer's insight into telecoms and technology]

I can confirm from our experiences advising organisations in the smart metering value chain that these issues are certainly not being thought about in either sophisticated or nuanced ways.

“The existing business policies and practices of utilities and third-party smart grid providers may not adequately address the privacy risks created by smart meters and smart appliances.”

[From Grid Regulator: The Internet & Privacy Concerns Will Shape Grid: Cleantech News and Analysis «]

Not my words, but those of the Federal Energy Regulatory Commission in the US. Too right. The lack of an identity infrastructure isn't just a matter of Facebook data getting into the wrong hands or having to have a different 2FA dongle for each of your bank accounts. It's a matter of critical infrastructure starting down the wrong path, from which it will be hard to recover after the first Chernobyl of the smart meter age, the first time some kids, or the North Korean government, or a software error at the gas company shuts down all the meters, or publishes all of the meter readings in a Google Maps-style mashup so that burglars can find out which houses in a street are empty, or the News of the World can get a text alert when a sleb gets home, or whatever.
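(To see why published meter readings would be such a gift to burglars, consider how little analysis it takes: given consumption figures per household, a few lines of Python are enough to flag the houses that never rise above background load. The readings and the threshold below are invented for illustration.)

    # Toy illustration: flat, low consumption all day suggests nobody is home.
    # The readings are invented and the threshold is arbitrary.
    readings = {   # kWh per half-hour period
        "12 Acacia Avenue": [0.08, 0.07, 0.08, 0.07, 0.08, 0.08],
        "14 Acacia Avenue": [0.31, 0.12, 0.09, 0.84, 1.20, 0.95],
    }

    def looks_empty(series, baseline=0.1):
        return max(series) < baseline      # never rises above background load

    for house, series in readings.items():
        if looks_empty(series):
            print(house, "shows nothing but baseline load -- probably empty")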

My CTO friend was, I'm certain, right to suggest that we need to start by working out what we want identity to look like in general and then work out what the subset of that in the physical world needs to look like. If we do start building an EUTIC or a UKTIC to complement NSTIC then I think it should work for smart meters as well as for dumb people.

Theoretically private

By Dave Birch posted Feb 10 2011 at 9:21 PM

The Institute for Advanced Legal Studies hosted an excellent seminar by Professor Michael Birnhack from the Faculty of Law at Tel Aviv University who was talking about "A Quest for a Theory of Privacy".

He pointed out that while we're all very worried about privacy, we're not really sure what should be done. It might be better to pause and review the legal "mess" around privacy and then try to find an intellectually-consistent way forward. This seems like a reasonable course of action to me, so I listened with interest as Michael explained that for most people, privacy issues are becoming more noticeable with Facebook, Google Buzz, Airport "nudatrons", Street View, CCTV everywhere (particularly in the UK) and so on. (I'm particularly curious about the intersection between new technologies -- such as RFID tags and biometrics -- and public perceptions of those technologies, so I found some of the discussion very interesting indeed.)

Michael is part of the EU PRACTIS research group that has been forecasting technologies that will have an impact on privacy (good and bad: PETs and threats, so to speak). They use a roadmapping technique that is similar to the one we use at Consult Hyperion to help our clients to plan their strategies for exploiting new transaction technologies and is reasonably accurate within a 20-year horizon. Note that for our work for commercial clients, we use a 1-2 year, 2-5 year, and 5+ year roadmap. No-one in a bank or a telco cares about the 20-year view, even if we could predict it with any accuracy -- and given that I've just read the BBC correspondents' informed predictions for 2011 and they don't mention, for example, what's been going on in Tunisia and Egypt, I'd say that's pretty difficult.

One key focus that Michael rather scarily picked out is omnipresent surveillance, particularly of the body (data about ourselves, that is, rather than data about our activities), with data acted upon immediately, but perhaps it's best not to go into that sort of thing right now!

He struck a definite chord when he said that it might be the new business models enabled by new technologies that are the real threat to privacy, not the technologies themselves. These mean that we need to approach a number of balances in new ways: privacy versus law enforcement, privacy versus efficiency, privacy versus freedom of expression. Moving to try and set these balances, via the courts, without first trying to understand what privacy is may take us in the wrong direction.

His idea for working towards a solution was plausible and understandable. Noting that privacy is a vague, elusive and contingent concept, but nevertheless a fundamental human right, he said that we need a useful model to start with. We can make a simple model by bounding a triangle with technology, law and values: this gives three sets of tensions to explore.

Law-Technology. It isn't as simple as saying that law lags technology. In some cases, law attempts to regulate technology directly, sometimes indirectly. Sometimes technology responds against the law (eg, anonymity tools) and sometimes it co-operates (eg, PETs -- a point that I thought I might disagree with Michael about until I realised that he doesn't quite mean the same thing as I do by PETs).

Technology-Values. Technological determinism is wrong, because technology embodies certain values (with reference to Social Construction of Technology, SCOT). Thus (as I think repressive regimes around the world are showing) it's not enough to just have a network.

Law-Values, or in other words, jurisprudence, finds courts choosing between different interpretations. This is where Michael got into the interesting stuff from my point of view, because I'm not a lawyer and so I don't know the background of previous efforts to resolve tensions on this line.

Focusing on that third set of tensions, then, in summary: From Warren and Brandeis' 1890 definition of privacy as the right to be let alone, there have been more attempts to pick out a particular bundle of rights and call them privacy. Alan Westin's 1967 definition was privacy as control: the claims of individuals or groups or institutions to determine for themselves when, how and to what extent information about them is communicated to others.

This is a much better approach than the property right approach, where disclosing or not disclosing, "private" and "public" are the states of data. Think about the example of smart meters, where data outside the home provides information about how many people are in the home, what time they are there and so on. This shows that the public/private, in/out, home/work barriers are not useful for formulating a theory. The alternative that he put forward considers the person, their relationships, their community and their state. I'm not a lawyer so I probably didn't understand the nuances, but this didn't seem quite right to me, because there are other dimensions around context, persona, transaction and so on.

The idea of managing the decontextualisation of self seemed solid to my untrained ear and eye, and I could see how this fitted with the Westin definition of control, taking on board the point that privacy isn't property and it isn't static (because it is technology-dependent). I do think that choices about identity ought, in principle, to be made on a transaction-by-transaction basis, even if we set defaults and delegate some of the decisions to our technology, and the idea that different personas, or avatars, might bundle some of these choices seems practical.

Michael's essential point is, then, that a theory of privacy that is formulated by examining definitions, classifications, threats, descriptions, justifications and concepts around privacy from scratch will be based on the central notion of privacy as control rather than secrecy or obscurity. As a technologist, I'm used to the idea that privacy isn't about hiding data or not hiding it, but about controlling who can use it. Therefore Michael's conclusions from jurisprudence connect nicely with my observations from technology.

An argument that I introduced in support of his position during the questions draws on previous discussions around the real and virtual boundary, noting that the lack of control in physical space means the end of privacy there, whereas in virtual space it may thrive. If I'm walking down the street, I have no control over whether I am captured by CCTV or not. But in virtual space, I can choose which persona to launch into which environment, which set of relationships and which business deals. I found Michael's thoughts on the theory behind this fascinating, and I'm sure I'll be returning to them in the future.

These opinions are my own (I think) and presented solely in my capacity as an interested member of the general public.

Tripped up

By Dave Birch posted Oct 11 2010 at 10:13 PM

Many people have a real problem with the apparently anonymous nature of the interweb. I say "apparently" because, of course, unless you work really hard at it and really understand how the internet works, and really understand how your PC works, and really plan it carefully, you're not really anonymous in the proper sense of the word.

Our sense of anonymity is largely an illusion. Pretty much everything we do online, down to individual keystrokes and clicks, is recorded, stored in cookies and corporate databases, and connected to our identities, either explicitly through our user names, credit-card numbers and the IP addresses assigned to our computers, or implicitly through our searching, surfing and purchasing histories.

[From The Great Privacy Debate: The Dangers of Web Tracking - WSJ.com]

I'm surprised that politicians, in particular, who keep going on about how terrible internet anonymity is, don't understand a little more about the dynamics of the problem. If they did, they would realise that anonymity isn't what it seems.

You might think, after enough major stories about "IP addresses" hit the news wires, everyone in political life would be aware that "anonymity" on the Internet is limited.

But someone in Sen. Saxby Chambliss' (R-GA) office didn't get the memo. In the aftermath of this week's failed vote on the military's "don't ask, don't tell" policy, someone named "Jimmy" registered an account at the gay news blog Joe.My.God. just to say, "All Faggots must die."

[From Outed! Senate staffers, anti-gay slurs, and IP addresses]

In the general case, you are not anonymous on the interweb, but economically-anonymous, which I propose to label "enonymous", and that's not the same thing at all. If you threaten to kill the President, you will be tracked down, and the state will spend the money it takes on it. But if you call Lily Allen a hereditary celebrity and copyright hypocrite (not my own views, naturally) then it's not worth the state's money to track you down. If Lily wants to spend her own money on tracking you down and taking a civil action for libel, then fair enough, that's the English way of limiting free speech. If the newspapers want to spend their own money on it, fine. For issues of great national interest, such as spurious death threats to the nation's sweetheart, Cheryl Cole, The Sun can step in.

Yesterday The Sun traced the sender of a chilling anti-Cheryl message that blasted her over Zimbabwean Gamu's TV exit. Wannabe rapper Sanussi Ngoy Ebonda, 20, admitted penning the sinister rant, which accused Cheryl of "da biggest mistake of your life" and included a threat to attack other girls sharing her name.

[From Cheryl Cole boosts security at mansion | The Sun |Showbiz|TV|X Factor]

So even though there's precious little anonymity, should we allow enonymity to be the norm? There are plenty of people who think not, and they're not all English libel lawyers. Surely common sense is on their side? Isn't it wrong to let people hide behind pretend names?

Let's focus on a specific and straightforward example. The comment pages on newspaper, magazine and other media web sites. Many such sites require registration but are still essentially enonymous. Is it right that enonymous commenters can say bad things about celebrities, politicians, business leaders? Would people be as horrible about public figures if they were forced to identify themselves?

Would the online debate among commenters be stifled by requiring commenters to sign their real names?

[From What did you say your name was? | Analysis & Opinion |]

The Chinese government certainly hope so.

China is considering measures to force all its 400m internet users to register their real names before making comments on the country's myriad chat-rooms and discussion forums, in a further sign of tightening controls on freedom of speech.

[From China to force internet users to register real names - Telegraph]

We already know this doesn't work, incidentally, because the Chinese already tried this for Internet cafes, supposedly to deal with the problem of young people spending too much time in virtual worlds. The only result was an instant, and profitable, black market in ID card numbers, whereby kids would get the ID numbers of people who weren't going to play in cybercafes (eg, their grandparents) and use them to log in instead of using their own. There was an alignment of economic incentives here, because the cybercafes would not make money by turning people away.

Cafés that did not ask for identification often still had a registration book at the front desk, in which staff members were seen to write apparently random identification numbers and names during their free time.

[From HRIC | 中国人权]

Incidentally, another large and well-known country closely associated with our economic future (albeit a virtual one) has just abandoned plans to try and force Chinese-style real-name registration after a revolt by citizens (well, subscribers):

Blizzard has reversed a controversial decision that would have forced thousands of Starcraft and World of Warcraft (WoW) players to use their real names on the company's online forums

[From Blizzard stands down over forum controversy | TG Daily]

I simply would not allow my kids to log in with their real names. I'm happy for them to log in using one of their multiple e-mail addresses. They've had pseudonymous e-mail addresses since they were old enough to go online. This isn't just paranoia about people grooming children for sexual exploitation (the UK takes this kind of thing very seriously) and such like. There are lots of really good reasons for not wanting to use your identity in online debate and comment. I wrote once before about being shocked by some hate e-mails I received when I once posted some comments in a discussion about interest rates ("interest is the work of the devil", "we know who you are" etc etc). Now, I still enjoy participating in online debates, but do so pseudonymously: my friends know who I am.

That, incidentally, may not be much of a protection, because the mapping of social graphs can soon locate you within a group of friends even if none of those friends disclose who you are. A determined third-party can learn very interesting things from those graphs and, unless everyone is anonymous or pseudonymous under certain conditions, figure out who you are.
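(A toy illustration of that graph-mapping point: even if a pseudonymous account never states a name, the overlap between its friend list and a public graph of real-name accounts narrows the candidates down very quickly. The data below is invented; real attacks use the same idea at much larger scale.)

    # Toy re-identification by friend-list overlap. Invented data.
    public_friends = {                 # real-name accounts and their friends
        "Alice": {"Bob", "Carol", "Dan"},
        "Bob":   {"Alice", "Carol", "Erin"},
        "Carol": {"Alice", "Bob", "Dan", "Erin"},
    }
    pseudonym_friends = {"Alice", "Bob", "Dan", "Erin"}   # who 'John Q. Doe' follows

    candidates = sorted(public_friends,
                        key=lambda p: len(public_friends[p] & pseudonym_friends),
                        reverse=True)
    print("most likely identity:", candidates[0])         # Carol, in this toy case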

Iran appears to be in two minds about whether to embrace or stymie technological progress. On the one hand, Twitter accounts helped the opposition mobilise demonstrations in the wake of last year’s contested presidential election... On the other hand, by monitoring Twitter traffic, Tehran was able to identify who was organising the protests.

[From FT.com / FT Magazine - Who controls the internet?]

As I've said before, in cyberspace no-one knows you're a dog, but no-one knows you're from the FBI either. Thus our government, the US government and many others are caught in two minds, just as the Iranians are. On the one hand, they are supposed to be in favour of free speech, but on the other hand, well, you know Danish cartoonists, criminals, child pornographers, terrorists, enemies of the state, dissidents, apostates etc.

Now, maybe you don’t care. You’re “not doing anything wrong.” Well, Hoder wasn’t doing anything wrong when he went to Israel and blogged about it in Farsi. But he’s serving 20 years in jail in Iran.

[From Emergent Chaos » Blog Archive » AT&T, Voice Encryption and Trust]

But back to online commenting in our democracy. It's not a simple issue, and "common sense" is not a good guide to anything in the virtual world, but it is clearly the case that in that virtual world some people behave inappropriately. You only have to read The Guardian newspaper's online "Comment is Free" or Guido Fawkes, the UK's top political blog, to see how appalling, disgusting, racist, misogynist, anti-semitic and just plain thick the general public can be. I am one of those old-fashioned liberals who thinks that the response to bad free speech should be more free speech, not less. I think we should be wary about limiting the anonymity of people who comment online, even if we could think of a way of doing so.

The Nazareth District Court has upheld the right of the Walla Web portal to refuse to hand over the IP addresses of commenters accused of defaming a journalist.

"The good of online anonymity outweighs the bad, and it must be seen as a byproduct of freedom of speech and the right to privacy," Judge Avraham Avraham wrote in his ruling last week.

The court also said the critical remarks concerning Yedioth Ahronoth reporter Israel Moskovitz, posted online in 2008, were unlikely to harm his reputation since they were poorly written and appeared only once, and readers were not likely to take them seriously.

[From Uphold talkbacker's anonymity in defamation trial, court says - Haaretz - Israel News ]

Actually, for journalists to complain about online comments, criticism and even abuse is a tiny bit worrying, since their business depends on such.

It doesn't take long to find articles on CNN that quote anonymous officials. For them to rage against "cowards" who won't stand behind what they say, and then to regularly quote "anonymous" sources, seems pretty damn hypocritical. Phillips claims anonymity online is "very unfair." Phillips also attacks the media for "giving anonymous bloggers credit or credibility." But again, CNN quotes all kinds of anonymous sources all the time.

[From CNN Claims 'Something Must Be Done' About Anonymous Bloggers | Techdirt]

On balance, then, I think a free society not only permits certain kinds of anonymity but actually depends on them, because we need informed and honest public debate to function properly. This was well-put in the Washington Post recently.

For every noxious comment, many more are astute and stimulating. Anonymity provides necessary protection for serious commenters whose jobs or personal circumstances preclude identifying themselves. And even belligerent anonymous comments often reflect genuine passion that should be heard.

[From Andrew Alexander - Online readers need a chance to comment, but not to abuse]

I couldn't agree more. However, as the Post goes on to note, we have to recognise that people can be pretty horrible and we need a way to deal with that. Not banning anonymity, but managing the anonymousness (if there is such a word) in a better way.

The solution is in moderating -- not limiting -- comments. In a few months, The Post will implement a system that should help. It's still being developed, but Straus said the broad outlines envision commenters being assigned to different "tiers" based on their past behavior and other factors. Those with a track record of staying within the guidelines, and those providing their real names, will likely be considered "trusted commenters." Repeat violators or discourteous agitators will be grouped elsewhere or blocked outright. Comments of first-timers will be screened by a human being.

[From Andrew Alexander - Online readers need a chance to comment, but not to abuse]

This -- in essence, baby steps toward a reputation economy -- could be toughened up by using better identity infrastructure, but it's not a bad place to start. But there are areas where the better infrastructure is more of a priority. Newspaper comments are one thing, but there are businesses that depend on online comments, and a good example is the burgeoning group review sector.


Listening in

By Dave Birch posted Aug 30 2010 at 4:45 PM

Who should we be listening to when formulating digital identity strategy? Consumers? Experts? Politicians? Lobbyists? Consultants? Consider, for example, the issue of privacy. This is complicated, sensitive, emotive. And some of the voices commenting on it are loud. Take a look at the "Wal-Mart story" -- the story that Wal-Mart are going to add RFID tags to some of their clothing lines -- which has naturally attracted plenty of attention. One particular set of concerns was founded on the idea that consumers could not have the tags "killed" and so would be tracked and traced by... well, marketeers, advertisers, sinister footsoldiers of the New World Order, the CIA and so on. So what is the truth?

The tags are based on the EPC Gen 2 standard, which requires that they have a kill command that would permanently disable them. So the tags can, in fact, be disabled. Wal-Mart does not plan to kill the tags at the point of sale (POS), only because it is not using RFID readers at the point of sale.

[From Privacy Nonsense Sweeps the Internet]

As a consumer, I don't want the tags to be turned off, because that means that the benefits of the tags are limited to Wal-Mart and not shared with me. I'd really like a washing machine that could read the tags and tell me if I have the wrong wash cycle. And there are plenty of other business models around tags that might be highly desirable to consumers.
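To make that consumer benefit concrete, here is the washing machine scenario as a few lines of Python: the machine reads the EPC from each garment's tag, looks up the care instructions and warns if the selected cycle is too hot. The EPC values, the lookup table and the function are all invented for illustration; a real machine would be reading tags over the EPC Gen 2 air interface and consulting some product data service.

    # Toy sketch of a tag-aware washing machine. All identifiers are invented.
    care_instructions = {   # keyed by (hypothetical) EPC
        "urn:epc:id:sgtin:0614141.112345.400": {"item": "wool jumper", "max_temp_c": 30},
        "urn:epc:id:sgtin:0614141.112345.401": {"item": "cotton shirt", "max_temp_c": 60},
    }

    def check_load(tags_read, selected_temp_c):
        for epc in tags_read:
            garment = care_instructions.get(epc)
            if garment and selected_temp_c > garment["max_temp_c"]:
                print("Warning:", garment["item"], "should not be washed above",
                      garment["max_temp_c"], "degrees C")

    check_load(["urn:epc:id:sgtin:0614141.112345.400"], selected_temp_c=40)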

If it adds £20 to the price of a Rolex to implement this infrastructure, so what? The kind of people who pay £5,000 for a Rolex wouldn't hesitate to pay £5,020 for a Rolex that can prove that it is real. Imagine the horror of being the host of a dinner party when one of the guests glances at their phone and says "you know those jeans aren't real Gucci, don't you?". Wouldn't you pay £20 for the satisfaction of knowing that your snooping guest's Bluetooth pen is steadfastly attesting to all concerned that your Marlboro, Paracetamol and Police sunglasses are all real.

[From Digital Identity: The Rolex premium]

So does the existence of convenience, business model, consumer interest and practicality mean I have no privacy concerns? Of course not! So what is a reasonable way forward?

Wal-Mart is demanding that suppliers add the tags to removable labels or packaging instead of embedding them in clothes, to minimize fears that they could be used to track people's movements. It also is posting signs informing customers about the tags.

[From Wal-Mart to Put Radio Tags on Clothes - WSJ.com]

That seems like a reasonable compromise: make it easy for people to cut the tags off if they don't want them. So is that the end of the story? I don't think it is.

What could possibly violate our privacy with tracking pants in a store to make sure there aren’t too many extra-large sizes on the shelves?

[From Privacy wingnuts « BuzzMachine]

The thing is, I agree with Jeff Jarvis here that some people are, indeed, "wingnuts". But that does not mean that there are no genuine concerns and it does not mean that anyone who is concerned about privacy (eg, me) is a wingnut. But what it does mean, I think, is that we need to implement new identity technologies in a privacy-enhancing fashion and make the "privacy settlement" with the public more explicit so that there is an opportunity for informed comment to shape it. It seems to me that some fairly simple design decisions can achieve both of these goals, something that I've referred to before when using Touch2id as an example.


Let's make crime illegal

By Dave Birch posted Aug 9 2010 at 10:16 PM

In today's newspaper, I read that the BlackBerry is not, after all, to be banned from Saudi Arabia as it has been from the UAE.

The agreement, which involves placing a BlackBerry server inside Saudi Arabia, would allow the government to monitor users' messages and allay official fears the service could be used for criminal purposes.

[From Saudi Arabia halts plan to ban BlackBerry instant messanging - Telegraph]

I don't know whether it's a good thing for messages to be in the clear or not. If I were an investment banker negotiating a deal, I might worry that someone at the Ministry of Snooping might pass my messages on to his brother at a rival investment bank, for example. After all, the idea that only authorised law enforcement officers would have access to my private information is absolutely no comfort at all.

A drugs squad detective, Philip Berry, sold a valuable contacts book containing the personal details of the criminal underworld to pay off his credit card debt, a court heard.

[From Corrupt drugs detective 'sold underworld secrets to pay debt' - Telegraph]

The idea that law enforcement would be helpless to stem the tide of international crime unless they can tap every call, read every email, open every letter, is (if you ask me) suspect. If I am sending text messages to a known criminal, you do not need to be able to read those messages to decide that you might want to obtain a warrant to find out who I am calling or where I am. The fact that I am using a prepaid phone does not, by itself, render me immune to law enforcement activity.

Beyene's role in the heist was to buy so-called dirty telephones and hire a van to use as a blocking vehicle,

[From Gunman jailed for 23 years over Britain's biggest jewellery robbery - Telegraph]

In fact this gang was caught because the police found one of the mobile phones they had been using. It contained four anonymous numbers, and from these the police were able to track down the gang members. It wasn't revealed how, but there are at least two rather obvious ways to go about it: get a warrant to track the phones and correlate their movements with known criminals, or get a warrant to find out which numbers those other phones have been calling and follow the chain until you get to a known number. Yes, this might require some police work, which is more expensive than having everything tracked automatically on a PC, but it is better for society. This reminds me of a recent discussion about anonymous prepaid phones. I'm in favour of them, but plenty of people are against them. (Same for prepaid cards.) Ah, but you and the authorities in some countries might ask: how can you catch criminals who use anonymous prepaid phones? Forcing people to register them is the usual answer.

Earlier this month, the FBI revealed that the suspected Times Square bomber had used an anonymous prepaid cell phone to purchase the Nissan Pathfinder and M-88 fireworks used in the bomb attempt.

[From Senators call for end to anonymous, prepaid cell phones]

Setting aside the fact that this guy was caught (despite the dreaded "anonymous prepaid cell phone") and had been allowed on a flight despite being on the no-fly list, the politicians are, I'm sure, spot on with their informed and intelligent policy. In fact, one of them said:

"We caught a break in catching the Times Square terrorist, but usually a prepaid cell phone is a dead end for law enforcement".

[From Senators call for end to anonymous, prepaid cell phones]

Amazingly, the very same issue of the newspaper that reports on the captured UK armed robbers contains a story about a Mafia boss caught by... well, I'll let you read for yourself:

One of Italy's most wanted mafia godfathers has been arrested after seven years on the run after police traced him to his wife's mobile registered in the name of Winnie the Pooh

[From Winnie the Pooh leads to gangster's arrest - Telegraph]

So, basically, if you require people to register prepaid mobile phones then you raise the cost and inconvenience for the public but the criminals still get them (because they bribe, cheat and steal: that's criminals for you). I imagine that in the Naples branch of Carphone Warehouse the name "Winnie the Pooh" on a UK identity card looks perfectly plausible: they would have no more chance of knowing whether it's real or not than the Woking Carphone Warehouse would when looking at an Italian driving licence in the name of Gepetto Paparazzo. Again it's not clear exactly what the police did, but from elements of the story it appears to be something like: the police discovered (through intelligence) that the godfather's wife was calling an apparently random mobile phone number at exactly the same time every two weeks. From this they determined which phone was hers (the "Winnie the Pooh" phone) and they tracked it to Brussels. But suppose some foolproof method for obtaining the correct identities of purchasers were to be found. Would this then stop crime in, say, Italy? Of course not.
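As a toy version of the pattern analysis just described, the sketch below scans a set of call records for a number that a known phone calls at exactly fortnightly intervals, essentially the signal that reportedly gave the 'Winnie the Pooh' phone away. The records and numbers are invented. Note that this kind of traffic analysis works whether or not the handset was registered in a real name.

    # Toy pattern analysis over call records: find a callee contacted at regular
    # fortnightly intervals by a known number. Invented data.
    from collections import defaultdict
    from datetime import datetime

    calls = [  # (caller, callee, timestamp)
        ("wife", "+39-555-0001", datetime(2010, 1, 4, 18, 0)),
        ("wife", "+39-555-0199", datetime(2010, 1, 9, 11, 23)),
        ("wife", "+39-555-0001", datetime(2010, 1, 18, 18, 0)),
        ("wife", "+39-555-0001", datetime(2010, 2, 1, 18, 0)),
    ]

    by_callee = defaultdict(list)
    for caller, callee, when in calls:
        if caller == "wife":
            by_callee[callee].append(when)

    for callee, times in by_callee.items():
        gaps = [(b - a).days for a, b in zip(times, times[1:])]
        if gaps and all(gap == 14 for gap in gaps):
            print("regular fortnightly contact:", callee)   # worth a closer look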

In an attempt to combat the cartel-related violence, Mexico enacted a law requiring cell phone users to register their identity with the carrier. Nearly 30 million subscribers didn’t do this because of a lack of knowledge or a distrust of what could happen to that information if it fell into the wrong hands. Unfortunately, the doubters were proven right, as the confidential data of millions of people leaked to the black market for a few thousand dollars, according to the Los Angeles Times.

[From Did Mexico's cell phone registration plans backfire?]

The law just isn't a solution. It might even make things worse.


Joe Bloggs

By Dave Birch posted Aug 2 2010 at 9:10 AM

Having just come from a meeting about the management of multiple identities and the potential commercial structure of a proposition based on pseudonyms, I found myself reading some excellent and thought-provoking comment on the issue of anonymity vs. pseudonymity vs. absonymity starting with a US perspective over at Public Citizen.

The First Amendment protects the right to speak anonymously, and if the bar to such discovery is set too low, much citizen and consumer discussion about the important issues of our day, including the doings of corporations and politicians, will be chilled and hence lost to the marketplace of ideas. If it is set too high, valid claims may be lost. We at Public Citizen have litigated many cases devoted to setting this balance correctly.

[From CL&P Blog: Two new cases on Internet Anonymity]

I can't say I understood everything (or, indeed, anything) in the legal argument, but I think I agree with the conclusion (applied by the US courts in the examples given) that "commercial" speech is not the same as "political" speech. Companies bashing each other's products via "astroturf" blogs are not (and should not be) subject to the same privileges as political opponents questioning policies. But, naturally, it is a very fuzzy boundary, and one of the key issues is anonymity. If you are allowed to post anonymously, then it's hard to tell which kind of speech you are dealing with in the first place.

If you read through both stories you see that judges basically seem to be making it up as they go along as to what standards to use in deciding whether or not online anonymity is protectable

[From More Mixed Rulings On The Right To Be Anonymous Online | Techdirt]

Now, I would have thought that one of the reasons why we have judges is precisely so that they can make things up as they go along. If the law was written by people like me, it would be in XML and, given the facts of the case as a set of propositions, would be capable of delivering justice through an algorithm that would decide the outcome in polynomial time. But it isn't, so we need judges. Sometimes they come up with odd rulings -- look at the fuss about the UK judge who recently ruled that it's not against the law to smash stuff up if it belongs to people you really don't like -- but, generally speaking, they combine law and common sense.

Unfortunately, as I have constantly complained, common sense is a bad guide to what to do about identity.

We don't want paedophiles and Nazis to be able to groom unsuspecting, innocent children online. Who could disagree with that? In the UK, this "common sense" drove a furore about Facebook that has led to a completely pointless resolution (along the lines of "something must be done, this is something, so let's do it").

how can the police help with every teen who is struggling with the wide range of bullying implied, from teasing to harassment? Even if every teen in the UK were to seriously add this and take it seriously, there’s no way that the UK police have a fraction of the resources to help teens manage challenging social dynamics. As a result, what false promises are getting made?

[From danah boyd | apophenia » Facebook’s Panic Button: Who’s panicking? And who’s listening?]

I would be utterly shocked if the presence of this button made even the slightest difference. The kids who are smart enough to press it when they are approached are presumably smart enough to realise that they are being approached in the first place, if you see what I mean, and the kids who press it because they are being bullied by their peers in some way are not going to get any help, so what's the point? The "Facebook murder" that danah refers to might just as well have been called the "Ford Mondeo murder", since both technologies were crucial to the crime, and, as she points out, having this button would not have averted the tragedy.

Continue reading "Joe Bloggs" »

Spot the looney

By Dave Birch posted Jun 1 2010 at 9:36 PM

[Dave Birch] I happened to be chatting to our friend Tony Poulos from the Telecommunications Manager's Forum about new service possibilities for mobile operators facing commoditisation and declining ARPUs, and one of the areas he got me to brainstorm was identity services.

One of the world’s leading experts in this field, David Birch, spent some time with me explaining how mobile operators, in particular, could actually become ‘smart pipes’ with financial transactions. The ‘secret sauce’ according to Birch, lies in the ability for operators to provide secure identification linked to the SIM providing private and public keys for multiple providers.

[From The 'secret sauce'? | Poulos Ponderings]

The mobile phone is the obvious "remote control" for identity, and I'm surprised that operators haven't moved into this space more aggressively (there are some exceptions, of course, such as Turkcell). This led me to think, again, about the nature of the value-added identity infrastructure that might be built.
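To make the "secret sauce" a little more concrete: the idea is that the SIM (or any other tamper-resistant element) holds a separate key pair for each service provider, so every provider gets strong authentication but none of them can correlate the user across services. A hedged sketch of the shape of it, in which the provider names are invented and a plain Python dictionary stands in for what would really live inside the secure element:

```python
# Sketch: a SIM-style key store holding one key pair per service provider.
# In reality the private keys would never leave the secure element; the
# dictionary here is only a stand-in for illustration.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

class SimKeyStore:
    def __init__(self):
        self._keys = {}  # provider id -> private key (stand-in for SIM applets)

    def enrol(self, provider_id):
        """Create a fresh key pair for a provider and hand back the public key."""
        private_key = ec.generate_private_key(ec.SECP256R1())
        self._keys[provider_id] = private_key
        return private_key.public_key()

    def sign_challenge(self, provider_id, challenge: bytes) -> bytes:
        """Sign a provider's authentication challenge with that provider's key."""
        return self._keys[provider_id].sign(challenge, ec.ECDSA(hashes.SHA256()))

# Each provider sees a different public key, so "bank" and "shop" cannot
# tell that they are dealing with the same subscriber.
sim = SimKeyStore()
bank_pub = sim.enrol("bank.example")
shop_pub = sim.enrol("shop.example")

challenge = b"nonce-from-bank"
signature = sim.sign_challenge("bank.example", challenge)
bank_pub.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))  # raises if invalid
print("bank challenge verified")
```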

One thing, I think, is clear: the goal shouldn't be to build a virtual version of the current identity "system". At the moment, the online world has a dysfunctional identity layer: it's not really anonymous, but it's not really absonymous either.

Implementing an Internet without anonymity is very difficult, and causes its own problems. In order to have perfect attribution, we'd need agencies -- real-world organizations -- to provide Internet identity credentials based on other identification systems: passports, national identity cards, driver's licenses, whatever. Sloppier identification systems, based on things such as credit cards, are simply too easy to subvert.

[From Schneier on Security: Anonymity and the Internet]

Bruce goes on to note that in the real world, half-baked identity management schemes actually make matters worse, not better. You can't argue that having people sort-of-identified is better than having them not identified at all. It isn't.

We have nothing that comes close to this global identification infrastructure. Moreover, centralizing information like this actually hurts security because it makes identity theft that much more profitable a crime.

[From Schneier on Security: Anonymity and the Internet]

This is why I am naturally somewhat suspicious of attempts to slap identity on the ends of the network, rather than having identity management as a value-added service that is part of the network infrastructure and quite distinct from the question of which identities are managed (in other words, the web server has PKI built in, but it doesn't provide the identities; it lets identity providers do that). Simple solutions to this difficult problem -- along the lines of the Chinese attempts at "real-name registration" of Internet access, decreeing that everyone has to present their ID number when connecting -- don't work.
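To illustrate the distinction: the infrastructure's job is only to check that an assertion came from an identity provider it recognises, and the assertion itself need carry nothing more than a pseudonym and whatever attribute the service actually needs. A minimal sketch, assuming an invented assertion format (the field names, the "over 18" attribute and the key handling are illustrative, not any particular standard):

```python
# Sketch: an identity provider signs a pseudonymous assertion; the web
# server only verifies the signature and never learns the real identity.
# The format and field names are invented for illustration.
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# --- Identity provider side -------------------------------------------
idp_key = Ed25519PrivateKey.generate()
idp_public = idp_key.public_key()          # distributed to relying parties

def issue_assertion(pseudonym, attributes):
    """The IdP knows who the user really is; the assertion does not say."""
    payload = json.dumps({"sub": pseudonym, "attrs": attributes}).encode()
    return payload, idp_key.sign(payload)

# --- Relying party (web server) side ----------------------------------
def accept(payload, signature):
    try:
        idp_public.verify(signature, payload)
    except InvalidSignature:
        return None                        # not from an IdP we trust
    return json.loads(payload)             # pseudonym and attributes only

payload, sig = issue_assertion("JoeBloggs42", {"over_18": True})
print(accept(payload, sig))  # {'sub': 'JoeBloggs42', 'attrs': {'over_18': True}}
```

The web server never learns who "JoeBloggs42" really is; only the identity provider holds that mapping.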

Mundie and other experts have said there is a growing need to police the internet to clampdown on fraud, espionage and the spread of viruses. "People don't understand the scale of criminal activity on the internet. Whether criminal, individual or nation states, the community is growing more sophisticated," the Microsoft executive said... He also called for a "driver's license" for internet users. "If you want to drive a car you have to have a license to say that you are capable of driving a car, the car has to pass a test to say it is fit to drive and you have to have insurance."

[From UN agency calls for global cyberwarfare treaty, ‘driver’s license’ for Web users | Raw Story]

It's a bad analogy for a start, because cars are covered by product liability laws and Microsoft's software isn't, and in any case the law on driving licences doesn't stop cars from being stolen, used in crimes or involved in accidents. If there were an Internet driver's licence, the 419 scammer wouldn't apply for one; he'd forge one, just as he would in the physical world, and then use it to open bank accounts and so forth.

Many of the forgeries are “know your customer” documents such as utility bills and driving licences, which are then used to open bank accounts under false names.

[From Police war on fake ID factories as fraudsters net millions | News]

Ah, you might say, but in the Internet world we can use cryptography and similar geek tools to stop people from forging licences. In which case, the scammers will still get their licences.
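That is worth spelling out, because the cryptography genuinely does solve the forgery problem and genuinely does not solve the issuance problem: a credential signed with the wrong key is rejected instantly, while one obtained by bribing the issuer verifies perfectly. A small sketch of that distinction, with invented names and an invented credential format:

```python
# Sketch: signature checking defeats forgery but says nothing about how
# the credential was obtained in the first place.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

issuer_key = Ed25519PrivateKey.generate()      # the licensing authority
issuer_public = issuer_key.public_key()

def check(licence: bytes, signature: bytes) -> bool:
    """Accept a licence only if it carries the issuer's valid signature."""
    try:
        issuer_public.verify(signature, licence)
        return True
    except InvalidSignature:
        return False

# A forger without the issuer's key fails immediately...
forger_key = Ed25519PrivateKey.generate()
fake = b"internet licence: 419 scammer"
print(check(fake, forger_key.sign(fake)))      # False

# ...but a licence the issuer was bribed into signing verifies perfectly.
bought = b"internet licence: 419 scammer"
print(check(bought, issuer_key.sign(bought)))  # True: the maths cannot tell
```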

An Irvington, N.J., man who operated a driving school pleaded guilty yesterday in federal court to bribing Pennsylvania driver's license examiners to obtain phony licenses for his customers... Authorities said Lominy began paying bribes to a PennDOT driver's license examiner, Alexander Steele, in early 2009 in exchange for Steele issuing licenses to his customers even though they weren't Pennsylvania residents and hadn't passed a written test or driving exam.

[From He admits bribing PennDOT examiners to issue fake licenses | Philadelphia Daily News | 04/02/2010]

I see reports from time to time of people in the UK being convicted of taking other people's driving tests for them for money, as well. So, an Internet driving licence? I don't think this is a way to improve security. I might go further and say that, compared to this, the Official Monster Raving Loony Party's manifesto commitment to ban envelopes and force everyone to communicate via postcards looks more practical.

All sealed private letters to be banned - we propose that all letters must be written on postcards, and emails to be routed through police stations. (After all honest citizens have nothing to hide)

[From Official Monster Raving Loony Party - manifesto proposals]

Continue reading "Spot the looney" »