Tough choices

The relationship between identity and privacy is deep: privacy (in the sense of control over data associated with an identity) ought to be facilitated by the identity infrastructure. But that control cannot be absolute: society needs a balance in order to function, so the infrastructure ought to include a mechanism for making that balance explicit. It is very easy to set the balance in the wrong place even with the best of intentions. And once the balance is set in the wrong place, it may have most undesirable consequences.

An obsession with child protection in the UK and throughout the EU is encouraging a cavalier approach to law-making, which less democratic regimes are using to justify much broader repression on any speech seen as extreme or dangerous…. “The UK and EU are supporting measures that allow for websites to be censored on the basis of purely administrative processes, without need for judicial oversight.”

[From Net censors use UK’s kid-safety frenzy to justify clampdown • The Register]

So a politician in one country decides, say, that we should all be able to read our neighbours’ emails just in case our neighbour is a pervert or serial killer or terrorist, and the next thing we know, Iranian government supporters in the UK are reading their neighbours’ emails and passing on their details to a hit squad if the emails contain any anti-regime comments.

By requiring law enforcement backdoors, we open ourselves to surveillance by hackers and foreign intelligence agencies

[From slight paranoia: Web 2.0 FBI backdoors are bad for national security]

This is, of course, absolutely correct, and it was thrown into relief today when I read that…

Some day soon, when pro-democracy campaigners have their cellphones confiscated by police, they’ll be able to hit the “panic button” — a special app that will both wipe out the phone’s address book and emit emergency alerts to other activists… one of the new technologies the U.S. State Department is promoting to equip pro-democracy activists in countries ranging from the Middle East to China with the tools to fight back against repressive governments.

[From U.S. develops panic button for democracy activists | Reuters]

Surely this also means that terrorists about to execute a dastardly plot in the US will be able to wipe their mobile phones and alert their co-conspirators when the FBI knock on the door and, to use the emotive example, that child pornographers will be able to wipe their phones and alert fellow abusers when the police come calling. Tough choices indeed. We want to protect individual freedom, so we must create private space. And yet we still need some kind of “smash the glass” option, because criminals do use the interweb tubes and there are legitimate law enforcement and national security interests here. Perhaps, however, the way forward is to move away from the idea of balance completely.

In my own area of study, the familiar trope of “balancing privacy and security” is a source of constant frustration to privacy advocates, because while there are clearly sometimes tradeoffs between the two, it often seems that the zero-sum rhetoric of “balancing” leads people to view them as always in conflict. This is, I suspect, the source of much of the psychological appeal of “security theater”: If we implicitly think of privacy and security as balanced on a scale, a loss of privacy is ipso facto a gain in security. It sounds silly when stated explicitly, but the power of frames is precisely that they shape our thinking without being stated explicitly.

[From The Trouble With “Balance” Metaphors]

This is a great point, and when I read it, it immediately helped me to think more clearly. There is no evidence that taking away privacy improves security, so it’s purely a matter of security theatre.

Retaining telecommunications data is no help in fighting crime, according to a study of German police statistics, released Thursday. Indeed, it could even make matters worse… This is because users began to employ avoidance techniques, says AK Vorrat.

[From Retaining Data Does Not Help Fight Crime, Says Group – PCWorld]

This is precisely the trajectory that we will all be following. The twin pressures from Big Content and law enforcement mean that the monitoring, recording and analysis of internet traffic is inevitable. But it will also be largely pointless, as my own recent experiences have proven. When I was in China, I wanted to use Twitter but it was blocked. So I logged in to a VPN back in the UK and twittered away. When I wanted to listen to the football on Radio 5 while in Spain, the BBC told me that I couldn’t, so I logged back in to my VPN and cheered the Blues. When I want to watch “The Daily Show” from the UK or when I want to watch “The Killing” via iPlayer in the US, I just go via VPN.

I’m surprised more ISPs don’t offer this as a value-added service themselves. I already pay £100 per month for my Virgin triple-play (50Mb/s broadband, digital TV and telephone), so another £5 per month for OpenVPN would suit me fine.

Theoretically private

The Institute for Advanced Legal Studies hosted an excellent seminar by Professor Michael Birnhack from the Faculty of Law at Tel Aviv University who was talking about “A Quest for a Theory of Privacy”.

He pointed out that while we’re all very worried about privacy, we’re not really sure what should be done. It might be better to pause and review the legal “mess” around privacy and then try to find an intellectually-consistent way forward. This seems like a reasonable course of action to me, so I listened with interest as Michael explained that for most people, privacy issues are becoming more noticeable with Facebook, Google Buzz, Airport “nudatrons”, Street View, CCTV everywhere (particularly in the UK) and so on. (I’m particularly curious about the intersection between new technologies — such as RFID tags and biometrics — and public perceptions of those technologies, so I found some of the discussion very interesting indeed.)

Michael is part of the EU PRACTIS research group that has been forecasting technologies that will have an impact on privacy (good and bad: PETs and threats, so to speak). They use a roadmapping technique that is similar to the one we use at Consult Hyperion to help our clients to plan their strategies for exploiting new transaction technologies and is reasonably accurate within a 20-year horizon. Note that for our work for commercial clients, we use a 1-2 year, 2-5 year, and 5+ year roadmap. No-one in a bank or a telco cares about the 20-year view, even if we could predict it with any accuracy — and given that I’ve just read the BBC correspondents’ informed predictions for 2011 and they don’t mention, for example, what’s been going on in Tunisia and Egypt, I’d say that’s pretty difficult.

One key focus that Michael rather scarily picked out is omnipresent surveillance, particularly of the body (data about ourselves, that is, rather than data about our activities), with data acted upon immediately, but perhaps it’s best not to go into that sort of thing right now!

He struck a definite chord when he said that it might be the new business models enabled by new technologies that are the real threat to privacy, not the technologies themselves. These mean that we need to approach a number of balances in new ways: privacy versus law enforcement, privacy versus efficiency, privacy versus freedom of expression. Moving to try and set these balances, via the courts, without first trying to understand what privacy is may take us in the wrong direction.

His idea for working towards a solution was plausible and understandable. Noting that privacy is a vague, elusive and contingent concept, but nevertheless a fundamental human right, he said that we need a useful model to start with. We can make a simple model by bounding a triangle with technology, law and values: this gives three sets of tensions to explore.

Law-Technology. It isn’t as simple as saying that law lags technology. In some cases, law attempts to regulate technology directly, sometimes indirectly. Sometimes technology responds against the law (eg, anonymity tools) and sometimes it co-operates (eg, PETs — a point that I thought I might disagree with Michael about until I realised that he doesn’t quite mean the same thing as I do by PETs).

Technology-Values. Technological determinism is wrong, because technology embodies certain values (with reference to Social Construction of Technology, SCOT). Thus (as I think repressive regimes around the world are showing) it’s not enough to just have a network.

Law-Values, or in other words, jurisprudence, finds courts choosing between different interpretations. This is where Michael got into the interesting stuff from my point of view, because I’m not a lawyer and so I don’t know the background of previous efforts to resolve tensions on this line.

Focusing on that third set of tensions, then, in summary: from Warren and Brandeis’ 1890 definition of privacy as the right to be let alone, there have been further attempts to pick out a particular bundle of rights and call them privacy. Alan Westin’s 1967 definition was privacy as control: the claims of individuals or groups or institutions to determine for themselves when, how and to what extent information about them is communicated to others.

This is a much better approach than the property-right approach, in which disclosing or not disclosing, “private” and “public”, are the states of data. Think about the example of smart meters, where data outside the home provides information about how many people are in the home, what time they are there and so on. This shows that the public/private, in/out, home/work barriers are not useful for formulating a theory. The alternative that he put forward considers the person, their relationships, their community and their state. I’m not a lawyer so I probably didn’t understand the nuances, but this didn’t seem quite right to me, because there are other dimensions around context, persona, transaction and so on.

The idea of managing the decontextualisation of self seemed solid to my untrained ear and eye, and I could see how this fitted with the Westin definition of control, taking on board the point that privacy isn’t property and it isn’t static (because it is technology-dependent). I do think that choices about identity ought, in principle, to be made on a transaction-by-transaction basis, even if we set defaults and delegate some of the decisions to our technology, and the idea that different personas, or avatars, might bundle some of these choices seems practical.

Michael’s essential point is, then, that a theory of privacy that is formulated by examining definitions, classifications, threats, descriptions, justifications and concepts around privacy from scratch will be based on the central notion of privacy as control rather than secrecy or obscurity. As a technologist, I’m used to the idea that privacy isn’t about hiding data or not hiding it, but about controlling who can use it. Therefore Michael’s conclusions from jurisprudence connect nicely with my observations from technology.

An argument that I introduced in support of his position during the questions draws on previous discussions around the real and virtual boundary, noting that the lack of control in physical space means the end of privacy there, whereas in virtual space it may thrive. If I’m walking down the street, I have no control over whether I am captured by CCTV or not. But in virtual space, I can choose which persona to launch into which environment, which set of relationships and which business deals. I found Michael’s thoughts on the theory behind this fascinating, and I’m sure I’ll be returning to them in the future.

These opinions are my own (I think) and presented solely in my capacity as an interested member of the general public.

Why are we waiting?

[Dave Birch] It isn’t only dreamers like me who want to see an effective digital infrastructure in place.

Law enforcement worldwide should focus on developing an international identity verification system, according to INTERPOL secretary general Ronald K. Noble.

[From INTERPOL: International ID verification system needed]

I agree, although I imagine my vision of this infrastructure and Interpol’s may differ in a few details. But governments, irrespective of the law enforcement agenda, should be enthusiastic too. In a September 2010 research note on “eIDs in Europe”, Deutsche Bank say that

At the European level a number of electronic identity cards (eIDs) and the qualified electronic signature (QES) do already exist. Together they possess the potential to form another of the foundations of the internal market for financial services – especially for opening accounts.

Deutsche Bank go on to say that

A further obstacle will be that the design of ID cards does not fall within the competence of the EU and varies greatly from one member state to the other. To date, there are e.g. no harmonised European definitions for the topic of “identity” or “identification”. This means that in the medium term the issue for the trailblazers in this segment is likely to be enhanced cooperation.

(Note to foreign readers: remember when reading that paragraph that “competence” in EU-speak does not mean the same thing as it does in normal language: they don’t mean that the Commission would be hopeless at designing eID systems, although I’m sure they would be, but that it is not their problem — it is a problem for national governments to solve.)

So how do we move forward then? Is it time for an ESTIC, a version of the US National Strategy for Trusted Identities in Cyberspace (NSTIC) that adds European values to the technical infrastructure to create something that the public and private sectors can use to transform (I mean this seriously) service delivery? This would rest on corporate identities (eg, your bank identity) being extended across corporate boundaries and into government — as is already the case in Scandinavia — and implies a much greater degree of public-private sector co-operation than we have seen to date.

Verily

[Dave Birch] I enjoyed Scott Silverman's talk about privacy and security at ID World. Scott (the devil, according to CASPIAN) is the CEO of Verichip, the company that developed the first FDA-approved RFID chip for human implantation. (It's just a passive RFID chip containing a 16-digit identification number.) Apparently, they had had some 900 emergency rooms across the US signed up for the service before the "privacy backlash" started. Opponents of the system told the newspapers that the chips caused cancer, and that was that.

Now, to be honest, I'm very sympathetic to Scott. A couple of years ago, I contacted Verichip because I thought it would be fun to have a Verichip implanted in my arm ready for the Digital Identity Forum, but they said no (spoilsports). My cat has one, and I'm jealous.

Anyway, the point is that the privacy backlash was so great that the stock price collapsed and the company — which was reduced to a shell — has now been restructured as PositiveID with Scott as the majority shareholder. They have a number of initiatives, one of them being "PatientID", which will link high-risk patients (eg, Alzheimer's patients) to their medical records. Now, as far as I can see (and I'm speaking from the point of view of someone with an Alzheimer's sufferer in the family) this is a splendid idea. I'm pretty privacy sensitive, but this is an application that makes absolute sense to me. If I had Alzheimer's, I'd want a chip so that if I get lost or confused, a doctor can instantly find out who I am and what my conditions and medications are. You could do it by fingerprinting me, or iris scanning or whatever. But it appears to be quicker and simpler to use the chip instead.

Scott also mentioned their "HealthID" initiative that will link sensors to the chip: so, for example, you could have a glucose-sensing chip for some types of diabetes so that when the chip is read to identify the patient it will also report glucose levels. If I had diabetes, I would much rather have one of these than prick my finger and test drops of blood. I wouldn't want everyone to be able to read it though, and this is where the problem comes: we need to have some form of standard privacy-enhancing infrastructure that sits above the "chip layer" to make this all work properly.
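
To illustrate what I mean by an infrastructure above the "chip layer", here is a minimal sketch, with made-up reader roles and field names rather than anything PositiveID actually offers: the chip reports what it reports, and a policy layer decides which readers get to see which fields.

```python
# A hedged sketch of a policy layer sitting above the "chip layer": the chip
# reports an ID and a glucose reading, but the layer releases only the fields
# a given reader role is authorised to see. Roles and fields are illustrative
# assumptions, not any real PositiveID interface.

CHIP_READING = {"chip_id": "A1B2C3", "glucose_mmol_l": 5.4}

POLICY = {
    "emergency_room": {"chip_id", "glucose_mmol_l"},  # full clinical view
    "care_home_staff": {"chip_id"},                   # identification only
    "anyone_else": set(),                             # nothing at all
}

def read_chip(reader_role: str) -> dict:
    """Return only the fields this reader role is allowed to see."""
    allowed = POLICY.get(reader_role, set())
    return {k: v for k, v in CHIP_READING.items() if k in allowed}

print(read_chip("emergency_room"))  # {'chip_id': 'A1B2C3', 'glucose_mmol_l': 5.4}
print(read_chip("anyone_else"))     # {}
```

The point is not the code but the separation: the authorisation decision lives above the chip, where it can be standardised, audited and changed without touching the implant.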

Thanks, thank you all

[Dave Birch] This blog has been nominated for the Computer Weekly Blog Awards for 2009.

Now, merely being nominated is reward and testament enough, but should you feel moved to voice your support in the traditional way, then please feel free to vote early and vote often.

Close enough for jazz

[Dave Birch] I had a typically fascinating and productive discussion with Hazel Lacohee and Piotr Cofta when we last got together. We were kicking around some ideas for finding practical ways to improve privacy, security and other good stuff while simultaneously worrying about the government's approach to the interweb, broadband and ID cards. With the right combination of technology and vision we can take an entirely different view of the "identity problem" and how to solve it. In a decentralised fashion, we can see identity develop as an emergent property of trust networks, shaped by evolution to be fit for purpose or, as Piotr Cofta puts it, "good enough identity". Good enough identity (GEI). I love it.

I'm certain that there is merit in this approach. There is a real difference between trying to create a kind of "gold standard" identity that delivers the highest possible levels of authentication and identification in all circumstances and trying to create an identity that is useful (defined by: reduces total transaction costs and, in my world, aligns social costs with private costs). Therefore, taking a utilitarian approach of trying to do something, anything, to improve the identity situation for individuals and organisations, we might be better off starting with some simple building blocks and building up rather than by starting with a national ID card (I mean, a 21st-century national ID card of the psychic ID kind, not electronic cardboard) and driving that down. Go from the personal to the enterprise, from the enterprise to government.

What a cunning stunt

[Dave Birch] I am, very literally, green with envy. I count myself as a reasonably good speaker, and I try to use narrative and historical examples to explain key principles. But nothing beats a good demo, and I saw an excellent one today, one that I wish I'd thought of!

At the Intellect conference on Identity & Information in London today, Edgar Whitely from the LSE gave a terrific presentation. He was pointing out that the principle of data minimisation in identity systems is important, but he did it in a particularly arresting way.

Here's what he did.

He showed this recent newspaper photograph of the British Home Secretary, Alan Johnson, showing off his new ID card and holding it up to the camera. This version comes from The Guardian….

Alan Johnson reveals the design of the British national identity card. Photograph: Stefan Rousseau/PA

As you can see in the picture, for reasons that will be (not fully) explained in a moment, the UK ID card has the holder's full name, date of birth and place of birth on it. These three data points are sufficient to uniquely identify the overwhelming majority of the population. So Edgar went to the Identity & Passport Service birth certificate ordering service and put in the details from the Home Secretary's card. He then paid his £10 and… with a suitably theatrical flourish, Edgar produced the copy of the Home Secretary's birth certificate that he had been sent in the post. Note that Edgar hadn't done anything wrong. As James Hall, the head of IPS who was on the same panel, pointed out, in the UK anyone can order a copy of anyone's birth certificate. He said that if you are a celebrity then hundreds of people will order copies of your birth certificate every year, which had never occurred to me. I'm sure James is right, but it does seem a little odd that people who want to commit identity theft need only look at their mark's ID card to get started.

Edgar hadn't used the birth certificate to open a bank account or get a driving licence or anything, he was just making the point that if we don't adopt the right principles (eg, data minimisation) for identity systems, then we run the risk of making identity theft worse. It was a great presentation and a super stunt. Well done.

Anyone familiar with my deranged rantings about psychic ID (ie, virtually nobody) will be familiar with the general point: a characteristic of a 21st-century ID scheme is that it should only give up the information necessary to enable a transaction, nothing more or less. So, if you are authorised to ask my ID card whether I am over 18 or not, that's all it should tell you. Not my name, not my address, not my age or date of birth. Just whether I am over 18 or not and that's it.
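
To make the point concrete, here is a minimal sketch of that kind of "psychic" check, assuming a made-up issuer key and claim format (a real scheme would use public-key signatures and proper credentials): the relying party learns the over-18 predicate and nothing else.

```python
# A minimal sketch of selective disclosure: the verifier is told only whether
# the holder is over 18, never the date of birth. The shared issuer key and
# JSON claim format are illustrative assumptions, not any real ID-card scheme.
import hashlib
import hmac
import json

ISSUER_KEY = b"demo-issuer-key"  # assumption: a real scheme would use public-key signatures

def issue_assertion(over_18: bool) -> dict:
    """Issuer signs a single-predicate claim."""
    claim = json.dumps({"over_18": over_18}).encode()
    tag = hmac.new(ISSUER_KEY, claim, hashlib.sha256).hexdigest()
    return {"claim": claim.decode(), "tag": tag}

def verify_assertion(assertion: dict) -> bool:
    """Relying party checks the issuer tag and reads only the predicate."""
    expected = hmac.new(ISSUER_KEY, assertion["claim"].encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, assertion["tag"]):
        raise ValueError("bad issuer tag")
    return json.loads(assertion["claim"])["over_18"]

print(verify_assertion(issue_assertion(True)))  # True, and nothing else
```

The relying party never handles a name or a date of birth, so there is nothing for a criminal to harvest from the transaction.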

The current ID card scheme does not have this key characteristic, not for any functional reason but because the ID card and passport were jumbled up for a political purpose — the purpose being, as far as I know, to make it harder for an incoming administration to scrap the scheme — that constrains the design and implementation. Since the government wants the ID card to be used as a travel document within the EU, it has to have certain human-readable information on it. That's why it gives away the key data points that make it tempting for criminals to kick-start their identity theft antics.

What is a “suitable” ID for banking?

[Dave Birch] There was a really interesting letter in The Daily Telegraph "Money" section (2nd October). I can't find it online to link to, so I hope they don't mind me quoting a couple of chunks here. The letter comes from someone who tried to open a bank account with HSBC, but who didn't have a current passport or driving licence.

When I explained this at a branch, it was suggested that I ask the police station for proof of identity. The police officers said they had never heard of such a thing unless I had a criminal record.

[From The Daily Telegraph "Jessica Investigates", 2nd October 2009]

That can't be right: you can only have a bank account at HSBC if you have a criminal record? The disappointed would-be bank account holder went back to their branch to ask for alternatives.

The counter person showed me a list of possible documents, but, as I am not a pensioner, nor in receipt of benefits, the only item on the list she could suggest I try was to get a letter from HMRC. I duly went to the local tax office, where the assistant said she wished banks would stop sending people there… they would not waste public money providing such letters for banks.

[From The Daily Telegraph "Jessica Investigates", 2nd October 2009]

The letter goes on to list the documents that the wannabe-HSBCer had presented, and had had rejected by the bank: an out-of-date passport, a birth certificate, a current payslip from an employer (the local council, for whom the person had worked for more than two decades), a work ID card (complete with microchip), utility bills, statements from another bank, house deeds and a voting card. Any one of these would have got you a job with the bank, but not, it seems, an account. Identity is broken, and the Conservative plan to scrap the national ID card scheme is as bad as the government's plan to keep it. What this country needs is a working national identity infrastructure.

The ten minute version

[Dave Birch] A diversion. I filled in a questionnaire about digital identity (for reasons not germane to this post) so I thought it might be mildly interesting to post my answers and see if they attract any comment.

  • Who are you? (Name, job role and organisation)
  • Dave Birch, Director, Consult Hyperion
  • What does the term ‘digital identity’ mean to you?
  • It's the bridge between virtual identities that exist only inside computers and things in the real world.
  • Is your digital identity ‘you’? Why? You may also want to comment on whether your ‘digital identity’ is an individual understanding or composed of group, community and organisational identities?
  • My digital identity isn't me, although it may be created by me. In general use, I imagine that people will have a small number of digital identities, just as they have 3 or 4 credit and debit cards, but each of these may support a large number of virtual identities. These virtual identities will, by and large, embody relationships.
  • What skills and competencies do we need to manage our digital identity?
  • We need to implement the "front end" in familiar ways while hiding the OpenID, PKI and all the rest of it. It should be a simple matter of "who do you want to be today?" and choosing from a menu on your mobile phone screen (there's a sketch of this idea after the questionnaire). I do not believe that the average person has either the competencies or, frankly, the inclination to manage their identities (and privacy) properly, so we (ie, responsible professionals) need to construct an infrastructure that will do it for them.
  • What do you see as the current issue/s of concern surrounding digital identity
  • The tension between the unlimited possibilities of technology and the limited vision of politicians, regulators, designers. Since virtual identities do not behave as mere electronic simulations of "real" identities, but can in fact do far more, we need people with vision who can understand what technology can deliver.
  • What do you see as future issue/s of concern in the area of digital identity?
  • Managing multiple digital identities in ways that make sense, so that there's a narrative around identity and privacy that can underpin future social, commercial and government relationships.
  • Which tools and services do you use to manage your digital identity? For example do you separate personal and professional identities?
  • I do separate personal and professional identities. I have different e-mail addresses, different blogs and now different OpenIDs. Sometimes I even comment on things anonymously. Personally, I think this is a natural way to work — my kids do it implicitly when they IM me, e-mail their grandma and Facebook their friends.

I expect my responses were a little different from most people's, partly because I spend a lot of time thinking about this sort of thing but also partly because I have quite a strong model of the relationship between real and virtual identities and I locate digital identity there.
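
As promised above, here is a hedged sketch of the model I have in mind; the class names and fields are purely illustrative, not any real product. A person holds a small number of digital identities, each supporting many virtual identities (personas) that embody relationships, and the "who do you want to be today?" menu is just a choice among them.

```python
# Illustrative sketch only: a few digital identities, each carrying personas
# bound to relationships, with a "who do you want to be today?" selector.
from dataclasses import dataclass, field

@dataclass
class VirtualIdentity:
    persona: str          # e.g. "blogger", "customer", "parent"
    relationship: str     # the counterparty this persona is used with
    attributes: dict = field(default_factory=dict)  # only what that relationship needs

@dataclass
class DigitalIdentity:
    label: str                                        # e.g. "bank-issued", "work-issued"
    personas: list = field(default_factory=list)      # list of VirtualIdentity

def choose_persona(identities: list, counterparty: str) -> VirtualIdentity:
    """The 'who do you want to be today?' menu: pick the persona bound to this relationship."""
    for digital_id in identities:
        for persona in digital_id.personas:
            if persona.relationship == counterparty:
                return persona
    raise LookupError(f"no persona registered for {counterparty}")

bank_id = DigitalIdentity("bank-issued",
                          [VirtualIdentity("customer", "energy-supplier", {"over_18": True})])
print(choose_persona([bank_id], "energy-supplier").persona)  # -> customer
```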

Touch and gone

[Dave Birch] I ran a workshop at a mobile proximity security day, and one of the things we touched on in the group was the EU’s publication of their recommendations on the “identity of stuff” last week. They’ve published a 14-point action plan.

The European Commission has announced plans for Europe to play a leading part in developing and managing interconnected networks formed from everyday objects with radio frequency identity (RFID) tags embedded in them – the so-called “internet of things”.

[From EU lays out plans for the “internet of things” – V3.co.uk – formerly vnunet.com]

These are real issues, and although I’m not making any comment on the value or otherwise of the specific recommendations, there’s no doubt that the subject deserves more attention. There’s an “identity of things” problem that came up (again) in a meeting I was in last week that I think is worth sharing. It comes from the world of NFC, where the problem revolves around contactless stickers, tags, posters and that kind of thing. It’s the same problem that we looked at before, and it’s worth reviewing because there’s been no industry progress toward a solution.

A little background. The NFC Forum have announced their “N-Mark”, which is a standard symbol to be applied to adverts, magazines, posters and such like. The idea is to show consumers (none of whom have ever even heard of NFC, let alone seen an NFC phone) where they can “tap” their phones to get some kind of service.

The NFC Forum has developed the “N-Mark” trademark so that consumers can easily identify where their NFC-enabled devices can be used. It is a stylized “N” and indicates the spot where an NFC-enabled device can read an NFC tag to establish the connection.

[From NFC Forum : N-Mark]

If you haven’t seen it, the N-Mark is the stylised “N” described above. A simple ecosystem is in the offing: you put the N-Mark on things, and consumers come along and touch them with other things.
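
To give a flavour of what actually happens at the “touch”, here is a minimal sketch of decoding the sort of NDEF URI record an N-Mark tag typically carries (the bytes are illustrative, and the sketch assumes a single short record with no ID field). Note that nothing in the record tells the phone whether the sticker is genuine, which is exactly the “identity of things” problem.

```python
# A minimal sketch (not production code): decode one short-format NDEF URI
# record, the kind of payload an N-Mark tag typically carries. Assumes a
# single record with SR=1 and no ID field; the example bytes are made up.

URI_PREFIXES = {0x00: "", 0x01: "http://www.", 0x02: "https://www.",
                0x03: "http://", 0x04: "https://"}

def decode_uri_record(record: bytes) -> str:
    header, type_len, payload_len = record[0], record[1], record[2]
    if (header & 0x07) != 0x01 or not (header & 0x10):
        raise ValueError("expected a short, well-known-type record")
    rec_type = record[3:3 + type_len]
    if rec_type != b"U":
        raise ValueError("not a URI record")
    payload = record[3 + type_len:3 + type_len + payload_len]
    return URI_PREFIXES.get(payload[0], "") + payload[1:].decode("utf-8")

# 0xD1 = MB | ME | SR | TNF=1 (well-known), type "U", URI prefix 0x04 ("https://")
tag_bytes = bytes([0xD1, 0x01, 0x0C, 0x55, 0x04]) + b"example.com"
print(decode_uri_record(tag_bytes))  # -> https://example.com
```

Anyone can print an N-Mark and stick a tag with their own URI underneath it, which is why the tag, the poster and the service behind them all need identities of their own.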

