In our Live 5 for 2021, we said that governance would be a major topic for digital identity this year. Nowhere has this been more true than in the UK, where the government has been diligently working with a wide set of stakeholders to develop its digital identity and attribute trust framework – the rules of the road for digital identity in the UK. The work continues, but with the publication of the second iteration of the framework I thought it would be helpful to focus on one particular aspect – how might the framework apply to decentralised identity, given that this is the direction of travel in the industry.
In the new digital economy, digital identity is a key component in ensuring security, privacy, and convenience for people and businesses.
When we look forward to 2021, it is no surprise that COVID-19 is the dominant factor. So far as the merchant payments world is concerned, the shape of the post-pandemic new normal transaction environment must be the key strategic consideration for stakeholders and I am desperately keen to hear the variety of informed opinion on this topic that I have come to expect at Merchant Payments Ecosystem every year. At Consult Hyperion we like to contribute to these conversations by providing a useful framework for discussion: our annual “Live 5”, our yearly set of suggestions for strategic focus. This year, we choose to look at the key issue of pandemic transformation and its impact on the three key domains where our clients operate: Payment, Identity and Transit, together with (as is traditional!) a suggestion as to a technology that the POS world may not be thinking about but probably should be.
At the (sadly, virtual) Fintech South event this year, I was asked to chair a discussion on identity and privacy with three extremely well-qualified experts who had informed perspectives on the state of, and trends in, those important pillars of a digital society. These were Adam Gunther (SVP, Digital Identity for Equifax), Andrew Gowasack (Co-Founder and President at TrustStamp) and Megan Heinze (President, Financial Institutions, North America for IDEMIA). It was great to talk to a group of people who were not only well-informed on these topics but had some passion for them too.
I won’t go over everything that was discussed, but I do want to pick up on a comment that was made in passing when I was chatting to the panelists: someone said that a guiding principle should be “no scary systems”. Hear hear! But what is a scary system? It is, in my opinion, a system that privileges security over privacy. This is not how we should be designing the identity systems for the 21st century!
The relationship between identity and privacy is deep: privacy (in the sense of control over data associated with an identity) ought to be facilitated by the identity infrastructure. But that control cannot be absolute: society needs a balance in order to function, so the infrastructure ought to include a mechanism for making that balance explicit. It is very easy to set the balance in the wrong place even with the best of intentions. And once the balance is set in the wrong place, it may have most undesirable consequences.
An obsession with child protection in the UK and throughout the EU is encouraging a cavalier approach to law-making, which less democratic regimes are using to justify much broader repression on any speech seen as extreme or dangerous…. “The UK and EU are supporting measures that allow for websites to be censored on the basis of purely administrative processes, without need for judicial oversight.”
So a politician in one country decides, say, that we should all be able to read our neighbour’s emails just in case our neighbour is a pervert or serial killer or terrorist, and the next thing we know is that Iranian government supporters in the UK are reading their neighbours’ emails and passing on their details to a hit squad if the emails contain any anti-regime comments.
By requiring law enforcement backdoors, we open ourselves to surveillance by hackers and foreign intelligence agencies
This is, of course, absolutely correct, and it was shown in relief today when I read that…
Some day soon, when pro-democracy campaigners have their cellphones confiscated by police, they’ll be able to hit the “panic button” — a special app that will both wipe out the phone’s address book and emit emergency alerts to other activists… one of the new technologies the U.S. State Department is promoting to equip pro-democracy activists in countries ranging from the Middle East to China with the tools to fight back against repressive governments.
Surely this also means that terrorists about to execute a dastardly plot in the US will be able to wipe their mobile phones and alert their co-conspirators when the FBI knock on the door and, to use the emotive example, that child pornographers will be able to wipe their phones and alert fellow abusers when the police come calling. Tough choices indeed. We want to protect individual freedom so we must create private space. And yet we still need some kind of “smash the glass” option, because criminals do use the interweb tubes and there are legitimate law enforcement and national security interests here. Perhaps, however, the way forward is to move away from the idea of balance completely.
In my own area of study, the familiar trope of “balancing privacy and security” is a source of constant frustration to privacy advocates, because while there are clearly sometimes tradeoffs between the two, it often seems that the zero-sum rhetoric of “balancing” leads people to view them as always in conflict. This is, I suspect, the source of much of the psychological appeal of “security theater”: If we implicitly think of privacy and security as balanced on a scale, a loss of privacy is ipso facto a gain in security. It sounds silly when stated explicitly, but the power of frames is precisely that they shape our thinking without being stated explicitly.
This is a great point, and when I read it it immediately helped me to think more clearly. There is no evidence that taking away privacy improves security, so it’s purely a matter of security theatre.
Retaining telecommunications data is no help in fighting crime, according to a study of German police statistics, released Thursday. Indeed, it could even make matters worse… This is because users began to employ avoidance techniques, says AK Vorrat.
This is precisely the trajectory that we will all be following. The twin pressures from Big Content and law enforcement mean that the monitoring, recording and analysis of internet traffic is inevitable. But it will also be largely pointless, as my own recent experiences have proven. When I was in China, I wanted to use Twitter but it was blocked. So I logged in to a VPN back in the UK and twittered away. When I wanted to listen to the football on Radio 5 while in Spain, the BBC told me that I couldn’t, so I logged back in to my VPN and cheered the Blues. When I want to watch “The Daily Show” from the UK or when I want to watch “The Killing” via iPlayer in the US, I just go via VPN.
I’m surprised more ISPs don’t offer this as a value-added service themselves. I already pay £100 per month for my Virgin triple-play (50Mb/s broadband, digital TV and telephone), so another £5 per month for OpenVPN would suit me fine.
The Institute for Advanced Legal Studies hosted an excellent seminar by Professor Michael Birnhack from the Faculty of Law at Tel Aviv University who was talking about “A Quest for a Theory of Privacy”.
He pointed out that while we’re all very worried about privacy, we’re not really sure what should be done. It might be better to pause and review the legal “mess” around privacy and then try to find an intellectually-consistent way forward. This seems like a reasonable course of action to me, so I listened with interest as Michael explained that for most people, privacy issues are becoming more noticeable with Facebook, Google Buzz, Airport “nudatrons”, Street View, CCTV everywhere (particularly in the UK) and so on. (I’m particularly curious about the intersection between new technologies — such as RFID tags and biometrics — and public perceptions of those technologies, so I found some of the discussion very interesting indeed.)
Michael is part of the EU PRACTIS research group that has been forecasting technologies that will have an impact on privacy (good and bad: PETs and threats, so to speak). They use a roadmapping technique that is similar to the one we use at Consult Hyperion to help our clients to plan their strategies for exploiting new transaction technologies and is reasonably accurate within a 20 year horizon. Note that for our work for commercial clients, we use a 1-2 year, 2-5 year, and 5+ year roadmap. No-one in a bank or a telco cares about the 20 year view, even if we could predict it with any accuracy — and given that I’ve just read the BBC correspondents’ informed predictions for 2011 and they don’t mention, for example, what’s been going on in Tunisia and Egypt, I’d say that’s pretty difficult.
One key focus that Michael rather scarily picked out is omnipresent surveillance, particularly of the body (data about ourselves, that is, rather than data about our activities), with data acted upon immediately, but perhaps it’s best not to go into that sort of thing right now!
He struck a definite chord when he said that it might be the new business models enabled by new technologies that are the real threat to privacy, not the technologies themselves. These mean that we need to approach a number of balances in new ways: privacy versus law enforcement, privacy versus efficiency, privacy versus freedom of expression. Moving to try and set these balances, via the courts, without first trying to understand what privacy is may take us in the wrong direction.
His idea for working towards a solution was plausible and understandable. Noting that privacy is a vague, elusive and contingent concept, but nevertheless a fundamental human right, he said that we need a useful model to start with. We can make a simple model by bounding a triangle with technology, law and values: this gives three sets of tensions to explore.
Law-Technology. It isn’t as simple as saying that law lags technology. In some cases, law attempts to regulate technology directly, sometimes indirectly. Sometimes technology responds against the law (eg, anonymity tools) and sometimes it co-operates (eg, PETs — a point that I thought I might disagree with Michael about until I realised that he doesn’t quite mean the same thing as I do by PETs).
Technology-Values. Technological determinism is wrong, because technology embodies certain values (with reference to the Social Construction of Technology, SCOT). Thus (as I think repressive regimes around the world are showing) it’s not enough to just have a network.
Law-Values, or in other words, jurisprudence, finds courts choosing between different interpretations. This is where Michael got into the interesting stuff from my point of view, because I’m not a lawyer and so I don’t know the background of previous efforts to resolve tensions on this line.
Focusing on that third set of tensions, then, in summary: From Warren and Brandeis’ 1890 definition of privacy as the right to be let alone, there have been more attempts to pick out a particular bundle of rights and call them privacy. Alan Westin‘s 1967 definition was privacy as control: the claims of individuals or groups or institutions to determine for themselves when, how and to what extent information about them is communicated to others.
This is a much better approach than the property right approach, where disclosing or not disclosing, “private” and “public” are the states of data. Think about the example of smart meters, where data outside the home provides information about how many people are in the home, what time they are there and so on. This shows that the public/private, in/out, home/work barriers are not useful for formulating a theory. The alternative that he put forward considers the person, their relationships, their community and their state. I’m not a lawyer so I probably didn’t understand the nuances, but this didn’t seem quite right to me, because there are other dimensions around context, persona, transaction and so on.
The idea of managing the decontextualisation of self seemed solid to my untrained ear and eye and I could see how this fitted with the Westin definition of control, taking on board the point that privacy isn’t property and it isn’t static (because it is technology-dependent). I do think that choices about identity ought, in principle, to be made on a transaction-by-transaction basis even if we set defaults and delegate some of the decisions to our technology and the idea that different persona, or avatars, might bundle some of these choices seems practical.
Michael’s essential point is, then, that a theory of privacy that is formulated by examining definitions, classifications, threats, descriptions, justifications and concepts around privacy from scratch will be based on the central notion of privacy as control rather than secrecy or obscurity. As a technologist, I’m used to the idea that privacy isn’t about hiding data or not hiding it, but about controlling who can use it. Therefore Michael’s conclusions from jurisprudence connect nicely with my observations from technology.
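The “privacy as control” notion can be made concrete in code. The sketch below is purely illustrative (the class and method names are my own invention, not any real identity API): the datum itself is neither “public” nor “private”; what matters is that the subject attaches a policy saying who may use it and for what purpose, Westin-style, and a gatekeeper enforces that policy at the point of use.

```python
# A minimal sketch of "privacy as control": disclosure is governed by
# the data subject's policy (who, for what purpose), not by whether the
# data is hidden. All names here are illustrative assumptions.

from dataclasses import dataclass, field


@dataclass
class ConsentPolicy:
    """Which parties may use an attribute, and for which purposes."""
    allowed: dict = field(default_factory=dict)  # party -> set of purposes

    def grant(self, party: str, purpose: str) -> None:
        self.allowed.setdefault(party, set()).add(purpose)

    def permits(self, party: str, purpose: str) -> bool:
        return purpose in self.allowed.get(party, set())


@dataclass
class Attribute:
    name: str
    value: str
    policy: ConsentPolicy = field(default_factory=ConsentPolicy)

    def read(self, party: str, purpose: str) -> str:
        # The gatekeeper enforces the subject's choices: the same datum
        # is released to one party and withheld from another.
        if not self.policy.permits(party, purpose):
            raise PermissionError(f"{party} may not use {self.name} for {purpose}")
        return self.value


# Usage: my medication list is available to my doctor, not my insurer.
meds = Attribute("medications", "insulin")
meds.policy.grant("doctor", "treatment")
print(meds.read("doctor", "treatment"))  # prints "insulin"
# meds.read("insurer", "pricing") would raise PermissionError
```

The point of the sketch is that “control” is a property of the infrastructure, not of the data: changing the policy changes who can use the attribute without ever moving or re-encrypting it.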
An argument that I introduced in support of his position during the questions draws on previous discussions around the real and virtual boundary, noting that the lack of control in physical space means the end of privacy there, whereas in virtual space it may thrive. If I’m walking down the street, I have no control over whether I am captured by CCTV or not. But in virtual space, I can choose which persona to launch into which environment, which set of relationships and which business deals. I found Michael’s thoughts on the theory behind this fascinating, and I’m sure I’ll be returning to them in the future.
These opinions are my own (I think) and presented solely in my capacity as an interested member of the general public.
[Dave Birch] It isn’t only dreamers like me who want to see an effective digital infrastructure in place.
Law enforcement worldwide should focus on developing an international identity verification system, according to INTERPOL secretary general Ronald K. Noble.
I agree, although I imagine my vision of this infrastructure and Interpol’s may differ in a few details. But governments, irrespective of the law enforcement agenda, should be enthusiastic too. In a September 2010 research note on “eIDs in Europe”, Deutsche Bank say that
At the European level a number of electronic identity cards (eIDs) and the qualified electronic signature (QES) do already exist. Together they possess the potential to form another of the foundations of the internal market for financial services – especially for opening accounts.
Deutsche Bank go on to say that
A further obstacle will be that the design of ID cards does not fall within the competence of the EU and varies greatly from one member state to the other. To date, there are e.g. no harmonised European definitions for the topic of “identity” or “identification”. This means that in the medium term the issue for the trailblazers in this segment is likely to be enhanced cooperation.
(Note to foreign readers: remember when reading that paragraph that “competence” in EU-speak does not mean the same thing as it does in normal language: they don’t mean that the Commission would be hopeless at designing eID systems, although I’m sure they would be, but that it is not their problem — it is a problem for national governments to solve.)
So how do we move forward then? Is it time for an ESTIC, a version of the US National Strategy for Trusted Identities in Cyberspace (NSTIC) that adds European values to the technical infrastructure to create something that the public and private sectors can use to transform (I mean this seriously) service delivery? This would rest on corporate identities (eg, your bank identity) being extended across corporate boundaries and into government — as is already the case in Scandinavia — and implies a much greater degree of public-private sector co-operation than we have seen to date.
[Dave Birch] I enjoyed Scott Silverman's talk about privacy and security at ID World. Scott (the devil, according to CASPIAN) is the CEO of Verichip, the company that developed the first FDA-approved RFID chip for human implantation. (It's just a passive RFID chip containing a 16-digit identification number). Apparently, they had had some 900 emergency rooms across the US signed up for the service before the "privacy backlash" started. Opponents of the system told the newspapers that the chips caused cancer, and that was that.
Now, to be honest, I'm very sympathetic to Scott. A couple of years ago, I contacted Verichip because I thought it would be fun to have a Verichip implanted in my arm ready for the Digital Identity Forum, but they said no (spoilsports). My cat has one, and I'm jealous.
Anyway, the point is that the privacy backlash was so great that the stock price collapsed and the company — which was reduced to a shell — has now been restructured as PositiveID with Scott as the majority shareholder. They have a number of initiatives, one of them being "PatientID" which will link high-risk patients (eg, Alzheimer's patients) to their medical records. Now, as far as I can see (and I'm speaking from the point of view of someone with an Alzheimer's sufferer in the family) this is a splendid idea. I'm pretty privacy sensitive, but this is an application that makes absolute sense to me. If I had Alzheimer's, I'd want a chip so that if I get lost or confused, a doctor can instantly find out who I am and what my conditions and medications are. You could do it by fingerprinting me, or iris scanning or whatever. But it appears to be quicker and simpler to use the chip instead.
Scott also mentioned their "HealthID" initiative that will link sensors to the chip: so, for example, you could have a glucose-sensing chip for some types of diabetes so that when the chip is read to identify the patient it will also report glucose levels. If I had diabetes, I would much rather have one of these than prick my finger and test drops of blood. I wouldn't want everyone to be able to read it though, and this is where the problem comes: we need to have some form of standard privacy-enhancing infrastructure that sits above the "chip layer" to make this all work properly.
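What such a "standard privacy-enhancing infrastructure" above the chip layer might look like can be sketched simply. This is my own illustration, not Verichip's actual design, and every name in it is an assumption: the chip itself only ever emits an opaque identifier, while the sensor reading is released by a gatekeeping layer that checks who is asking before disclosing anything.

```python
# A hedged sketch of a privacy layer sitting above the "chip layer":
# the chip carries only an opaque ID; the sensor reading is gated so
# that not everyone with a reader can see it. Roles and names here are
# illustrative assumptions, not a real medical-device API.

AUTHORISED_ROLES = {"emergency_physician", "patient"}


class ChipRecord:
    def __init__(self, chip_id: str, glucose_mmol_l: float):
        self.chip_id = chip_id          # opaque identifier emitted by the chip
        self._glucose = glucose_mmol_l  # sensor reading, held behind the gate

    def read_glucose(self, reader_role: str) -> float:
        """Release the sensor reading only to authorised reader roles."""
        if reader_role not in AUTHORISED_ROLES:
            raise PermissionError(f"role '{reader_role}' may not read sensor data")
        return self._glucose


# Usage: the ER physician sees the glucose level; a passer-by with an
# off-the-shelf reader gets only the opaque chip ID.
record = ChipRecord("0000-0000-0000-0001", 5.4)
print(record.read_glucose("emergency_physician"))  # prints 5.4
# record.read_glucose("bystander") would raise PermissionError
```

The design choice the sketch makes explicit is that authorisation lives above the chip, in infrastructure, so the same cheap passive chip can serve both the PatientID and HealthID cases without the chip itself having to decide who is trustworthy.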
[Dave Birch] I had a typically fascinating and productive discussion with Hazel Lacohee and Piotr Cofta when we last got together. We were kicking around some ideas for finding practical ways to improve privacy, security and other good stuff while simultaneously worrying about the government's approach to the interweb, broadband and ID cards. With the right combination of technology and vision we can take an entirely different view of the "identity problem" and how to solve it. In a decentralised fashion we can see identity develop as an emergent property of trust networks, shaped by evolution to be fit for purpose or, as Piotr Cofta puts it, "good enough identity". Good enough identity (GEI). I love it.
I'm certain that there is merit in this approach. There is a real difference between trying to create a kind of "gold standard" identity that delivers the highest possible levels of authentication and identification in all circumstances and trying to create an identity that is useful (defined by: reduces total transaction costs and, in my world, aligns social costs with private costs). Taking a utilitarian approach of trying to do something, anything, to improve the identity situation for individuals and organisations, we might be better off starting with some simple building blocks and building up rather than by starting with a national ID card (I mean, a 21st-century national ID card of the psychic ID kind, not electronic cardboard) and driving that down. Go from the personal to the enterprise, from the enterprise to government.