In our Live 5 for 2021, we said that governance would be a major topic for digital identity this year. Nowhere has this been more true than in the UK, where the government has been diligently working with a wide set of stakeholders to develop its digital identity and attribute trust framework – the rules of the road for digital identity in the UK. The work continues, but with the publication of the second iteration of the framework I thought it would be helpful to focus on one particular aspect – how might the framework apply to decentralised identity, given that this is the direction of travel in the industry.
At the (sadly, virtual) Fintech South event this year, I was asked to chair a discussion on identity and privacy with three extremely well-qualified experts who had informed perspectives on the state of, and trends in, those important pillars of a digital society. These were Adam Gunther (SVP, Digital Identity for Equifax), Andrew Gowasack (Co-Founder and President at TrustStamp) and Megan Heinze (President, Financial Institutions, North America for IDEMIA). It was great to talk to a group of people who were not only well-informed on these topics but had some passion for them too.
I won’t go over everything that was discussed, but I do want to pick up on a comment that was made in passing when I was chatting to the panelists: someone said that a guiding principle should be “no scary systems”. Hear hear! But what is a scary system? It is, in my opinion, a system that privileges security over privacy. This is not how we should be designing the identity systems for the 21st century!
[Dave Birch] I popped along to a City Forum round table on the Information Intensive Society. I was late, because of public transport, so I didn’t hear the chairman say that we were to stick to the Chatham House rule. As a consequence I started twittering what people were saying, as well as posting a picture! I apologise unreservedly — but the medium really is the message, isn’t it? Anyway, the subtitle for the meeting was “balancing security and privacy”, which I think framed the whole discussion the wrong way: the subtitle should have been “obtaining security and privacy”. I don’t want them balanced, I want them both: this is what, to me, marks the difference between these debates in the “old” context and the “new” context.
I think this is why I found the discussion unsatisfying — and I don’t mean this as a criticism of the event, or of the organisers, even though one of the speakers actually did say “the Internet is the future”. The problem is that there is a kind of assumption that privacy is an enemy of security and anyone who advocates more privacy is mutant commie scum (didn’t you use to play Paranoia?). If you put forward any alternative view, then it is answered with the old “well, if you knew what I knew blah blah” and the debate goes nowhere.
[Dave Birch] There’s a fascinating, but slightly creepy, category of issue that makes for a good acid test of proposals for population-scale identity management. How does the “system” recover when an identity really is stolen? If there’s another you out there, if you have an evil doppelganger, if an ex-partner is taking revenge… if there’s someone out there who is pretending to be you (in fact, in virtual terms, is you) then who do you call? And when you call them, what are they going to do? This is a complicated issue. How do you establish that you really are you? And once you have established this, what do you do with the compromised virtual identity?
[Dave Birch] According to a letter I saw a while ago in The Daily Telegraph, British supermarkets won’t accept a British armed forces ID card as a proof of age, but they will accept foreign ID cards that they cannot read. Or not. It depends what for.
The student’s French ID card was not deemed to be sufficient proof of her age for the staff at Sainsbury’s, even though the chain does accept the card from foreign workers who wish to work in the UK.
So you can use your foreign ID card to get a job at Sainsbury’s but not to buy a bottle of champagne. Bizarre, but predictable: this is what happens when we jumble up credentials and identification, absent any well-formed rules for understanding or verifying them. It reminded me of the discussion from a few weeks back concerning the distinction between actual security and security theatre. Here’s a simple example: you go to open a bank account and the bank asks to see identity, so you show them a passport. If it is a British passport, they can phone a Home Office hotline to see if it is real, whether it has been reported stolen and so forth. If it is, say, a Bulgarian passport, they cannot possibly tell whether it is real or not, so they just photocopy it and file the copy away somewhere, just as the British Attorney General should have done with her maid’s work permit (since it is an offence not to keep a copy of such documentation). Thus, if you are a criminal then you will always choose to use a Bulgarian passport. Honest citizens are inconvenienced, criminals aren’t. This isn’t so much security theatre as security pantomime, as the BBC have highlighted.
The banks are worried it is still too easy to use a counterfeit passport from abroad to open a bank account, or to get an overdraft or credit card.
Well, I suppose they could always not open the account unless they can understand and verify the identification documents. The fact is, it’s really, really hard for anyone to understand foreign credentials of any kind. Remember the amusing story of the mystery Polish serial traffic offender being tracked by the Irish police?
It was discovered that the man every member of the Irish police’s rank and file had been looking for – a Mr Prawo Jazdy – wasn’t exactly the sort of prized villain whose apprehension leads to an officer winning an award… Prawo Jazdy is actually the Polish for driving licence and not the first and surname on the licence.
This does nicely illustrate a key advantage of digital identity over physical identity: this would never happen. If my reader can’t understand your card, that’s the end of the discussion. There’s a nice binary outcome. Where the results depend on human interpretation of shades of grey, surely the system will always throw up crazy outcomes.
An innocent South Tyneside man was arrested because his MOT certificate was a paler shade of green. Michael Cook, from South Shields, had gone to the Driver and Vehicle Licensing Agency (DVLA) centre in Newcastle to renew his car tax. Staff thought his two-week-old MOT certificate was a forgery because it was a lighter shade than his previous one, and the police were called.
Essential to a functional identity system, then, is a cheap and simple “box” for checking whether the card is valid. You put your French ID card, British Forces ID card or Tesco Clubcard into the box at the checkout and the light goes green or red. That’s it.
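The checkout “box” above can be sketched in a few lines. This is a minimal illustration, not any real scheme: the issuer names, keys and card fields are all hypothetical, and I’ve used an HMAC as a stand-in for whatever signature mechanism a real issuer would deploy. The point is the binary outcome: the reader either verifies the card or it doesn’t.

```python
# Minimal sketch of the "green light / red light" box: the reader checks
# an issuer signature and an expiry date, nothing else. Issuer names,
# keys and card layout are illustrative, not any real card scheme.
import hmac
import hashlib
from datetime import date

ISSUER_KEYS = {                      # hypothetical issuer verification keys
    "FR-ID": b"french-id-issuer-key",
    "UK-FORCES": b"mod-issuer-key",
}

def card_signature(issuer: str, holder: str, expiry: str) -> str:
    """What the issuer writes onto the card (HMAC as a stand-in)."""
    msg = f"{issuer}|{holder}|{expiry}".encode()
    return hmac.new(ISSUER_KEYS[issuer], msg, hashlib.sha256).hexdigest()

def check_card(card: dict) -> str:
    """The box at the checkout: GREEN if the card verifies, else RED."""
    key = ISSUER_KEYS.get(card["issuer"])
    if key is None:
        return "RED"                 # unknown issuer: the reader can't verify it
    expected = card_signature(card["issuer"], card["holder"], card["expiry"])
    if not hmac.compare_digest(expected, card["sig"]):
        return "RED"                 # signature doesn't verify
    if date.fromisoformat(card["expiry"]) < date.today():
        return "RED"                 # card has expired
    return "GREEN"

card = {"issuer": "FR-ID", "holder": "A Student", "expiry": "2030-01-01"}
card["sig"] = card_signature(card["issuer"], card["holder"], card["expiry"])
```

Note that a tampered card, an expired card and a card from an issuer the reader doesn’t recognise all produce exactly the same red light: there is no shade of grey for a checkout assistant to interpret.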
[Dave Birch] A couple of days ago I was in a discussion concerning the discrepancy between what enlightened experts (eg, me) think about identity management and what governments, civil servants and IT vendors think about identity management. One of the points I made, which I think I can defend, is that the “common sense” notion of identity, rooted in our pre-industrial social structures and pre-human cortex, is not only not very good at dealing with the properties and implications of identity in an online world but positively misleading when applied to system and service design. The fact is that virtual identity and “physical” identity are not the same thing, and they differ in ways that we are only beginning to take on board. Here’s an interesting reflection on the difference between physical and virtual identity.
I used to work on campus 5 days a week, but working at home more has coincided with the advent of blogs and twitter. My professional and personal profile on campus is now much higher than it was when I attended every day, but largely sat in my office, and occasionally ventured out for coffee.
Interesting. An online identity in a context that makes it worth more than an offline identity, because it is more connected. The Facebook economy, so to speak. Which leads me on to…
[Dave Birch] I’ve been following a few discussions about online anonymity, triggered by a couple of stories about bloggers’ identities being disclosed for one reason or another. One of them was the ridiculous story about an outraged model.
Of course, pretty much no one would have seen such a blog if Cohen hadn’t gone legal about it, claiming (with no proof) that she was losing jobs because of it (which seems difficult to believe).
This is what they call on the interweb the “Streisand effect”, but of course in these knowing post-modern times it could all be a clever publicity stunt, and the model is not being stupid but cynically wasting taxpayers’ money to attract attention. Anyway, the point is that this story got yet another discussion about internet anonymity going. The general tone of the discussions in the media appears to be the usual unthinking “if you’ve got nothing to hide…”.
I take a different view. Most people do not have anonymity, it’s a myth. If I log on to The Guardian’s “Comment is Free” and post something about the destruction of the public finances under the name “General Wolfe of Quebec”, I am not really acting anonymously because it is trivial (as the recent headline stories have proved) to determine the IP address that the post came from and then go to the ISP to get the account. So although the Internet seems anonymous to people who don’t understand it (eg, models, politicians), it isn’t. And it’s not obvious whether that is good or bad. If you’re trying to track down someone posting child pornography (the usual short-circuit for the argument) then it’s bad, but if you’re trying to complain about the treatment of political prisoners in your country, then it’s good. And what’s more, whether your blogging is anonymous or not depends on the technology, not on the constitution or the judiciary.
As Ben Laurie has so clearly pointed out, unless the connection layer is anonymous, nothing else matters.
I think that at a minimum bloggers should have conditional anonymity: that is, they should be able to use a pseudonym that is only connected to them on the production of a court order. This cannot be achieved by depending on the service providers: even if they operate with good will, the “anonymised” data they hold is far less anonymous than it appears.
Computer scientists have recently undermined our faith in the privacy-protecting power of anonymization, the name for techniques for protecting the privacy of individuals in large databases by deleting information like names and social security numbers. These scientists have demonstrated they can often ‘reidentify’ or ‘deanonymize’ individuals hidden in anonymized data with astonishing ease.
What this, I think, implies is that there will be blogging platforms that spring up in the US to operate under the provisions of protected free speech legislation and beyond the vagaries of UK libel laws and, over time, the most interesting and valuable blogs will migrate in that direction. Those platforms will provide authenticated pseudonymous identities (using, as I repeatedly wish for, 2FA OpenID or something similar) that are contingent on cryptography. How is the nurse going to blow the whistle on a drunk surgeon without pseudonymity?
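To make the “contingent on cryptography” point concrete, here is one toy way to build conditional anonymity: derive the pen name from the real identity with a keyed hash, where the key sits with an escrow agent who releases it only on a court order. Everything here is illustrative — the key holder, the names and the derivation are my assumptions, not a description of any real platform.

```python
# Toy sketch of conditional anonymity: a pseudonym that only the holder
# of an escrowed key (released on a court order, in this sketch) can
# link back to a real identity. All names and keys are hypothetical.
import hmac
import hashlib

ESCROW_KEY = b"held-by-a-hypothetical-court-escrow-agent"

def pseudonym(real_identity: str) -> str:
    """Deterministic pen name; unlinkable without the escrow key."""
    tag = hmac.new(ESCROW_KEY, real_identity.encode(), hashlib.sha256)
    return "blogger-" + tag.hexdigest()[:12]

def unmask(candidate_identity: str, observed_pseudonym: str) -> bool:
    """With the escrow key (i.e. after a court order), test a link."""
    return hmac.compare_digest(pseudonym(candidate_identity), observed_pseudonym)

pen_name = pseudonym("General Wolfe of Quebec")
```

Without the escrow key, the pseudonym reveals nothing; with it, the court can confirm or deny a suspected link. The anonymity is conditional on the mathematics, not on a service provider’s promise.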
[Dave Birch] I’ve repeatedly said that I want the laws of mathematics and physics to protect my personal data, not to rely on the laws of the UK (or anywhere else). This is for two reasons. For one thing, I’m not confident that the people making the laws know what they’re doing (they tend to be lawyers and politicians rather than engineers or scientists). For another thing, there’s no reason to expect that the cold, hard distinction between 1s and 0s that builds the virtual world is suitable to manage the ambiguities of the legal world. Thus, even if a law is set out correctly, that doesn’t mean that it will never be replaced or altered in a perverse way. The oldest law still on the books in our United Kingdom is the Distress Act of 1267 (which outlawed private feuds, forcing people to go to court for redress in civil disputes) but not many laws have made it through eight centuries. Things change. But even if the law is right and on the books, that doesn’t mean it will be interpreted as intended. At the eema European eIdentity conference, I noticed that the Chief Privacy Officer for the Department of Homeland Security referred to the US Privacy Act of 1974 as one of the inputs to their policy. But,
The Privacy Act of 1974—the law designed to protect your rights as the government collects, uses, and shares your data—fails to consistently protect citizens’ privacy because circuit courts disagree on how to interpret its language.
This illustrates my point. My personal data should be protected by cryptography, not by the vagaries of judicial interpretation.
[Dave Birch] Health care is a very difficult environment to deal with, and no-one should underestimate the complexity and tensions in the space. I want my health details to remain absolutely private, but if I get run over by a bus then I want the doctor in casualty to have access to everything, instantly. There are basically two ways of doing this: storing my medical details with my doctor and letting other doctors access them, or taking my details and putting them in a big database for other doctors to access. In the UK, the government has naturally opted for the big database model. But as with big database models for everything else (eg, the Children’s Index) that means that privacy is hard to preserve because things will always go wrong.
Dr Paul Golik, secretary of North Staffordshire LMC and a GP in Norton-in-the-Moors, Stoke on Trent… accessed the personal details of a number of other patients registered elsewhere, including, with their consent, staff at his practice – all without being detected… ‘It’s basically open – we might as well put our names and addresses on Google,’
This is apparently the Conservative Party’s plan anyway.
Health records could be transferred to Google or Microsoft under a Tory government.
Why do health records have to be transferred anywhere? Everyone has to be registered with a GP, so let the GPs choose whichever service providers they want to store the data provided they comply with certain interface requirements. Then when I go to GP B while on holiday, he can put his smart card in his laptop and look up my health details at GP A (it would be easy to do: just make email@example.com autorespond with my health record in XML encrypted using the public key of the requesting doctor). Of course, there might still be ways for it to go wrong, provided people are involved somewhere. Even the Germans are having problems securing national health data, although in their case they’ve buggered it up in a “fail safe” way and lost the keys so that no-one can read the data, rather than having everyone able to read it, which, if you’re going to make an error, is the better way round.
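The encrypt-to-the-requesting-doctor flow in the parenthesis above can be sketched as follows. This is textbook RSA with toy primes — deliberately insecure and purely for illustration (a real deployment would use a proper library, padding and hybrid encryption); the key sizes, parties and message are all my assumptions.

```python
# Toy sketch of the flow above: GP A returns a piece of the patient's
# record encrypted under the requesting doctor's (GP B's) public key.
# Textbook RSA with tiny primes -- for illustration only, NOT secure.

def make_keypair(p, q, e=17):
    """Build a toy RSA keypair from two small primes."""
    n = p * q
    phi = (p - 1) * (q - 1)
    d = pow(e, -1, phi)        # modular inverse of e (Python 3.8+)
    return (e, n), (d, n)      # (public key, private key)

def encrypt(m, pub):
    e, n = pub
    return pow(m, e, n)        # c = m^e mod n

def decrypt(c, priv):
    d, n = priv
    return pow(c, d, n)        # m = c^d mod n

# GP B publishes a public key; GP A encrypts the record to it, so only
# GP B's smart card (holding the private key) can read the response.
pub_b, priv_b = make_keypair(61, 53)   # toy primes
record_chunk = 42                      # stand-in for a byte of the XML record
ciphertext = encrypt(record_chunk, pub_b)
recovered = decrypt(ciphertext, priv_b)
```

The design point is that the record is protected by the mathematics in transit and at rest: even the autoresponder at GP A cannot read what it sends back, only the doctor whose key was used to encrypt it.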
Test runs with Germany’s first-generation electronic health cards and doctors’ “health professional cards” have suffered a serious setback. After the failure of a hardware security module (HSM) holding the private keys for the root Certificate Authority (root CA) for the first-generation cards, it emerged that the data had not been backed up.