Inclusion, identity and privacy

Financial inclusion is necessarily built on a foundation of customer identity, but the rush to inclusion, and the consequent focus on mass registration in many countries, has placed citizens’ rights to privacy at risk – even where those rights are recognised in law.  Yet the mere fact of being excluded should never mean that someone’s right to privacy is in any way diminished.

With support from Omidyar Network, Consult Hyperion has undertaken a global review of the privacy and data protection aspects of digital identity services, with particular reference to their relevance for financial inclusion. We have reviewed the various digital identity initiatives around the world from a privacy perspective. Building on this review, we have developed a ‘roadmap’ for digital identity that ensures that privacy, and the needs of regulatory authorities, can be built into digital identity services, so that the drive towards financial inclusion can be at its most effective. We hope that this roadmap will be a useful contribution to the industry as it considers how best to deliver digital identity to those most in need.

The key elements of this roadmap are as follows.

Put the individual at the centre of privacy protection

This does not only mean giving individuals control over how their personal data is used; it needs to be reflected in the entire approach to the digital identity system. In order to avoid low levels of take-up and use, it is essential that the emphasis be placed on user needs, rather than vendor-driven use cases or so-called “gold standard” solutions.

Provide an effective legal environment

An effective legal environment must be in place that contains, and can enforce, legal remedies to prevent or punish abuses of personal data.  An effective legal environment will also increase confidence that any contractual measures put in place as part of the trust framework to ensure privacy can be enforced.

Design in privacy from the start

There is widespread recognition that privacy should be designed into any system from the start rather than bolted on as an afterthought.  Privacy–by–design requires a careful understanding of the expected goals of the identity system, an appreciation of the distinctive characteristics of the context of use and an awareness of the technological capabilities and privacy risks associated with proposed next generation digital identity systems.

Separate identification from authentication and authorisation

Many existing identity systems combine identification and authentication activities within the scope of the identity provider. Separating out identification from authentication allows for the relatively rapid roll out of basic digital identity credentials, perhaps issued to all but based on low assurance identity data. The quality of the digital identity can be enhanced over time, in part simply through a history of ownership and use or by incorporating additional data points.

Furthermore, if the basic digital identity credentials show only that the citizen is unique and identifiable, and do not include other data attributes by default, future developments will be able to minimise the disclosure of data. Today’s identity systems often include a default data set that is always shared, even when it is not necessary for the service being accessed.
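As a rough illustration of this minimal-disclosure idea, here is a toy sketch (the class and field names are mine, not from any real identity scheme): a credential whose attributes are withheld unless a service explicitly requests them, so the default presentation proves only uniqueness.

```python
from dataclasses import dataclass, field

@dataclass
class Credential:
    """A basic credential: proves the holder is unique and identifiable.
    Attributes exist but are NOT disclosed by default."""
    subject_id: str                                  # stable unique identifier
    attributes: dict = field(default_factory=dict)   # e.g. name, date of birth

    def present(self, requested=()):
        """Disclose only the attributes a service explicitly asks for."""
        return {
            "subject_id": self.subject_id,
            "attributes": {k: v for k, v in self.attributes.items()
                           if k in requested},
        }

cred = Credential("urn:uuid:1234", {"name": "A. Citizen", "dob": "1990-01-01"})
# A service that only needs uniqueness gets no personal attributes:
print(cred.present())                    # attributes: {}
# A service that needs age verification asks for exactly one attribute:
print(cred.present(requested=("dob",)))  # attributes: {'dob': ...} only
```

The point of the sketch is simply that "share nothing extra" is the default, inverting today's default-data-set behaviour.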

Improve authentication then identification

In an ideal world, it would be desirable to move directly to high quality identification and high quality authentication.  In practice, however, the time and effort to improve the quality of these aspects of digital identity are different.  In general, improvements to authentication quality are likely to be quicker to achieve than improvements in identification quality.

Provide a viable commercial model that disincentivises abuse of personal data

Whilst the monolithic identity providers like Facebook and Google offer easy-to-use digital identity credentials, their business models could run counter to consumer privacy, as key revenue streams come from sharing individual and aggregate customer data. Whilst it is possible to constrain such actions contractually and technologically, in the long term the commercial model must be designed so that incentives to protect privacy are aligned.

Consider who will pay for the identity system

If identity credentials are to become a key infrastructure for a society, then important questions of how they are to be paid for arise.  There are different models of charging for infrastructure provision that can be drawn upon, but choosing the right payment model can be problematic whether the identity provider is a government agency or a commercial body.

Address questions of liability

Service providers should not be held liable for actions based on properly authenticated identity claims. What then of the liability of the identity providers?  Here the complexity of the liability model grows as benefits and risks are shared unequally.  In extremis, the identity provider privatises some of the benefits (e.g. payments for authentications) but socialises the risks (e.g. complete failure of trust in the identity system as a whole).

Review the role of compulsion

For countries introducing new identity credentials, questions of consent and compulsion become particularly significant from a market and rights perspective.  They may cause significant disruption to the roll out of the system.  In such cases it is frequently stated that the new identity system is voluntary, not compulsory, and that individuals can always choose not to have an identity credential. Even so, as the critical mass of credential holders develops, effective compulsion can arise. However, evidence from Europe suggests that the various electronic identity cards are used infrequently, because most people have infrequent access to public services and those that do have more frequent access rarely need to formally identify themselves each time.

All of the underlying issues, and the elements of the proposed roadmap, are explored in detail in the report available here. It’s a very detailed piece of work, so you might want to begin with the Executive Summary that is available here. We are genuinely curious about your views and look forward to all feedback.

We might want an irreversible anonymous blockchain but not for irreversible anonymous payments

I think I’ll just read John Lanchester’s superb piece about bitcoin in the London Review of Books one more time. It’s hard to choose a favourite part of such an excellent article, but if I was pressed to do so, I suppose it would be this part:

David Birch is the author of a fresh, original and fascinatingly wide-ranging short book about developments in the field, Identity Is the New Money. His is the best book on general issues around new forms of money, and new possibilities generated by blockchain technology.

From John Lanchester · When Bitcoin Grows Up: What is Money? · LRB 21 April 2016

John is much too kind. And is a much better writer than I am, which is why his piece is so good. His basic question about where we are going next is fascinating and has been at the heart of some heated debates that I’ve been involved in recently, including a stand-up with a bunch of very clever people at the European Blockchain Congress in London.

Arguing with smart people is how I learn

 

My preferred method of accelerated learning is arguing with smart people, and the Congress delivered them in spades. But before I come back to this particular argument, let’s just frame the big picture. First of all, no-one would deny that the bitcoin blockchain is a triumph of technology and engineering and innovation and ingenuity. Statistically, almost no-one uses it, but that’s by the by.

“The total addressable market of people who want to buy bitcoin is very, very thin,”

From What a Tech Startup’s Pivots Say About Bitcoin’s Future | American Banker

Indeed. And most of them aren’t in America or any other developed market. Why? Well, bitcoin is a super-inefficient form of digital currency that was designed to solve one problem (uncensorability). If I’m trying to get my last few dollars out of Caracas before the power is shut off permanently then bitcoin might provide a rickety bridge to US Dollars, but if I’m trying to pay for a delicious burrito at Chipotle then bitcoin is pointless. However, and this is what the argument at the Congress (in the picture above) made me think about, there may be other factors that mean the bitcoin blockchain will obtain mass market traction.

What factors? Well, here are two that were touched on during the discussion pictured above, together with my more considered reflections on them.

One factor might be irreversibility. I think we all understand that you can’t build an irreversible payment system on top of a reversible payment system (such as direct debits in the UK) but you can build a reversible payment system (which is what society actually wants) on top of an irreversible one. That’s a good argument for having a fast, free and irreversible payment system that can be built on to provide a variety of different payment schemes suited to particular marketplaces. In the UK we already have this: it’s called the Faster Payment Service (FPS). Once the Payment Systems Regulator (PSR) has finished opening up access to FPS and once FPS can be accessed efficiently through the “XS2A” Application Programming Interfaces (APIs) that will be put in place by the Second Payment Services Directive (PSD2), then we ought to be able to unleash some creativity in the developer community and perhaps build a reversible payment scheme on top of this irreversible infrastructure (I’m not the only genius to have thought of this: MasterCard are one of the bidders). Then it wouldn’t matter whether the scheme used the bitcoin blockchain or the FPS or NPP in Australia or TCH in the US or Ripple or anything else: the choice would come down to price and performance. Perhaps bitcoin would then be a choice, although I’m not sure about that.
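To make the asymmetry concrete, here is a toy model (class names and rules are mine, purely illustrative of the layering, not of how FPS or any real scheme works): the base rail has no undo operation at all, and the scheme layer implements "reversal" as a new, equally final, compensating transfer.

```python
class IrreversibleLedger:
    """Toy base rail: transfers are final; there is no 'undo' operation."""
    def __init__(self, balances):
        self.balances = dict(balances)
        self.log = []                    # append-only record of every leg

    def transfer(self, src, dst, amount):
        assert self.balances.get(src, 0) >= amount, "insufficient funds"
        self.balances[src] -= amount
        self.balances[dst] = self.balances.get(dst, 0) + amount
        self.log.append((src, dst, amount))

class ReversibleScheme:
    """Scheme layer on top: 'reversing' a payment just issues a
    compensating transfer in the opposite direction on the base rail."""
    def __init__(self, ledger):
        self.ledger = ledger
        self.payments = {}

    def pay(self, ref, src, dst, amount):
        self.ledger.transfer(src, dst, amount)
        self.payments[ref] = (src, dst, amount)

    def reverse(self, ref):
        src, dst, amount = self.payments.pop(ref)
        self.ledger.transfer(dst, src, amount)   # compensating leg

rail = IrreversibleLedger({"alice": 100, "bob": 0})
scheme = ReversibleScheme(rail)
scheme.pay("tx1", "alice", "bob", 40)
scheme.reverse("tx1")
print(rail.balances)   # alice is made whole again...
print(len(rail.log))   # ...but the rail recorded 2 permanent legs, not 0
```

The converse construction is impossible: a scheme layer cannot make a transfer final if the rail underneath can always claw it back.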

Another factor might be anonymity. No-one who actually thinks about it wants anonymity. What they want is privacy. But there is a similar asymmetry as in the case of irreversibility. You can’t build an anonymous system on top of a non-anonymous system, but you could build a privacy-enhancing transaction system on top of an anonymous system, and since I’m rather wedded to the idea of private payment systems, I find this an interesting combination. Again, would bitcoin be a choice for this? That’s not clear to me at all.

What if those factors turn out to be important enough to build new services, but not for creating a currency? This would support the view that a blockchain, although not necessarily the bitcoin blockchain, might well be the shared security service that society needs to anchor a new generation of online transactional services. As time goes by, this strikes me as a more and more interesting possibility. I mentioned it a couple of weeks ago.

Dr. Wright says “The mining of bitcoin is a security service that alone creates no wealth”. So to return to the point above, the sheer volume of mining going on (provided it does not become concentrated) means that there is a very, very secure piece of infrastructure out there. This infrastructure may be used to “anchor” all sorts of new services that need security as I said above. Some of them may be payments (as the Lightning folks hope) but most of them will not be.

From Mining for what? | Consult Hyperion

So, to get back to John Lanchester’s piece, where might we be going next? I’m pretty sure that we’ll soon see another more efficient blockchain that will untangle the cryptocurrency from the carrier by providing some other incentive for mining (perhaps more like Ethereum, who knows). This, the Watt blockchain that will replace the Newcomen blockchain that we have now, could well be the new supranational security infrastructure that, as some claim, will be as important as the Internet itself because it will provide the security layer that the Internet should have had in the first place.

#IDIoT is a serious business

The Gartner hype cycle is jolly bullish on autonomous vehicles, which I’m really looking forward to. According to Jerry Kaplan’s fascinating “Humans need not apply”, switching to autonomous vehicles in the US will save thousands of lives and billions of dollars every year. Personally, I couldn’t care less if I never drive a car for myself ever again, and I hope that Woking will become an autonomous vehicle only zone as soon as possible. Sadly, this won’t be for a while.

While autonomous vehicles are still embryonic, this movement still represents a significant advancement, with all major automotive companies putting autonomous vehicles on their near-term roadmaps.

[From Gartner’s 2015 Hype Cycle for Emerging Technologies Identifies the Computing Innovations That Organizations Should Monitor]

Gartner are even more bullish on what they call autonomous field vehicles (which I think means drones, combine harvesters and such like) and predict that these will be around in 2-5 years time, just like enterprise 3D printing and cryptocurrency exchanges. I couldn’t help but notice, though, that their very same hype cycle puts digital security at least 5-10 years out. So they are forecasting that there will be vehicles running around for some years before we are able to secure them, 3D printers inside organisations printing things for years before we are able to protect them and people trading money years before we can stop hackers from looting them. Actually, I agree with Gartner’s prediction, as it’s entirely congruent with my own #IDIoT line of thinking, which is that our developments in connection technologies are accelerating past our developments in disconnection technologies. And if you don’t care what I think about it, you probably do care what Vint Cerf thinks about it.

“Sometimes I’m terrified by it,” he said in a news briefing Monday at the Heidelberg Laureate Forum in Germany. “It’s a combination of appliances and software, and I’m always nervous about software — software has bugs.”

[From Vint Cerf: ‘Sometimes I’m terrified’ by the IoT | ITworld]

We’re busy going round connecting vehicles, equipment and money to the internet with having any sort of strategy in place for disconnecting them, which is much more difficult (doors are easy, locks are hard, basically). And with chips that we don’t even understand being built into everyday devices, the complexity of managing security is escalating daily. Look at the recently-launched “21” idea.

Its core business plan it turns out will be embedding ASIC bitcoin mining chips into everyday devices like USB battery chargers, routers, printers, gaming consoles, set-top boxes and — the piece de resistance — chipsets to be used by internet of things devices.

[From Meet the company that wants to put a bitcoin miner in your toaster | FT Alphaville]

Really? Chips in everything? What could possibly go wrong? Oh wait, it already has. There’s something missing here: an identity layer. Hardly a new idea and I’m not the only person going on about it.

Everyone and everything will have an identity… We can’t scale a world that we can’t talk to, can’t control and can’t secure. Everything, including your toaster, your fridge and your car, will have an identity.

[From Facing the new Big Bang: The IoT’s identity onslaught — Tech News and Analysis]

Yet nothing much is getting done, despite the fact that we already have plenty of case studies as to how bad the situation is already. Never mind smart fridges that give away your personal details or televisions that spy on you, there are issues about the maintenance and upkeep of things in the field that create an identity management environment utterly different to anything we are used to dealing with in the worlds of OIX, Mobile Connect, SAML and so on.

Did you buy a smart TV or set-top box or tablet any time before January 2013? Do you watch YouTube on it, perhaps through an app? Bad news: Google has shut down the feed that pushed content into the app.

[From You buy the TV, Google ‘upgrades’ its software and then YouTube doesn’t work … | Technology | The Guardian]

It’s issues like this that make me want to focus on identity in the internet of things (or #IDIoT, as I call it) in the near term, so I was really flattered to be asked along by the good people at ForgeRock to talk about this at their London Identity Summit tomorrow. Really looking forward to exploring some of these ideas and getting feedback from people who know what they’re talking about. What’s more, Consult Hyperion and the Surrey Centre for the Digital Economy (CoDE) will be delivering a highly interactive workshop session designed specifically for the University of Surrey’s 5G Innovation Centre SME Technology Pioneer Members on 30th November 2015. This will include “business lab sessions” interleaved with presentations and discussion. We’ll be putting forward the #IDIoT structure to explore identity, privacy and security issues using our ‘3 Rs’ of Recognition, Relationship and Reputation. The event will be an opportunity to establish contacts with companies interested in the IoT space, as well as connecting with the broader University community and a select group of large enterprises so I’m really looking forward to it and, as you might imagine, you’ll read all about it here!

Private money and privacy money


The relationship between payments and anonymity (which we can label “cash” for short) is far more complicated than it appears. If you ask people whether they want anonymity in payments, they are very likely to say yes, but that’s because they haven’t really thought about it.

We can contribute to childhood e-safety


We can use identity and authentication (ie “recognition”) technologies to improve Internet safety, if we use them correctly.

It is good to wander out of the comfort zone from time to time and expose your ideas to more acid tests. Hence I went along to the seminar on “Childhood and the Internet – Safety, Education and Regulation” in London in January. I was there for three main reasons:

  1. I am interested in the evolution of identification and authentication in an online environment, and protecting children is one of the cases that brings the mass market practicalities into sharp relief.
  2. We have clients who are developing recognition services, and it seems to me that if these services can contribute to a safer environment for children then we may have something of a win-win for encouraging adoption.
  3. Protecting children is an emotional topic, and as a responsible member of society it concerns me that emotional responses may not be society’s best responses. This is a difficult subject. If, as technologists, we make any comment about initiatives to protect children being pointless or even counterproductive, we may be accused of being sympathetic to criminals and perverts, hence we need to learn to engage effectively. I’m not interested in childhood e-safety theatre, but in childhood e-safety.

The seminar was kicked off by Simon Milner, the Policy Director (UK and Ireland) for Facebook. He started off by noting that Facebook has a “real” names policy. Given my fascination with the topic, I found his comments quite interesting, as they were made on the same day that the head of Facebook, Mark Zuckerberg, was interviewed in Business Week saying that the “real” names policy was being amended.

One thing about some of the new apps that will come as a shock to anyone familiar with Facebook: Users will be able to log in anonymously.

[From Facebook Turns 10: The Mark Zuckerberg Interview – Businessweek]

Simon went on to say that the “real” names policy, setting to one side whether it means anything or not, is a good thing (he didn’t really explain why and I didn’t get a chance to ask) and then talked about how children who are being bullied on Facebook can report the problem and so on. I know nothing about this topic, other than as a parent, so I can’t comment on how effective or otherwise these measures might be. To be honest, there were several talks that I’m not qualified to comment on so I won’t, other than to say I found some of the talks by the subject matter experts extremely thought-provoking and I’m glad I heard them.

The main discussion that I was interested in was led by Helen Goodman MP (the Shadow Minister for Culture, Media and Sport) and Claire Perry MP, who is the Prime Minister’s special advisor on preventing the sexualisation and commercialisation of childhood. The ex-McKinsey Ms. Perry attracted a certain amount of fame in web circles last year (just search on “#PornoPerry”) when she made some public statements that seemed to indicate that she didn’t completely understand how the internet worked, despite being behind the government’s “porn filter”. (I am not picking on her. I should explain for foreign readers that most MPs are lawyers, management consultants, property developers, PR flacks and suchlike, and they don’t really understand how anything actually works, least of all the interweb tubes. Only one of the 650 MPs in the British Parliament is a scientist.)

Now, let me be completely honest and point out that I have previously criticised not only the “real” names movement in general but Ms. Goodman’s views on anonymity in particular. I think she is wrong to demand “real” names. However, as I said a couple of years ago,

I’m not for one moment suggesting that Ms. Goodman’s concerns are not wholly real and heart felt. I’m sure they are.

[From The battle of the internet security experts – Tomorrow’s Transactions]

This does not make her right about what to do though. Forcing people to interact online using their mundane identity is a bad idea on so many levels.

But that was the same month that the Communist party struck its first major blow against Weibo, requiring users to register their real names with the service. From that point, those wishing to criticise the Party had to do so without the comforting blanket of anonymity and users started to rein themselves in.

[From China kills off discussion on Weibo after internet crackdown – Telegraph]

I’m not suggesting that Ms. Perry represents a government intent on creating a totalitarian corporatist state that reduces us wage-slaves to the level of serfs to be monitored at all times. I’m sure her good intentions are to block only those communications that challenge basic human decency and serve to undermine the foundations of our society, such as MTV, but the end of public online space seems a drastic step. What has been the result of the Chinese campaign to end anonymity? What is the practical impact of a real names policy?

Once an incalculably important public space for news and opinion – a fast-flowing river of information that censors struggled to contain – it has arguably now been reduced to a wasteland of celebrity endorsements, government propaganda and corporate jingles.

[From China kills off discussion on Weibo after internet crackdown – Telegraph]

None of us, I’m sure, would like to see pillars of our society such as the Daily Mail reduced to the level of “celebrity endorsements, government propaganda and corporate jingles”. Perhaps there is now less crime in China too, but I have yet to discover any statistics that would prove that. I don’t want this to happen to Twitter, Facebook and The Telegraph web site (where it is my right as Englishman to post abuse about the Chancellor of the Exchequer should I so choose). So here is a practical and positive suggestion. At the seminar Helen said the “The gap between real-world identity and online identity is at the root of [the problem of cyberbullying]”. So let’s close that gap. Not by requiring (and policing) “real” names, but by implementing pseudonymity correctly. I wrote an extended piece on this for Total Payments magazine recently.

Now imagine that I get a death threat from an authenticated account. I report the abuse. Twitter can (automatically) tell the police who authenticated the transaction (i.e., Barclays). The police can then obtain a warrant and ask Barclays who I am. Barclays will tell them my name and address and where I last used my debit card. If it was, say, Vodafone who had authenticated me rather than Barclays, then Vodafone could even tell the police where I am (or at least, where my phone is).

[From Dave Birch’s Guest Post: Anonymity – privilege or right? – Total Payments : Total Payments]

As I said, I don’t just want to talk about doing something about cyberbullying and the like, I actually want to do something about it. “Real” names are a soundbite, not a solution. What we need is a working identity infrastructure that allows for strongly-authenticated pseudonyms so that bullies can be blocked and revealed but public space can remain open for discussion and debate. Then you can default Facebook and Twitter and whatever to block unauthenticated pseudonyms without insisting the kid looking for help on coming out, the woman looking at double-glazing options or the dreary middle-aged businessman railing against suicidal economic policies from revealing their identities unless they want to

It’s all fun and games, until… no, wait, it is all fun and games

Consult Hyperion has been working on a project called VOME with the UK Technology Strategy Board. The idea of the project is to help people who are specifying and designing new, mass-market products and services (eg, Consult Hyperion’s clients) to understand privacy issues and make better decisions on architecture.

VOME, a research project that will reveal and utilise end users’ ideas and concepts regarding privacy and consent, facilitating a clearer requirement of the hardware and software required to meet end users’ expectations.

[From Technology Strategy Board | News | Latest News | New research projects help to ensure privacy of data]

Part of the project is about finding different ways to communicate with the public about privacy and factor their concerns into the requirements and design processes. Some of these ways involve various kinds of artistic experiments and it’s been fun to be involved with these. We’ve already taken part in a couple of unusual experiments, such as getting amateur writers to produce work about privacy from different perspectives.

More recently we have been working with Woking Writers’ Circle on the production of a collection of short stories and poems entitled ‘Privacy Perspectives’.

[From Media – Consult Hyperion]

As one of the technical team, I have to say that it’s very useful to be forced to try to think about things like privacy-enhancing technology, data protection and risk in these different contexts. One of the artistic experiments underway at the moment, primarily aimed at educating teenagers and young people about the value of their personal data, is the development of a card game that explores the concept. The card game experiment, led by Dr. David Barnard-Wills from Cranfield University, has reached the point where the game needs playtesting. So… we all met up in London to play a couple of games of it.

Turned out that not only had the chaps developed the game way further than I had imagined, but they’ve invented a pretty good game. Think the constant trading of “Settlers of Catan” with the power structures of “Illuminati” mixed with game play of “Crunch”. I liked it.

You get cards representing personal data of different kinds. Depending on who you are (each player is a different kind of business: bank, dating agency, insurance company etc) you want different datasets and you want to link them together into your corporate database. A dataset is a line of three or more data items of the same kind. Here’s a corporate database with two datasets in it: the green biographical data 2-2-3 and the orange financial data 3-3-3; these will score at the end of the game.

There are event cards, that pop up each round to affect the play, and some special cards that the players get from time to time. Check out the database I ended up with in the game that my colleague and I won! I was the bank, so I was trying to collect financial data in my database but I was also trying to collect social data (purple) in my hand.
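Out of curiosity, here is a toy scorer for the rule as I understood it from the playtest: a dataset is a run of three or more data items of the same kind, and (this part is my assumption, since I don't have the rulebook) a dataset scores the sum of its card values while shorter runs score nothing.

```python
def score_database(database):
    """Score a corporate database given as a list of (kind, value) cards
    in table order. Runs of 3+ cards of the same kind score their values."""
    total, run = 0, []
    for kind, value in database + [(None, 0)]:    # sentinel flushes the last run
        if run and kind != run[0][0]:
            if len(run) >= 3:                     # only datasets of 3+ count
                total += sum(v for _, v in run)
            run = []
        if kind is not None:
            run.append((kind, value))
    return total

# The example database from the game: biographical 2-2-3 and financial 3-3-3.
db = [("biographical", 2), ("biographical", 2), ("biographical", 3),
      ("financial", 3), ("financial", 3), ("financial", 3)]
print(score_database(db))   # 16 under the assumed sum-of-values rule
```

A two-card run of social data, by contrast, would score nothing until a third card completed the dataset.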

We had great fun, and we all contributed a ton of ideas. The game is being refined for a new version in a month or two, so we’ll try it again then and I’ll let you know how it’s going! I don’t know if the guys are actually going to turn it into a commercial product (that isn’t really the point of it) but I’d say they are on to a winner. My tip: instead of calling it “Privacy”, call it “Super Injunction”.

These opinions are my own (I think) and presented solely in my capacity as an interested member of the general public

What do they want us to do?

What do the politicians, regulators, police and the rest of them want us (technologists) to do about the interweb tubes? It might be easier to work out what to do if we had a clear set of requirements from them. Then, when confronted with a problem such as, for example, identity theft, we could build systems to make things better. In that particular case, things are currently getting worse.

Mr Bowron told the MPs this week that although recovery rates were relatively low, the police detection rate was 80 per cent. However, the number of cases is rising sharply with nearly 2m people affected by identity fraud every year.

[From FT.com / UK / Politics & policy – MP calls cybercrime Moriarty v PC Plod]

So, again, to pick on this particular case, what should be done?

Mr Head also clarified his position on the safety of internet banking, insisting that while traditional face-to-face banking was a better guarantee against fraud, he accepted that society had moved on. “If you take precautions, it’s safe,” he said.

[From FT.com / UK / Politics & policy – MP calls cybercrime Moriarty v PC Plod]

Yet I remember reading in The Daily Telegraph (just googled it: 20th November 2010) a story about an eBay fraud perpetrated by fraudsters who set up bank accounts using forged identity documents, so face-to-face (FTF) account opening does not, as far as I can see, mean any improvement in security at all. In fact, I’m pretty sure that it is worse than nothing, because people are easier to fool than computers. I would argue that Mr. Head has things exactly wrong here, because an integrated identity infrastructure should not discriminate between FTF and remote transactions.

I think this sort of thing is actually representative of a much bigger problem around the online world. Here’s another example. Bob Gourley, the former CTO of the U.S. Defense Intelligence Agency, poses a fundamental and important question about the future identity infrastructure.

We must have ways to protect anonymity of good people, but not allow anonymity of bad people. This is going to be much harder to do than it is to say. I believe a structure could be put in place, with massive engineering, where all people are given some means to stay anonymous, but when a certain key is applied, their cloak can be peeled back. Hmmm. Who wants to keep those keys

[From A CTO analysis: Hillary Clinton’s speech on Internet freedom | IT Leadership | TechRepublic.com]

So, just to recap, Hillary says that we need an infrastructure that stops crime but allows free assembly. I have no idea how to square that circle, except to say that prevention and detection of crime ought to be feasible even with anonymity, which is the most obvious and basic way to protect free speech, free assembly and whistleblowers: it means doing more police work, naturally, but it can be done. By comparison, “knee jerk” reactions, attempting to force the physical world’s limited and simplistic identity model into cyberspace, will certainly have unintended consequences.

Facebook’s real-name-only approach is non-negotiable – despite claims that it puts political activists at risk, one of its senior policy execs said this morning.

[From Facebook’s position on real names not negotiable for dissidents • The Register]

I’ve had a Facebook account for quite a while, and it’s not in my “real” name. My friends know that John Q. Doe is me, so we’re linked and can happily communicate, but no-one else does. Which suits me fine. If my real name is actually Dave bin Laden, Hammer of the Infidel, but I register as John Smith, how on Earth are Facebook supposed to know whether “John Smith” is a “real” name or not? Ludicrous, and just another example of how broken the whole identity realm actually is.

For Facebook actually to check real names, and then to accept the liabilities that would inevitably result, would be expensive and pointless even if it could be achieved. A much better solution is for Facebook to help with the construction and adoption of a proper digital identity infrastructure (such as NSTIC, for example) and then use it.

The implementation of NSTIC could force some companies, like Facebook, to change the way it does business.

[From Wave of the Future: Trusted Identities In Cyberspace]

That’s true, but it’s a good thing, and it’s good for Facebook as well as for other businesses and society as a whole. So, for example, I might use a persistent pseudonymous identity given to me by a mobile operator, say Vodafone UK. If I use that identity to obtain a Facebook identity, that’s fine by Facebook: they have a certificate from Vodafone UK to say that I’m a UK citizen or whatever. I use the Vodafone example advisedly, because it seems to me that mobile operators would be the natural providers of these kinds of credentials, having both the mechanism to interact FTF (shops) and remotely, as well as access to the SIM for key storage and authentication. Authentication is part of the story too.
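The pseudonymous-credential idea above can be sketched in a few lines of code. This is purely a hypothetical illustration, not Vodafone’s or NSTIC’s actual protocol: a real issuer would use public-key signatures so the relying party never holds the signing key, whereas the shared-key HMAC below stands in for brevity, and names like `issue_claim` are invented.

```python
import hmac, hashlib, json

# Hypothetical sketch: an identity provider (say, a mobile operator) signs an
# attribute claim for a pseudonym; a relying party (say, Facebook) verifies the
# attributes without ever learning the holder's real name.

OPERATOR_KEY = b"operator-secret"  # held by the issuer (public-key crypto in reality)

def issue_claim(pseudonym: str, attributes: dict) -> dict:
    claim = {"sub": pseudonym, "attrs": attributes}
    payload = json.dumps(claim, sort_keys=True).encode()
    claim["sig"] = hmac.new(OPERATOR_KEY, payload, hashlib.sha256).hexdigest()
    return claim

def verify_claim(claim: dict) -> bool:
    payload = json.dumps({"sub": claim["sub"], "attrs": claim["attrs"]},
                         sort_keys=True).encode()
    expected = hmac.new(OPERATOR_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(claim.get("sig", ""), expected)

claim = issue_claim("JohnQDoe", {"over_13": True, "uk_resident": True})
assert verify_claim(claim)        # relying party accepts the attributes...
assert "real_name" not in claim   # ...and no real name ever appears in the claim
```

The point of the sketch is that the claim carries attributes (“over 13”, “UK resident”) bound to a pseudonym, so the relying party gets exactly what it needs and nothing more.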

But perhaps the US government’s four convenient “levels of assurance” (LOAs), which tie strong authentication to strong identity proofing, don’t apply to every use case under the sun. On the recent teleconference where I discussed these findings, we ended up looking at the example of World of Warcraft, which offers strong authentication but had to back off strong proofing.

[From Identity Assurance Means Never Having To Say “Who Are You, Again?” | Forrester Blogs]

Eve is, naturally, absolutely right to highlight this. There is no need for Facebook to know who I really am if I can prove that Vodafone know who I am (and, importantly, that I’m over 13, although that limit may not be around for much longer given Mr. Zuckerberg’s recent comments on age limits).

These opinions are my own (I think) and presented solely in my capacity as an interested member of the general public.

Tough choices

The relationship between identity and privacy is deep: privacy (in the sense of control over data associated with an identity) ought to be facilitated by the identity infrastructure. But that control cannot be absolute: society needs a balance in order to function, so the infrastructure ought to include a mechanism for making that balance explicit. It is very easy to set the balance in the wrong place, even with the best of intentions, and once it is set in the wrong place it may have highly undesirable consequences.

An obsession with child protection in the UK and throughout the EU is encouraging a cavalier approach to law-making, which less democratic regimes are using to justify much broader repression on any speech seen as extreme or dangerous…. “The UK and EU are supporting measures that allow for websites to be censored on the basis of purely administrative processes, without need for judicial oversight.”

[From Net censors use UK’s kid-safety frenzy to justify clampdown • The Register]

So a politician in one country decides, say, that we should all be able to read our neighbour’s emails just in case our neighbour is a pervert or serial killer or terrorist, and the next thing we know, Iranian government supporters in the UK are reading their neighbours’ emails and passing on their details to a hit squad if the emails contain any anti-regime comments.

By requiring law enforcement backdoors, we open ourselves to surveillance by hackers and foreign intelligence agencies

[From slight paranoia: Web 2.0 FBI backdoors are bad for national security]

This is, of course, absolutely correct, and it was thrown into sharp relief today when I read that…

Some day soon, when pro-democracy campaigners have their cellphones confiscated by police, they’ll be able to hit the “panic button” — a special app that will both wipe out the phone’s address book and emit emergency alerts to other activists… one of the new technologies the U.S. State Department is promoting to equip pro-democracy activists in countries ranging from the Middle East to China with the tools to fight back against repressive governments.

[From U.S. develops panic button for democracy activists | Reuters]

Surely this also means that terrorists about to execute a dastardly plot in the US will be able to wipe their mobile phones and alert their co-conspirators when the FBI knock on the door and, to use the emotive example, that child pornographers will be able to wipe their phones and alert fellow abusers when the police come calling. Tough choices indeed. We want to protect individual freedom, so we must create private space. And yet we still need some kind of “smash the glass” option, because criminals do use the interweb tubes and there are legitimate law enforcement and national security interests here. Perhaps, however, the way forward is to move away from the idea of balance completely.

In my own area of study, the familiar trope of “balancing privacy and security” is a source of constant frustration to privacy advocates, because while there are clearly sometimes tradeoffs between the two, it often seems that the zero-sum rhetoric of “balancing” leads people to view them as always in conflict. This is, I suspect, the source of much of the psychological appeal of “security theater”: If we implicitly think of privacy and security as balanced on a scale, a loss of privacy is ipso facto a gain in security. It sounds silly when stated explicitly, but the power of frames is precisely that they shape our thinking without being stated explicitly.

[From The Trouble With “Balance” Metaphors]

This is a great point, and when I read it, it immediately helped me to think more clearly. There is no evidence that taking away privacy improves security, so it is purely a matter of security theatre.

Retaining telecommunications data is no help in fighting crime, according to a study of German police statistics, released Thursday. Indeed, it could even make matters worse… This is because users began to employ avoidance techniques, says AK Vorrat.

[From Retaining Data Does Not Help Fight Crime, Says Group – PCWorld]

This is precisely the trajectory that we will all be following. The twin pressures from Big Content and law enforcement mean that the monitoring, recording and analysis of internet traffic is inevitable. But it will also be largely pointless, as my own recent experiences have proven. When I was in China, I wanted to use Twitter but it was blocked. So I logged in to a VPN back in the UK and twittered away. When I wanted to listen to the football on Radio 5 while in Spain, the BBC told me that I couldn’t, so I logged back in to my VPN and cheered the Blues. When I want to watch “The Daily Show” from the UK or when I want to watch “The Killing” via iPlayer in the US, I just go via VPN.

I’m surprised more ISPs don’t offer this as a value-added service themselves. I already pay £100 per month for my Virgin triple-play (50Mb/s broadband, digital TV and telephone), so another £5 per month for OpenVPN would suit me fine.
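For what it’s worth, the service imagined here is essentially a hosted OpenVPN endpoint, and the client side is just a short configuration file along the following lines (the hostname is a made-up placeholder, and the certificate files would be issued by the provider):

```
client
dev tun
proto udp
remote vpn.example-isp.co.uk 1194   # hypothetical ISP-hosted endpoint
resolv-retry infinite
nobind
persist-key
persist-tun
ca ca.crt
cert client.crt
key client.key
cipher AES-256-GCM
verb 3
```

With something like this in place, all traffic exits from the provider’s UK address, which is exactly why geographic blocks and passive monitoring of the local link tell the observer so little.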

