We can contribute to childhood e-safety


We can use identity and authentication (ie “recognition”) technologies to improve Internet safety, if we use them correctly.

It is good to wander out of the comfort zone from time to time and expose your ideas to more acid tests. Hence I went along to the seminar on “Childhood and the Internet – Safety, Education and Regulation” in London in January. I was there for three main reasons:

  1. I am interested in the evolution of identification and authentication in an online environment, and protecting children is one of the cases that brings the mass market practicalities into sharp relief.
  2. We have clients who are developing recognition services, and it seems to me that if these services can contribute to a safer environment for children then we may have something of a win-win for encouraging adoption.
  3. Protecting children is an emotional topic, and as a responsible member of society it concerns me that emotional responses may not be society’s best responses. This is a difficult subject. If, as technologists, we point out that initiatives to protect children are pointless or even counterproductive, we may be accused of being sympathetic to criminals and perverts, so we need to learn to engage effectively. I’m not interested in childhood e-safety theatre, but in childhood e-safety.

The seminar was kicked off by Simon Milner, the Policy Director (UK and Ireland) for Facebook. He started off by noting that Facebook has a “real” names policy. Given my fascination with the topic, I found his comments quite interesting, as they were made on the same day that the head of Facebook, Mark Zuckerberg, was interviewed in Businessweek saying that the “real” names policy was being amended.

One thing about some of the new apps that will come as a shock to anyone familiar with Facebook: Users will be able to log in anonymously.

[From Facebook Turns 10: The Mark Zuckerberg Interview – Businessweek]

Simon went on to say that the “real” names policy, setting to one side whether it means anything or not, is a good thing (he didn’t really explain why and I didn’t get a chance to ask) and then talked about how children who are being bullied on Facebook can report the problem and so on. I know nothing about this topic, other than as a parent, so I can’t comment on how effective or otherwise these measures might be. To be honest, there were several talks that I’m not qualified to comment on so I won’t, other than to say I found some of the talks by the subject matter experts extremely thought-provoking and I’m glad I heard them.

The main discussion that I was interested in was led by Helen Goodman MP (the Shadow Minister for Culture, Media and Sport) and Claire Perry MP, who is the Prime Minister’s special advisor on preventing the sexualisation and commercialisation of childhood. The ex-McKinsey Ms. Perry attracted a certain amount of fame in web circles last year (just search on “#PornoPerry”) when she made some public statements that seemed to indicate that she didn’t completely understand how the internet worked, despite being behind the government’s “porn filter”. (I am not picking on her. I should explain for foreign readers that most MPs are lawyers, management consultants, property developers, PR flacks and such like, and they don’t really understand how anything actually works, least of all the interweb tubes. Only one of the 635 MPs in the British Parliament is a scientist.)

Now, let me be completely honest and point out that I have previously criticised not only the “real” names movement in general but Ms. Goodman’s views on anonymity in particular. I think she is wrong to demand “real” names. However, as I said a couple of years ago,

I’m not for one moment suggesting that Ms. Goodman’s concerns are not wholly real and heart felt. I’m sure they are.

[From The battle of the internet security experts – Tomorrow’s Transactions]

This does not make her right about what to do though. Forcing people to interact online using their mundane identity is a bad idea on so many levels.

But that was the same month that the Communist party struck its first major blow against Weibo, requiring users to register their real names with the service. From that point, those wishing to criticise the Party had to do so without the comforting blanket of anonymity and users started to rein themselves in.

[From China kills off discussion on Weibo after internet crackdown – Telegraph]

I’m not suggesting that Ms. Perry represents a government intent on creating a totalitarian corporatist state that reduces us wage-slaves to the level of serfs to be monitored at all times. I’m sure her good intentions are to block only those communications that challenge basic human decency and serve to undermine the foundations of our society, such as MTV, but the end of public online space seems a drastic step. What has been the result of the Chinese campaign to end anonymity? What is the practical impact of a real names policy?

Once an incalculably important public space for news and opinion – a fast-flowing river of information that censors struggled to contain – it has arguably now been reduced to a wasteland of celebrity endorsements, government propaganda and corporate jingles.

[From China kills off discussion on Weibo after internet crackdown – Telegraph]

None of us, I’m sure, would like to see pillars of our society such as the Daily Mail reduced to the level of “celebrity endorsements, government propaganda and corporate jingles”. Perhaps there is now less crime in China too, but I have yet to discover any statistics that would prove that. I don’t want this to happen to Twitter, Facebook and The Telegraph web site (where it is my right as an Englishman to post abuse about the Chancellor of the Exchequer should I so choose). So here is a practical and positive suggestion. At the seminar, Helen said that “The gap between real-world identity and online identity is at the root of [the problem of cyberbullying]”. So let’s close that gap. Not by requiring (and policing) “real” names, but by implementing pseudonymity correctly. I wrote an extended piece on this for Total Payments magazine recently.

Now imagine that I get a death threat from an authenticated account. I report the abuse. Twitter can (automatically) tell the police who authenticated the transaction (i.e., Barclays). The police can then obtain a warrant and ask Barclays who I am. Barclays will tell them my name and address and where I last used my debit card. If it was, say, Vodafone who had authenticated me rather than Barclays, then Vodafone could even tell the police where I am (or at least, where my phone is).

[From Dave Birch’s Guest Post: Anonymity – privilege or right? – Total Payments : Total Payments]

As I said, I don’t just want to talk about doing something about cyberbullying and the like, I actually want to do something about it. “Real” names are a soundbite, not a solution. What we need is a working identity infrastructure that allows for strongly-authenticated pseudonyms, so that bullies can be blocked and revealed but public space can remain open for discussion and debate. Then Facebook and Twitter and whatever could default to blocking unauthenticated pseudonyms, without forcing the kid looking for help on coming out, the woman looking at double-glazing options or the dreary middle-aged businessman railing against suicidal economic policies to reveal their identities unless they want to.
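To make that division of knowledge concrete, here is a minimal sketch of who would hold what under such an arrangement. It is my own toy illustration of the flow described in the Total Payments piece quoted above, not a description of any real Twitter or Barclays system; the class names, the pseudonym and the simple “warrant” flag are all assumptions.

```python
# A toy model of strongly-authenticated pseudonyms: the platform never learns who
# you are, only who vouched for you; the authenticator holds the real identity and
# releases it only against a warrant. Everything here is illustrative.

class Authenticator:
    """e.g. a bank or mobile operator that knows who its customers really are."""
    def __init__(self, name):
        self.name = name
        self._customers = {}          # pseudonym -> real-world customer record

    def issue_pseudonym(self, pseudonym, customer_record):
        self._customers[pseudonym] = customer_record
        return {"pseudonym": pseudonym, "authenticated_by": self.name}

    def unmask(self, pseudonym, warrant_present):
        if not warrant_present:       # the mapping is only released to lawful requests
            raise PermissionError("warrant required")
        return self._customers[pseudonym]


class Platform:
    """e.g. a social network: it stores the pseudonym and who vouched, nothing more."""
    def __init__(self):
        self._accounts = {}           # pseudonym -> name of the authenticator

    def register(self, credential):
        self._accounts[credential["pseudonym"]] = credential["authenticated_by"]

    def report_abuse(self, pseudonym):
        # All the platform can hand over is the name of the authenticator.
        return self._accounts[pseudonym]


barclays = Authenticator("Barclays")
twitter = Platform()

cred = barclays.issue_pseudonym("GrumpyCommuter99", {"name": "D. Birch", "town": "Woking"})
twitter.register(cred)

# Abuse is reported: the platform reveals the authenticator, not the person...
who_vouched = twitter.report_abuse("GrumpyCommuter99")
# ...and the police take a warrant to that authenticator to find out who it was.
print(who_vouched, barclays.unmask("GrumpyCommuter99", warrant_present=True))
```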

It’s all fun and games, until… no, wait, it is all fun and games

Consult Hyperion has been working on a project called VOME with the UK Technology Strategy Board. The idea of the project is to help people who are specifying and designing new, mass-market products and services (eg, Consult Hyperion’s clients) to understand privacy issues and make better decisions on architecture.

VOME, a research project that will reveal and utilise end users’ ideas and concepts regarding privacy and consent, facilitating a clearer requirement of the hardware and software required to meet end users’ expectations.

[From Technology Strategy Board | News | Latest News | New research projects help to ensure privacy of data]

Part of the project is about finding different ways to communicate with the public about privacy and factor their concerns into the requirements and design processes. Some of these ways involve various kinds of artistic experiments and it’s been fun to be involved with these. We’ve already taken part in a couple of unusual experiments, such as getting amateur writers to produce work about privacy from different perspectives.

More recently we have been working with Woking Writers’ Circle on the production of a collection of short stories and poems entitled ‘Privacy Perspectives’.

[From Media – Consult Hyperion]

As one of the technical team, I have to say that it’s very useful to be forced to try to think about things like privacy-enhancing technology, data protection and risk in these different contexts. One of the artistic experiments underway at the moment, primarily aimed at educating teenagers and young people about the value of their personal data, is the development of a card game that explores the concept. The card game experiment, led by Dr. David Barnard-Wills from Cranfield University, has reached the point where the game needs playtesting. So… we all met up in London to play a couple of games of it.

It turned out that not only had the chaps developed the game way further than I had imagined, but they’d invented a pretty good game. Think the constant trading of “Settlers of Catan” with the power structures of “Illuminati” mixed with the gameplay of “Crunch”. I liked it.

You get cards representing personal data of different kinds. Depending on who you are (each player is a different kind of business: bank, dating agency, insurance company etc.) you want different datasets and you want to link them together into your corporate database. A dataset is a line of three or more data items of the same kind. Here’s a corporate database with two datasets in it: the green biographical data 2-2-3 and the orange financial data 3-3-3; these will score at the end of the game.
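For anyone who likes to see rules as code, here is a toy sketch of that dataset idea as I understood it from the playtest. The scoring is my own assumption (I simply sum the card values in any run of three or more of a kind); it is not the designers’ actual rule set.

```python
# Toy scoring for the playtest rules as I understood them: a "dataset" is a run of
# three or more adjacent cards of the same kind, and here I assume its score is
# just the sum of the card values (an assumption, not the real rules).
from itertools import groupby

def score_database(rows):
    """rows: lists of (kind, value) card tuples laid out in the corporate database."""
    total = 0
    for row in rows:
        for kind, run in groupby(row, key=lambda card: card[0]):
            cards = list(run)
            if len(cards) >= 3:            # three or more of a kind form a dataset
                total += sum(value for _, value in cards)
    return total

# The database described above: a biographical 2-2-3 and a financial 3-3-3.
example = [
    [("biographical", 2), ("biographical", 2), ("biographical", 3)],
    [("financial", 3), ("financial", 3), ("financial", 3)],
]
print(score_database(example))             # 7 + 9 = 16 under these assumed rules
```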

There are event cards, that pop up each round to affect the play, and some special cards that the players get from time to time. Check out the database I ended up with in the game that my colleague and I won! I was the bank, so I was trying to collect financial data in my database but I was also trying to collect social data (purple) in my hand.

We had great fun, and we all contributed a ton of ideas. The game is being refined for a new version in a month or two, so we’ll try it again then and I’ll let you know how it’s going! I don’t know if the guys are actually going to turn it into a commercial product (that isn’t really the point of it) but I’d say they are on to a winner. My tip: instead of calling it “Privacy”, call it “Super Injunction”.

These opinions are my own (I think) and presented solely in my capacity as an interested member of the general public [posted with ecto]

What do they want us to do?

What do the politicians, regulators, police and the rest of them want us (technologists) to do about the interweb tubes? It might be easier to work out what to do if we had a clear set of requirements from them. Then, when confronted with a problem such as, for example, identity theft, we could build systems to make things better. In that particular case, things are currently getting worse.

Mr Bowron told the MPs this week that although recovery rates were relatively low, the police detection rate was 80 per cent. However, the number of cases is rising sharply with nearly 2m people affected by identity fraud every year.

[From FT.com / UK / Politics & policy – MP calls cybercrime Moriarty v PC Plod]

So, again, to pick on this particular case, what should be done?

Mr Head also clarified his position on the safety of internet banking, insisting that while traditional face-to-face banking was a better guarantee against fraud, he accepted that society had moved on. “If you take precautions, it’s safe,” he said.

[From FT.com / UK / Politics & policy – MP calls cybercrime Moriarty v PC Plod]

Yet I remember reading in The Daily Telegraph (just googled it: 20th November 2010) a story about an eBay fraud perpetrated by fraudsters who set up bank accounts using forged identity documents, so face-to-face (FTF) verification does not, as far as I can see, mean any improvement in security at all. In fact, I’m pretty sure that it is worse than nothing, because people are easier to fool than computers. I would argue that Mr. Head has things exactly wrong here, because an integrated identity infrastructure should not discriminate between FTF and remote transactions.

I think this sort of thing is actually representative of a much bigger problem around the online world. Here’s another example. Bob Gourley, the former CTO of the U.S. Defense Intelligence Agency, poses a fundamental and important question about the future identity infrastructure.

We must have ways to protect anonymity of good people, but not allow anonymity of bad people. This is going to be much harder to do than it is to say. I believe a structure could be put in place, with massive engineering, where all people are given some means to stay anonymous, but when a certain key is applied, their cloak can be peeled back. Hmmm. Who wants to keep those keys

[From A CTO analysis: Hillary Clinton’s speech on Internet freedom | IT Leadership | TechRepublic.com]

So, just to recap, Hillary says that we need an infrastructure that stops crime but allows free assembly. I have no idea how to square that circle, except to say that prevention and detection of crime ought to be feasible even with anonymity, which is the most obvious and basic way to protect free speech, free assembly and whistleblowers: it means doing more police work, naturally, but it can be done. By comparison, “knee jerk” reactions, attempting to force the physical world’s limited and simplistic identity model into cyberspace, will certainly have unintended consequences.

Facebook’s real-name-only approach is non-negotiable – despite claims that it puts political activists at risk, one of its senior policy execs said this morning.

[From Facebook’s position on real names not negotiable for dissidents • The Register]

I’ve had a Facebook account for quite a while, and it’s not in my “real” name. My friends know that John Q. Doe is me, so we’re linked and can happily communicate, but no-one else does. Which suits me fine. If my real name is actually Dave bin Laden, Hammer of the Infidel, but I register as John Smith, how on Earth are Facebook supposed to know whether “John Smith” is a “real” name or not? Ludicrous, and just another example of how broken the whole identity realm actually is.

For Facebook to actually check the real names, and then to accept the liabilities that will inevitably result, would be expensive and pointless even if it could be achieved. A much better solution is for Facebook to help with the construction and adoption of a proper digital identity infrastructure (such as NSTIC, for example) and then use it.

The implementation of NSTIC could force some companies, like Facebook, to change the way it does business.

[From Wave of the Future: Trusted Identities In Cyberspace]

That’s true, but it’s a good thing, and it’s good for Facebook as well as for other businesses and society as a whole. So, for example, I might use a persistent pseudonymous identity given to me by a mobile operator, say Vodafone UK. If I use that identity to obtain a Facebook identity, that’s fine by Facebook: they have a certificate from Vodafone UK to say that I’m a UK citizen or whatever. I use the Vodafone example advisedly, because it seems to me that mobile operators would be the natural providers of these kinds of credentials, having both the mechanism to interact FTF (shops) and remotely, as well as access to the SIM for key storage and authentication. Authentication is part of the story too.

But perhaps the US government’s four convenient “levels of assurance” (LOAs), which tie strong authentication to strong identity proofing, don’t apply to every use case under the sun. On the recent teleconference where I discussed these findings, we ended up looking at the example of World of Warcraft, which offers strong authentication but had to back off strong proofing.

[From Identity Assurance Means Never Having To Say “Who Are You, Again?” | Forrester Blogs]

Eve is, naturally, absolutely right to highlight this. There is no need for Facebook to know who I really am if I can prove that Vodafone knows who I am (and, importantly, that I’m over 13, although that may not be a requirement for much longer given Mr. Zuckerberg’s recent comments on age limits).
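As a sketch of what “proving that Vodafone knows who I am” could look like, here is my own illustration only: the attribute names and ad hoc JSON format are placeholders, and a real scheme would use standard credential formats with the persona key held in the SIM rather than in a script. The operator signs an attestation binding a persona’s public key to claims such as “UK” and “over 13”, and the relying party verifies the signature without ever learning a name.

```python
# A minimal sketch of an operator vouching for a pseudonymous persona. All names,
# attribute labels and the ad hoc JSON format are assumptions for illustration.
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

operator_key = Ed25519PrivateKey.generate()   # the operator's signing key (really an HSM)
operator_pub = operator_key.public_key()

persona_key = Ed25519PrivateKey.generate()    # generated and held on the subscriber's SIM
persona_pub = persona_key.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)

# The operator attests to attributes of the persona, never to the subscriber's name.
attestation = json.dumps({
    "persona_pub": persona_pub.hex(),
    "attributes": {"country": "UK", "age_over_13": True},
    "issuer": "operator.example",             # illustrative issuer identifier
}).encode()
attestation_sig = operator_key.sign(attestation)

# The relying party (Facebook, in the example above) checks the operator's signature
# and learns only the attested attributes, not who the subscriber is.
try:
    operator_pub.verify(attestation_sig, attestation)
    print("accepted:", json.loads(attestation)["attributes"])
except InvalidSignature:
    print("attestation rejected")
```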

These opinions are my own (I think) and presented solely in my capacity as an interested member of the general public [posted with ecto]

Tough choices

The relationship between identity and privacy is deep: privacy (in the sense of control over data associated with an identity) ought to be facilitated by the identity infrastructure. But that control cannot be absolute: society needs a balance in order to function, so the infrastructure ought to include a mechanism for making that balance explicit. It is very easy to set the balance in the wrong place even with the best of intentions. And once the balance is set in the wrong place, it may have most undesirable consequences.

An obsession with child protection in the UK and throughout the EU is encouraging a cavalier approach to law-making, which less democratic regimes are using to justify much broader repression on any speech seen as extreme or dangerous…. “The UK and EU are supporting measures that allow for websites to be censored on the basis of purely administrative processes, without need for judicial oversight.”

[From Net censors use UK’s kid-safety frenzy to justify clampdown • The Register]

So a politician in one country decides, say, that we should all be able to read our neighbour’s emails just in case our neighbour is a pervert or serial killer or terrorist, and the next thing we know is that Iranian government supporters in the UK are reading their neighbours’ emails and passing on their details to a hit squad if the emails contain any anti-regime comments.

By requiring law enforcement backdoors, we open ourselves to surveillance by hackers and foreign intelligence agencies

[From slight paranoia: Web 2.0 FBI backdoors are bad for national security]

This is, of course, absolutely correct, and it was thrown into relief today when I read that…

Some day soon, when pro-democracy campaigners have their cellphones confiscated by police, they’ll be able to hit the “panic button” — a special app that will both wipe out the phone’s address book and emit emergency alerts to other activists… one of the new technologies the U.S. State Department is promoting to equip pro-democracy activists in countries ranging from the Middle East to China with the tools to fight back against repressive governments.

[From U.S. develops panic button for democracy activists | Reuters]

Surely this also means that terrorists about to execute a dastardly plot in the US will be able to wipe their mobile phones and alert their co-conspirators when the FBI knock on the door and, to use the emotive example, that child pornographers will be able to wipe their phones and alert fellow abusers when the police come calling. Tough choices indeed. We want to protect individual freedom, so we must create private space. And yet we still need some kind of “smash the glass” option, because criminals do use the interweb tubes and there are legitimate law enforcement and national security interests here. Perhaps, however, the way forward is to move away from the idea of balance completely.

In my own area of study, the familiar trope of “balancing privacy and security” is a source of constant frustration to privacy advocates, because while there are clearly sometimes tradeoffs between the two, it often seems that the zero-sum rhetoric of “balancing” leads people to view them as always in conflict. This is, I suspect, the source of much of the psychological appeal of “security theater”: If we implicitly think of privacy and security as balanced on a scale, a loss of privacy is ipso facto a gain in security. It sounds silly when stated explicitly, but the power of frames is precisely that they shape our thinking without being stated explicitly.

[From The Trouble With “Balance” Metaphors]

This is a great point, and when I read it, it immediately helped me to think more clearly. There is no evidence that taking away privacy improves security, so it’s purely a matter of security theatre.

Retaining telecommunications data is no help in fighting crime, according to a study of German police statistics, released Thursday. Indeed, it could even make matters worse… This is because users began to employ avoidance techniques, says AK Vorrat.

[From Retaining Data Does Not Help Fight Crime, Says Group – PCWorld]

This is precisely the trajectory that we will all be following. The twin pressures from Big Content and law enforcement mean that the monitoring, recording and analysis of internet traffic is inevitable. But it will also be largely pointless, as my own recent experiences have proven. When I was in China, I wanted to use Twitter but it was blocked. So I logged in to a VPN back in the UK and twittered away. When I wanted to listen to the football on Radio 5 while in Spain, the BBC told me that I couldn’t, so I logged back in to my VPN and cheered the Blues. When I want to watch “The Daily Show” from the UK or when I want to watch “The Killing” via iPlayer in the US, I just go via VPN.

I’m surprised more ISPs don’t offer this as a value-added service themselves. I already pay £100 per month for my Virgin triple-play (50Mb/s broadband, digital TV and telephone), so another £5 per month for OpenVPN would suit me fine.

Two-faced, at the least

The end of privacy is in sight, isn’t it? After all, we are part of a generation that twitters and updates its path through the world, telling everyone everything. Not because Big Brother demands it, but because we want to. We have, essentially, become one huge distributed Big Brother. We give away everything about ourselves. And I do mean everything.

Mr. Brooks, a 38-year-old consultant for online dating Web sites, seems to be a perfect customer. He publishes his travel schedule on Dopplr. His DNA profile is available on 23andMe. And on Blippy, he makes public everything he spends with his Chase Mastercard, along with his spending at Netflix, iTunes and Amazon.com.

“It’s very important to me to push out my character and hopefully my good reputation as far as possible, and that means being open,” he said, dismissing any privacy concerns by adding, “I simply have nothing to hide.”

[From T.M.I? Not for Sites Focused on Sharing – NYTimes.com]

We’ll come back to the reputation thing later on, but the point I wanted to make is that I think this is dangerous thinking, the rather lazy “nothing to hide” meme. Apart from anything else, how do you know whether you have anything to hide if you don’t know what someone else is looking for?

To Silicon Valley’s deep thinkers, this is all part of one big trend: People are becoming more relaxed about privacy, having come to recognize that publicizing little pieces of information about themselves can result in serendipitous conversations — and little jolts of ego gratification.

[From T.M.I? Not for Sites Focused on Sharing – NYTimes.com]

We haven’t had the Chernobyl yet, so I don’t privilege the views of the “deep thinkers” on this. In fact, I share the suspicion that these views are unrepresentative, because they come from such a narrow stratum of society.

“No matter how many times a privileged straight white male tech executive tells you privacy is dead, don’t believe it,” she told upwards of 1,000 attendees during the opening address. “It’s not true.”

[From Privacy still matters at SXSW | Tech Blog | FT.com]

So what can we actually do? Well, I think that the fragmentation of identity and the support of multiple personas is one good way to ensure that the privacy that escapes us in the physical world will be inbuilt in the virtual world. Not everyone agrees. If you are a rich white guy living in California, it’s pretty easy to say that multiple identities are wrong, that you have no privacy so get over it, that if you have nothing to hide you have nothing to fear, and such like. But I disagree. So let’s examine a prosaic example to see where it takes us: not political activists trying to tweet in Iran or Algerian pro-democracy Facebook groups or whatever, but the example we touched on a few weeks ago when discussing comments on newspaper stories: blog comments.

There’s an undeniable problem with people using the sort-of-anonymity of the web, the cyber-equivalent of the urban anonymity that began with the industrial revolution, to post crap, spam, abuse and downright disgusting comments on blog posts. And there is no doubt that people can use that sort-of-anonymity to do stupid, misleading and downright fraudulent things.

Sarah Palin has apparently created a second Facebook account with her Gmail address so that this fake “Lou Sarah” person can praise the other Sarah Palin on Facebook. The Gmail address is available for anyone to see in this leaked manuscript about Sarah Palin, and the Facebook page for “Lou Sarah” — Sarah Palin’s middle name is “Louise” — is just a bunch of praise and “Likes” for the things Sarah Palin likes and writes on her other Sarah Palin Facebook page

[From Sarah Palin Has Secret ‘Lou Sarah’ Facebook Account To Praise Other Sarah Palin Facebook Account]

Now, that’s pretty funny. But does it really matter? If Lou Sarah started posting death threats or child pornography then, yeah, I suppose it would, but I’m pretty sure there are laws about that already. But astroturfing with Facebook and posting dumb comments on tedious blogs, well, who cares? If Lou Sarah were to develop a reputation for incisive and informed comment, and I found myself looking forward to her views on key issues of the day, would it matter to me that she is an alter ego? I wonder.

I agree with websites such as LinkedIn and Quora that enforce real names, because there is a strong “reputation” angle to their businesses.

[From Dean Bubley’s Disruptive Wireless: Insistence on a single, real-name identity will kill Facebook – gives telcos a chance for differentiation]

Surely, the point here is that on LinkedIn and Quora (to be honest, I got a bit bored with Quora and don’t go there much now), I want the reputation for work-related skills, knowledge, experience and connections, so I post with my real name. When I’m commenting at my favourite newspaper site, I still want reputation – I want people to read my comments – but I don’t always want them connected either with each other or with the physical me (I learned this lesson after posting in a discussion about credit card interest rates and then getting some unpleasant e-mails from someone ranting on about how interest is against Allah’s law and so on).

My identity should play ZERO part in the arguments being made. Otherwise, it’s just an appeal to authority.

[From The Real “Authenticity Killer” (and an aside about how bad the Yahoo brand has gotten) — Scobleizer]

To be honest, I think I pretty much agree with this. A comment thread on a discussion site about politics or football should be about the ideas, the argument, not “who says”. I seem to remember, from when I used to teach an MBA course on IT Management a long time ago, that one of the first lessons of moving to what was then called computer-mediated communication (CMC) for decision-making was that it led to better results precisely because of this. (I also remember that women would often create male pseudonyms for these online communications because research showed that their ideas were discounted when they posted as women.)

It isn’t just about blog comments. Having a single identity, particularly the Facebook identity, it seems to me, is fraught with risk. It’s not the right solution. It’s almost as if it was built in a different age, where no-one had considered what would happen when the primitive privacy model around Facebook met commercial interests with the power of the web at their disposal.

that’s the approach taken by two provocateurs who launched LovelyFaces.com this week, with profiles — names, locations and photos — scraped from publicly accessible Facebook pages. The site categorizes these unwitting volunteers into personality types, using a facial recognition algorithm, so you can search for someone in your general area who is “easy going,” “smug” or “sly.”

[From ‘Dating’ Site Imports 250,000 Facebook Profiles, Without Permission | Epicenter | Wired.com]

Nothing to hide? None of my Facebook profiles is in my real name. My youngest son has great fun in World of Warcraft and is very attached to his guilds, and so on, but I would never let him do this in his real name. There’s no need for it and every reason to believe that it would make identity problems of one form or another far worse (and, in fact, the WoW rebellion over “real names” was led by the players themselves, not privacy nuts). But you have to hand it to Facebook. They’ve been out there building stuff while people like me have been blogging about identity infrastructure.

Although it’s not apparent to many, Facebook is in the process of transforming itself from the world’s most popular social-media website into a critical part of the Internet’s identity infrastructure

[From Facebook Wants to Supply Your Internet Driver’s License – Technology Review]

Now Facebook may very well be an essential part of the future identity infrastructure, but I hope that people will learn how to use it properly.

George Bronk used snippets of personal information gleaned from the women’s Facebook profiles, such as dates of birth, home addresses, names of pets and mother’s maiden names to then pass the security questions to reset the passwords on their email accounts.

[From garlik – The online identity experts]

I don’t know if we should expect the public, many of whom are pretty dim, to take more care over their personal data, or if we, as responsible professionals, should design an infrastructure that at least makes it difficult for them to do dumb things with their personal data, but I do know that without some effort, design and vision, it’s only going to get worse for the time being.

“We are now making a user’s address and mobile phone number accessible as part of the User Graph object,”

[From The Next Facebook Privacy Scandal: Sharing Phone Numbers, Addresses – Nicholas Jackson – Technology – The Atlantic]

Let’s say, then, for the sake of argument, that I want to mitigate the dangers inherent in allowing any one organisation to gather too much data about me, so I want to engage online using multiple personas to at least partition the problem of online privacy. Who might provide these multiple identities? In an excellent post on this, Forum friend Dean Bubley aggressively asserts

I also believe that this gives the telcos a chance to fight back against the all-conquering Facebook – if, and only if, they have the courage to stand up for some beliefs, and possibly even push back against political pressure in some cases. They will also need to consider de-coupling identity from network-access services.

[From Dean Bubley’s Disruptive Wireless: Insistence on a single, real-name identity will kill Facebook – gives telcos a chance for differentiation]

The critical architecture here is pseudonymity, and an obvious way to implement it is by using multiple public-private key pairs and then binding them to credentials to form personas that can be selected from the handset, making the mobile phone into an identity remote control and allowing you to select which identity you want to assert on a per-transaction basis if so desired. I’m sure Dean is right about the potential. Now, I don’t want to sound like the grumpy old man of Digital Identity, but this is precisely the idea that Stuart Fiske and I put forward to BT Cellnet back in the days of Genie – the idea was the “Genie Passport” to online services. But over the last decade, the idea has never gone anywhere with any of the MNOs that we have worked for. Well, now is the right time to start thinking about this seriously in MNO-land.

But mark my words, we WILL have a selector-based identity layer for the Internet in the future. All Internet devices will have a selector or a selector proxy for digital identity purposes.

[From Aftershocks of an untimely death announcement | IdentitySpace]

The most logical place for this selector is in the handset, managing multiple identities in the UICC, accessible OTA or via NFC. The use case is very appealing: I select ‘Dave Birch’ on my handset, tap it to my laptop and there is all of the ‘Dave Birch’ stuff. Change the handset selector to ‘David G.W. Birch’ and then tap the handset to the laptop again and all of the ‘Dave Birch’ stuff is gone and all of the ‘David G.W. Birch’ stuff is there. It would be a very appealing implementation of a general-purpose identity infrastructure and a means for MNOs to move to smart pipe services. But is it too late? Perhaps the arrival of non-UICC secure elements (SEs) means that more agile organisations will move to exploit the identity opportunity.
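To make the selector idea concrete, here is a minimal sketch, purely my own illustration: in a real implementation the private keys would live in the UICC or secure element and the “tap” would be an NFC exchange rather than a function call. Each persona is just a key pair bound to its own bundle of credentials, and the handset chooses which one is presented for a given transaction.

```python
# A toy "identity remote control": persona key pairs bound to credential bundles,
# selected per transaction. Names and attributes are illustrative assumptions.
from dataclasses import dataclass, field
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

@dataclass
class Persona:
    label: str
    credentials: dict                 # e.g. operator- or bank-issued attestations
    key: Ed25519PrivateKey = field(default_factory=Ed25519PrivateKey.generate)

    def present(self, transaction: bytes) -> dict:
        """What the laptop would receive on a 'tap': credentials plus a signature."""
        return {"label": self.label,
                "credentials": self.credentials,
                "signature": self.key.sign(transaction)}

# Two personas held on the same handset.
handset = {
    "Dave Birch": Persona("Dave Birch", {"persona_type": "blogger"}),
    "David G.W. Birch": Persona("David G.W. Birch", {"persona_type": "company director"}),
}

# The selector step: pick a persona, tap, and only that persona's stuff is presented.
selected = handset["Dave Birch"]
print(selected.present(b"log in to laptop")["label"])
```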

How smart?

I had an interesting conversation with the CTO of a multi-billion company at the Mobile World Congress in Barcelona. He, like me, felt that something has been going wrong in the world of identity, authentication, credentials and reputation as we try to create electronic versions of physical-world legacy constructs instead of starting from a new set of requirements for the virtual world and working back. He was talking about machines, though, not people.

Robots could soon have an equivalent of the internet and Wikipedia. European scientists have embarked on a project to let robots share and store what they discover about the world. Called RoboEarth it will be a place that robots can upload data to when they master a task, and ask for help in carrying out new ones.

[From BBC News – Robots to get their own internet]

RoboEarth? No! Skynet, please. And Skynet needs to share an identity infrastructure with the interweb tubes, because of the rich interaction between personal identity and machine identity that will be integral to future living. The internet of things infrastructure needs an identity of things infrastructure to work properly. Our good friend Rob Bratby from Olswang wrote, accurately, that

The deployment of smart meters is one of the most significant deployments of what is often described as ‘the internet of things’, but its linkage to subscriber accounts and individual homes, and the increasing prevalence of data ‘mash-ups’ (cross-referencing of multiple databases) will require these issues to be thought about in a more sophisticated and nuanced way.

[From Watching the connectives | A lawyer’s insight into telecoms and technology]

I can confirm from our experiences advising organisations in the smart metering value chain that these issues are certainly not being thought about in either sophisticated or nuanced ways.

“The existing business policies and practices of utilities and third-party smart grid providers may not adequately address the privacy risks created by smart meters and smart appliances,

[From Grid Regulator: The Internet & Privacy Concerns Will Shape Grid: Cleantech News and Analysis «]

Not my words, the Federal Energy Regulatory Commission in the US. Too right. The lack of an identity infrastructure isn’t just a matter of Facebook data getting into the wrong hands or having to have a different 2FA dongle for each of your bank accounts. It’s a matter of critical infrastructure starting down the wrong path, from which it will be hard to recover after the first Chernobyl of the smart meter age: the first time some kids, or the North Korean government, or a software error at the gas company shuts down all the meters, or publishes all of the meter readings in a Google Maps-style mashup so that burglars can find out which houses in a street are empty, or the News of the World can get a text alert when a sleb gets home, or whatever.

My CTO friend was, I’m certain, right to suggest that we need to start by working out what we want identity to look like in general and then work out what the subset of that in the physical world needs to look like. If we do start building an EUTIC or a UKTIC to complement NSTIC then I think it should work for smart meters as well as for dumb people.

Theoretically private

The Institute for Advanced Legal Studies hosted an excellent seminar by Professor Michael Birnhack from the Faculty of Law at Tel Aviv University who was talking about “A Quest for a Theory of Privacy”.

He pointed out that while we’re all very worried about privacy, we’re not really sure what should be done. It might be better to pause and review the legal “mess” around privacy and then try to find an intellectually-consistent way forward. This seems like a reasonable course of action to me, so I listened with interest as Michael explained that for most people, privacy issues are becoming more noticeable with Facebook, Google Buzz, Airport “nudatrons”, Street View, CCTV everywhere (particularly in the UK) and so on. (I’m particularly curious about the intersection between new technologies — such as RFID tags and biometrics — and public perceptions of those technologies, so I found some of the discussion very interesting indeed.)

Michael is part of the EU PRACTIS research group that has been forecasting technologies that will have an impact on privacy (good and bad: PETs and threats, so to speak). They use a roadmapping technique that is similar to the one we use at Consult Hyperion to help our clients to plan their strategies for exploiting new transaction technologies and is reasonably accurate within a 20-year horizon. Note that for our work for commercial clients, we use a 1-2 year, 2-5 year, and 5+ year roadmap. No-one in a bank or a telco cares about the 20-year view, even if we could predict it with any accuracy — and given that I’ve just read the BBC correspondents’ informed predictions for 2011 and they don’t mention, for example, what’s been going on in Tunisia and Egypt, I’d say that’s pretty difficult.

One key focus that Michael rather scarily picked out is omnipresent surveillance, particularly of the body (data about ourselves, that is, rather than data about our activities), with data acted upon immediately, but perhaps it’s best not to go into that sort of thing right now!

He struck a definite chord when he said that it might be the new business models enabled by new technologies that are the real threat to privacy, not the technologies themselves. These mean that we need to approach a number of balances in new ways: privacy versus law enforcement, privacy versus efficiency, privacy versus freedom of expression. Moving to try and set these balances, via the courts, without first trying to understand what privacy is may take us in the wrong direction.

His idea for working towards a solution was plausible and understandable. Noting that privacy is a vague, elusive and contingent concept, but nevertheless a fundamental human right, he said that we need a useful model to start with. We can make a simple model by bounding a triangle with technology, law and values: this gives three sets of tensions to explore.

Law-Technology. It isn’t as simple as saying that law lags technology. In some cases, law attempts to regulate technology directly, sometimes indirectly. Sometimes technology responds against the law (eg, anonymity tools) and sometimes it co-operates (eg, PETs — a point that I thought I might disagree with Michael about until I realised that he doesn’t quite mean the same thing as I do by PETs).

Technology-Values. Technological determinism is wrong, because technology embodies certain values (with reference to the Social Construction of Technology, SCOT). Thus (as I think repressive regimes around the world are showing) it’s not enough to just have a network.

Law-Values, or in other words, jurisprudence, finds courts choosing between different interpretations. This is where Michael got into the interesting stuff from my point of view, because I’m not a lawyer and so I don’t know the background of previous efforts to resolve tensions on this line.

Focusing on that third set of tensions, then, in summary: since Warren and Brandeis’ 1890 definition of privacy as the right to be let alone, there have been further attempts to pick out a particular bundle of rights and call them privacy. Alan Westin’s 1967 definition was privacy as control: the claims of individuals or groups or institutions to determine for themselves when, how and to what extent information about them is communicated to others.

This is a much better approach than the property right approach, where disclosing or not disclosing, “private” and “public” are the states of data. Think about the example of smart meters, where data outside the home provides information about how many people are in the home, what time they are there and so on. This shows that the public/private, in/out, home/work barriers are not useful for formulating a theory. The alternative that he put forward considers the person, their relationships, their community and their state. I’m not a lawyer so I probably didn’t understand the nuances, but this didn’t seem quite right to me, because there are other dimensions around context, persona, transaction and so on.
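Returning to the smart meter example for a moment, here is a toy sketch (entirely my own, with made-up readings and an arbitrary threshold rather than any real occupancy-detection method) of how even crude consumption data held outside the home is enough to guess when nobody is in:

```python
# Made-up half-hourly kWh readings for one day; the threshold is an arbitrary
# illustrative assumption, not a real occupancy-detection algorithm.
readings = {
    "00:00": 0.12, "06:00": 0.15, "07:30": 0.80, "08:30": 0.20,
    "12:00": 0.14, "17:30": 0.95, "19:00": 1.40, "23:00": 0.60,
}

BASELINE_KWH = 0.25   # fridge, standby loads and so on; more than this suggests someone is in

def probably_occupied(kwh: float) -> bool:
    return kwh > BASELINE_KWH

for slot, kwh in readings.items():
    status = "someone home" if probably_occupied(kwh) else "probably empty"
    print(f"{slot}: {kwh:.2f} kWh -> {status}")
```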

The idea of managing the decontextualisation of self seemed solid to my untrained ear and eye, and I could see how this fitted with the Westin definition of control, taking on board the point that privacy isn’t property and it isn’t static (because it is technology-dependent). I do think that choices about identity ought, in principle, to be made on a transaction-by-transaction basis, even if we set defaults and delegate some of the decisions to our technology, and the idea that different personas, or avatars, might bundle some of these choices seems practical.

Michael’s essential point is, then, that a theory of privacy that is formulated by examining definitions, classifications, threats, descriptions, justifications and concepts around privacy from scratch will be based on the central notion of privacy as control rather than secrecy or obscurity. As a technologist, I’m used to the idea that privacy isn’t about hiding data or not hiding it, but about controlling who can use it. Therefore Michael’s conclusions from jurisprudence connect nicely with my observations from technology.

An argument that I introduced in support of his position during the questions draws on previous discussions around the real and virtual boundary, noting that the lack of control in physical space means the end of privacy there, whereas in virtual space it may thrive. If I’m walking down the street, I have no control over whether I am captured by CCTV or not. But in virtual space, I can choose which persona to launch into which environment, which set of relationships and which business deals. I found Michael’s thoughts on the theory behind this fascinating, and I’m sure I’ll be returning to them in the future.

These opinions are my own (I think) and presented solely in my capacity as an interested member of the general public [posted with ecto]

Real-time identity

Naturally, given my obsessions, I was struck by a subset of the Real-Time Club discussions about identities on the web at their evening with Aleks Krotoski. In particular, I was struck by the discussion about multiple identities on the web, because it connects with some work we (Consult Hyperion) have been doing for the European Commission. One point that was common to a number of the discussions was the extent to which identity is needed for, or integral to, online transactions. Generally speaking, I think many people mistake the need for some knowledge about a counterparty with the need to know who they are, a misunderstanding that actually makes identity fraud worse because it leads to identities being shared more widely than they need be. There was a thread to the discussion about children using the web, as there always is in such discussions, and this led me to conclude that proving that you are over (or under) 18 online might well be the acid test of a useful identity infrastructure: if your kids can’t easily figure out a way to get round it, then it will be good enough for e-government, e-business and the like.

I think the conversation might have explored more about privacy vs. anonymity, because many transactions require the former but not the latter. But then there should be privacy rather than anonymity for a lot of things, and there should be anonymity for some things (even if this means friction in a free society, as demonstrated by the Wikileaks storm). I can see that this debate is going to be difficult to organise in the public space, simply because people don’t think about those topics in a rich enough way: they think common sense is a useful guide which, when it comes to online identity, it isn’t.

On a different subject, a key element of the evening’s discussion was whether the use of social media, and the directions of social media technology, lead to more or less serendipity. (Incidentally, did you know that the word “serendipity” was invented by Horace Walpole in 1754?) Any discussion about social media naturally revolves around Facebook.

Facebook is better understood, not as a country, but as a refugee camp for people who feel today’s lack of identity-forging social experience.

[From Facebook: the heart in a heartless world | spiked]

I don’t agree, but I can see the perspective. But I don’t see my kids fleeing into Facebook, I see them using Facebook to multiply and enrich their interpersonal interactions. Do they meet new people on Facebook? Yes, they do. Is that true for all kids, of all educational abilities, of all socio-economic classes? I don’t know (and I didn’t find out during the evening, because everyone who was discussing the issue seemed to have children at expensive private schools, so they didn’t seem like a statistically-representative cross-section of the nation).

Personally, I would come down on the side of serendipity. Because of social media I know more people than I did before, but I’ve also physically met more people than I knew before: social media means that I am connected with people who are geographically and socially more dispersed. I suppose you might argue that it’s left me less connected with the people who live across the street from me, but then I don’t have very much in common with them.

These opinions are my own (I think) and presented solely in my capacity as an interested member of the general public [posted with ecto]

