Confronting the issue

There’s an interesting choice of words in the O’Reilly Radar publication on “ePayments 2010”. The report’s subtitle is “Emerging Platforms, Embracing Mobile and Confronting Identity”. I thought this was expressive: the payments industry is “confronting” identity.

…even as consumers come to expect online systems to know more about them in order to facilitate transactions and reduce friction in accomplishing tasks, they are likely to want to maintain control over which online services have access to distinct aspects of their identity.

Very well put. It illustrates a point that I find myself making in more and more discussions these days: that if the players in the payments industry don’t deal with the identity problem, then someone else will.

Identity is critical in many ways: It ensures the right degree of user personalization, enables the reliable billing of services used across a platform, and provides a strong foundation of trust for any transaction occurring on the platform.

[From Making Sense of Ever-Changing Payment Technologies: The Year of APIs and the Reshaping of the Payment Ecosystem – pymnts.com]

Patrick is right to highlight the key role of identity in constructing the future payments infrastructure, although I would draw a slightly different diagram to illustrate the relationship. He has drawn identity on top of payment services, whereas I would draw them side-by-side to show that some commerce applications will use identity and some will not, and some commerce applications will use payments and some will not. This isn’t just a payments issue, of course. It’s rapidly becoming a major block on the development of the online economy. There’s a Chernobyl coming, and the recent fuss about Sony and Sega will appear utterly trivial in comparison. I’m not smart enough to know where or when it will happen, but it will happen. If I had to take a wild guess, I might be tempted to predict the epicentre if not the cause or symptoms.

I trust Facebook to give the messages that I type to my ‘friends’. I trust Facebook with the login details to my Yahoo email account… Even in the last week at least four of my friends have been link-jacked in Facebook – whereby their accounts start spewing malicious links onto the walls of their friends.

[From Trust co-opetition is the key to avoiding disintermediation « in2payments]

It’s the interlinking via social networking that is precisely the danger, because it means that when something goes wrong it goes connectedly wrong and gets out of control in unpredictable ways. Something has got to be done to make identity mischief substantially more difficult. But how?

We need online identities anchored in hardware cryptography. Everybody who does financial cryptography understands that for anything of value, you can’t store the keys in software. You need hardware protected keys, with a cryptoprocessor to operate on them, and very importantly, a trusted UI to the human that doesn’t involve hackable software. EMV is a good basis for this.

[From The Case for EMV Chip Cards in the US? — Payments Views from Glenbrook Partners]

Hear hear. I’d say that it was the chip with a crypto co-processor that is the basis (EMV is just an application running on such a chip) but the point holds. So where are these chips today? Well, they exist in your chip and PIN card in a sort of autistic form, with limited communication and narrow bandwidth through which we can reach the smart core. And they exist in your mobile phone, in the form of the UICC, where they have high bandwidth, constant connectivity, a UI, huge memory and an ecosystem beyond the device. And they will soon exist in your mobile phone, set-top box and elsewhere in the Secure Element (SE). (As an aside, in some models the SE will be resident in the UICC, so there may only be one physical chip.)
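To make the point about hardware-protected keys concrete, here is a minimal challenge-response sketch in Python. It is only a simulation: I have used an HMAC over a fresh nonce where a real secure element would use asymmetric keys inside tamper-resistant silicon, and the class names are my own invention.

```python
import hashlib
import hmac
import os

class SecureElement:
    """Simulates a tamper-resistant chip: the key never leaves this object.
    In real hardware the key lives in silicon, unreadable by the host OS."""

    def __init__(self, key: bytes):
        self._key = key

    def sign_challenge(self, challenge: bytes) -> bytes:
        # The cryptoprocessor computes the response internally; host
        # software only ever sees the challenge and the response.
        return hmac.new(self._key, challenge, hashlib.sha256).digest()

class RelyingParty:
    """Whoever wants proof that the genuine chip is present."""

    def __init__(self, registered_key: bytes):
        self._key = registered_key

    def authenticate(self, se: SecureElement) -> bool:
        challenge = os.urandom(32)  # a fresh nonce defeats replay attacks
        response = se.sign_challenge(challenge)
        expected = hmac.new(self._key, challenge, hashlib.sha256).digest()
        return hmac.compare_digest(response, expected)

key = os.urandom(32)
se = SecureElement(key)
bank = RelyingParty(key)
assert bank.authenticate(se)                              # the genuine chip passes
assert not RelyingParty(os.urandom(32)).authenticate(se)  # the wrong key fails
```

The point of the structure is that malware on the host can ask the chip to sign things, but can never extract the key itself.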

Therefore, there is an opportunity to roll-out an SE-based infrastructure, perhaps in the NSTIC architecture, that sets us down the path to identity security. I’m surprised that, in Europe at least, the mobile operators haven’t already got together to develop their joint response to NSTIC and begun work on the business models that it spawns. The mobile operator is a natural identity and attribute provider and they already have the tamper-resistant hardware (ie, UICCs) out in the market. They know the customer, they know the network, they know the device. I should be logging on to everything using my handset already, not messing about with passwords and secret phrases and mother’s maiden name.

From the point of view of the UK, where the national identity card scheme has just been scrapped and there is no alternative identity infrastructure in place, there is much to be admired in the US approach.

[From Digital Identity: USTIC]

This may be another area where the ease of use afforded by NFC makes for a big difference in the shape of the marketplace and the trajectory of the stakeholders. There were some early experiments in SIM-based secure PKI, but they were very, very clunky because they needed SMS or Bluetooth to connect the handset to the target device, like a PC or a kiosk (or a POS). But in the new world of NFC, what could be simpler: use the menu on the phone to select an identity, tap and go online. And since the SE can handle the proper cryptography, my phone can tell whether it is talking to the real Barclays as well as Barclays working out whether it is talking to my phone. The NSTIC framework, when combined with the security and ease-of-use of NFC in mobile phones, may not be the whole solution, but it’s certainly a plausible hypothesis about what that solution may grow from.
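The mutual authentication in that last sentence (my phone checking that it is talking to the real Barclays, and Barclays checking that it is talking to my phone) amounts to running challenge-response in both directions. A sketch, again with HMAC standing in for the asymmetric cryptography a real SE would use, and with invented class names:

```python
import hashlib
import hmac
import os

class Party:
    """Holds its own signing key plus the key it expects the peer to use."""

    def __init__(self, my_key: bytes, peer_key: bytes):
        self.my_key, self.peer_key = my_key, peer_key

    def respond(self, challenge: bytes) -> bytes:
        return hmac.new(self.my_key, challenge, hashlib.sha256).digest()

    def check(self, challenge: bytes, response: bytes) -> bool:
        expected = hmac.new(self.peer_key, challenge, hashlib.sha256).digest()
        return hmac.compare_digest(expected, response)

phone_key, bank_key = os.urandom(32), os.urandom(32)
phone = Party(phone_key, bank_key)
bank = Party(bank_key, phone_key)

# The phone satisfies itself that this really is the bank...
c1 = os.urandom(32)
assert phone.check(c1, bank.respond(c1))

# ...and the bank satisfies itself that this really is my phone.
c2 = os.urandom(32)
assert bank.check(c2, phone.respond(c2))
```

In practice the two directions would use certificates under a common root rather than pre-shared keys, so that the phone and the bank need not have met before.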

These opinions are my own (I think) and presented solely in my capacity as an interested member of the general public [posted with ecto]

Tough choices

The relationship between identity and privacy is deep: privacy (in the sense of control over data associated with an identity) ought to be facilitated by the identity infrastructure. But that control cannot be absolute: society needs a balance in order to function, so the infrastructure ought to include a mechanism for making that balance explicit. It is very easy to set the balance in the wrong place even with the best of intentions. And once the balance is set in the wrong place, it may have most undesirable consequences.

An obsession with child protection in the UK and throughout the EU is encouraging a cavalier approach to law-making, which less democratic regimes are using to justify much broader repression on any speech seen as extreme or dangerous…. “The UK and EU are supporting measures that allow for websites to be censored on the basis of purely administrative processes, without need for judicial oversight.”

[From Net censors use UK’s kid-safety frenzy to justify clampdown • The Register]

So a politician in one country decides, say, that we should all be able to read our neighbour’s emails just in case our neighbour is a pervert or serial killer or terrorist, and the next thing we know, Iranian government supporters in the UK are reading their neighbours’ emails and passing on their details to a hit squad if the emails contain any anti-regime comments.

By requiring law enforcement backdoors, we open ourselves to surveillance by hackers and foreign intelligence agencies

[From slight paranoia: Web 2.0 FBI backdoors are bad for national security]

This is, of course, absolutely correct, and it was thrown into relief today when I read that…

Some day soon, when pro-democracy campaigners have their cellphones confiscated by police, they’ll be able to hit the “panic button” — a special app that will both wipe out the phone’s address book and emit emergency alerts to other activists… one of the new technologies the U.S. State Department is promoting to equip pro-democracy activists in countries ranging from the Middle East to China with the tools to fight back against repressive governments.

[From U.S. develops panic button for democracy activists | Reuters]

Surely this also means that terrorists about to execute a dastardly plot in the US will be able to wipe their mobile phones and alert their co-conspirators when the FBI knock on the door and, to use the emotive example, that child pornographers will be able to wipe their phones and alert fellow abusers when the police come calling. Tough choices indeed. We want to protect individual freedom so we must create private space. And yet we still need some kind of “smash the glass” option, because criminals do use the interweb tubes and there are legitimate law enforcement and national security interests here. Perhaps, however, the way forward is to move away from the idea of balance completely.

In my own area of study, the familiar trope of “balancing privacy and security” is a source of constant frustration to privacy advocates, because while there are clearly sometimes tradeoffs between the two, it often seems that the zero-sum rhetoric of “balancing” leads people to view them as always in conflict. This is, I suspect, the source of much of the psychological appeal of “security theater”: If we implicitly think of privacy and security as balanced on a scale, a loss of privacy is ipso facto a gain in security. It sounds silly when stated explicitly, but the power of frames is precisely that they shape our thinking without being stated explicitly.

[From The Trouble With “Balance” Metaphors]

This is a great point, and reading it immediately helped me to think more clearly. There is no evidence that taking away privacy improves security, so it’s purely a matter of security theatre.

Retaining telecommunications data is no help in fighting crime, according to a study of German police statistics, released Thursday. Indeed, it could even make matters worse… This is because users began to employ avoidance techniques, says AK Vorrat.

[From Retaining Data Does Not Help Fight Crime, Says Group – PCWorld]

This is precisely the trajectory that we will all be following. The twin pressures from Big Content and law enforcement mean that the monitoring, recording and analysis of internet traffic is inevitable. But it will also be largely pointless, as my own recent experiences have proven. When I was in China, I wanted to use Twitter but it was blocked. So I logged in to a VPN back in the UK and twittered away. When I wanted to listen to the football on Radio 5 while in Spain, the BBC told me that I couldn’t, so I logged back in to my VPN and cheered the Blues. When I want to watch “The Daily Show” from the UK or when I want to watch “The Killing” via iPlayer in the US, I just go via VPN.

I’m surprised more ISPs don’t offer this as a value-added service themselves. I already pay £100 per month for my Virgin triple-play (50Mb/s broadband, digital TV and telephone), so another £5 per month for OpenVPN would suit me fine.

It all comes back to liability

I posted about the silo-style identity and authentication schemes we have in place at the moment and complained that we are making no progress on federation. Steve Wilson posted a thoughtful reply and picked me up on a few points, such as my “idea” (that’s a bit strong – more of a notion, really) of developing an equivalent of creative commons licences, a sort of open source framework. He says

CC licenses wouldn’t ever be enough. Absent new laws to make this kind of grand identity federation happen, we will still need new contracts — brand new contracts of an unusual form — struck between all the parties.

[From comment on Digital Identity: The sorry state of id and authentication]

But isn’t that what CC licences solve?

It’s complicated by the fact that banks & telcos don’t naturally see themselves as “identity providers”, not in the open anyway

[From comment on Digital Identity: The sorry state of id and authentication]

Well, I’m doing what I can to change that (see, for example, the Visa/CSFI Research Fellowship), but on the main point I happened to be reading the notes from the EURIM Identity Governance Subgroup meeting on 23 February 2011, talking about business cases for population scale identity management systems. The notes say that

It is alleged that the only body with the remit, power and capability needed for assuring and recording a root identity through a secure and reliable registration process is Government.

The notes then go on to talk about case studies such as the Nordic bank-issued eIDs, though. These arguments are to some extent circular, of course, because the e-government applications in the Nordics are using bank-issued eIDs, but the only reason that the banks can issue these eIDs is because they are using government ID as the basis for KYC. In the discussion about this at a recent roundtable in that Visa/CSFI “Identity and Financial Services” series, someone made a comment in passing (and I’m embarrassed to say that I can’t remember who said this, because I noted the comment but forgot the commenter) that all of this takes place in a model absent liability. That is, as far as I understand what was said, the government accepts no liability from the banks, and vice versa. So if the bank opens an account for me, Sven Birch, using a government “Sven Birch” identity, but it subsequently transpires that I am actually Theogenes de Montford, then the bank cannot claim against the government. Similarly, if I used my bank eID “Sven Birch” to access government services, but it subsequently transpires that I am actually Theogenes, then the government has no claim against the bank. (If this isn’t true, by the way, I would appreciate clarification from a knowledgeable correspondent.)

So what is the situation? Must we have a liability model, or can we all agree to get along without one? Or do you have to have a more consensual society, or perhaps one with fewer lawyers per head of population?

The sorry state of id and authentication

I had a problem with my PayPal account: I used it in China, and it got blocked as the result of some kind of fraud screening.

I ended up having to promise the guys at Bike Beijing that I will sort this out when I get back to the UK and then send them their money.

[From Digital Money: Holding court]

They still haven’t got their money. In order to unblock the account, you had to log in to your account and then have a code sent via your home telephone number. I clicked, the phone rang, I punched in the number and hung up. Nothing. I clicked again, the phone rang, I punched in the number and waited. Nothing. I clicked again, the phone rang, I punched in the number. After a while, I got an e-mail telling me that the authentication process had failed and so PayPal would send a letter containing some kind of code to my home address and that I could then use this code to unblock my account. It mentioned that the letter might take six weeks to arrive.

So the nice guys at Bike Beijing still don’t have their money and I’m still embarrassed.

Now, all the time that this nonsense about codes and letters was going on, I had on my desk a Barclays’ PINSentry (which I can’t even use to log on to Barclaycard, let alone PayPal), an O2 mobile phone (I’ve been with O2 for two decades and have a billing relationship with them – their system knew that I was in China) and a keyring OTP generator that we used for our corporate VPN. Any one of these could provide a better solution than messing about typing in code numbers, but they all sit in their own silos and don’t provide the kind of general-purpose services that they should.
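A keyring OTP generator of the kind mentioned above would typically implement the open HOTP and TOTP algorithms (RFCs 4226 and 6238); the whole thing fits in a few lines of standard-library Python:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226: HMAC-SHA1 over the counter, then dynamic truncation."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, step: int = 30) -> str:
    """RFC 6238: the counter is the number of time steps since the epoch."""
    return hotp(secret, int(time.time()) // step)

# First test vector from RFC 4226, using the ASCII secret "12345678901234567890"
assert hotp(b"12345678901234567890", 0) == "755224"
```

The verifier holds the same secret, recomputes the code and usually accepts a small window of adjacent counters to allow for clock drift. Which rather underlines the silo point: the algorithm is open and trivial, so what keeps each token locked to one service is not technology but the absence of shared registration and liability arrangements.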

What should have happened, of course, is that I should have been able to log in to PayPal using OpenID and then logged in to a 2FA OpenID provider using (say) my PINSentry. PayPal would then know that I had been 2FA logged in from an “acceptable” source (ie, Barclays Bank) and we could move on. So why doesn’t this happen? Is it because OpenID has failed?

But if OpenID is a failure, it’s one of the web’s most successful failures. OpenID is available on more than 50,000 websites. There are over a billion OpenID enabled URLs on the web thanks to providers like Google, Yahoo and AOL. Yet, for most people, trying to log in to every website using OpenID remains a difficult task, which means that while thousands of websites support it, hardly anyone uses OpenID.

[From OpenID: The Web’s Most Successful Failure | Webmonkey | Wired.com]

It can’t be that. OpenID has plenty of support, and even the US government got behind it.

Who would have predicted say, 5 years ago, that you would some day be able to use commercial identities on government websites? Evidently, this raises questions about privacy and security but if these initiatives can garner enough public support, government validation of open identity frameworks could be a boon for the ecosystem of the open, distributed web. Plus, it can make dealing with the government a lot easier for you, too.

[From US Government To Embrace OpenID, Courtesy Of Google, Yahoo, PayPal Et Al.]

It’s not about the technology. I make no judgement as to whether OpenID is the best technology or not (although it does actually exist, which is a good start), but the truth is that it simply doesn’t matter whether it is or it isn’t.

The unresolved business and legal challenges implicit in federated identity are to blame for the under-delivery of OpenID

[From OpenID, Successful Failures And New Federated Identity Options | Forrester Blogs]

Indeed they are. So the problem isn’t really anything to do with OpenID, or any other framework that might come along in cyberspace, but the legal framework that it has to sit inside. This is where we need the breakthrough. We need potential identity providers (eg, Barclays, O2) to be able to set up OpenID responders for their customers inside a well-known and well-understood legal framework. Now, you can do this contractually (as IdenTrust has done), but to scale to the open web, we need something more than that, perhaps an equivalent of the “creative commons” licences that are used for content but for credentials.

Even then, would someone like PayPal rely on them? Or would it only rely on identities from regulated financial institutions in the EU? Or only such institutions that met some minimum authentication standard? We’re a long way from fixing my Chinese problem, despite having all of the technology needed to do so.

Not magic bullets, but bullets nonetheless

How do you identify people? This is a difficult problem. Let’s set aside what you need to identify people for, and just concentrate on large scale solutions.

The Indian government is trying to give all 1.2 billion Indians something like an American Social Security number, but more secure. Because each “universal identity number” (UID) will be tied to biometric markers, it will prove beyond reasonable doubt that anyone who has one is who he says he is. In a country where hundreds of millions of people lack documents, addresses or even surnames, this will be rather useful. It should also boost a wide range of businesses.

[From India: Identifying a billion Indians | The Economist]

The “but more secure” part is crucial, because otherwise “something like” a US SSN would be as disastrous as a UK National Insurance number as a viable means of identifying individuals.

The study found that rather than serving as a unique identifier, more than 40 million SSNs are associated with multiple people. 6% of Americans have at least two SSNs associated with their name. More than 100,000 Americans have five or more SSNs associated with their name.

[From One In Seven Social Security Numbers Are Shared]

So what do we mean by “more secure”? How do you go about uniquely identifying people? In the case of India, it means a biometric universal ID (UID). Once the word “biometric” appears, people seem to think there is now a magic bullet against identity theft and fraud and they want to use it for everything (which is why I have previously argued that – given convenience – the market will automatically shift to demand the highest level of assurance of identity for every transaction, whether it requires it or not).

Securities and Exchange Board of India (SEBI)… has constituted an internal group with members from various departments to examine the modalities for making UID applicable for KYC norms and to formulate their views. This information was given by the Minister of State for Finance, Shri Namo Narain Meena in written reply to a question raised in Rajya Sabha today.

[From Press Information Bureau English Releases]

This kind of behaviour builds a tower on shifting sand, introducing a single point of failure into all systems. In fact, it introduces exactly the same single point of failure into all systems, which is why I like the NSTIC approach of multiple identity providers (of which the government is merely one, and a non-privileged one at that). In India, biometrics have not had a good start. The first attempts to register people for the UID saw only a fifth of the attempts succeed.

Though the department conducted proof-of-concept (pilot project) on over 266,000 people in Mysore and Tumkur districts, only 52,238 UIDs could be generated.

[From Pilot project yielded few UIDs – The Times of India]

Is there something unusual about Indian biometrics? I suspect not. I suspect that biometrics are being used in systems designed by management consultants who have been watching Hollywood movies rather than by technologists who understand the appropriate modalities and bounds. You wouldn’t get that sort of thing here in the UK. No, wait…

Biometric face scanners at Manchester Airport have been switched off after a couple walked through one after swapping passports.

[From Aircargo Asia Pacific – Face scanners switched off at Manchester]

I’ve been through the e-passport face scanners at LHR a few times (I don’t use the IRIS scheme after it rejected me three trips in a row) and I can’t say I haven’t wondered whether it is real or not. We all know that iris scanning is more secure.

A woman from eastern Europe who was deported from the UAE re-entered weeks after her departure using a new identity… To prevent her from returning, her eyes were scanned before she left. But, according to her testimony in court this week, she returned to the UAE through Dubai International Airport using a forged passport and a different name. She said her eyes were scanned upon entry.

[From Iris scan fails to stop returning deportee – The National Newspaper]

Hhhmmm. It seems as if building big databases of biometrics may not be the way forward for the time being. Is there any other way to make biometrics more practical at a large scale? I’m sure there is. Perhaps a good place to start would be to marry some capability and convenience. One thing that we know from examples around the world is that customers like biometrics because of convenience. So what else is convenient? I know: contactless, wireless and RFID technology.

Standard Chartered is issuing RFID chips to select customers at its newest Korean location, eliminating the need for affluent individuals to wait in lines at the branch. When a customer holding an RFID tag enters the facility, the system immediately notifies the branch manager and a relationship manager who can greet the customer personally at the door.

[From RFID Chips Spell End to Branch Lines for High-Value Customers | The Financial Brand: Marketing Insights for Banks & Credit Unions]

Ah, but when you get to the counter, how does the bank know that you are indeed the valued customer and not an imposter, intent on transferring funds off to Uzbekistan? Well, you could ask the customer to put their finger on a pad, or look at a camera, or speak into a microphone, or whatever, and then send the captured biometric to the RFID device for matching. Instead of rummaging through a giant database, the system can now do an efficient 1-1 comparison offline. If the device returns the correct, digitally-signed response, then the customer is verified. No PINs, no passwords: the combination of biometrics, contactless and tamper-resistant chips can deliver a workable solution to a lot of problems.
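That flow can be sketched as follows. Everything here (the Hamming-distance matcher, the HMAC standing in for a digital signature, the threshold of 10 bits) is an invented simplification of what a real match-on-card chip does, but it shows the crucial property: the enrolled template and the key never leave the device, and only a signed verdict comes out.

```python
import hashlib
import hmac
import os

def hamming(a: bytes, b: bytes) -> int:
    """Bit-level distance between two fixed-length biometric templates."""
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

class MatchOnCardDevice:
    """Simulates the tamper-resistant chip held by the customer."""

    def __init__(self, template: bytes, key: bytes, threshold: int = 10):
        self._template, self._key, self._threshold = template, key, threshold

    def verify(self, sample: bytes) -> tuple[bool, bytes]:
        # The 1-1 comparison happens inside the device, offline.
        match = hamming(sample, self._template) <= self._threshold
        verdict = b"MATCH" if match else b"NO-MATCH"
        # The verdict is signed so the terminal can tell it is genuine.
        return match, hmac.new(self._key, verdict, hashlib.sha256).digest()

# Enrolment: the bank stores the device's key, never the biometric itself.
key, template = os.urandom(32), os.urandom(64)
card = MatchOnCardDevice(template, key)

# At the counter: a fresh capture differs slightly from the template.
sample = bytearray(template)
sample[0] ^= 0x01                    # one bit of sensor noise
ok, sig = card.verify(bytes(sample))
assert ok
assert hmac.compare_digest(sig, hmac.new(key, b"MATCH", hashlib.sha256).digest())
```

A real deployment would also bind the signed verdict to a terminal-supplied nonce, so that a recorded MATCH response could not simply be replayed.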

Theoretically private

The Institute for Advanced Legal Studies hosted an excellent seminar by Professor Michael Birnhack from the Faculty of Law at Tel Aviv University who was talking about “A Quest for a Theory of Privacy”.

He pointed out that while we’re all very worried about privacy, we’re not really sure what should be done. It might be better to pause and review the legal “mess” around privacy and then try to find an intellectually-consistent way forward. This seems like a reasonable course of action to me, so I listened with interest as Michael explained that for most people, privacy issues are becoming more noticeable with Facebook, Google Buzz, Airport “nudatrons”, Street View, CCTV everywhere (particularly in the UK) and so on. (I’m particularly curious about the intersection between new technologies — such as RFID tags and biometrics — and public perceptions of those technologies, so I found some of the discussion very interesting indeed.)

Michael is part of the EU PRACTIS research group that has been forecasting technologies that will have an impact on privacy (good and bad: PETs and threats, so to speak). They use a roadmapping technique that is similar to the one we use at Consult Hyperion to help our clients plan their strategies for exploiting new transaction technologies, and is reasonably accurate within a 20 year horizon. Note that for our work for commercial clients, we use a 1-2 year, 2-5 year, and 5+ year roadmap. No-one in a bank or a telco cares about the 20 year view, even if we could predict it with any accuracy — and given that I’ve just read the BBC correspondents’ informed predictions for 2011 and they don’t mention, for example, what’s been going on in Tunisia and Egypt, I’d say that’s pretty difficult.

One key focus that Michael rather scarily picked out is omnipresent surveillance, particularly of the body (data about ourselves, that is, rather than data about our activities), with data acted upon immediately, but perhaps it’s best not go into that sort of thing right now!

He struck a definite chord when he said that it might be the new business models enabled by new technologies that are the real threat to privacy, not the technologies themselves. These mean that we need to approach a number of balances in new ways: privacy versus law enforcement, privacy versus efficiency, privacy versus freedom of expression. Moving to try and set these balances, via the courts, without first trying to understand what privacy is may take us in the wrong direction.

His idea for working towards a solution was plausible and understandable. Noting that privacy is a vague, elusive and contingent concept, but nevertheless a fundamental human right, he said that we need a useful model to start with. We can make a simple model by bounding a triangle with technology, law and values: this gives three sets of tensions to explore.

Law-Technology. It isn’t as simple as saying that law lags technology. In some cases, law attempts to regulate technology directly, sometimes indirectly. Sometimes technology responds against the law (eg, anonymity tools) and sometimes it co-operates (eg, PETs — a point that I thought I might disagree with Michael about until I realised that he doesn’t quite mean the same thing as I do by PETs).

Technology-Values. Technological determinism is wrong, because technology embodies certain values (with reference to the Social Construction of Technology, SCOT). Thus (as I think repressive regimes around the world are showing) it’s not enough to just have a network.

Law-Values, or in other words, jurisprudence, finds courts choosing between different interpretations. This is where Michael got into the interesting stuff from my point of view, because I’m not a lawyer and so I don’t know the background of previous efforts to resolve tensions on this line.

Focusing on that third set of tensions, then, in summary: since Warren and Brandeis’ 1890 definition of privacy as the right to be let alone, there have been many further attempts to pick out a particular bundle of rights and call them privacy. Alan Westin’s 1967 definition was privacy as control: the claims of individuals or groups or institutions to determine for themselves when, how and to what extent information about them is communicated to others.

This is a much better approach than the property right approach, where disclosing or not disclosing, “private” and “public” are the states of data. Think about the example of smart meters, where data outside the home provides information about how many people are in the home, what time they are there and so on. This shows that the public/private, in/out, home/work barriers are not useful for formulating a theory. The alternative that he put forward considers the person, their relationships, their community and their state. I’m not a lawyer so I probably didn’t understand the nuances, but this didn’t seem quite right to me, because there are other dimensions around context, persona, transaction and so on.

The idea of managing the decontextualisation of self seemed solid to my untrained ear and eye and I could see how this fitted with the Westin definition of control, taking on board the point that privacy isn’t property and it isn’t static (because it is technology-dependent). I do think that choices about identity ought, in principle, to be made on a transaction-by-transaction basis even if we set defaults and delegate some of the decisions to our technology and the idea that different persona, or avatars, might bundle some of these choices seems practical.

Michael’s essential point is, then, that a theory of privacy that is formulated by examining definitions, classifications, threats, descriptions, justifications and concepts around privacy from scratch will be based on the central notion of privacy as control rather than secrecy or obscurity. As a technologist, I’m used to the idea that privacy isn’t about hiding data or not hiding it, but about controlling who can use it. Therefore Michael’s conclusions from jurisprudence connect nicely with my observations from technology.

An argument that I introduced in support of his position during the questions draws on previous discussions around the real and virtual boundary, noting that the lack of control in physical space means the end of privacy there, whereas in virtual space it may thrive. If I’m walking down the street, I have no control over whether I am captured by CCTV or not. But in virtual space, I can choose which persona to launch into which environment, which set of relationships and which business deals. I found Michael’s thoughts on the theory behind this fascinating, and I’m sure I’ll be returning to them in the future.


