[Dave Birch] I’m not sure if it was a good idea to have National Get Online Week at the same time as National Identity Fraud Prevention Week and at the same time as announcing record identity fraud figures!

The National Fraud Authority (NFA) said fraudsters who stole identities had gained £1.9bn in the past year. Their frauds had affected 1.8 million people, the NFA estimated.

[From BBC News – Identity fraud now costs £1.9bn, says fraud authority]

As Philip Virgo notes, there appear to be some conflicting messages here and there may be some danger of a lack of strategic co-ordination.

Just after Martha had described her plans to the “Parliament and the Internet” conference last week, those at the session on “On-line Safety” discussed the need to bring the two sets of messages together lest they cancel each other out.

[From Mixed messages: “Get Online Week” v. “National Identity Fraud Prevention Week” – When IT Meets Politics]

I’ve scoured the coverage to find out exactly what the “Get Online” campaign and the “Fraud Prevention” campaign plan to do about identity infrastructure, and I’ve looked through the Cabinet Office “Manifesto for a Networked Nation” (which does not mention identity or authentication even once) to find out what the British equivalent of the US National Strategy for Trusted Identities in Cyberspace is. I’m afraid I’ve come up with a bit of a blank, although a search of the Get Online Week website did turn up one article that mentioned identity theft in 2008. Perhaps I’m looking in the wrong places and a correspondent can point me in the right direction.

The UK national security strategy that was released last week does at least mention identity theft as a problem (it says that “Government, the private sector and citizens are under sustained cyber attack today, from both hostile states and criminals. They are stealing our intellectual property, sensitive commercial and government information, and even our identities in order to defraud individuals, organisations and the Government”) but it says nothing about identity infrastructure or authentication, nor does it put forward any suggestion as to what might be done about the problem.

I was looking into this because IBM kindly invited me along to give a talk to various border control agencies and such like on the topic of data sharing in government. I formulated the general problem in a simple way to make for a good talk (I hope!), because I wanted to make a few points about the relationship between technology and policy, and the relationship between identity and identity fraud.

I’m in the process of applying for a visa (for a country that is not relevant to the discussion, but I’ve been invited to give a talk there), and as part of the process I was asked to provide my last three months’ bank statements. I’m very uncomfortable with this, as I worry about what might happen to that data: not because I have a lot of money, but because I’m worried about identity theft. What if there’s someone working for the visa processing company who will pass my details to organised crime? Conceptually, it would make more sense to provide the embassy with a letter from my bank stating that I have enough money to pay for my hotel in the country, but in the physical world that is so much hassle it is unbelievable. In the virtual world, it should be trivial: somewhere on my home banking page there should be a button to press to get a digital certificate to attest to my solvency, and then I should be able to pass that certificate to the relevant visa organisation (or anyone else I want to), but not the bank data. Alternatively, perhaps Barclays could provide a digitally-signed statement showing the account balances but not any of the account details.
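The home-banking button I have in mind would, in effect, issue a minimal signed claim. The sketch below is purely illustrative (the function names, the claim fields and the use of an HMAC are all my own invention): a real bank would sign with an asymmetric key under its certificate so that any embassy could verify the claim without sharing a secret, but the data-minimisation point is the same: the certificate says “balance at least X until date Y” and nothing else.

```python
import hmac, hashlib, json

# Placeholder secret: a real bank would use an asymmetric signing key
# under its X.509 certificate. HMAC keeps this sketch self-contained.
BANK_KEY = b"bank-signing-key"

def issue_solvency_certificate(threshold_gbp, expires):
    """The bank attests 'this customer holds at least threshold_gbp':
    no account number, no balance, no statements are disclosed."""
    claim = {"claim": "balance_at_least",
             "threshold_gbp": threshold_gbp,
             "expires": expires}
    payload = json.dumps(claim, sort_keys=True).encode()
    sig = hmac.new(BANK_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "signature": sig}

def verify_solvency_certificate(cert):
    """Check the signature; any tampering with the claim fails."""
    payload = json.dumps(cert["claim"], sort_keys=True).encode()
    expected = hmac.new(BANK_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, cert["signature"])

cert = issue_solvency_certificate(2000, "2010-12-31")
print(verify_solvency_certificate(cert))  # True
```

The visa processor gets an answer to the only question it actually asked, and there is nothing in the certificate worth stealing.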

Absent a proper digital identity infrastructure, every time you are asked to hand over personal data in order to prove something about yourself, you make identity theft more likely. Am I being paranoid, one of the “privacy taliban”, peddling fantasies about the abuse of personal data collected for entirely legitimate and reasonable purposes? I don’t think so.

An NHS IT manager who illegally snooped on the medical records of more than 400 patients, including his family, friends and colleagues, has been spared jail

[From Hull NHS worker Dale Trever spared jail for snooping on patient records]

People reading this sort of thing might well decide to opt out of the NHS supercomputer system — although personally I can see the benefits of this excellent service: after all, why waste time buying a woman drinks in a bar when you can just log on with your iPhone and find out if she is on contraception, has had depression or is a problem drinker — and it might perhaps be wise to do so before anyone (not only NHS employees) can look up your data on the web.

Patients will be able to view their medical records online, email their GP and compare doctors across Britain under plans for an “information revolution” in the NHS. In an article for today’s Daily Telegraph, Martha Lane Fox says that the most useful online services for consumers should be made available to NHS patients over the next few years.

[From NHS patients to see medical records online – Telegraph]

You might be able to opt out of this until there is an identity and authentication infrastructure (usernames and passwords don’t count) in place, but you can’t opt out of anything if you’re applying for a visa. It’s a fact of life that state intrusion is unavoidable.

Five interviewees who traveled to Iran in recent months said they were forced by police at Tehran’s airport to log in to their Facebook accounts. Several reported having their passports confiscated because of harsh criticism they had posted online about the way the Iranian government had handled its controversial elections earlier this year.

[From Emergent Chaos: Fingerprinted and Facebooked at the Border]

Another good reason for not having your Facebook profile in your real name. My point was that data sharing with government is a problem in many cases because we share identity-related data instead of the relevant credentials, or we share identity as a proxy for credentials. It’s not as if the “state” doesn’t know how to implement the right kind of system. They clearly do, and EURODAC is an excellent example. It doesn’t store name and address and goodness knows what else: if your fingerprints are on it, that means you’ve applied for asylum in an EU member state.

In 2008, EURODAC processed 219,557 sets of fingerprints of asylum seekers, 61,945 sets of fingerprints of people crossing the borders irregularly and 75,919 sets of fingerprints of people apprehended while illegally staying on the territory of a Member State… 17.5% of the asylum applications in 2008 were subsequent (i.e. second or more) asylum applications.

[From EU’s biometric database effectively manages the Common European Asylum System :: PublicTechnology.net]
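In outline, EURODAC amounts to a lookup keyed on the biometric alone. Here is a toy sketch of that idea (my own simplification: real biometric matching is fuzzy template comparison, not an exact hash lookup, and the names are invented), showing that the store never needs to hold an identity at all:

```python
import hashlib

# The store holds no names or addresses: only a one-way digest of the
# fingerprint template, mapped to the bare fact the system needs
# ("an asylum application already exists in member state X").
store = {}

def register(template: bytes, member_state: str, date: str):
    key = hashlib.sha256(template).hexdigest()
    store[key] = {"member_state": member_state, "first_application": date}

def check(template: bytes):
    """Returns the prior-application record if one exists, else None."""
    return store.get(hashlib.sha256(template).hexdigest())

register(b"template-of-applicant", "DE", "2008-03-01")
print(check(b"template-of-applicant"))  # prior application found, no identity
print(check(b"someone-else"))           # None
```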

EURODAC shows that it is possible to build systems that work without storing or transmitting identity if you make an effort. This is one very useful kind of Privacy-Enhancing Technology (or PET). Where can we look for ideas about how to build PETs into large-scale government systems? The Commission had a meeting about this last year…

Caspar Bowden launched the Q&A session with a scathing attack on the Interim Report, highlighting a fundamental lack of both definition and categorisation of PETs. He went on to assess the results so far as being predictable as a result of questions which were too vague. He cited a list of terms which he suggested should be fundamental to any report on PETs, and which were missing: zero-sum, minimisation, subject access, transparency, threat model, onion routing, differential privacy…, and a “total blindness in the Report to any […] notion of personal data”.

[From Tech and Law: PETs – economic benefits – EU]

I wasn’t at the meeting, but I don’t doubt for one moment that Caspar’s criticisms are absolutely accurate. The fundamental lack of definition undermines the work being done. The meeting report includes a discussion about the Dutch Ministry of the Interior’s “4-pillar” list of PETs which is as good a place to start as any.

  • General PETs measures – this is what today’s Report is referring to, the stepping up of PETs to data and information security.
  • Separation of data – missing in the Report, and a key cornerstone of PETs, the splitting of identity domains and pseudo-identity domains. This is at the heart of my view of digital identity:

    Many people do think eID could and should be implemented without full identification, i.e. more granular disclosure with pseudonymity – see e.g. Dave Birch’s brilliant and very readable paper “Psychic ID: A blueprint for a modern national identity scheme”.

    [From Tech and Law: PETs]

    I’m too modest to include further details, other than to say that one of the key points I tried to get over to the IBM audience was that pseudonymity is a potential solution, not a potential problem, which is how it tends to be perceived by passport and border control people.

    “pseudonymous” should here be understood as support for the widest and most nuanced understanding of the term from fully anonymous to what we would normally understand with a PKI Digital Signature

    [From Tech and Law: PETs – Stephan Engberg’s response]

    Quite. And it’s outside the scope of this post, but suffice to say that the spectrum of technologies already available to us — from Microsoft’s U-Prove to IBM’s Idemix — makes large-scale PET implementation practical.

  • Privacy Management Systems – processing data within the borders of legislation, for which there are already ready-to-work prototypes developed by IBM and HP for the market.
  • Anonymisation – no registration of private data, or at least the destruction of personal data on processing.
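The separation-of-data pillar is easy to sketch. One simple way to split the identity domain from the pseudo-identity domains is to derive a different pseudonym per sector from a single master secret, so that records in one sector cannot be linked to records in another without that secret. (This is my own illustration, loosely in the spirit of sector-specific identifiers such as Austria’s ssPINs; the function and names are invented.)

```python
import hmac, hashlib

def sector_pseudonym(master_secret: bytes, sector: str) -> str:
    """Derive a stable, sector-specific pseudonym. Without the master
    secret, the health pseudonym and the tax pseudonym are unlinkable."""
    return hmac.new(master_secret, sector.encode(),
                    hashlib.sha256).hexdigest()[:16]

alice = b"alice-master-secret"
health_id = sector_pseudonym(alice, "health")
tax_id = sector_pseudonym(alice, "tax")
print(health_id != tax_id)  # True: different identifier per domain
print(sector_pseudonym(alice, "health") == health_id)  # True: stable
```

Each agency gets an identifier that is perfectly serviceable for its own records, and only the identity domain (the holder of the master secret) can join them up.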

Now, the last of those points, anonymisation, is obviously difficult to consider if you are working for law enforcement, but there may be a trade-off here. If you can persuade people to use electronic means of communicating and transacting, then you may be much better off in law enforcement terms than if you insist on full disclosure, in which case people either don’t use the systems or go around them. An example is money transfer systems that try to replace hawala (informal money transfer networks). By requiring complex and detailed know-your-customer (KYC) procedures, they shut customers out of electronic funds transfer schemes, and those customers remain in hawala. But it would be better for society as a whole for them to be in electronic funds transfer networks even if we don’t know who they are, because we can learn something from looking at electronic transfers but nothing from cash transfers that are hidden. Anonymous electronic data is, in many circumstances, better than no data at all.
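To make the anonymous-data-beats-no-data point concrete, here is a hypothetical sketch (the records and field names are invented): even with pseudonymous senders, an electronic transfer network still yields analysable structure — corridors, volumes, repeat behaviour — that cash movements never reveal.

```python
from collections import Counter

# Pseudonymous transfer records: no names, but corridors and amounts
# are still visible to the network and can be analysed in aggregate.
transfers = [
    {"sender": "p1", "corridor": "UK->PK", "amount": 200},
    {"sender": "p2", "corridor": "UK->PK", "amount": 150},
    {"sender": "p1", "corridor": "UK->SO", "amount": 300},
]

corridor_volume = Counter()
for t in transfers:
    corridor_volume[t["corridor"]] += t["amount"]

print(corridor_volume.most_common(1))  # [('UK->PK', 350)]
```

The same flows conducted in cash through hawala would leave no data at all.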

These opinions are my own (I think) and are presented solely in my capacity as an interested member of the general public [posted with ecto]
