Counterintuitive Cryptography

There was a post on Twitter in the midst of the COVID-19 pandemic news this week that caught my eye. It quoted an emergency room doctor in Los Angeles asking for help from the technology community, saying “we need a platform for frontline doctors to share information quickly and anonymously”. It went on to state the obvious requirement: “I need a platform where doctors can join, have their credentials validated and then ask questions of other frontline doctors”.

This is an interesting requirement that tells us something about the kind of digital identity that we should be building for the modern world instead of trying to find ways to copy passport data around the web. The requirement, to know what someone is without knowing who they are, is fundamental to the operation of a digital identity infrastructure in the kind of open democracy that we (ie, the West) espouse. The information sharing platform needs to know that the person answering a question has relevant qualifications and experience. Who that person is, is not important.

Now, in the physical world this is an extremely difficult problem to solve. Suppose there was a meeting of frontline doctors to discuss different approaches and treatments, but the doctors wanted to remain anonymous for whatever reason (for example, they may not want to compromise the identity of their patients). I suppose the doctors could all dress up as ghosts, cover themselves in bedsheets and enter the room by presenting their hospital identity cards (through a slit in the sheet) with their names covered up by black pen. But then how would you know that the identity card belongs to the “doctor” presenting it? After all, the picture on every identity card will be the same (someone dressed as a ghost) and you have no way of knowing whether the cards really are theirs or whether the wearers are agents of foreign powers, infiltrators hellbent on spreading false information to ensure the maximum number of deaths. The real-world problem of demonstrating that you have some particular credential, or that you are the “owner” of a reputation, without disclosing personal information is a very difficult problem indeed.

(It also illustrates the difficulty of trying to create large-scale identity infrastructure by using identification methods rather than authenticating to a digital identity infrastructure. Consider the example of James Bond, one of my favourite case studies. James Bond is masquerading as a COVID-19 treatment physician in order to obtain the very latest knowledge on the topic. He walks up to the door of the hospital where the meeting is being held and puts his finger on the fingerprint scanner at the door… at which point the door loudly says “hello Mr Bond, welcome back to the infectious diseases unit”. Oooops.)

In the virtual world this is quite a straightforward problem to solve. Let’s imagine I go to the doctors’ information sharing platform and attempt to log in. The system will demand to see some form of credential proving that I am a doctor. So I take my digital hospital identity card out from my digital wallet (this is a thought experiment, remember; none of these things actually exists yet) and send the relevant credential to the platform.

The credential is an attribute (in this case, IS_A_DOCTOR) together with an identifier for the holder (in this case, a public key) together with the digital signature of someone who can attest to the credential (in this case, the hospital that employs the doctor). Now, the information sharing platform can easily check the digital signature on the credential, because it has the public keys of all of the hospitals, and can extract the relevant attribute.
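As a sketch of what the wallet and the platform are exchanging, here is the credential idea in miniature, with a toy RSA key pair standing in for the hospital’s real signing key. The attribute name, key sizes and fingerprint string are all invented for illustration; a real credential would use a standard format and a vetted cryptographic library.

```python
import hashlib
import json

# Toy RSA key pair for the hospital (demo-sized primes; real systems
# would use a proper library and 2048-bit keys or better).
p, q = 104729, 1299709             # small known primes, illustration only
n = p * q                          # public modulus
e = 65537                          # public exponent
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent (Python 3.8+)

# The credential: an attribute plus an identifier for the holder.
credential = {"attribute": "IS_A_DOCTOR",
              "holder_public_key": "doctor-key-fingerprint"}

def digest(cred: dict) -> int:
    data = json.dumps(cred, sort_keys=True).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % n

# The hospital signs the credential with its private key...
signature = pow(digest(credential), d, n)

# ...and the platform verifies it with the hospital's public key (n, e).
assert pow(signature, e, n) == digest(credential)
```

If anyone tampers with the attribute, the signature no longer verifies, which is exactly the property the platform relies on.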

But how do they know that this IS_A_DOCTOR attribute applies to me and that I haven’t copied it from somebody else’s mobile phone? That’s also easy to determine in the virtual world with the public key of the associated digital identity. The platform can simply encrypt some data (anything will do) using this public key and send it to me. Since the only person in the entire world who can decrypt this message is the person with the corresponding private key, which is in my mobile phone’s secure tamper resistant memory (eg, the SIM or the Secure Enclave or Secure Element), I must be the person associated with the attribute. The phone will not allow the private key to be used to decrypt this message without strong authentication (in this case, let’s say it’s a fingerprint or a facial biometric) so the whole process works smoothly and almost invisibly: the doctor runs the information sharing platform app, the app invisibly talks to the digital wallet app in order to get the credential, the digital wallet app asks for the fingerprint, the doctor puts his or her finger on the phone and away we go.
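The proof-of-possession step can be sketched the same way. This is textbook RSA with no padding, purely to show the shape of the protocol; a real implementation would use a standard, padded challenge-response scheme, and the key pair here stands in for the one locked in the phone’s secure hardware.

```python
import secrets

# Toy RSA key pair standing in for the doctor's key in the secure element
# (demo-sized primes; illustration only).
p, q = 104729, 1299709
n = p * q
e = 65537
d = pow(e, -1, (p - 1) * (q - 1))

# The platform picks a random challenge and encrypts it with the
# doctor's public key.
challenge = secrets.randbelow(n)
ciphertext = pow(challenge, e, n)

# Only the holder of the private key can recover the challenge; the
# phone releases this operation after a fingerprint or face match.
response = pow(ciphertext, d, n)

assert response == challenge   # the platform is satisfied
```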

Now the platform knows that I am a doctor but does not have any personally identifiable information about me and has no idea who I am. It does however have the public key and since the hospital has signed a digital certificate that contains this public key, if I should subsequently turn out to be engaged in dangerous behaviour, giving out information that I know to be incorrect, or whatever else doctors can do to get themselves disbarred from being doctors, then a court order against the hospital will result in them disclosing who I am. I can’t do bad stuff.

This is a good example of how cryptography can deliver some amazing but counterintuitive solutions to serious real-world problems. I know from my personal experience, and the experiences of colleagues at Consult Hyperion, that it can sometimes be difficult to communicate just what can be done in the world of digital identity by using what you might call counterintuitive cryptography, but it’s what we will need to make a digital identity infrastructure that works for everybody in the future. And, crucially, all of the technology exists and is tried and tested so if you really want to solve problems like this one, we can help right away.

NSTICy questions

I’ve been reading through the final version of the US government’s National Strategy on Trusted Identities in Cyberspace (NSTIC). This is roughly what journalists think about it:

What’s envisioned by the White House is an end to passwords, a system in which a consumer will have a piece of software on a smartphone or some kind of card or token, which they can swipe on their computers to log on to a website.

[From White House Proposes A Universal Credential For Web : The Two-Way : NPR]

And this is roughly what the public think about it:

Why don’t they just put a chip in all of us and get it over with? What part of being a free people do these socialists not understand?

[From White House Proposes A Universal Credential For Web : The Two-Way : NPR]

And this is roughly what I think about it: I think that NSTIC isn’t bad at all. As I’ve noted before, I’m pretty warm to it. The “identity ecosystem” it envisages is infinitely better than the current ecosystem and it embodies many of the principles that I regard as crucial to the online future. It explicitly says that “the identity ecosystem will use privacy-enhancing technology and policies to inhibit the ability of service providers (presumably including government bodies) to link an individual’s transactions” and says that by default only the minimum necessary information will be shared in transactions. They have a set of what they term the Fair Information Practice Principles (FIPPs) that share, shall we say, a common heritage with Forum friend Kim Cameron’s laws (for the record, the FIPPs cover transparency, individual participation, purpose specification, data minimisation, use limitation, data quality and integrity, security, and accountability and audit).

It also, somewhat strangely, I think, says that this proposed ecosystem “will preserve online anonymity”, including “anonymous browsing”. I think this is strange because there is no online anonymity. If the government, or the police, or an organisation really want to track someone, they can. There are numerous examples which show this to be the case. There may be some practical limitations as to what they can do with this information, but that’s a slightly different matter: if I hunt through the interweb tubes to determine that the person posting “Dave Birch fancies goats” on our blog comes from a particular house in Minsk, there’s not much I can do about it. But that doesn’t make them anonymous, it makes them economically anonymous, and that’s not the same thing, especially to people who don’t care about economics (eg, the security services). It’s not clear to me whether we as a society actually want an internet that allows anonymity or not, but we certainly don’t have one now.

The strategy says that the identity ecosystem must develop in parallel with ongoing “national efforts” to improve platform, network and software security, and I guess that no-one would argue against them, but if we were ever to begin to design an EUSTIC (ie, an EU Strategy for Trusted Identities in Cyberspace) I think I would like it to render platform, network and software security less important. That is, I want my identity to work properly in an untrusted cyberspace, one where ne’er-do-wells have put viruses on my phone and every PC is part of a sinister botnet (in other words, the real world).

I rather liked the “envision” boxes that are used to illustrate some of the principles with specific examples to help politicians and journalists to understand what this all means. I have to say that it didn’t help in all cases…

The “power utility” example serves as a good focus for discussion. It expects secure authentication between the utility and the domestic meter, trusted hardware modules to ensure that the software configuration on the meter is correct and to ensure that commands and software upgrades do indeed come from the utility. All well and good (and I should declare an interest and disclose that Consult Hyperion has provided paid professional services in this area in the last year). There’s an incredible amount of work to be done, though, to translate these relatively modest requirements into a national-scale, multi-supplier roll-out.

Naturally I will claim the credit for the chat room “envision it”! I’ve used this for many years to illustrate a number of the key concepts in one simple example. But again, we have to acknowledge there’s a big step from the strategy to any realistic tactics. Right now, I can’t pay my kids’ school online (last Thursday saw yet another chaotic morning trying to find a cheque book to pay for a school outing), so the chance of the school providing a zero-knowledge proof digital credential that the kids can use to access (say) BBC chatrooms is absolutely nil to any horizon I can envisage. In the UK, we’re going to have to start somewhere else, and I really think that that place should be with the mobile operators.

What is the government’s role in this, then? The strategy expects policy and technology interoperability, and there’s an obvious role for government — given its purchasing power — to drive interoperability. The government must, however, at some point make some firm choices about its own systems, and this will mean choosing a specific set of standards and fixing a standards profile. They are creating a US National Project Office (NPO) within the Department of Commerce to co-ordinate the public and private sectors along the Implementation Roadmap that is being developed, so let’s wish them all the best and look forward to some early results from these efforts.

As an aside, I gave one of the keynote talks at the Smart Card Alliance conference in Chicago a few weeks ago, and I suggested, as a bit of an afterthought, after having sat through some interesting talks about the nascent NSTIC, that a properly implemented infrastructure could provide a viable alternative to the existing mass market payment schemes. But it occurs to me that it might also provide an avenue for EMV in the USA, because the DDA EMV cards that would be issued (were the USA to decide to go ahead and migrate to EMV) could easily be first-class implementations of identity credentials (since DDA cards have the onboard cryptography needed for encryption and digital signatures). What’s more, when the EMV cards migrate their way into phones, the PKI applications could follow them on the Secure Element (SE) and deliver an implementation of NSTIC that could succeed in the mass market with the mobile phone as a kind of “personal identity commander”.

These opinions are my own (I think) and presented solely in my capacity as an interested member of the general public.

Paleo-crypto

In some of the workshops that I’ve been running, I’ve mentioned that I think that transparency will be one of the key elements of new propositions in the world of electronic transactions and that clients looking to develop new businesses in that space might want to consider the opportunities for sustained advantage. Why not let me look inside my bank and see where my money is, so to speak? If I log in to my credit card issuer I can see that I spent £43 on books at Amazon: if I log in to Amazon I can see that I spent £43, but I can also see what books I bought, recommendations, reviews and so on. They have the data, so they let me look at it. If I want to buy a carpet from a carpet company, how do I know whether they will go bankrupt or not before they deliver? Can I have a look at their order book?

Transparency increases confidence and trust. I often use a story from the August 1931 edition of Popular Mechanics to illustrate this point. The article concerns the relationship between transparency and behaviour in the specific case of depression-era extra-judicial unlicensed wealth redistribution…

BANK hold-ups may soon become things of the past if the common-sense but revolutionary ideas of Francis Keally, New York architect, are put into effect. He suggests that banks be constructed with glass walls and that office partitions within the building likewise be transparent, so that a clear view of everything that is happening inside the bank will be afforded from all angles at all times.

[From Glass Banks Will Foil Hold-Ups]

I urge you to click on the link, by the way, to see the lovely drawing that goes with the article. The point is well made though: you can’t rob a glass bank. No walls, no Bernie Madoff. But you can see the problem: some of the information in the bank is confidential: my personal details, for example. Thus, it would be great if I could look through the list of bank deposits to check that the bank really has the money it says it has, but I shouldn’t be able to see who those depositors are (although I will want third-party verification that they exist!).
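You can sketch the blinded-deposit idea with nothing more exotic than a salted hash. All the names, salts and balances below are invented for illustration, and a real scheme would also need that third-party attestation of the balances; this just shows that “check the total without seeing the depositors” is mechanically straightforward.

```python
import hashlib

def commit(account_id: str, salt: str) -> str:
    # A salted hash blinds the depositor's identity in the published list.
    return hashlib.sha256(f"{account_id}:{salt}".encode()).hexdigest()

# The bank's private ledger (names, salts and balances invented for the demo).
ledger = [("alice", "s1", 5000), ("bob", "s2", 12000), ("carol", "s3", 700)]
claimed_total = 17700

# The published "glass" view: blinded identities, visible balances.
published = [(commit(acct, salt), bal) for acct, salt, bal in ledger]

# Anyone can check that the balances add up to the claimed total...
assert sum(bal for _, bal in published) == claimed_total
# ...and a depositor who knows their own salt can verify their inclusion.
assert (commit("bob", "s2"), 12000) in published
```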

Why am I talking about this? Well, I read recently that Bank of America has called in management consultants to help them manage the fallout from an as-yet-nonexistent leak of corporate secrets, although why these secrets would prove embarrassing is not clear. In fact, no-one knows whether the leak will happen, or whether it will impact BofA, although Wikileaks’ Julian Assange had previously mentioned having a BofA hard disk in his possession, so the market drew its own conclusions.

Bank of America shares fell 3 percent in trading the day after Mr. Assange made his threat against a nameless bank

[From Facing WikiLeaks Threat, Bank of America Plays Defense – NYTimes.com]

Serious money. Anyway, I’m interested in what this means for the future rather than what it means now, irrespective of what Bank of America’s secrets actually are, because

when WikiLeaks, a whistle-blowing website, promised to publish five gigabytes of files from an unnamed financial institution early next year, bankers everywhere started quaking in their hand-made shoes. And businesses were struck by an alarming thought: even if this threat proves empty, commercial secrets are no longer safe.

[From Business and WikiLeaks: Be afraid | The Economist]

Does technology provide any comfort here at all? I think it does. Many years ago, I had the pleasant experience of having dinner with Nicholas Negroponte, John Barlow and Eric Hughes, author of the cypherpunk manifesto, at a seminar in Palm Springs. This was in, I think, 1995. I can remember Eric talking about “encrypted open books”, a topic that now seems fantastically prescient. His idea was to develop cryptographic techniques so that you could perform certain kinds of operations on encrypted data: in other words, you could build glass organisations where anyone could run some software to check your books without actually being able to read your books. Nick Szabo later referred back to the same concepts when talking about the specific issue of auditing.
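The building block Hughes was pointing at is real: additively homomorphic schemes such as Paillier’s let anyone add up encrypted figures without being able to read them. Here is a toy sketch with demonstration-sized primes (a real deployment would use a vetted library and 2048-bit keys; the ledger figures are invented):

```python
import math
import random

# Toy Paillier key (demo-sized primes; never use sizes like this for real).
p, q = 293, 433
n = p * q
n2 = n * n
lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)  # lcm(p-1, q-1)
mu = pow(lam, -1, n)

def encrypt(m: int) -> int:
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(1 + n, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    return (pow(c, lam, n2) - 1) // n * mu % n

# Two confidential ledger entries...
a, b = encrypt(20000), encrypt(35000)
# ...which an auditor can total by multiplying the ciphertexts,
# without ever decrypting either one.
encrypted_total = (a * b) % n2
assert decrypt(encrypted_total) == 55000
```

Multiplying two ciphertexts yields an encryption of the sum of the plaintexts, which is exactly the “check my books without reading my books” property.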

Knowing that mutually confidential auditing can be accomplished in principle may lead us to practical solutions. Eric Hughes’ “encrypted open books” was one attempt.

[From Szabo]

Things like this seem impossible when you think of books in terms of paper and index cards: how can you show me your books without giving away commercial data? But when we think in terms of bits, and cryptography, and “blinding”, it is all perfectly sensible. This technology seems to me to open up a new model, where corporate data is encrypted but open to all, so that no-one cares whether it is copied or distributed in any way. Instead of individuals being given the keys to the database, they will be given keys to decrypt only the data that they are allowed to see, and since these keys can easily be stored in tamper-resistant hardware (whereas databases can’t) the implementation becomes cost-effective. While I was thinking about this, Bob Hettinga reminded me about Peter Wayner’s “translucent databases”, which build on Eric’s concepts.

Wayner really does end up where a lot of us think databases will be someday, particularly in finance: repositories of data accessible only by digital bearer tokens using various blind signature protocols… and, oddly enough, not because someone or other wants to strike a blow against the empire, but simply because it’s safer — and cheaper — to do that way.

[From Book Review: Peter Wayner’s “Translucent Databases”]

There are other kinds of corporate data that may at first seem to need to be secret, but on reflection could be translucent (I’ll switch to Peter’s word here because it’s a much better description of practical implementations). An example might be salaries. Have the payroll encrypted but open, so anyone can access a company’s salary data and see what salaries are earned. Publish the key to decrypt the salaries, but not any other data. Now anyone who needs access to salary data (eg, the taxman, pressure groups, potential employees, customers etc) can see it and the relevant company data is transparent to them. One particular category of people who might need access to this data is staff! So, let’s say I’m working on a particular project and need access to our salary data because I need to work out the costs of a proposed new business unit. All I need to know is the distribution of salaries: I don’t need to know who they belong to. If our payroll data is open, I can get on and use it without having to have CDs of personal data sent through the post, or whatever.
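A payroll version of this translucency can be sketched with nothing more than a keyed hash: publish the salary figures in the clear but blind the names, so the distribution is usable without the identities. The key, names and salaries below are invented for the example.

```python
import hashlib
import hmac
import statistics

SECRET_KEY = b"held-by-HR-never-published"   # hypothetical key, kept private

def blind(name: str) -> str:
    # One-way keyed hash: the published table cannot be reversed to names,
    # but HR (who holds the key) can still answer a court order.
    return hmac.new(SECRET_KEY, name.encode(), hashlib.sha256).hexdigest()

payroll = [("Alice", 52000), ("Bob", 48000), ("Carol", 61000)]

# The published, translucent view: blinded names, visible salaries.
published = [(blind(name), salary) for name, salary in payroll]

# Anyone costing a new business unit can use the distribution...
salaries = sorted(s for _, s in published)
assert statistics.median(salaries) == 52000
# ...without learning who earns what.
```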

I can see that for many organisations this kind of controlled transparency (ie, translucency) will be a competitive advantage: as an investor, as a customer, as a citizen, I would trust these organisations far more than “closed” ones. Why wait for quarterly filings to see how a public company is doing when you could go on the web at any time to see their sales ledger? Why rely on management assurances of cost control when you can see how their purchase ledger is looking (without necessarily seeing what they’re buying or who they are buying it from) on their web page? Why not check staffing levels and qualifications by accessing the personnel database? Is this any crazier than Blippy?

These opinions are my own (I think) and are presented solely in my capacity as an interested member of the general public.

My multiples

[Dave Birch] I watched a strange TV show on a plane back from the US. It was about a woman with “Multiple Personality Disorder” (remember that book Sybil — not the one by Benjamin Disraeli — from years ago?). I make no comment about whether the disorder is real or not (the TV show wasn’t that interesting) but there’s no doubt in my mind that when it comes to the virtual world, multiple personalities are not only real, but desirable.

Here’s a good reason for not having your Facebook account in your real name (as I don’t):

Five interviewees who traveled to Iran in recent months said they were forced by police at Tehran’s airport to log in to their Facebook accounts. Several reported having their passports confiscated because of harsh criticism they had posted online about the way the Iranian government had handled its controversial elections earlier this year.

[From Emergent Chaos: Fingerprinted and Facebooked at the Border]

I’ve already created a new Facebook identity and posted a paean to Iran’s spiritual leaders, just in case I am ever detained by revolutionary guards and forced to log in. But will this be enough? Remember what happened to film-maker David Bond when he made his documentary about trying to disappear? The private detectives that he had hired to try and find him simply went through Facebook:

Pretending to be Bond, they set up a new Facebook page, using the alias Phileas Fogg, and sent messages to his friends, suggesting that this was a way to keep in touch now that he was on the run. Two thirds of them got in contact.

[From Can you disappear in surveillance Britain? – Times Online]

So even if you are careful with your Facebook personalities, your friends will blab. As far as I can tell, there’s no technological way around this: so long as someone knows which pseudonym is connected to which real identity, the link may be uncovered. Probably the best we can do is to make sure that the link is held by someone who will demand a warrant before opening the box.

Recognising the problem

[Dave Birch] An interesting series of talks at Biometrics 2010 reminded me how quickly face recognition software is improving. The current state of the art can be illustrated with some of the examples given by NIST in their presentation on testing.

  • A 1:1.6m search on a 16-core, 192GB blade (about a $40k machine) takes less than one second, and the speed of search continues to improve. So if you have a database of a million people, and you’re checking a picture against that database, you can do it in less than a second.
  • The best-performance false non-match rate (in other words, the proportion of searches that fail to return the right picture) is falling fast: in 2002 it was 20%, by 2006 it was 3% and by 2010 it had fallen to 0.3%. This is an order of magnitude fall every four years and there’s no reason to suspect that it will not continue.
  • The results seem to degrade by the log of population size (so that a 10 times bigger database delivers only twice the miss rate). Rather fascinatingly, no-one seems to know why, but I suppose it must be some inherent property of the algorithms used.

We’re still some way from Hollywood-style biometrics where the FBI security camera can spot the assassin in the Superbowl crowd.

What is often overlooked is that biometric systems used to regulate access of one form or another do not provide binary yes/no answers like conventional data systems. Instead, by their very nature, they generate results that are “probabilistic”. That is what makes them inherently fallible. The chance of producing an error can be made small but never eliminated. Therefore, confidence in the results has to be tempered by a proper appreciation of the uncertainties in the system.

[From Biometrics: The Difference Engine: Dubious security | The Economist]
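A little arithmetic shows why that probabilistic nature bites at Superbowl scale: even a tiny per-comparison false-match rate compounds when a probe is checked against every record in a watchlist. The 0.01% rate below is an invented, illustrative figure, and the independence of comparisons is a simplifying assumption.

```python
def prob_false_match(fmr: float, database_size: int) -> float:
    """Probability of at least one false match in a 1:N search,
    assuming independent comparisons (a simplification)."""
    return 1 - (1 - fmr) ** database_size

fmr = 0.0001  # illustrative 0.01% false-match rate per comparison

# Against a thousand records, false alarms are fairly rare...
small = prob_false_match(fmr, 1_000)
# ...against a million records, a false hit is near-certain.
large = prob_false_match(fmr, 1_000_000)

assert small < 0.1
assert large > 0.99
```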

So when you put all of this together, you can see that we are heading into some new territory. Even consumer software such as iPhoto has this stuff built in to it.

It’s not perfect, but it’s pretty good. Consumers (and suppliers) do, though, have an unrealistic idea about what biometrics can do as components of a bigger system.

But Microsoft’s new gaming weapon uses “facial and biometric recognition” that creates a 3D model of a player. “It recognises a 3D model that has walked into the room and automatically logs that player in,” Mr Hinton said… “It knows when they are sneakily trying to log into their older brother’s account and trying to cheat the system… You can’t do it. Your face is the ultimate detection for the device.”

[From Game console ‘rejects’ under-age players | Herald Sun]

This sounds sort of fun. Why doesn’t my bank build this into its branches, so that it recognises me when I walk in?

Criminal inconvenience

[Dave Birch] It was identity theft week, or something like that, and since I’m about to start the CSFI’s 2010/2011 Research Programme into “Identity in Financial Services”, with support from Visa Europe, I’ve been thinking about the key aspects of the problem. For example: how well are current know-your-customer procedures working? After all, they are pretty stringent. To the point where the typical customer finds dealing with financial services organisations an absolute nightmare.

The ID banks require is getting beyond a joke. I’ve just been locked out of one of my online accounts, through no fault of my own, and they’re demanding I send them a certified document plus a utility/bank bill, but they won’t accept one printed online. Yet like many people, both for the environment and ease, I opt for paperless billing wherever I can, so I simply don’t get any printed statements anymore, leaving me at an ID disadvantage when banks refuse to count those as ID.

[From Martin Lewis’ Blog… | The bank ID farce: online accounts don’t accept online statements]

Still, I’m sure we’d all agree that it’s worth the massive imposition on customers, and the massive costs to companies, in order to crack down on ne’er-do-wells who are trying to defraud our banking system (at least, the ones who don’t work for banks). But since identity fraud appears to be at record levels, either these stringent controls are counter-productive (because only criminals will bother jumping through the hoops) or a total waste of money.

Drawing upon victim and impostor data now accessible because of updates to the Fair Credit Reporting Act, the data shows that identity theft impostors supply obviously erroneous information on applications that is accepted as valid by credit grantors. Thus, the problem does not necessarily lie in control nor in more availability of personal information, but rather in the risk tolerances of credit grantors. An analysis of incentives in credit granting elucidates the problem: identity theft remains so prevalent because it is less costly to tolerate fraud. Adopting more aggressive and expensive anti-fraud measures is extremely costly and jeopardizes customer acquisition efforts.

[From SSRN-Internalizing Identity Theft by Chris Hoofnagle]

Given the amount of trouble I find in accessing my own accounts — I tried to log in to my John Lewis card account this week and it asked me for a password that I’d forgotten, and when I followed the “forgotten password” link it asked me for a secret word or something that I didn’t even know I’d set — I can only assume that the total amount of time, effort and money wasted on this sort of thing across the financial services sector as a whole is enormous.

Share and share alike

[Dave Birch] I’m not sure if it was a good idea to have National Get Online Week at the same time as National Identity Fraud Prevention Week and at the same time as announcing record identity fraud figures!

The National Fraud Authority (NFA) said fraudsters who stole identities had gained £1.9bn in the past year. Their frauds had affected 1.8 million people, the NFA estimated.

[From BBC News – Identity fraud now costs £1.9bn, says fraud authority]

As Philip Virgo notes, there appear to be some conflicting messages here and there may be some danger of a lack of strategic co-ordination.

Just after Martha had described her plans to the “Parliament and the Internet” conference last week, those at the session on “On-line Safety” discussed the need to bring the two sets of messages together lest they cancel each other out.

[From Mixed messages: “Get Online Week” v. “National Identity Fraud Prevention Week” – When IT Meets Politics]

I’ve scoured the coverage to find out exactly what it is that the “Get Online” campaign and the “Fraud Prevention” campaign plan to do about identity infrastructure, and I’ve looked through the Cabinet Office “Manifesto for a Network Nation” (which does not mention identity or authentication even once) to find out what the British equivalent of the US National Strategy for Trusted Identities in Cyberspace is, but I’m afraid I’ve come up with a bit of a blank (although a search of the Get Online Week website did turn up one article that mentioned identity theft, in 2008). Perhaps I’m looking in the wrong places and a correspondent can point me in the right direction.

The UK national security strategy that was released last week does at least mention identity theft as a problem (it says that “Government, the private sector and citizens are under sustained cyber attack today, from both hostile states and criminals. They are stealing our intellectual property, sensitive commercial and government information, and even our identities in order to defraud individuals, organisations and the Government”) but doesn’t actually mention identity or authentication, nor does it put forward any suggestion as to what might be done about the problem.

Listening in

[Dave Birch] Who should we be listening to when formulating digital identity strategy? Consumers? Experts? Politicians? Lobbyists? Consultants? Consider, for example, the issue of privacy. This is complicated, sensitive, emotive. And some of the voices commenting on it are loud. Take a look at the “Wal-Mart story” — the story that Wal-Mart are going to add RFID tags to some of their clothing lines — which has naturally attracted plenty of attention. One particular set of concerns was founded on the idea that consumers could not have the tags “killed” and so would be tracked and traced by… well, marketeers, advertisers, sinister footsoldiers of the New World Order, the CIA and so on. So what is the truth?

The tags are based on the EPC Gen 2 standard, which requires that they have a kill command that would permanently disable them. So the tags can, in fact, be disabled. Wal-Mart does not plan to kill the tags at the point of sale (POS), only because it is not using RFID readers at the point of sale.

[From Privacy Nonsense Sweeps the Internet]

As a consumer, I don’t want the tags to be turned off, because that means that the benefits of the tags are limited to Wal-Mart and not shared with me. I’d really like a washing machine that could read the tags and tell me if I have the wrong wash cycle. And there are plenty of other business models around tags that might be highly desirable to consumers.

If it adds £20 to the price of a Rolex to implement this infrastructure, so what? The kind of people who pay £5,000 for a Rolex wouldn’t hesitate to pay £5,020 for a Rolex that can prove that it is real. Imagine the horror of being the host of a dinner party when one of the guests glances at their phone and says “you know those jeans aren’t real Gucci, don’t you?”. Wouldn’t you pay £20 for the satisfaction of knowing that your snooping guest’s Bluetooth pen is steadfastly attesting to all concerned that your Marlboro, Paracetamol and Police sunglasses are all real?

[From Digital Identity: The Rolex premium]

So does the existence of convenience, business model, consumer interest and practicality mean I have no privacy concerns? Of course not! So what is a reasonable way forward?

Wal-Mart is demanding that suppliers add the tags to removable labels or packaging instead of embedding them in clothes, to minimize fears that they could be used to track people’s movements. It also is posting signs informing customers about the tags.

[From Wal-Mart to Put Radio Tags on Clothes – WSJ.com]

That seems like a reasonable compromise: make it easy for people to cut the tags off if they don’t want them. So is that the end of the story? I don’t think it is.

What could possibly violate our privacy with tracking pants in a store to make sure there aren’t too many extra-large sizes on the shelves?

[From Privacy wingnuts « BuzzMachine]

The thing is, I agree with Jeff Jarvis here that some people are, indeed, “wingnuts”. But that does not mean that there are no genuine concerns and it does not mean that anyone who is concerned about privacy (eg, me) is a wingnut. But what it does mean, I think, is that we need to implement new identity technologies in a privacy-enhancing fashion and make the “privacy settlement” with the public more explicit so that there is an opportunity for informed comment to shape it. It seems to me that some fairly simple design decisions can achieve both of these goals, something that I’ve referred to before when using Touch2id as an example.

Let’s make crime illegal

[Dave Birch] In today’s newspaper, I read that the Blackberry is not, after all, to be banned from Saudi Arabia as it has been from the UAE.

The agreement, which involves placing a BlackBerry server inside Saudi Arabia, would allow the government to monitor users’ messages and allay official fears the service could be used for criminal purposes.

[From Saudi Arabia halts plan to ban BlackBerry instant messaging – Telegraph]

I don’t know whether it’s a good thing for messages to be in the clear or not. If I were an investment banker negotiating a deal, I might worry that someone at the Ministry of Snooping might pass my messages on to his brother at a rival investment bank, for example. After all, the idea that only authorised law enforcement officers would have access to my private information is absolutely no comfort at all.

A drugs squad detective, Philip Berry, sold a valuable contacts book containing the personal details of the criminal underworld to pay off his credit card debt, a court heard.

[From Corrupt drugs detective ‘sold underworld secrets to pay debt’ – Telegraph]

The idea that law enforcement would be helpless to stem the tide of international crime unless they can tap every call, read every email, open every letter, is (if you ask me) suspect. If I am sending text messages to a known criminal, you do not need to be able to read those messages to decide that you might want to obtain a warrant to find out who I am calling or where I am. The fact that I am using a prepaid phone does not, by itself, render me immune to law enforcement activity.

Beyene’s role in the heist was to buy so-called dirty telephones and hire a van to use as a blocking vehicle,

[From Gunman jailed for 23 years over Britain’s biggest jewellery robbery – Telegraph]

In fact this gang was caught because the police found one of the mobile phones they had been using. It contained four anonymous numbers, and from these the police were able to track down the gang members. It wasn’t revealed how, but there are at least two rather obvious ways to go about it: get a warrant to track the phones and correlate their movements with known criminals, or get a warrant to find out which numbers those other phones have been calling and follow the chain until you get to a known number. Yes, this might require some police work, which is more expensive than having everything tracked automatically on a PC, but it is better for society. This reminds me of a recent discussion about anonymous prepaid phones. I’m in favour of them, but plenty of people are against them. (Same for prepaid cards.) Ah, but you and the authorities in some countries might ask: how can you catch criminals who use anonymous prepaid phones? Forcing people to register them is the usual answer.
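The second of those approaches is, in essence, a graph search over call records. As a purely illustrative sketch — every number and record below is invented — it might look like this:

```python
# Toy sketch of the "follow the chain" idea: starting from a number found
# on a recovered phone, walk warrant-obtained call records until reaching
# a number already known to the police. All data here is invented.
from collections import deque

# call_records[number] -> numbers that phone has called
call_records = {
    "anon-1": ["anon-2"],
    "anon-2": ["anon-3"],
    "anon-3": ["known-fence"],
}
known_numbers = {"known-fence"}

def follow_chain(start):
    """Breadth-first search from a seized phone's contact to a known number."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] in known_numbers:
            return path  # chain linking the anonymous phone to a known contact
        for nxt in call_records.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # the chain went cold

print(follow_chain("anon-1"))  # ['anon-1', 'anon-2', 'anon-3', 'known-fence']
```

Real traffic analysis is vastly messier than this, of course, but the point stands: it is the call records, not the registration of the handset, that do the work.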

Earlier this month, the FBI revealed that the suspected Times Square bomber had used an anonymous prepaid cell phone to purchase the Nissan Pathfinder and M-88 fireworks used in the bomb attempt.

[From Senators call for end to anonymous, prepaid cell phones]

Setting aside the fact that this guy was caught (despite the dreaded “anonymous prepaid cell phone”) and had been allowed on a flight despite being on the no-fly list, the politicians are, I’m sure, spot on with their informed and intelligent policy. In fact, one of them said:

“We caught a break in catching the Times Square terrorist, but usually a prepaid cell phone is a dead end for law enforcement”.

[From Senators call for end to anonymous, prepaid cell phones]

Amazingly, the very same issue of the newspaper that reports on the captured UK armed robbers contains a story about a Mafia boss caught by… well, I’ll let you read for yourself:

One of Italy’s most wanted mafia godfathers has been arrested after seven years on the run after police traced him to his wife’s mobile registered in the name of Winnie the Pooh

[From Winnie the Pooh leads to gangster’s arrest – Telegraph]

So, basically, if you require people to register prepaid mobile phones then you raise the cost and inconvenience for the public but the criminals still get them (because they bribe, cheat and steal: that’s criminals for you). I imagine that in the Naples branch of Carphone Warehouse the name “Winnie the Pooh” on a UK identity card looks perfectly plausible: they would have no more chance of knowing whether it’s real or not than the Woking Carphone Warehouse would when looking at an Italian driving licence in the name of Gepetto Paparazzo.

Again it’s not clear exactly what the police did, but from elements of the story it appears to be something like this: the police discovered (through intelligence) that the godfather’s wife was calling an apparently random mobile phone number at exactly the same time every two weeks. From this they determined which phone was hers (the “Winnie the Pooh” phone) and they tracked it to Brussels.

But suppose some foolproof method for obtaining the correct identities of purchasers were to be found. Would this then stop crime in, say, Italy? Of course not.

In an attempt to combat the cartel-related violence, Mexico enacted a law requiring cell phone users to register their identity with the carrier. Nearly 30 million subscribers didn’t do this because of a lack of knowledge or a distrust of what could happen to that information if it fell into the wrong hands. Unfortunately, the doubters were proven right, as the confidential data of millions of people leaked to the black market for a few thousand dollars, according to the Los Angeles Times.

[From Did Mexico’s cell phone registration plans backfire?]

The law just isn’t a solution. It might even make things worse.

Head in the clouds

[Dave Birch] At the recent European e-Identity Management Conference, Kim Cameron from Microsoft pointed out a few privacy and security concerns that relate to the cloud. This is important stuff, obviously. For one thing, the cloud is the new black. Remember this from a year ago?

All government departments are to be encouraged to procure new IT services based on a cloud computing model.

[From UK government CIO wants to build a “government app store” – 19 Jun 2009 – Computing]

This never meant that they actually would, or indeed should, use the cloud for anything. I’m not sure I’d want my medical records on Google Docs, one phished password away from universal access. Indeed, the idea of a special cloud for e-government wasn’t far behind:

Establishing a Government Cloud or ‘G-Cloud’. The government cloud infrastructure will enable public sector bodies to select and host ICT services from one secure shared network. Multiple services will be available from multiple suppliers on the network making it quicker and cheaper to switch suppliers and ensure systems are best suited to need.

[From News : NDS ]

Hold on. Suppose the cloud goes wrong, as one might imagine a government IT cloud would have a propensity to do. What then?

In our opinion cloud computing, as currently described, is not that far off from the sort of thinking that drove the economic downturn. In effect both situations sound the same… we allowed radical experiments to be performed by gigantic, non-redundant entities.

[From MAYA Design: The Wrong Cloud?]

Hhhmmm. So this means that if the government cloud goes down, or more likely that the gateway goes down, then there are no government services. Surely the solution is to have lots of clouds, not one, so that citizens can use any of the clouds to connect to any of the services: it shouldn’t matter whether citizens want to sign on in person, at a kiosk, using the phone, through the set-top box or on a PC. All of these channels should federate their identity through to the government for access.
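That federation idea can be made concrete with a toy example: whichever channel the citizen comes through — kiosk, phone, set-top box or PC — the government service checks only a signed assertion from a trusted identity provider, never the channel itself. The key, the token format and all the names below are invented for illustration:

```python
# Minimal sketch of channel-agnostic identity federation: the service
# verifies a signed assertion and is indifferent to how the citizen
# signed in. Key, token format and names are invented for illustration.
import hmac
import hashlib

FEDERATION_KEY = b"key-shared-with-trusted-identity-providers"

def issue_assertion(citizen_id, channel):
    """An identity provider (on any channel) signs an assertion."""
    message = f"{citizen_id}|{channel}"
    sig = hmac.new(FEDERATION_KEY, message.encode(), hashlib.sha256).hexdigest()
    return message, sig

def government_accepts(message, sig):
    """The service checks the signature; the channel is irrelevant."""
    expected = hmac.new(FEDERATION_KEY, message.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)

for channel in ("kiosk", "phone", "set-top-box", "pc"):
    msg, sig = issue_assertion("citizen-42", channel)
    print(channel, government_accepts(msg, sig))  # True for every channel
```

In practice this would be asymmetric — each provider signing with its own key, along the lines of SAML or OpenID Connect — but the design point is the same: the service trusts the assertion, not the channel, so losing any one channel (or any one cloud) takes nothing else down with it.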

