Apple are right and wrong


I’m sure you’ve all seen this story by now.

Thousands of iPhone 6 users claim they have been left holding almost worthless phones because Apple’s latest operating system permanently disables the handset if it detects that a repair has been carried out by a non-Apple technician.

From ‘Error 53’ fury mounts as Apple software update threatens to kill your iPhone 6 | Money | The Guardian

Now, when I first glanced at this story on Twitter, my immediate reaction was to share the natural sense of outrage expressed by other commentators. After all, it seems a breach of natural justice that if you have purchased a phone and then had it repaired, it is still your phone and you should still be able to use it.

I have my Volvo fixed by someone who isn’t a Volvo dealer and it works perfectly. The plumber who came round to fix the leak in our bathroom a couple of weeks ago doesn’t work for the company that built the house, nor did he install the original pipes, and he had never fixed anything in our house before. (He did an excellent job, by the way, so hats off to British Gas HomeCare.)

If you read on however, I’m afraid the situation is not so clear-cut and I have some sympathy for Apple’s actions, even though I think they chose the wrong way to handle the obvious problem. Obvious problem? Yes.

The issue appears to affect handsets where the home button, which has touch ID fingerprint recognition built-in, has been repaired by a “non-official” company or individual.

From ‘Error 53’ fury mounts as Apple software update threatens to kill your iPhone 6 | Money | The Guardian

Now you can see the obvious problem. If you’re using your phone to make phone calls and the screen is broken, then what does it matter who repairs the screen, as long as they repair it properly? But if you’re using your phone to authenticate access to financial services using touch ID, then it’s pretty important that no one has messed around with the touch ID sensor to, for example, store copies of your fingerprint templates for later replay under remote control. The parts of the phone that other organisations are depending on as part of their security infrastructure (e.g., the SIM) are not just components like any other, because they feature in somebody else’s risk analysis. In my opinion, Apple is right to be concerned. Charles Arthur just posted a detailed discussion of what is happening.

TouchID (and so Apple Pay and others) don’t work after a third-party fix that affects TouchID. The pairing there between the Secure Element/Secure Enclave/TouchID, which was set up when the device was manufactured, is lost.

From Explaining the iPhone’s #error53, and why it puts Apple between conspiracy and rock-hard security | The Overspill: when there’s more that I want to say

Bricking people’s phones when the software detects an “incorrect” touch ID device in the phone is the wrong response, though. All Apple has done is make people like me wonder whether they should really stick with Apple for their next phone, because I do not want to run the risk of my phone being rendered useless because I drop it while I’m on holiday and need to get it fixed right away by someone who is not an official repairer.

What Apple should have done is flag the problem to the parties who are relying on the risk analysis (including themselves). These are the people who need to know if there is a potential change in the vulnerability model. So, for example, it would seem to me entirely reasonable in the circumstances to flag the problem to the Simple app, tell it that the integrity of the touch ID system can no longer be guaranteed, and then let the Simple app make its own choice as to whether to continue using touch ID (which I find very convenient), or make me type in my PIN, or use some other kind of strong authentication instead. Apple’s own software could also pick up the flag and stop using touch ID. After all… so what?
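To make the idea concrete, here is a minimal sketch of how an app might downgrade gracefully instead of the phone being bricked. The `integrity_flags` structure and the flag name are entirely hypothetical; Apple exposes nothing like this today.

```python
# Hypothetical sketch: an app chooses its authentication method based on
# a platform-supplied integrity flag for the touch ID subsystem.
# The flag name and the whole API are invented for illustration.

def choose_auth_method(integrity_flags: dict) -> str:
    """Pick the strongest authentication method whose hardware is
    still trusted, falling back to PIN entry otherwise."""
    if integrity_flags.get("touch_id_pairing_intact", False):
        return "touch_id"  # convenient path: use the fingerprint sensor
    # The sensor may have been replaced by an unknown party: stop
    # trusting it, but keep the phone (and the app) usable.
    return "pin"

# After an unofficial home-button repair, the OS would clear the flag:
print(choose_auth_method({"touch_id_pairing_intact": True}))   # touch_id
print(choose_auth_method({"touch_id_pairing_intact": False}))  # pin
```

The point of the sketch is that the decision moves to the relying party: the phone stays usable, and each app degrades to whatever authentication it considers adequate.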

Touch ID, remember, isn’t a security technology. It’s a convenience technology. If Apple software decides that it won’t use Touch ID because it may have been compromised, that’s fine. I can live with entering my PIN instead of using my thumbprint. The same is true for all other applications. I don’t see why apps can’t make their own decision.

Apple is right to take action when it sees evidence that the security of the touch ID subsystem can no longer be guaranteed, but surely the action should be to communicate the situation and let people choose how to adjust their risk analysis?

Putting biometrics in context


Last week, the Biometric Alliance Initiative (BAI), a European-funded project that aims to marry mass-market biometrics to bespoke certification processes, announced the availability of its new evaluation and certification benchmark. To anyone like me who has, at one time or another, been involved deeply enough in biometric implementation to appreciate the Tower of Babel this actually is, the BAI is a stepping stone towards easing the process.

The benchmark, for biometric technologies used in biometric-based non-governmental solutions, aims to enable the evaluation and certification of biometric technologies in a consistent manner that encompasses all their various aspects while establishing a common approach for laboratories.

[From: Biometric Alliance Initiative Press Release- December 2015]

You can see why this is needed. Biometric solutions, like most identification solutions, were not initially engineered for the mass market. They work brilliantly in closed-loop sovereign solutions where security is of the utmost importance and where convenience can be treated as a trivial parameter.

The challenge with mass-market biometrics is not just a question of trade-offs between convenience and security, but also managing a range of issues from interoperability to environment. Not to mention the ageing factor, which could well be unkind to mass-market biometrics in the coming years, were the current roll-outs not sufficiently well designed. I thought it would be helpful to set out a few of the issues that might seem obvious but stand as challenges for mass-market biometrics.

Environmental factors are complex. In contrast to most other verification methods, biometrics need to be split into a two-parameter equation: the biometric trait and the biometric device. Both the trait and the sensing device behave in ways that depend on the environment. Some optical fingerprint sensors, for instance, which essentially take a picture of the fingerprint, can give very poor results in direct sunlight. With that in mind, how about an access control system for a building that does not work in spring and autumn, between 8am-9am and 4pm-5pm? And having dry skin certainly does not help. That is just a simple picture. The underlying technologies of other sensors can make them prone to other conditions, such as moisture or dirt, just as your biometric traits are. With that in mind, if you try to picture a small-scale solution built from varying technologies being rolled out by a multinational company across all of its branches, I expect you are grinning sarcastically by now.

Performance, in terms of both transaction speed and precision (biometric error rates), is implementation-dependent. The specifics of each use case might dictate bespoke operational ranges. The performance of a biometric match, which depends not only on the biometric modality but also on the form factor, might be perfectly acceptable for one use case (e.g. payment) and fatal for another (e.g. transit). Try to imagine hypothetical biometric gateways at London Waterloo tube station during peak hours, with some people not managing to get through due to false rejections and others taking ages to match, and you’ll appreciate the need for some thorough testing and fine-tuning.
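The tuning problem can be made concrete with a toy calculation: given a set of genuine and impostor match scores, the false rejection rate and false acceptance rate move in opposite directions as the decision threshold changes. The scores below are invented purely for illustration.

```python
# Toy illustration of FRR (false rejection rate) and FAR (false
# acceptance rate) at a given decision threshold. All scores invented.

def error_rates(genuine, impostor, threshold):
    """A match is accepted when score >= threshold.
    FRR = share of genuine attempts wrongly rejected;
    FAR = share of impostor attempts wrongly accepted."""
    frr = sum(s < threshold for s in genuine) / len(genuine)
    far = sum(s >= threshold for s in impostor) / len(impostor)
    return frr, far

genuine = [0.9, 0.8, 0.75, 0.6, 0.95]   # scores from the right finger
impostor = [0.3, 0.5, 0.65, 0.2, 0.4]   # scores from other people's fingers

# A strict threshold suits payments; a lax one suits transit throughput:
print(error_rates(genuine, impostor, 0.7))   # (0.2, 0.0) strict: some false rejects
print(error_rates(genuine, impostor, 0.45))  # (0.0, 0.4) lax: impostors slip through
```

Neither threshold is “correct” in the abstract, which is exactly why the same matcher can be perfectly acceptable for payments and fatal for transit.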

Interoperability builds mass markets, but biometric data formats, the way the biometric “image” is coded, are not always interoperable. With open ISO standards on one side of the spectrum, and an ever-increasing panel of innovative offers in vendor-specific biometric and solution-specific encrypted biometric data formats on the other, the biometric market can be confusing. Incompatibilities between different versions of the ISO standards make things even worse. Rolling out a biometric solution without prior analysis of the supported formats is like inserting a video CD into a video tape recorder: there is certainly a film on the video CD, and the video CD can certainly be inserted into the VCR (though you may not get it back), but you won’t be able to watch the movie.

Security, or rather insecurity, in biometrics is not as straightforward as it seems. The brilliance of Tsutomu Matsumoto or the Chaos Computer Club cannot be denied… but, because there is always a but, gelatine cannot fool all types of sensors, nor is it a threat in all use cases. Fake fingerprints certainly worked for Sean Connery in “Diamonds are Forever” to get past Tiffany Case’s fingerprint scanner back in 1971, but I highly doubt any particular set of materials would be sufficient to fool all sensors with all types of users (I’m thinking of people like me, with an abnormally high number of minutiae). Furthermore, security is not just about the ease of fooling the sensor; it also involves other factors linked to the authentication (multiple-factor solutions), the configuration chosen (more restrictive, at the expense of convenience, or the opposite) or even the setting (assisted or automated).

Context is critical. Buying a cup of coffee and launching nuclear missiles are different contexts. The underlying technologies behind different biometric solutions are sensitive to different settings and to different requirements. And they are interdependent: some fancy solution exposed to an exotic environment could be more prone to security breaches, while being non-interoperable with other systems and slow.

The BAI framework adopts a new modus operandi in addressing these specifics. The expertise of well-established players in the field of testing and certification, like Elitt and Paycert, has helped turn the biometric factor into a feasible, transparent and repeatable testing and certification infrastructure. Other members, coming from varying perspectives ranging from potential end-users to regulators, have contributed their respective viewpoints on the feasibility and efficiency of each aspect of the framework, giving it an empirical grounding.

Setting standards for any types of technology can be challenging. Setting the associated certification infrastructure is also challenging as it needs to be transparent, technically sound and of course repeatable – with consistent results when testing. For the payments industry, its major challenges will be technical compatibility – particularly the ability for the certification to adapt to use across all types of cards and payments devices – and security. Cardholder information is incredibly sensitive, and with high consequences for breaches, security will always be a high priority for users.

[Ludovic Verecque, Paycert’s view on the BAI]

This approach aims to instil high levels of trust not only amongst the wide spectrum of actors in the biometric market, but also amongst indirect players. The FIDO alliance, for instance, which delegates the verification method of the authenticator (which could be biometric) to open implementation while focusing on its post-verification protocols, can only be strengthened if the biometric factor has been properly tried, tested and deemed fit for the context. The whole chain of trust could hence be made stronger, right from the biometric device through the whole of the FIDO protocols.

Ensuring context-appropriate implementations is the key to sustainable biometric solutions, and this is what the Biometric Alliance Initiative — to which I have been contributing for the past two years — is all about. I expect this benchmark to lead to a much wider use of a much wider range of biometrics in the mass market in the coming year.


Connecting is getting easier, disconnecting is getting harder


After I’d been blathering on at some event about how connecting things up is really easy but disconnecting them is really hard, someone sent me a link to a story illustrating an amusing case of the unexpected consequences of connectivity. A woman found out her husband was cheating on her with the nanny because he had photos and texts on his iPhone, which was linked by iCloud to her iPad.

Gwen Stefani apparently discovered Gavin Rossdale was cheating on her after discovering some explicit texts and photos on the family’s iPad.

[From A guide on how to not let an Apple device ruin your marriage – NY Daily News]

I didn’t know who Gwen Stefani was, so I went off to goggle her on my skyper (as England’s greatest living poet, John Cooper Clarke, would put it) hoping that she might be a junior minister at the Home Office or an executive at a technology company, but it turns out she’s a pop singer. Oh well. There’s no reason to expect pop stars to understand Apple’s settings any more than I do, so I put the story to one side. Until this morning, that is.

This morning I went through my browser history to try and find a page about a workshop that I was supposed to be going to. I couldn’t remember the name of the workshop, but I knew I’d been to the web site in the last day or two so I opened up my browser history. And found hundreds of web sites dealing with carpet remnants.

My wife and I are a very traditional couple. We share everything. It’s in our marriage vows. The bank account, the speeding tickets, the browser history. And we don’t have a nanny. So I don’t care about my wife seeing my browser history and she doesn’t care about me seeing hers. The reason I mention this episode though is to make a point: connecting things up is getting progressively easier, but working out who should be able to access what and when and under what circumstances is becoming increasingly complicated.

In fact, I’m tempted to say that it’s becoming so complicated that it will soon be beyond human comprehension. When I take a photo with my iPhone, I already have literally no idea where it will end up, and why some photos show up on my laptop and others don’t is completely baffling. (Although I have noticed that when I actually want to find a photo that I can remember taking a few months ago, I can never find it.)

Today it’s your photos, tomorrow it’s your financial transactions, soon it will be your identity that is unpredictably smeared through the interweb tubes with predictably chaotic results. Time for some thinking about identity partitioning and permissioning: more soon.

On the internet, no-one knows you’re a fridge


Remember all those years ago (about 20, in fact) when there was that cartoon in the New Yorker, “no one knows you’re a dog“? I got so sick of seeing that cartoon lazily reproduced by anyone who wanted to make a point about identity in the virtual world and the relationship between virtual and mundane identities, which to my mind remains poorly understood (even by me) and in desperate need of exploration. Well, on Twitter a couple of days ago I laughed out loud when someone posted the updated version: on the Internet, no one knows you’re a fridge. Maybe I’ll steal it to use for my talk at the University of Surrey Centre for the Digital Economy “ID for the Internet of Things” workshop this afternoon. You’ll remember that ID for the Internet of Things (with the hashtag #IDIoT) was one of Consult Hyperion’s “live five” transaction technology trends for 2015. At the start of the year, when we were talking to clients about what to keep an eye on this year, we said that the thingternet (as I prefer to call it) lacked security infrastructure and that this would be a natural focus for activity. As it turned out, this was correct.

ARM’s acquisition of Dutch company Offspark shows how chip vendors intend to integrate more security features in their software and hardware to help keep the Internet of Things safe. There are a few things vendors have to get right for IoT to take off on a larger scale, and security is one of them.

[From ARM acqusition highlights quest to embed IoT security | PCWorld]

Of course, ARM wasn’t the only chip company looking to evolve IoT security. While they announced they would add their trusted execution environment TrustZone to their newest designs, others were doing the same, which is of course good news for those of us concerned about security on the thingternet.

Intel is going down the same route with features such as Enhanced Privacy ID, which Intel made available for other chip makers to implement in December.

[From ARM acqusition highlights quest to embed IoT security | PCWorld]

You can have security without privacy, as they say, but you can’t have privacy without security. Anyway, the fridge thing caught my eye because I happened to be reading the Economist Intelligence Unit’s recent report on “The Economics of Digital Identity“, in which Stephen Bonner, former head of Information Risk Management at Barclays, makes the important observation that while most of the focus today is on individuals and their personal data, digital identity will increasingly need to be closely tied to the use and ownership of smart products. Since I’d read Jerry Kaplan’s “Humans Need Not Apply” on my last plane ride, I’d been thinking about the issue of personhood (including the ability to own assets) for synthetic intelligences, and about reputation management (including the management of reputation in the context of punishing synthetic intellects). And then I saw a tweet from my former colleague and ethical thinker, Vic:

So: should what Jerry Kaplan calls “forged labourers” have digital identities through legal personhood, or are they the property (in some way I can’t think through, because I’m not a lawyer) of governments, companies or individuals, with an identity derived from their owner? I rather think that they will have to have some kind of digital identity, and my reasoning is that interactions in the virtual world are interactions between virtual identities, and in my specific worldview virtual identities need underlying digital identities. Whether the underlying digital identities of robots need to be bound to real-world legal entities, as is the case for digital identities as we understand them today, is a different issue, so let’s put it to one side for the time being. Let’s for a moment focus on security.

When my fridge negotiates with Waitrose to buy some more milk, what is really happening is that the virtual identity of my fridge is interacting with the virtual identity of Waitrose. That seems perfectly reasonable to me, and working out ways for these virtual identities to transact is going to be part of the business strategy for a fair few of our clients over the next couple of years. The virtual identity of the fridge may have a number of attributes associated with its identifier, such as a credit limit for a delivery address or whatever, but the one attribute that it will not have is “IS_A_PERSON”. As I have claimed many times before, this might well turn out to be the most valuable attribute of all. More on this soon.
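As a sketch of that worldview, the interaction between the fridge’s virtual identity and the merchant’s looks like an attribute check rather than a person check. All of the names, attributes and values below are invented for illustration.

```python
# Illustrative sketch: a virtual identity is an identifier plus attributes.
# All names and attributes here are invented; "IS_A_PERSON" is the one
# attribute the fridge's identity can never legitimately carry.

fridge_identity = {
    "identifier": "fridge-7f3a",
    "attributes": {
        "credit_limit_gbp": 50,
        "delivery_address": "1 Acacia Avenue",
        "IS_A_PERSON": False,
    },
}

def authorise_order(identity: dict, amount_gbp: int) -> bool:
    """The merchant's side: transact with any identity whose
    attributes cover the order; no human need be involved."""
    attrs = identity["attributes"]
    return amount_gbp <= attrs["credit_limit_gbp"]

print(authorise_order(fridge_identity, 3))    # True: milk is within the limit
print(authorise_order(fridge_identity, 500))  # False: over the credit limit
```

The design point is that the merchant never needs to ask who owns the fridge for everyday transactions; the attribute set attached to the virtual identity is enough.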

“Personal” computers weren’t


Kicking off the session on “Old vs. New P2P” at Mobile Banking & Payments in New York, Steve Kirsch (the CEO of Token) made the strong point that somehow the era of the PC and the Internet left the basic payment “rails” unchanged. For a long time we’ve papered over the cracks — using 3D Secure, PCI-DSS and so on — but with the arrival of the smartphone we could all see that it was time for change. What we may have underestimated is just how big that change will be.

it can still feel natural to talk of the PC as the most fully-featured version of the internet, and mobile as the place where you have to make lots of allowances for limitations of various kinds… I’d suggest that we should think about inverting this – it’s actually the PC that has the limited, basic, cut-down version of the internet.

[From Mobile first — Benedict Evans]

I couldn’t agree more. And in my framing, it’s all to do with identity. The PC was never personal: it didn’t have a SIM. My laptop isn’t mine in the same sense that my smartphone is and, as a consequence, will never be able to deliver as personal a service. Now, I suppose you could argue that it’s silly to talk about smartphones as PCs because they are, after all, phones.

The study also showed that four in ten users could manage without the call-making capability on their handset.

[From Soft cell: 40% of Brits don’t make calls on smartphones – report — RT UK]

I rarely make calls on my smartphone and I rarely answer them either. Unless it’s the police, my CEO or my wife then I’ll let it go to voicemail or hit the “please text me if it’s anything important” button. Calling it a phone is just a figure of speech, like when you say you are going to dial a number to someone who has never seen a phone dial and has no idea why the word “dial” is used in that context.

So what is the smartphone for?

We’ve all seen a thousand conference slides that show the smartphone as a Swiss army knife: calendar, watch, contact book, diary, games console, social media gateway, radio and so on. But if we go back to Benedict’s point, then we can answer the question in a different way. My smartphone is… me. Well, as good as. It’s sort of proxy me.

a smartphone knows much more than a PC did… It can see who your friends are, where you spend your time, what photos you’ve taken, whether you’re walking or running and what your credit card is.

[From Mobile first — Benedict Evans]

We can all see what the consequences are in payments and banking. The practical result of the identity-less PC vs. the proxy-identity smartphone is that when I want to transfer some money or pay a bill, I use my excellent Barclays mobile app. I’ll only use my laptop if I absolutely have to because I have to type stuff in (like setting up a new payee). Conversely, it seems bizarre that when I phone up my bank, or my insurance company, or my airline or whatever else, I’m asked to demonstrate my identity by getting involved in (as I heard someone describe it recently) an episode of Jeopardy hosted by Kafka — OK, Franz, let’s go with “places I have lived” — when they could just ask the other me. The mini-me. The mobile-me.

Similarly, when I go into a bank branch or a retail outlet or a government office, why do they ask me for bits of paper that cannot possibly be verified when they could just ping mobile-me? App pops up on the phone, you put your finger on the sensor, job done. And just as the crucial role of the smartphone in disrupting the payments industry is to take payments, not make them, so the crucial role of the smartphone in disrupting the identity industry is to validate credentials, not present them. Since my mobile-me can check that your mobile-me is real, our mobile world ought to be much safer than our internet world.
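The “ping mobile-me” flow is essentially a challenge-response: the branch sends a fresh challenge, the phone answers it after a local fingerprint check, and the branch verifies the answer against a key it already trusts. Here is a minimal sketch using Python’s standard library, with HMAC standing in for the public-key signature a real deployment would use; the enrolment step and all names are assumptions for illustration.

```python
import hashlib
import hmac
import secrets

# Sketch of "ping the mobile-me": the verifier issues a fresh challenge,
# the phone answers with a keyed MAC over it. HMAC stands in for the
# public-key signature a real scheme would use; names are invented.

shared_key = secrets.token_bytes(32)  # provisioned when the app was enrolled

def phone_respond(challenge: bytes, key: bytes) -> bytes:
    # ...after the user touches the fingerprint sensor...
    return hmac.new(key, challenge, hashlib.sha256).digest()

def verifier_check(challenge: bytes, response: bytes, key: bytes) -> bool:
    expected = hmac.new(key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = secrets.token_bytes(16)  # fresh for every transaction
response = phone_respond(challenge, shared_key)
print(verifier_check(challenge, response, shared_key))        # True: credential validated
print(verifier_check(challenge + b"x", response, shared_key)) # False: stale or replayed
```

Because the challenge is fresh each time, a recorded response is useless later, which is precisely what a bit of paper handed over a counter cannot offer.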

#IDIoT is a serious business


The Gartner hype cycle is jolly bullish on autonomous vehicles, which I’m really looking forward to. According to Jerry Kaplan’s fascinating “Humans Need Not Apply”, switching to autonomous vehicles in the US will save thousands of lives and billions of dollars every year. Personally, I couldn’t care less if I never drive a car for myself ever again, and I hope that Woking will become an autonomous-vehicle-only zone as soon as possible. Sadly, this won’t be for a while.

While autonomous vehicles are still embryonic, this movement still represents a significant advancement, with all major automotive companies putting autonomous vehicles on their near-term roadmaps.

[From Gartner’s 2015 Hype Cycle for Emerging Technologies Identifies the Computing Innovations That Organizations Should Monitor]

Gartner are even more bullish on what they call autonomous field vehicles (which I think means drones, combine harvesters and suchlike) and predict that these will be around in 2-5 years’ time, just like enterprise 3D printing and cryptocurrency exchanges. I couldn’t help but notice, though, that the very same hype cycle puts digital security at least 5-10 years out. So they are forecasting that there will be vehicles running around for some years before we are able to secure them, 3D printers inside organisations printing things for years before we are able to protect them, and people trading money years before we can stop hackers from looting them. Actually, I agree with Gartner’s prediction, as it’s entirely congruent with my own #IDIoT line of thinking, which is that our developments in connection technologies are accelerating past our developments in disconnection technologies. And if you don’t care what I think about it, you probably do care what Vint Cerf thinks about it.

“Sometimes I’m terrified by it,” he said in a news briefing Monday at the Heidelberg Laureate Forum in Germany. “It’s a combination of appliances and software, and I’m always nervous about software — software has bugs.”

[From Vint Cerf: ‘Sometimes I’m terrified’ by the IoT | ITworld]

We’re busy going round connecting vehicles, equipment and money to the internet without having any sort of strategy in place for disconnecting them, which is much more difficult (doors are easy, locks are hard, basically). And with chips that we don’t even understand being built into everyday devices, the complexity of managing security is escalating daily. Look at the recently-launched “21” idea.

Its core business plan it turns out will be embedding ASIC bitcoin mining chips into everyday devices like USB battery chargers, routers, printers, gaming consoles, set-top boxes and — the piece de resistance — chipsets to be used by internet of things devices.

[From Meet the company that wants to put a bitcoin miner in your toaster | FT Alphaville]

Really? Chips in everything? What could possibly go wrong? Oh wait, it already has. There’s something missing here: an identity layer. Hardly a new idea and I’m not the only person going on about it.

Everyone and everything will have an identity… We can’t scale a world that we can’t talk to, can’t control and can’t secure. Everything, including your toaster, your fridge and your car, will have an identity.

[From Facing the new Big Bang: The IoT’s identity onslaught — Tech News and Analysis]

Yet nothing much is getting done, despite the fact that we already have plenty of case studies showing how bad the situation already is. Never mind smart fridges that give away your personal details or televisions that spy on you; there are issues about the maintenance and upkeep of things in the field that create an identity management environment utterly different from anything we are used to dealing with in the worlds of OIX, Mobile Connect, SAML and so on.

Did you buy a smart TV or set-top box or tablet any time before January 2013? Do you watch YouTube on it, perhaps through an app? Bad news: Google has shut down the feed that pushed content into the app.

[From You buy the TV, Google ‘upgrades’ its software and then YouTube doesn’t work … | Technology | The Guardian]

It’s issues like this that make me want to focus on identity in the internet of things (or #IDIoT, as I call it) in the near term, so I was really flattered to be asked along by the good people at ForgeRock to talk about this at their London Identity Summit tomorrow. I’m really looking forward to exploring some of these ideas and getting feedback from people who know what they’re talking about.

What’s more, Consult Hyperion and the Surrey Centre for the Digital Economy (CoDE) will be delivering a highly interactive workshop session designed specifically for the University of Surrey’s 5G Innovation Centre SME Technology Pioneer Members on 30th November 2015. This will include “business lab sessions” interleaved with presentations and discussion. We’ll be putting forward the #IDIoT structure to explore identity, privacy and security issues using our ‘3 Rs’ of Recognition, Relationship and Reputation. The event will be an opportunity to establish contacts with companies interested in the IoT space, as well as connecting with the broader University community and a select group of large enterprises, and, as you might imagine, you’ll read all about it here!

Mass market biometrics – convenience and trust


Back in 2002, biometrics seemed futuristic, to say the least. Minority Report was released that year, and I vaguely recall a scene where Tom Cruise trades in his eyes (yes, his eyes!) to fool what was supposed to be a retinal scanner.

We’re now in 2015 and biometrics do not seem that sci-fi anymore. Biometrics are insidiously creeping into our lives via a plethora of services and solutions. But whilst I do passionately follow how widespread biometrics are getting, I remain very sceptical when it comes to saying that biometrics are the ultimate answer to security.

Let’s take fingerprints, for example. Granted, fingerprints are truly efficient when it comes to authentication. They are part of you, and they are unique. Unless I were in serious, serious trouble, I would not be ready to have new fingerprints stitched on, were that procedure to be available.

Fingerprints are unique:

A fingerprint is the representation of dermal ridges of a finger. Dermal ridges form a combination of genetic and environmental factors; the genetic code in DNA gives general instructions on the way the skin should form in a developing fetus, but the specific way it forms is the result of random events such as the exact position of the fetus in the womb at a particular moment. This is the reason why even the fingerprints of identical twins are different.

[From Encyclopedia of Biometrics, Stan Z.Li, Anil Jain : Fingerprint Recognition, Overview.]

But, this perceived uniqueness is not without some loopholes:

Doddington et al. developed a statistical framework based on the matching performance of individual users. […] Their work focused on determining user-induced variability. In particular, they identified four categories of users:

(sheep) users who are easily recognized,

(goats) users who are particularly difficult to be recognized,

(lambs) users who are easy to be imitated,

(wolves) users who are particularly successful at imitating others.

[From Revisiting Doddington’s Zoo: A Systematic Method to Assess User-dependent Variabilities]
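A rough sketch of how such a zoo classification might be derived from per-user match statistics. The per-user averages and the 0.5 thresholds below are invented purely for illustration; real systems would use proper statistical tests.

```python
# Toy classification in the spirit of Doddington's zoo. The score fields
# and the 0.5 thresholds are invented for illustration only.

def classify(user: dict) -> str:
    """genuine: mean score when the user matches against themselves.
    as_target: mean score impostors get when imitating this user.
    as_attacker: mean score this user gets when imitating others."""
    if user["as_attacker"] > 0.5:
        return "wolf"   # unusually successful at imitating others
    if user["as_target"] > 0.5:
        return "lamb"   # unusually easy for others to imitate
    if user["genuine"] < 0.5:
        return "goat"   # hard to recognise even as themselves
    return "sheep"      # the well-behaved majority

print(classify({"genuine": 0.9, "as_target": 0.2, "as_attacker": 0.1}))  # sheep
print(classify({"genuine": 0.3, "as_target": 0.2, "as_attacker": 0.1}))  # goat
print(classify({"genuine": 0.9, "as_target": 0.7, "as_attacker": 0.1}))  # lamb
print(classify({"genuine": 0.9, "as_target": 0.2, "as_attacker": 0.8}))  # wolf
```

The wolf case is the interesting one for what follows: a user who can match against other people’s templates without any hacking at all.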

Fine then, my fingerprints are supposed to be unique. But what if there were a “wolf” out there who knows he can access my biometrically locked services, consciously, not by hacking, but simply by the trick of his finger? I’d have a “finger twin” (remember Joey’s “hand twin” episode in Friends), albeit an evil one.

This situation, though infinitesimally probable (and even more improbable in my case, with my abnormally high number of minutiae, but that is another story!), does pose a pertinent question: should I be able to repudiate a service which was authenticated biometrically?

The straightforward answer would be no. However, there have been, in the past, numerous cases in which innocent people have been wrongly singled out by means of fingerprint evidence.

In 2004, Brandon Mayfield was wrongly linked to the Madrid train bombings by FBI fingerprint experts in the United States.

Shirley McKie, a Scottish police officer, was wrongly accused of having been at a murder scene in 1997 after a print supposedly matching hers was found near the body.

[From “Why your fingerprints may not be unique” The Telegraph 21 April 2014]

These cases do prove one thing: An unlucky string of circumstances, though highly unlikely, could be enough to repudiate the alleged non-repudiable: fingerprints.

Mind you, I have not even stepped into the “conventional” debate – Tsutomu Matsumoto, the Japanese researcher who made fake fingerprints out of gelatine – nor started a discussion of the challenges facing biometrics: varying physiological aspects across the population, and environmental effects on both the biometric to be sensed and the sensor used. And I am miles away from two three-letter acronyms: FAR (False Accept Rate) and FRR (False Reject Rate).
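Since I have name-dropped them, the FAR/FRR trade-off is easy to make concrete. Here is a minimal sketch of how the two rates are computed from comparison scores; all the scores and the threshold are made-up illustrative numbers, not data from any real sensor.

```python
# Sketch: computing FAR and FRR at a given match-score threshold.
# All scores and the threshold are illustrative, not real sensor data.

def far_frr(genuine_scores, impostor_scores, threshold):
    """FAR = fraction of impostor attempts accepted;
       FRR = fraction of genuine attempts rejected."""
    far = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    frr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    return far, frr

genuine = [0.91, 0.85, 0.78, 0.95, 0.45]   # same-finger comparisons
impostor = [0.12, 0.35, 0.55, 0.20, 0.05]  # different-finger comparisons

far, frr = far_frr(genuine, impostor, threshold=0.5)
# Raising the threshold lowers FAR but raises FRR, and vice versa:
# the operating point is a business decision, not a law of nature.
```

The point of the sketch is that there is no threshold at which both rates are zero: a system tuned to keep wolves out will also lock some sheep out.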

Mass market biometrics are currently only about convenience, not security. Not having to remember PINs is nice (particularly if you collect bank cards like I do), but relying solely on biometrics is hazardous.

Security is added, or rather implemented, by combining other factors (something you have, something you know), but here is the catch: the more you secure, the less convenient the solution. Phone + fingerprint + PIN certainly implies that my evil finger twin would have to get hold of my phone and know my PIN to access my services, but would I, as a lazy client, be bothered to have the phone on me, key in a PIN and place my finger on the reader for each access to a service?

But besides this well-known trade-off between convenience and security, there is another crucial aspect of biometrics: sustainability. Unlike “conventional” credentials, which can be revoked and changed in the event of an attack, revoking compromised biometrics is considerably more difficult. Revocable biometric algorithms may be the answer, but I prefer to leave them aside in this article. To ensure viable trust in future biometric solutions, the emphasis should be on flawless execution in current roll-outs.
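For readers unfamiliar with the revocable (“cancelable”) idea, here is a deliberately crude sketch of just the revocation property: the stored reference is derived from the raw template plus a replaceable secret, so a leaked reference can be invalidated without the user growing new fingers. Note the loud caveat in the comments: real cancelable-biometric schemes use distortion transforms that tolerate noisy captures, which a plain salted hash (used here only for brevity) does not.

```python
import hashlib
import os

# Sketch of the *revocation* idea behind cancelable biometrics: the stored
# reference combines the raw template with a replaceable secret, so a leaked
# reference can be invalidated by rotating the secret and re-enrolling.
# NOTE: real schemes use distortion transforms that tolerate noisy captures;
# a plain salted hash, used here for brevity, does not.

def enrol(template: bytes):
    salt = os.urandom(16)
    reference = hashlib.sha256(salt + template).digest()
    return salt, reference

def verify(template: bytes, salt: bytes, reference: bytes) -> bool:
    return hashlib.sha256(salt + template).digest() == reference

template = b"minutiae-feature-vector"   # stand-in for a fingerprint template
salt, ref = enrol(template)
assert verify(template, salt, ref)

# If `ref` leaks, re-enrol with a fresh salt: the old reference is now
# worthless, unlike the raw fingerprint itself, which cannot be rotated.
new_salt, new_ref = enrol(template)
assert new_ref != ref
```

The contrast with the Mauritian case below is the point: a compromised derived reference can be destroyed and reissued, whereas a compromised raw fingerprint database cannot be un-leaked.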

L’Observatoire appelle également les acteurs à être vigilants durant les phases d’expérimentation de solutions fondées sur la biométrie, la compromission d’empreintes biométriques utilisées par celles-ci pouvant mettre en cause le déploiement de solutions futures à plus grande échelle.

The panel also calls on players to be vigilant during the experimental phases of solutions based on biometrics, since the compromise of the biometric fingerprints they use could jeopardise the larger-scale deployment of future solutions.

[From 2014: Rapport annuel de l’observatoire de la sécurité des cartes de paiement]

Trust, once shattered, might be hard, impossible even, to rebuild, especially if the same client pool has been compromised. A case in point is the Mauritian biometric identity card scheme. The fingerprints enrolled were stored both on the chip, which is secure enough, and in a not-so-secure centralised database. A couple of years of frenzied opposition to biometrics and doubt-instilling malfunctions in database procedures were enough to convince the courts to order the destruction of the controversial biometric database. Mauritians are paying the high price of a rapid and insufficiently prepared solution. I’m not sure they’ve gauged the extent of the problem, though.

Les empreintes digitales de 947 000 citoyens, collectées pour la nouvelle carte d’identité, ont été supprimées de la base de données. […]Les données biométriques seront désormais sauvegardées uniquement sur la puce insérée dans la carte.

The fingerprints of 947 000 Mauritian citizens previously collected for the new identity card scheme, have been deleted from the database. […] The biometric data shall be saved only on the identity card chip. 

[From Carte d’identité : Les empreintes digitales de 947000 citoyens détruites” L’express.mu: 1st September 2015]

Were I one of those 947 000 enrolled, the court’s order to destroy the biometric database, limiting the credential to the chip, would not reassure me at all. There was a period when the database was operational, with people accessing it. The damage could already have been done, and leaving my fingerprint data on the identity card chip is like keeping a key in a safe when the duplicate key has been either destroyed or lost somewhere.

Our approach to biometrics needs to change rapidly. The stars are aligning for biometrics: demand for new authentication methods, enhanced reliability and more affordable price points are building up a huge potential for future deployments. It is up to us to develop new architectures. Assessing the expected convenience levels and maintaining high levels of trust will ensure consistency in the security of biometric solutions.

It’s convenience and trust, convenience and trust only. Security is the outcome.


App and pay is where it’s at


A few weeks ago, I said that Apple Pay isn’t disruptive (for retail payments) and I made the point that its real impact will be “in-app”. I want to explore and emphasise this point in the light of more recent developments. Specifically…

The big news is that it will expand to the UK market next month

[From Apple Pay to be available in UK – Business Insider]

Apple Pay is coming to the UK. Now, when Apple Pay was first announced in the USA, our basic analysis for our clients was that it was an incredibly important development in the payments world, but not because of its use of NFC. The fact that Apple had decided to use tokenisation, we told people, makes tokenisation as big a deal as chip and PIN. It will change the way business gets done, because it brings chip-and-PIN security to online and mobile transactions. In fact, I bored a number of people on this topic, to the point where it became part of my spoof write-up of Money2020 in Las Vegas last year.
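For readers who haven’t looked under the bonnet, the essence of tokenisation can be sketched in a few lines. This is an illustrative toy, not the EMVCo Payment Tokenisation specification: the class, names and device-binding rule are my own simplifications of the idea that the merchant never sees the real card number (PAN) and a token lifted from one context is useless in another.

```python
import secrets

# Toy sketch of the tokenisation idea behind Apple Pay: the real PAN never
# reaches the merchant; a token service provider issues a device-scoped token
# and only it can map the token back. Names and structure are illustrative,
# not the EMVCo specification.

class TokenVault:
    def __init__(self):
        self._vault = {}  # token -> (pan, device_id)

    def tokenise(self, pan, device_id):
        token = "tok_" + secrets.token_hex(8)
        self._vault[token] = (pan, device_id)
        return token

    def detokenise(self, token, device_id):
        pan, bound_device = self._vault.get(token, (None, None))
        # A token stolen from one device is useless on another: this domain
        # restriction is what makes a merchant breach so much less valuable.
        return pan if bound_device == device_id else None

vault = TokenVault()
token = vault.tokenise("4111111111111111", device_id="daves-iphone")
assert vault.detokenise(token, "daves-iphone") == "4111111111111111"
assert vault.detokenise(token, "stolen-handset") is None
```

That device binding is why tokenisation brings chip-and-PIN-grade security to card-not-present transactions: there is nothing reusable for a fraudster to steal from the merchant.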

“Well, for the big merchants it’s not about tap-and-pay it’s about app-and-pay” he told Osama Bedier from Poynt.

[From Casino Royale-with-Cheese, Part 7]

At the end of the year, we made “in-app” one of our “live five” areas for our clients to explore in 2015 (along with the blockchain, as it happens) and started trying to persuade people to pay attention to it as an area of massive opportunity.

Much of the discussion around ApplePay, tokenisation, NFC and retail has naturally focused on the “tap and pay” simplicity of the proposition. However, there are lots of reasons for thinking that this will be a sideshow rather than the main event.

[From Live Five for Fifteen]

The good people of the GSMA invited me to Mobile World Congress in Barcelona earlier in the year to explain this point to a general audience, where I predicted that tokenisation would accelerate a shift away from the checkout and the conventional POS terminal as the nexus between the consumer and the merchant drifts away from physical space and into the mobile phone.

while much of the talk at the Congress was about what I’ve previously called the “last millimetre” using NFC, RFID (and now Loop) to link the phone to the point of sale (POS) in the store, the really disruptive impact of the Apple Pay, tokenisation and strong authentication via mobile would be away from the “traditional” POS because bringing chip-and-PIN levels of security and convenience to in-app transactions will change the way that we pay pretty quickly.

[From In-app and on-message in Barcelona]

I made exactly this point again a couple of weeks ago, when I was interviewed by the BBC in connection with the UK Apple Pay launch [audio, starts at 30 minutes in]. On the whole, I think, Consult Hyperion got a consistent message out to our clients and then to the wider marketplace. But is it the right message?

It is. Around the announcement of Apple Pay coming to the UK, I was interested to note some comments by people far more important and influential than I am, comments that might be taken to mean that I have perhaps been too conservative in my proclamations.

John Collison, one of the cofounders of $3.5 billion (£2.25 billion) payment processing startup Stripe, says this feature, not the contactless mobile payments, is getting businesses most excited… John Lunn, senior global director for the mobile-payment company Braintree, which was bought by Paypal for $800 million (£512.18 million) in 2013, also thinks Apple Pay’s in-app element is the most exciting thing about it.

[From Apple Pay in-app purchase power could be its most important feature, say Stripe, Braintree – Business Insider]

Well, when people like John Lunn, who I can personally testify is a very smart guy, go on to say that “everybody’s talking about the in-store stuff, but actually when you look at the presentation when they launched it, the merchants that were sitting behind Tim Cook were online”, I think that tells us the direction of travel pretty accurately.

As my colleague Tim Richards pointed out earlier in the week, tokenisation is a really big deal. App-and-pay changes industry dynamics in a way that tap-and-pay does not.

The Turing Wars have begun


On the internet, as they used to say, no-one knows you’re a dog. That’s not, as far as I can tell, too much of a problem at the moment because dogs have quite poor keyboard skills and little interest in most kinds of internet fraud. The real problem, as things have turned out, is that on the internet no-one knows you’re a bot. Now, I see this emergent property of Moore’s Law and Metcalfe’s Law as fascinating and chaotic and there are some environments in which it is jolly amusing as well. Fake social media fans, for example.

Today, he says he manages 10,000 robots for roughly 50 clients, who pay Mr. Vidmar to make them appear more popular and influential.

[From Inside a Twitter Robot Factory – WSJ.com]

Rappers fighting over fake fans is funny but, as is easy to imagine, there are environments (almost all of them as far as I can see) in which there is no humour, only havoc. A very good current example of this is Bitcoin trading.

Bots control/contribute more than 70% of the volume on OKCoin Futures.

[From Further lies at OKCoin, where does it end? : BitcoinMarkets]

There’s no problem with this, as far as I know, and I don’t see why we should stop bots on Bitcoin exchanges when we allow them on Wall Street, especially when they might offer an accelerated evolutionary path by exploring different strategies.

The exchanges are already rife with trading bots; these are shark infested waters. Bots dance around each other in a chaotic swirl. They employ so many diverse strategies. It’s like so many microbes competing in the primordial ooze.

[From High Frequency Trading on the Coinbase Exchange]

Another environment that, unlike Bitcoin, I see as a fantastically useful economic model of the “real” world is World of Warcraft. This is infested with bots. If you want to see this for yourself, take a look at this amusing (but not suitable for work) YouTube clip of a guy playing WoW only to discover that he’s the only human playing. Last month, there was a WoW crackdown that saw more than 100,000 bots kicked out, so I suppose Bitcoin exchanges could have a crackdown and try to kick them out too if they want to, but in the absence of a working identity infrastructure the arms race may already be lost. The WoW bot maker revised their technology to be undetectable; WoW revised their technology to detect it. And so it goes on.

The Turing Wars, as I call them, are only just beginning. They will not be limited to fun and games, to fintech bloggers battling over influence league tables or investment banks battling over bonds. There are considerable real-world implications to possessing (or not possessing) an IS_A_PERSON credential, and without it I can see a likely international cyberwar battleground that will replace WoW battlefields at the epicentre of bot-vs-bot evolution, turning the Internet of Things into a wasteland.

In March, two students at the Technion, the Israel Institute of Technology, created a swarm of bots that caused a phony traffic jam on Waze, the navigation software owned by Google… The Waze software, believing that the bots were on the road, started to redirect actual traffic down different streets, even though there was no traffic jam to avoid.

[From Friends, and Influence, for Sale Online – NYTimes.com – NYTimes.com]

When you don’t know who IS_A_PERSON and who IS_A_DOG and who is neither, you cannot interact online in a functional way. We must grasp the nettle, so to speak, and actually do something about this. Who is better placed, right now, to determine whether I am a person or a dog or a bot? Surely it must be my bank, and surely this must give my bank a key role in the future? All my bank needs to do is issue me with some kind of digital passport that I can show to WoW or Waze or Wall Street. Right?
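What might that bank-issued “digital passport” look like mechanically? Here is a hedged sketch of the issue-and-verify pattern: the bank attests an IS_A_PERSON attribute and a relying party checks the attestation. A real scheme would use public-key signatures, so that WoW or Waze could verify without sharing a secret with the bank; an HMAC stands in here purely to keep the example stdlib-only, and every name is made up.

```python
import hashlib
import hmac
import json

# Sketch of a bank-issued IS_A_PERSON credential. A real scheme would use
# public-key signatures so relying parties need not share the bank's secret;
# an HMAC stands in here to keep the sketch stdlib-only. All names are made up.

BANK_KEY = b"demo-only-bank-signing-key"

def issue_credential(subject_id):
    claim = {"sub": subject_id, "attr": "IS_A_PERSON"}
    payload = json.dumps(claim, sort_keys=True).encode()
    sig = hmac.new(BANK_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "sig": sig}

def verify_credential(cred):
    payload = json.dumps(cred["claim"], sort_keys=True).encode()
    expected = hmac.new(BANK_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, cred["sig"])

cred = issue_credential("customer-42")
assert verify_credential(cred)

# A bot asserting the attribute without the bank's key fails verification.
forged = {"claim": {"sub": "bot-1", "attr": "IS_A_PERSON"}, "sig": "00" * 32}
assert not verify_credential(forged)
```

Note that the credential carries an attribute, not a name: the relying party learns that the bank has done its know-your-customer homework, nothing more.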

Authentication yes, identification… hhmmm…


I had the great good fortune to be asked by the GSMA to chair the Mobile Identity session at this year’s Mobile World Congress in Barcelona. During the absolutely excellent session, which featured input from Telesign, Payfone, Early Warning, Telenor, the UK Cabinet Office and Nok Nok, I happened to mention in passing that I thought that a global mobile-centric authentication push (perhaps using FIDO) was possible and that it would make life easier for many people, but that it wasn’t clear to me at all that a global identification platform was getting any closer.
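The reason a FIDO-style authentication push is plausible at global scale is that it separates the biometric from the network: the fingerprint only unlocks a key held on the handset, and the relying party sees a signed one-time challenge, never any biometric data. The sketch below illustrates that flow; FIDO actually uses a public/private keypair per service, so the HMAC shared key here is a stdlib-only stand-in, and the function names are mine, not FIDO’s.

```python
import hashlib
import hmac
import os

# Sketch of a FIDO-style flow: a local biometric check gates a device-held
# key, and the server verifies a signed one-time challenge. Real FIDO uses a
# per-service public/private keypair; a shared HMAC key stands in here so the
# sketch stays stdlib-only.

device_key = os.urandom(32)        # provisioned at registration
server_copy_of_key = device_key    # real FIDO: server holds only the public key

def server_new_challenge():
    return os.urandom(16)          # fresh nonce defeats replay of old responses

def device_sign(challenge, user_presented_finger):
    if not user_presented_finger:  # biometric match happens on the device only
        return None
    return hmac.new(device_key, challenge, hashlib.sha256).digest()

def server_verify(challenge, signature):
    if signature is None:
        return False
    expected = hmac.new(server_copy_of_key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

challenge = server_new_challenge()
assert server_verify(challenge, device_sign(challenge, user_presented_finger=True))
assert not server_verify(challenge, device_sign(challenge, user_presented_finger=False))
```

This is authentication without identification: the server learns that the same enrolled device and finger are back, not who the finger belongs to, which is exactly the distinction the rest of this post turns on.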


A couple of people asked me about this afterwards, and so I thought it would make an interesting blog topic to look at real-world, population-scale identification as discussed in the session. I’ll use Pakistan as an example: Pakistan has very strong identification laws around mobile and rigorously enforced mandatory SIM registration.

[Pakistanis] have to show their IDs and fingerprints. If the scanner matches their print with the one in a government database, they can keep their SIM card. If not, or if they don’t show up, their cellphone service is cut off.

[From Pakistanis now need to be fingerprinted to have a cellphone – Business Insider]

This will help to stop criminals and terrorists from obtaining mobile phones and operating with impunity in Pakistan, but only insofar as the national identity register itself has integrity. Oh, wait…

The famous green-eyed ‘Afghan girl’ immortalised by the National Geographic magazine on its 1985 cover has been living in Pakistan on fake documents, prompting authorities to launch a probe. Four officials were suspended on Wednesday for allegedly issuing fake Computerised National Identity Card (CNIC) to Sharbat Gula and her two ‘sons’.

[From National Geographic Afghan Girl living on fake identity card in Pak : World, News – India Today]

National identity registers are a single source of failure and a natural honeypot for crime and corruption, as Pakistan has discovered.

The National Database and Registration Authority [NADRA] reports that it has deployed a state-of-the-art facial matching system with the capabilities to stop fraud and forgery in identity documents, yet people are still able to obtain forged identity cards. This was very puzzling to understand given the supposed surety, accuracy and privacy of NADRA database that such a scam was still happening even after the introduction of new chip-based identity cards.

[From Identity theft persists in Pakistan’s biometric era | Privacy International]

It’s not “puzzling” at all, as far as I am concerned.

Identity theft is more common in single reference systems such as centralised national population registers, as they create a single point of failure, and centralisation increases rather than reduces the potential for fraud. Doppelganger matches also become more likely in large scale databases.

[From Biometric Smart ID Cards: Dumb Idea :: SACSIS.org.za]

So while it makes sense for service providers to rely on biometric authentication to digital identities that they themselves will bind to virtual identities (with attributes), it is not so clear that it makes sense for service providers to rely on biometric identities established by third parties. In fact, when it comes to mobile phones, in this case I might go even further and say that it is not at all clear to me that we should be attempting to stop the bad guys from using mobile identities at all!

Surely it would be better to have criminals running around with iPhones, sending money to each other using mobile networks and generally becoming data points in the internet of things than to set rigorous, quite pointless identity barriers to keep them hidden.

[From Search Results SIM registration]

There’s a further point to make here, away from the exigencies of national security and the war on terror, and in the world of business. As the banks have long understood, the issue of identification is inextricably linked to liability. There’s a world of difference between me as an operator saying to a service provider that “this is subscriber XYZ and it’s the same person who logged in last time and it’s still the same handset and SIM” and saying to a service provider that “this is Dave Birch”. I know I sound like a broken record on this, but in the overwhelming majority of interactions, who you are is not the point. The point is whether you are allowed to do something, whether you have credit, whether you are a subscriber or whatever. Trying to work out who someone “really” is means a world of legal pain.

According to the Post, “…sources say Instagram, owned by Facebook, ran into “serious legal problems” over its verification process and has been forced to pause it. Some suspect Twitter, which also has a verification system, had an issue with Instagram’s.”

[From Instagram is no longer verifying accounts – Business Insider]

Therefore it seems to me that, in business terms, it makes sense for service providers to rely on bank identification, since banks already have to comply with know-your-customer regulation. For this to work, however, there must be a kind of identity “safe harbour” from zealous prosecutors (i.e., if the person turns out to be using a false identity then the liability rests with the bank, but if the bank has followed KYC procedures then it has no liability), otherwise the wheels of commerce will become gummed up with identity junk.
