I was delighted to be asked to present a keynote at the FIDO Authenticate Summit and chose to focus on digital identity governance, which is something of a hot topic at the moment. Little did I know that the day before my session was recorded the European Commission would propose a monumental change to eIDAS, the European Union’s digital identity framework – one of the main examples I was planning to refer to. I hastily skimmed the proposed new regulation before the recording but have since had the time to take a more detailed look.
As Consult Hyperion, and many other analysts, predicted, Covid-19 has driven the adoption and use of contact-free technology at the point of service. A recent survey funded by the National Retail Federation found that no-touch payments have increased for 69 percent of US retailers surveyed since January 2020. In May, Mastercard reported that 78% of all their transactions across Europe were contactless.
Fraudsters are always looking for ways to take advantage of potential weaknesses or even inexperience in new payment devices. A recent news story reported a man-in-the-middle attack in which two phones are used to relay and manipulate the transaction message between a stolen contactless card and the point-of-sale terminal.
As if lockdown were not bad enough, many of us are now faced with spending the next year with children unable to spend their Gap Year travelling the more exotic parts of the world. The traditional jobs within the entertainment and leisure sectors that could keep them busy, and paid for their travel, are no longer available. The opportunity to spend time with elderly relatives depends on the results of their last COVID-19 test.
I recognize that we are a lucky family to have such ‘problems’. However, they are representative of the issues we all face as we work hard to bring our families, companies and organizations out of lockdown. When can we open up our facilities to our employees, customers and visitors? What protection should we offer those employees that must or choose to work away from home? What is the impact of the CEO travelling abroad to meet new employees or customers, sign that large deal or deliver the keynote at that trade fair in Las Vegas?
QR codes are everywhere because anyone can read them, anyone can use them, anyone can write them. This is in part because there is no security infrastructure. The result in China, where there was little card infrastructure in place beforehand, was the near-ubiquity of QR in the world’s biggest mobile payments market.
“Ogilvy & Mather and Ipsos concluded in a survey of China’s mobile payment market that ‘[Chinese] mobile payment has permeated all aspects of life and changed basic, everyday habits.’”
It seemed to us that fraud would be an inevitable consequence of this QR-centric approach, and that is indeed what happened. Last year, for example, the South China Morning Post reported that in March 2017 some 90m Yuan were stolen via QR code scams in Guangdong alone (a suspect in one case was found to have replaced merchants’ legitimate barcodes with fake ones that embedded a virus to steal personal information) and that in China as a whole, a quarter of viruses and trojans were coming in via QR.
Now, while even the man who invented QR codes says that they are an interim technology, there’s no denying that they are here to stay. Hence it makes sense to find a way to make them more secure, and the obvious way to do this is two-factor authentication (2FA). It turns out that the Chinese regulators have come to the same conclusion and have implemented the equivalent of the European Union (EU) Second Payment Services Directive (PSD2) Regulatory Technical Standards (RTS) on Secure Customer Authentication (SCA).
“Under new rules released by the People’s Bank of China [in December 2017], all transactions over 500 yuan (US$76) will be subject to additional levels of verification. As the transaction value passes each trigger point – 1,000 yuan, 5,000 yuan and unlimited – so the security checks will increase.”
Introducing further authentication methods makes obvious sense. Just as in the UK we have contactless for low-value payments but 2FA for higher-value payments (ie, chip and PIN for cards or CDCVM for mobile), QR will be used for low-value payments but 2FA will be required for higher-value payments. Of course, in the Chinese system, QR works just as well on-line as in-person whereas in our system we don’t use chip and PIN online.
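The tiered, value-based step-up described in the PBOC quote above can be sketched in a few lines. The thresholds follow the figures quoted (500, 1,000 and 5,000 yuan); the authentication factor names are my own illustration, not taken from the actual PBOC rules.

```python
def required_authentication(amount_yuan):
    """Toy illustration of value-tiered step-up authentication.

    Thresholds follow the People's Bank of China tiers quoted above;
    the factor names at each tier are illustrative only.
    """
    if amount_yuan <= 500:
        return ["static QR scan"]                       # low value: QR alone
    elif amount_yuan <= 1000:
        return ["dynamic QR scan", "PIN"]               # first step-up tier
    elif amount_yuan <= 5000:
        return ["dynamic QR scan", "PIN", "device binding"]
    else:
        return ["dynamic QR scan", "PIN", "device binding", "biometric"]
```

The design point is the same one we make about contactless versus chip and PIN in the UK: the security applied should scale with the value at risk.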
This is where we (ie, the industry) should focus our efforts in 2018, since card-not-present fraud is currently growing at 9% per annum in the UK. So what is the way to use chip and PIN online? Well, we already know – it’s the combination of web and mobile browsing with mobile wallets for transactions. When I see a web form asking me to type in my card details – in 2018 already! – my heart sinks. I’ve used Apple Pay in-browser a couple of times now (which is the equivalent of using chip and PIN online, as it uses the token in the wallet on the iPhone to complete a web transaction) and I’m already frustrated that more web sites don’t use this kind of solution. If we put our minds to it, we can have online payments that are as ubiquitous as in China, but more secure.
I’ve been reading a lot of comment about the US EMV migration recently and there seems to be pretty universal condemnation of the process (some of it from me). In the UK, we had chip and PIN day (St. Valentine’s Day 2006) and that, pretty much, was that. But in the US, the migration has been piecemeal, confusing and fraught with problems. But why?
Critics have told me that banks opted for a signature versus a PIN code because it saves them large amounts of money by not having to store PIN codes for everyone. Banks, on the other hand, say they feared that their customers would have a difficult time remembering a four digit code.
As far as I know, neither of these is true. Some issuers preferred chip and signature because it has higher interchange, not because US consumers are morons who uniquely amongst the nations of the Earth cannot remember a four digit personal identification number (PIN) that they use several times every day. Merchants wanted PIN because the fraud rate on PIN is two orders of magnitude less than with signature. Consumers wanted speed and, since they were given that by the no-signature online-authorised stripe transactions that they were familiar with, there was no traction for contactless (which delivers speed and convenience in an EMV environment and provides fertile ground for mobile payments).
The typical US consumer approaches a POS with some trepidation, I imagine, since it is completely opaque as to the experience that awaits them. Tap, swipe, dip, PIN or sign, hand over the card or keep it… every transaction is an adventure. I suppose many stakeholders take the position that it doesn’t really matter because mobile and in-app are going to steadily erode card transactions (Jupiter is reporting that almost half of US consumers already use some form of contactless payment, and a fifth already use it every day – mostly Starbucks I’d imagine). At some point in the imaginable future, “tap and pay” and “app and pay” will together exceed both EMV and magnetic stripe transactions at retail point of sale (POS) and at this point (the plastic singularity or, as I prefer it, #cardmaggedon or the #cardocalypse) signature versus PIN will seem to our children something of a medieval argument along the lines of angels on the head of a PIN. Right now, though, it is still a live debate.
My own decidedly unscientific survey involved a shopping spree one recent morning to no fewer than seven different retail locations, which revealed exactly seven different chip-capable payment terminals instructing customers to “Please Swipe Card.”
However, until such time, we should probably make an effort to improve the user experience (UX) for the typical consumer and make cards work better for the merchants. As I recall from the excellent NYPAY discussion on the topic, US merchants are particularly aggrieved by the rise in chargebacks that they have seen over the past few months.
Chargebacks for card-present transactions increased 50% following the Oct. 1 EMV liability shift.
You understand why this is, I’m sure. It’s because before 1st October, if you spotted a $3.95 charge at Starbucks on your statement and you knew that you couldn’t possibly have made that transaction, then you would call up your issuer and complain and they would just eat the charge because it would have been more trouble than it’s worth to go back to Starbucks, pull the receipt, check the signature if there was one etc etc. However, after 1st October, if you spot a bogus $3.95 charge on your account and call up, the issuer will check the transaction codes and, if you had a chip card but it was swiped by a merchant who didn’t have (or didn’t use) a chip reader, then the $3.95 is charged back to the merchant. The net result is — entirely as expected and as it should be — that merchants see big increases in card-present chargebacks as previously hidden magnetic stripe fraud is revealed.
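The core of the liability shift, as described above, reduces to one simple rule for card-present counterfeit fraud. Real network rules have many more cases (ATMs, fuel pumps, lost-and-stolen cards); this sketch captures only the principle.

```python
def liability_for_counterfeit_fraud(card_has_chip, terminal_used_chip):
    """Toy sketch of the post-October-2015 US EMV liability shift for
    card-present counterfeit fraud. Only the headline rule is modelled."""
    if card_has_chip and not terminal_used_chip:
        # a chip card swiped on a magstripe-only (or unused) chip reader:
        # the charge is pushed back to the merchant
        return "merchant"
    # otherwise the issuer eats the fraud, as before the shift
    return "issuer"
```

Which is exactly why merchants without working chip readers saw the previously hidden stripe fraud land on their side of the ledger.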
A good way to reduce that previously hidden fraud would be to simply give customers the option to block magnetic stripe transactions from cards with a chip on them. Why are the banks not giving consumers the option to disable stripe transactions? My debit card has embossing and a magnetic stripe on it for absolutely no reason that I can fathom since I never use it at a non-chip ATM and in practice I don’t need it when abroad. I’ve just returned from trips to Rome and Munich where I never once used cash and never needed an ATM (I used my Caxton FX pre-paid card in shops and ticket machines and I used Uber for transport).
Proof that I was in Rome and that it’s not empty blog rhetoric.
I want my bank to auto-decline any magnetic stripe transaction made using my chip-enabled contactless debit card and I want the ability to set that parameter from my excellent mobile banking app. Why is this so difficult? Meanwhile, back in the US, the mounting annoyance with chip and PIN continues. Perhaps it’s time for the networks to announce the sunset date for magnetic stripes: perhaps 1st January 2019, after which time no new cards will be issued with magnetic stripes or embossing?
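The card control I’m asking for is a very small piece of authorisation logic. The field names here are invented for illustration; a real issuer host would read the POS entry mode from the ISO 8583 authorisation message and the preference from a card-controls service behind the mobile banking app.

```python
def authorise(txn, customer_prefs):
    """Sketch of a customer-set card control that auto-declines any
    magnetic stripe transaction on a chip-enabled card. Field names
    are hypothetical, for illustration only."""
    if customer_prefs.get("block_magstripe") and txn["entry_mode"] == "magstripe":
        return ("decline", "customer has disabled magnetic stripe")
    # all other entry modes fall through to normal authorisation checks
    return ("continue", None)
```

Nothing about this is technically difficult, which is rather the point: it is a product decision, not an engineering one.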
The latest CIFAS Fraudscape figures for the UK show identity theft up by half again in 2015. And there’s no end in sight. I’m genuinely not sure whether the fraudsters are getting smarter or the public is getting stupider. It does seem to me that some of the frauds being perpetrated might well be beyond the defensive capabilities of even the most advanced technology.
A taxpayer who bought and handed over £15,000 in Apple iTunes gift card vouchers is one of “hundreds” of HMRC customers to be defrauded in the past month, a scam bulletin says.
So much of the fraud going on depends, in one way or another, on the lack of an identity infrastructure and the useless proxies that support our daily interactions. That taxpayer had no reasonable way to determine whether they were talking to HMRC or not. There’s not going to be a green light on the phone that tells you the caller is who they say they are, although I can imagine some sort of digital passport that can check whether other digital passports are valid, and I’m sure someone could come up with a good mobile UX for it. The consequences are pretty significant.
The annual cost of fraud in the UK could be as high as £193bn a year, far higher than a government estimate of £50bn, according to a new report. The latest Annual Fraud Indicator, based on research from Portsmouth university, has estimated that private sector losses could be as high as £144bn a year — much larger than the public sector figure of £37.5bn. It also counted the cost of fraud against individuals.
Well, let’s not panic. After all, £193 billion doesn’t buy as much as it used to. Let’s call it £200 billion for a round figure. Against this, card fraud is a miserable half a billion, about a quarter of a percent. Hardly worth worrying about. And, of course, thanks to EMV and 3D Secure and all that, it’s going down. Oh wait…
Statistics by Financial Fraud Action (FFA) UK show fraud losses on UK payment cards totalled £567.5 million in 2015, representing an 18% increase from £479 million one year before.
OK, so it’s going up, but what should we be doing about it? Since there doesn’t seem to be much enthusiasm for a general identity infrastructure to actually fix the problem, we should probably continue to focus on better authentication against revocable tokens in tamper-resistant hardware for payments for the time being (although that really isn’t going to stop people from sending gift vouchers to the “inland revenue”) and then see if we can move that model into other areas. If I can have a token that says I can pay by Visa but does not give away my actual PAN, then why can’t I have a token that says I’m over 18 without giving away my age or allowed to drive a car without giving away my address?
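The shape of that attribute token idea can be sketched very simply: an issuer attests to an attribute (“over 18”) without the relying party ever seeing the underlying data (the date of birth). This toy uses an HMAC with a shared secret purely to keep the sketch self-contained; a real scheme would use public-key signatures or blinded credentials so the verifier needs no secret at all.

```python
import hmac, hashlib, json

# Toy attribute token: the relying party learns the attribute, not the
# underlying personal data. HMAC with a shared secret stands in for a
# real issuer signature; everything here is illustrative.

ISSUER_SECRET = b"demo-secret"  # stands in for the issuer's signing key

def issue_token(attribute):
    payload = json.dumps({"attr": attribute}).encode()
    tag = hmac.new(ISSUER_SECRET, payload, hashlib.sha256).hexdigest()
    return {"payload": payload, "tag": tag}

def verify_token(token, expected_attr):
    expected = hmac.new(ISSUER_SECRET, token["payload"], hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, token["tag"]):
        return False  # forged or tampered token
    return json.loads(token["payload"])["attr"] == expected_attr
```

The pub, the car hire desk or the website verifies the claim and nothing more, exactly as a payment token proves I can pay by Visa without disclosing my PAN.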
Earlier in the week I blogged about mobile banking security, and I said that in design terms it is best to assume that the internet is in the hands of your enemies. In case you think I was exaggerating…
The thieves also provided “free” wireless connections in public places to secretly mine users’ personal information.
Personally, I always use an SSL VPN when connected by wifi (even at home!) but I doubt that most people would ever go to this trouble or take the time to configure a VPN and such like. Anyway, the point is that the internet isn’t secure. And actually SMS isn’t much better, which is why it shouldn’t really be used for securing anything as important as home banking.
The report also described how gangs stole mobile security codes – which banks automatically send to card holders’ registered mobile phones to verify online transactions – by using either a Trojan virus in the smartphone or a device that intercepted mobile signals up to a kilometre away.
Of course, no-one who takes security seriously ever wanted to do things this way in the first place (which is why, for example, we used a SIM Toolkit application for M-PESA). This is hardly a new opinion or me going on about things with the wisdom of hindsight.
I saw Charles Brookson, the head of the GSMA security group, make a very interesting point recently. Charles was talking about the use of SMS for mobile banking and payment services and he made the point that SMS has, to all intents and purposes, no security whatsoever.
In case you’re interested, that blog post comes from 2008 and if I remember correctly I’d made a presentation around that time drawing on a story from 2007 to illustrate that the mass market use of SMS for secure transactions might prove to be unwise despite the convenience.
Identity theft and a fraudulent SIM swap cost a children’s charity R90 000.
These are all symptoms of the fact that nobody listens to me about mobile banking security. Well, sort of. I’m sure other people have made the same point about keeping private keys in tamper-resistant hardware so that all bank-customer communications are securely encrypted and digitally-signed at all times, but since I’ve been making the same point for two decades (back to the days of the proposed “Genie Passport” at BT Cellnet) and despite the existence proof of M-PESA nothing much seems to be happening. Or at least it wasn’t. But perhaps this era is, finally, coming to an end. Here is what the US Department of Commerce’s National Institute of Standards and Technology (NIST) say about out-of-band (OOB) text messaging in their latest Digital Authentication Guideline (July 2016):
OOB using SMS is deprecated, and will no longer be allowed in future releases of this guidance.
I looked up “deprecated” just to make sure I understood, since I assumed it meant something other than a general disapproval. According to my dictionary: “(chiefly of a software feature) be usable but regarded as obsolete and best avoided, typically because it has been superseded: this feature is deprecated and will be removed in later versions”. So: as of now, no-one should be planning to use SMS for authentication.
The NIST guideline goes on to talk about using push notifications to applications on smart phones, which is how we think it should be done. But how should this work in the mass market? The banks and the telcos and the handset manufacturers and the platforms just do not agree on how it should all work. But surely we all know what the answer is, which is that all handsets should have a Trusted Execution Environment (like the iPhones and Samsungs do) and third-parties should be allowed access to it on open, transparent and non-discriminatory terms. The mobile operators should use the SIM to offer basic digital identity services (as indeed some are beginning to do with the GSMA’s Mobile Connect). The banks should use standard identity services from the SIM and store virtual identities in the TEE. There you go, sorted.
[Note: there’s no need to read this paragraph if you don’t care what happens under the hood] Now, when the Barclays app loads up on my phone it would bind the digital identity in my SIM to my Barclays identity and use the TEE for secure access to resources (e.g. the screen). Standard authentication services via FIDO should be in place so that Barclays can request appropriate authentication as and when required. Then when Barclays want to send me a message they generate a session key and encrypt the message. Then they encrypt the session key using the public key in my Barclays identity. Then they send the message to the app. The only place in the world where the corresponding private key exists is in my SIM, so the app sends the encrypted session key to the SIM and gets back the key it can then use to decrypt the message itself. In order to effect the use of the private key, the SIM requires authentication, so the TEE takes over the screen and the fingerprint reader and I swipe my finger or enter a PIN or whatever. (You could, of course, in true Apple style simply ignore the SIM and put the private key in the TEE, but I don’t want to get sidetracked.)
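The message flow in that paragraph can be made concrete with a toy model. To keep it self-contained the sketch uses a hash-based XOR keystream as a stand-in for real ciphers, and because the stand-in is symmetric the “public” wrap key is just the same bytes as the SIM’s secret; in reality the SIM would hold an RSA or ECC private key and the bank only the public key. The structural point survives the simplification: the unwrap key never leaves the SIM, and releasing the session key requires user authentication.

```python
import os, hashlib

def _keystream(key, nonce, length):
    # Deterministic keystream from key+nonce: a toy stand-in for a real
    # cipher such as AES-GCM, used only to keep the sketch stdlib-only.
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def _xor(data, key, nonce):
    return bytes(a ^ b for a, b in zip(data, _keystream(key, nonce, len(data))))

class SimCard:
    """Toy SIM applet: the unwrap key is created here and never leaves."""
    def __init__(self):
        self._key = os.urandom(32)  # stands in for the SIM's private key
    def wrap_key_for_bank(self):
        # With real asymmetric crypto the bank would hold a public key only.
        return self._key
    def unwrap(self, wrapped_session_key, nonce, user_authenticated):
        if not user_authenticated:
            raise PermissionError("PIN or fingerprint required")
        return _xor(wrapped_session_key, self._key, nonce)

def bank_send(bank_wrap_key, message):
    session_key = os.urandom(32)                     # fresh key per message
    msg_nonce, key_nonce = os.urandom(16), os.urandom(16)
    ciphertext = _xor(message, session_key, msg_nonce)
    wrapped = _xor(session_key, bank_wrap_key, key_nonce)
    return {"ciphertext": ciphertext, "msg_nonce": msg_nonce,
            "wrapped_key": wrapped, "key_nonce": key_nonce}

def app_receive(sim, envelope):
    # The app never sees the unwrap key: it asks the SIM (after the user
    # authenticates) for the session key, then decrypts locally.
    session_key = sim.unwrap(envelope["wrapped_key"], envelope["key_nonce"],
                             user_authenticated=True)
    return _xor(envelope["ciphertext"], session_key, envelope["msg_nonce"])
```

Swap the toy cipher for AES-GCM and the wrap step for RSA-OAEP or ECIES and you have the hybrid scheme the paragraph describes.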
Why is this all so hard? Why don’t I have a secure “Apple Passport” or “Telefonica Passport” or “British e-Passport” on my iPhone right now with secure visas for all the places I want to visit like my bank and Manchester City Football Club and Waitrose?
It seems to me that there is little incentive for the participants to work together so long as each of them thinks that they can win and control the whole process. Apple and Google and Samsung and Verizon and Vodafone all want to charge the bank a dollar per log in (or whatever) and the banks are worried that if they pay up (in what might actually be a reasonable deal at the beginning) then they will be over a barrel in the mass market. Is it possible to find some workable settlement between these stakeholders so that we can all move on? Or a winner?
I very rarely use Internet banking these days and it seems I’m not alone. Almost every interaction with my bank takes place through one of my mobile banking applications: my Barclays banking application, my Barclays PingIt application (which I assume will soon disappear inside WhatsApp and Waitrose and Hailo and so on), my Simple application, my Barclaycard application, my American Express application and so on and so forth. Thinking about it, the only time I can remember using my home banking application in recent times was to search back through transactions to check on some payments on behalf of one of my kids at university and to set up a new payee for Faster Payments. Unusually, I appear to represent the man using the Clapham ISP in this respect, as the latest figures from the British Bankers’ Association show.
The number of internet banking logins made by Brits each day fell last year, as customers continued to migrate to apps, BBA research shows… The number of payments made using banking apps hit 347 million last year, a 54% rise. Internet banking still has the edge here, used for 417 million payments in 2015, but this was up just two per cent.
The BBC were kind enough to invite me to talk about this on Breakfast TV, because some of the members of the public that they had been talking to expressed concerns about the security of mobile banking. As this is a core area of expertise for Consult Hyperion (in fact, one of the biggest projects that we are working on right now deals with planning, executing and testing mobile app security strategy for one of the world’s biggest banks), I took the opportunity to reassure viewers that not only was mobile banking safe it was, in my opinion, much safer than internet banking. You can watch it here [at 25:50].
There are several reasons for this — the fact that the phone contains a smart card and tamper-resistant memory, the fact that the phone tracks you and (perhaps the most mundane of all) that if you lose your phone you notice fairly quickly — but the main point is that if you carry out any form of methodical risk analysis you will see that the mobile phone in essence offers a bundle of security countermeasures that work to reinforce each other. Of course we must be vigilant, but mobile security is doing OK.
Note also that mobile security extends across other channels: mobile is often used to secure internet login anyway. Right now this is often through the not-very-secure use of text messages but there are initiatives such as the GSMA’s Mobile Connect out there trying to introduce some real security. This is where I expect to see further real innovation in the not too distant future and why I keep posting repetitive tweets about annoying internet logins and anticipating the advent of Apple ID. Since just about everything on the Internet is insecure, the obvious way to improve the security of end applications is to (essentially) ignore the Internet completely in security terms. Just assume that everything sent across the Internet has no defence whatsoever against even the most basic assaults on integrity, confidentiality and availability. In planning terms, assume that the Internet is owned and operated by your nemesis! Thus, everything that goes across the Internet must be encrypted and digitally signed.
If we are going to do this then we need a place to store the private keys that are needed to make the encryption and signing work properly. We can’t store them inside PCs because by and large PCs are just as insecure as the Internet. But since everyone has a smart phone, a rather obvious thing to do is to store keys inside the tamper-resistant storage that the handsets provide. After all, if the “secure enclave” (Apple’s name for the ARM Trusted Execution Environment, TEE) inside your Apple iPhone is safe enough to store payment tokens then it is safe enough to store the variety of virtual identities that I need to operate in the online world. I’ll blog about how this might work in the banking case later in the week, but at this point I just want to re-iterate what I told the BBC. When it comes down to it, mobile isn’t merely as secure as the web, it’s much more secure than the web.
I’m sure you’ve all seen this story by now.
Thousands of iPhone 6 users claim they have been left holding almost worthless phones because Apple’s latest operating system permanently disables the handset if it detects that a repair has been carried out by a non-Apple technician.
Now, when I first glanced at this story on Twitter, my immediate reaction was to share the natural sense of outrage expressed by other commentators. After all, it seems to be a breach of natural justice that if you have purchased a phone and then had it repaired, it is still your phone and you should still be able to use it.
I have my Volvo fixed by someone who isn’t a Volvo dealer and it works perfectly. The plumber who came round to fix the leak in our bathroom a couple of weeks ago doesn’t work for the company that built the house, nor did he install the original pipes, and he has never fixed anything in our house before. (He did an excellent job, by the way, so hats off to British Gas HomeCare.)
If you read on however, I’m afraid the situation is not so clear-cut and I have some sympathy for Apple’s actions, even though I think they chose the wrong way to handle the obvious problem. Obvious problem? Yes.
The issue appears to affect handsets where the home button, which has touch ID fingerprint recognition built-in, has been repaired by a “non-official” company or individual.
Now you can see the obvious problem. If you’re using your phone to make phone calls and the screen is broken then what does it matter who repairs the screen, as long as they repair it properly? But if you’re using your phone to authenticate access to financial services using touch ID then it’s pretty important that no one has messed around with the touch ID sensor to, for example, store copies of your fingerprint templates for later replay under remote control. The parts of the phone that other organisations are depending on as part of their security infrastructure (e.g., the SIM) are not just components of the phone like any other component because they feature in somebody else’s risk analysis. In my opinion, Apple is right to be concerned. Charles Arthur just posted a detailed discussion of what is happening.
TouchID (and so Apple Pay and others) don’t work after a third-party fix that affects TouchID. The pairing there between the Secure Element/Secure Enclave/TouchID, which was set up when the device was manufactured, is lost.
Bricking people’s phones when an “incorrect” touch ID device is detected in the phone is the wrong response though. All Apple has done is make people like me wonder whether to stick with Apple for the next phone, because I do not want to run the risk of my phone being rendered useless when I drop it on holiday and need to get it fixed right away by someone who is not some sort of official repairer.
What Apple should have done is to flag the problem to the parties who are relying on the risk analysis (including themselves). These are the people who need to know if there is a potential change in the vulnerability model. So, for example, it would seem to me to be entirely reasonable in the circumstances to flag the Simple app and tell it that the integrity of the touch ID system can no longer be guaranteed and then let the Simple app make its own choice as to whether to continue using touch ID (which I find very convenient) or make me type in my PIN, or use some other kind of strong authentication, instead. Apple’s own software could also pick up the flag and stop using touch ID. After all… so what?
Touch ID, remember, isn’t a security technology. It’s a convenience technology. If Apple software decides that it won’t use Touch ID because it may have been compromised, that’s fine. I can live with entering my PIN instead of using my thumbprint. The same is true for all other applications. I don’t see why apps can’t make their own decision.
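The step-down behaviour argued for above is a one-liner for the app. The flag name here is invented (iOS exposes no such API today); the point is that the decision belongs to the relying application, not to a device-wide brick.

```python
def choose_authentication(platform_flags, user_has_pin=True):
    """Sketch of per-app step-down: when the platform flags that the
    Touch ID subsystem may have been tampered with, the app stops
    trusting the fingerprint and falls back to a PIN or password.
    The flag name is hypothetical, for illustration only."""
    if platform_flags.get("touch_id_integrity_compromised"):
        return "PIN" if user_has_pin else "password"
    return "Touch ID"
```

Each relying party adjusts its own risk analysis; the phone keeps working.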
Apple is right to take action when it sees evidence that the security of the touch ID subsystem can no longer be guaranteed, but surely the action should be to communicate the situation and let people choose how to adjust their risk analysis?