Is your mobile banking app exposed by someone else’s software?

This post was written in collaboration with Neal Michie, Director, Product Management, Verimatrix.

Banks are facing massive disruption and change from many directions. The rise of app-only banks has made compelling app services an imperative for traditional banks. Banks have, of course, been building mobile apps for several years; if they are not already, these apps will soon be the most important channel for engaging with and serving customers. However, mobile banking apps will also become the primary focus of hackers intent on getting access to other people’s information and money.

How have mobile banking apps evolved?

From lightweight apps supporting basic banking functions, they have evolved into full-service branches, offering all manner of sophisticated features such as biometric identification, payments, loyalty and personal finance management driven by data science and machine learning.

Some of the things you might expect to find in feature-rich mobile apps include:

  • Numerous APIs supporting the large number of features – creating a large attack surface through API Abuse.
  • A mixture of native, managed and web code – providing different levels of control over how sensitive data is handled.
  • Dozens of dependencies on third party SDKs and libraries – each outside of the direct control of the bank.
  • Hundreds of thousands of lines of code – making isolating and auditing critical security functionality very difficult.
  • Use of WebViews to render web-code – providing great flexibility but making the app more reliant on external services for its security.

Third-party components are a particular issue

The use of third-party components may expose banks to new security risks. If these risks are not carefully managed and the customer data is compromised, then banks risk regulatory fines and reputation damage, not to mention the inconvenience and worry caused to customers.

To give you an idea of the size of the issue: third-party components can provide all manner of functions, such as remote user monitoring, instrumentation, error and crash reporting, profiling, binding the user to the device, analytics, cryptography, UI enhancements, financial charts and many more. Components may be proprietary or open source. They are integrated into the app, where they may gather or process data and connect to third-party backend servers that may or may not be under the control of the bank.

The risk isn’t just theoretical. The well-publicised attack on British Airways breached the airline through known vulnerabilities in third-party code. The business risk: a fine equivalent to 1.9% of revenue and long-term reputational harm – we are still talking about it two years later.

While not exhaustive, some of the main security issues to watch out for include:

  • Auditing: Banks may not have access to the source code of proprietary third-party SDKs and libraries. This makes the task of ensuring that the software is safe much more difficult. Penetration testing techniques can be used to monitor the behaviour of the component, but this is no substitute for a source code review.
  • Deployment model: Often third-party components need to connect to the third-party backend services. Sometimes those backend services have to be operated by the third party. Other times it may be possible to bring those backend services in-house. Obviously bringing them in-house gives more control to the bank to ensure any sensitive data stays within their infrastructure. Where this is not possible, then the bank may unwittingly introduce a “data processor” in GDPR terms into their service without the necessary oversight.
  • Known vulnerabilities: Third party components may in turn utilise various other proprietary or open source libraries. Some of these dependencies may have known vulnerabilities which leave the bank app exposed.
  • Potential information disclosure in transit: Third-party SDKs may not have adequate transport security enabled. For instance, no or weak TLS certificate pinning may be implemented, making any communication between mobile apps and the backend susceptible to man-in-the-middle attacks. This can potentially leak sensitive information in transit (see the pinning sketch after this list).
  • Potential information disclosure at rest: Third party SDKs may gather and handle sensitive customer or bank information, and this may be cached in the persistent storage without encryption or without being cleared from memory after use. A backup of the device/app can potentially expose sensitive information if not adequately protected.
  • Potential information disclosure due to misconfiguration: Backend service endpoints necessarily allow connections from the mobile apps. If these are misconfigured, then they may potentially leak sensitive information. A recent example was the exposure of personal data through misconfigured Google Firebase back-ends. Data exposed included email addresses, usernames, passwords, phone numbers, full names, GPS information, IP addresses and street addresses.
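
To make the pinning point above concrete, here is a minimal sketch in Python of the idea (the endpoint and digest are placeholders of mine, not anything from a real bank): the client refuses a TLS connection unless the server certificate matches a digest fixed at build time, so a man-in-the-middle certificate is rejected even if a trusted CA signed it.

    import hashlib
    import socket
    import ssl

    PINNED_HOST = "api.examplebank.com"  # hypothetical endpoint
    PINNED_SHA256 = "placeholder-hex-digest-of-expected-certificate"

    def connect_with_pinning(host: str, port: int = 443) -> ssl.SSLSocket:
        """Open a TLS connection, then refuse it unless the server's certificate
        matches the digest pinned at build time (on top of normal validation)."""
        context = ssl.create_default_context()
        sock = context.wrap_socket(socket.create_connection((host, port)),
                                   server_hostname=host)
        cert_der = sock.getpeercert(binary_form=True)  # DER-encoded certificate
        if hashlib.sha256(cert_der).hexdigest() != PINNED_SHA256:
            sock.close()
            raise ssl.SSLCertVerificationError(f"certificate pin mismatch for {host}")
        return sock

In practice, deployments usually pin the public key rather than the whole certificate, so that routine certificate renewal does not lock customers out.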

How can we mitigate the identified security risks?

  • Risk Assessment: A risk assessment of third-party components will help to determine whether using “black box” components is acceptable. Consult Hyperion’s Structured Risk Assessment (SRA) methodology is designed for exactly this, ensuring that technical decisions are made with the right business context.
  • Code review and penetration testing: Source code review and penetration testing should be a standard process in any bank, employed for every release. Banks should go beyond the use of automated scanning and analysis tools. These may help in catching common issues but will not cover everything. For third party components, the options available could include reverse engineering (which is costly) or dynamic testing, where the behaviour of the component and its external communications are monitored real-time as it is used.
  • Tamper detection and integrity protection: Banks should also utilise tamper and integrity protection to protect code and builds – often known as App Shielding or In-App Protection. This should cover the integration of any third-party components where possible – either separately or as a whole. By anchoring third-party libraries and obfuscating their boundaries, vulnerabilities become much harder to find and exploit. This additional layer of security can safeguard mobile banking apps and provide security assurance against reverse engineering and runtime hacks; a minimal sketch of the underlying integrity-check idea follows this list.
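
By way of illustration only – commercial app shielding does far more (obfuscation, anti-debugging, runtime hook detection) – here is a minimal sketch of the integrity-check idea in Python, with hypothetical file names and digests:

    import hashlib
    from pathlib import Path

    # Digests computed at build time for the artefacts being protected; a real
    # shielding product would hide or derive these rather than store them in the clear.
    EXPECTED_DIGESTS = {
        "lib/libthirdparty.so": "placeholder-sha256-hex-digest",
    }

    def integrity_intact(app_root: str) -> bool:
        """Return False if any protected file no longer matches its build-time SHA-256."""
        for rel_path, expected in EXPECTED_DIGESTS.items():
            data = Path(app_root, rel_path).read_bytes()
            if hashlib.sha256(data).hexdigest() != expected:
                return False
        return True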

Conclusion

Protecting the personal and financial information of customers is a fundamental responsibility of banks, and doing so in mobile banking apps is more important than ever. The use of third-party components makes this more challenging. The contract a bank has with a third party may provide only very limited protection when things go wrong, and it will certainly not cover the detrimental risk of losing customer trust and business. Great care is therefore needed when choosing, integrating and deploying third-party components. On the other hand, third-party components allow banks to provide their customers with better services more quickly, so it is imperative that banks employ best practice to ensure they serve their customers well and maintain their trust.

About the Co-Author
Neal Michie

Neal Michie serves as Director of Product Management for Verimatrix’s award-winning code protection solutions. Helping countless organisations instil trust in their IoT and mobile applications, Neal oversees Verimatrix’s foundational security products, which are relied upon by some of the world’s largest organisations. He champions the need to position security as a first-order concern for IoT companies, seeking to elevate code protection by positioning it as a sales enabler and brand protector. Neal brings more than 18 years of software development experience and has spent the last decade building highly secure software solutions, including the first to be fully certified by both Mastercard and Visa. A graduate of Heriot-Watt University with an advanced degree in electrical and electronics engineering, Neal is an active and enthusiastic participant in Mobey’s expert groups.

The Machine Stops – Predictions & Reflections on Technology Strategy

Predictions from 1909

This essay is about a work of science-fiction, many features of which have come to pass. I re-read it this week, as it seemed that even more might do so, and not necessarily to our advantage, in the world of Covid-19, and I wanted to confirm or deny my memory. In any case, science-fiction is a great background for technology strategising, helping to get beyond limited thinking based on incrementalism.

I took my English Literature ’O’ Level in 1974 and three works from the syllabus have stayed with me since: Macbeth, Lord of the Flies (which I had read a couple of years earlier) and one that no-one’s ever heard of: a science-fiction short story, The Machine Stops, by E.M. Forster. That’s right, E.M. Forster, better known for acute observation of middle-class Edwardian manners (A Passage to India, A Room with a View, Howards End…). Apparently, he wrote it to demonstrate how easy it was to generate science-fiction akin to H.G. Wells. Indeed, it bears a certain resemblance to The Time Machine, except for an inversion: in Forster’s dystopian far-future, the effete leisured class live underground, while the rough outlaws live on the surface.

Forster’s ‘civilised’ tribe live in a world of pure ideas, only loosely connected, if at all, with sensory perception. I think what I found shocking was the protagonist flying over the Himalayas, glancing out and immediately shutting the blind, with the dismissive thought “no ideas here”. Having shuttled back and forth between England, Australia and America for much of my life until then, at a time when few did, I was appalled. I used to strain to remain awake, whenever it was even half-light, in order to take in everything, and speculate (and later research) on the physical make-up of the land and the people it supported. In fact, I still do!

Air travel was by fleets of airships, so Forster backed the wrong aeronautical horse, so to speak. He did, though, explicitly state that civilisation had given up the dream of beating the sun in westward travel; so have we, having attained it in a limited fashion with Concorde for not quite three decades, and partly for the same reason: the availability of real-time electronic communication.

The civilised world is run by ‘the Machine’: a kind of internet, with mechanical appendages; imagine that the Internet of Things is an established reality. FaceTime has been invented, and so has Zoom: people’s time is mostly spent in isolation in their identical cells, giving or receiving webinars on abstruse but useless topics. Alexa will pick up on any expression of discomfort, and diagnostic kit and treatments will be lowered from the ceiling, in the manner of oxygen masks in planes. People never travel to things, but things to people, as if by Amazon. “And of course she had studied the civilization that had immediately preceded her own — the civilization that had mistaken the functions of the system, and had used it for bringing people to things, instead of for bringing things to people. Those funny old days, when men went for change of air instead of changing the air in their rooms!”. Not all the predictions had come true by 2020; Google was just a big book, which everyone had, principally as a manual for getting the Machine to satisfy all reasonable wants.

The natural atmosphere was supposed to be incapable of supporting human life, and a respirator was needed at all times in the unusual event that anyone had—how shall we say—a reasonable excuse to leave the home. I re-read the story partly to determine why that was, imagining disease. Actually, the supposition was either false or greatly exaggerated; what was the case was that the atmosphere stimulated the senses in a way that overwhelmed those used, and possibly adapted, to the sterile air produced by the machine. Notwithstanding the lack of a pandemic, it was certainly the case that humans physically repelled each other and social distancing was the norm.

The denouement has an increasing level of seemingly random and, at first, minor breakdowns in the operation of the machine. In my mind, these were because the machine’s designers could not anticipate all changes in its external environment.

There is, however, a ‘mending apparatus’ which automatically patches the machine. But when that starts to malfunction… The moral is that society should not, by becoming completely dependent on its own creations, become detached from understanding the nuts and bolts of technology. That is something your favourite consultants will never do!

Back to the story. It is clear that the Chinese had taken over the world at some earlier time. Perhaps when, as now, they concerned themselves with acquiring and applying the whole gamut of technical skills.

PSD2, Curtains for Direct Carrier Billing?

The Second Payment Services Directive, aka PSD2, contains much that is admirable, some that is debatable and yet more that is downright mysterious. As we await the forthcoming final version of the Regulatory Technical Standards (RTS) on Strong Customer Authentication (SCA), putting everyone on a 21-month implementation cycle, I thought I’d cast an eye over one of the as yet largely unexplored areas of the directive: namely the exclusion of direct carrier billing (DCB) from its scope. Like so much in PSD2, no exemption comes without penalty.

It’s the directive itself that excludes direct carrier billing from regulation, in Article 3, where it specifically excludes:

(l) payment transactions by a provider of electronic communications networks or services provided in addition to electronic communications services for a subscriber to the network or service:

(i) for purchase of digital content and voice-based services, regardless of the device used for the purchase or consumption of the digital content and charged to the related bill; or

(ii) performed from or via an electronic device and charged to the related bill within the framework of a charitable activity or for the purchase of tickets;

provided that the value of any single payment transaction referred to in points (i) and (ii) does not exceed EUR 50 and:

— the cumulative value of payment transactions for an individual subscriber does not exceed EUR 300 per month, or

— where a subscriber pre-funds its account with the provider of the electronic communications network or service, the cumulative value of payment transactions does not exceed EUR 300 per month;

If you care to deconstruct this, it means that PSD2 doesn’t apply to direct carrier billing – payments made using a subscriber’s existing mobile account – if the subscriber doesn’t spend more than €300 a month or pay more than €50 in any single payment. This is a useful exclusion for network operators and providers of DCB services, but it does rather put a limit on any ambitions to extend and grow these services into genuine competitors for consumer payments. The exclusion also doesn’t apply to physical goods, limiting any expansion plans in that area.
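
As a minimal sketch of those limit checks (the field names are my own; the EUR 50 and EUR 300 figures come from the directive text quoted above):

    from dataclasses import dataclass

    SINGLE_TX_LIMIT_EUR = 50   # per-transaction cap in the exclusion
    MONTHLY_LIMIT_EUR = 300    # cumulative per-subscriber cap per month

    @dataclass
    class DcbTransaction:
        amount_eur: float
        month_spend_so_far_eur: float  # subscriber's carrier-billed spend this month

    def within_psd2_exclusion(tx: DcbTransaction) -> bool:
        """True if a carrier-billed payment for digital content, tickets or
        charity still falls inside the Article 3 exclusion."""
        return (tx.amount_eur <= SINGLE_TX_LIMIT_EUR and
                tx.month_spend_so_far_eur + tx.amount_eur <= MONTHLY_LIMIT_EUR)

    # A €40 purchase by a subscriber who has already spent €280 this month
    # breaches the monthly cap, so PSD2 (and potentially SCA) applies.
    print(within_psd2_exclusion(DcbTransaction(40, 280)))  # False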

Fail to meet those conditions and DCB automatically falls into the jaws of the RTS on Strong Customer Authentication, requiring two-factor authentication to be applied, subject to the normal exemptions not being invoked. Given that banks, who have a track record of applying authentication to consumer payments, are finding the SCA requirements challenging to meet, it’s not immediately obvious how mobile operators are going to address this, although you’d imagine that they could use the mobile handset itself as the possession factor. Nonetheless, forcing customers to enter passwords or implementing a handset-based biometric through an app isn’t going to do anything for the customer payment experience, which hitherto has largely been invisible.

The problem is that doing nothing is not an option. Not implementing SCA means capping the amount customers can spend each month, and failing to do that will mean customers have the automatic right to apply for a refund, as payments over the limit will, in PSD2 terms, be unauthorised. T&Cs will need to be rewritten to make sure the operators can get their money back, although in the absence of regulatory guidance it’s not clear that the directive wouldn’t override them – if PSD2 is about one thing, it’s the pre-eminence of consumer rights. Oh, and go over that limit and the operator will find itself considered a payment service provider under the regulatory conditions of PSD2, with all that that entails.

Some DCB providers have already taken the initiative and become Electronic Money Institutions, which means they don’t have to worry about the restrictions but do have to suffer the slings and arrows of Strong Customer Authentication, outrageous or otherwise. Others seem, so far, less bothered, although no doubt the proposed regulatory penalties, when published, will concentrate minds. What’s really interesting is that the other side of PSD2 – the so-called XS2A, Access to Account, via bank-implemented APIs – actually opens up a real opportunity for any mobile operator or DCB player smart enough to spot it. After all, if you can connect to any consumer’s bank account to draw funds or examine their spending patterns, you’re halfway to a pervasive retail consumer payments solution.

As for the other half, well that’s what we at Consult Hyperion are paid to solve. We think that the elements to allow this are already in place, all it needs now is someone with the foresight to take advantage of them. At that point the European Commission may well get the kind of innovation and competition in consumer payments that it desires, but in the meantime we’ll just continue twiddling our thumbs waiting for the RTS.

Crossing continents for knowledge sharing

Chyp believes that collaboration and knowledge sharing across markets can help advance the industry, and this is particularly true in transport ticketing. For example, we have found that our work for TfL, with its large population and high journey count, is not all directly applicable to smaller countries, which cannot make such significant investments in infrastructure to serve small populations.

[Photo: the MMRDA delegation from Mumbai visiting Transport for the North in Leeds]

Recently, we have been working for MMRDA in Mumbai, India. While the environment is very different in some respects compared to the UK, they have large passenger numbers and administer a system that makes extensive use of private transport operators – two factors similar to Transport for the North (TfN).

Sharing knowledge not only speeds deployments to market but also creates a trusted and credible environment. MMRDA asked Chyp to facilitate meetings for them in the UK with transport operators and suppliers, so that they could learn from those who have done it before or are planning to deliver a similar project. The result was a tour of the UK starting in London and taking in Transport for the North. The picture above shows the meeting, which was held in Leeds and included presentations from:

Transport for the North

  • Alastair Richards (Director Integrated and Smart Travel (IST))
  • Jo Tansley Thomas (Programme Manager (IST))
  • John Elliott (ABT Back Office Requirements Team Lead (Consult Hyperion))

MMRDA

  • Ashish Chandra (PWC India)

Partnerships are hard to form. We hope that MMRDA will benefit from the organisations they met and from their shared experience of planning and deploying ABT in complex environments in the UK, remembering that differences can be as instructive as similarities.

#IDIoT, Part 97: Wearables again

In the July 2000 edition of Harper’s Magazine, Dennis Cass wrote about Silicon Valley:

Let’s go Silicon Valley! Wherein the author stalks the flighty, green-backed webhead in his natural habitat

From Let’s go | Harper’s Magazine

He wrote about “the kinds of things you’ve heard bores like Nicholas Negroponte drone on about in Wired magazine, like shoes that can send e-mail to other shoes”. I wrote this down at the time, because I remember thinking it was an interesting perspective from a non-technologist looking at what technologists were doing. And it was a funny example. Shoes that can send e-mail to other shoes!

Yesterday, through the miracle of Twitter, I noticed that this dystopia is almost upon us.

Smart Shoes You Can Control With Your Smartphone.

From Smart Shoes You Can Control With Your Smartphone

It’s only taken a couple of decades to get to this point, but it’s something to celebrate. Even our shoes will be getting hacked from now on.

SMS authentication isn’t security. And that’s official

Earlier in the week I blogged about mobile banking security, and I said that in design terms it is best to assume that the internet is in the hands of your enemies. In case you think I was exaggerating…

The thieves also provided “free” wireless connections in public places to secretly mine users’ personal information.

From Gone in minutes: Chinese cybertheft gangs mine smartphones for bank card data | South China Morning Post

Personally, I always use an SSL VPN when connected by wifi (even at home!) but I doubt that most people would ever go to this trouble or take the time to configure a VPN and such like. Anyway, the point is that the internet isn’t secure. And actually SMS isn’t much better, which is why it shouldn’t really be used for securing anything as important as home banking.

The report also described how gangs stole mobile security codes – which banks automatically send to card holders’ registered mobile phones to verify online transactions – by using either a Trojan virus in the smartphone or a device that intercepted mobile signals up to a kilometre away.

From Gone in minutes: Chinese cybertheft gangs mine smartphones for bank card data | South China Morning Post

Of course, no-one who takes security seriously ever wanted to do things this way in the first place (which is why, for example, we used a SIM Toolkit application for M-PESA). This is hardly a new opinion or me going on about things with the wisdom of hindsight.

I saw Charles Brookson, the head of the GSMA security group, make a very interesting point recently. Charles was talking about the use of SMS for mobile banking and payment services and he made the point that SMS has, to all intents and purposes, no security whatsoever.

From SOS SMS | Consult Hyperion

In case you’re interested, that blog post comes from 2008 and if I remember correctly I’d made a presentation around that time drawing on a story from 2007 to illustrate that the mass market use of SMS for secure transactions might prove to be unwise despite the convenience.

Identity theft and a fraudulent SIM swap cost a children’s charity R90 000.

From Standard, MTN point fingers in fraud case | ITWeb

These are all symptoms of the fact that nobody listens to me about mobile banking security. Well, sort of. I’m sure other people have made the same point about keeping private keys in tamper-resistant hardware so that all bank-customer communications are securely encrypted and digitally signed at all times, but since I’ve been making the same point for two decades (back to the days of the proposed “Genie Passport” at BT Cellnet), and despite the existence proof of M-PESA, nothing much seems to be happening. Or at least it wasn’t. But perhaps this era is, finally, coming to an end. Here is what the US Department of Commerce’s National Institute of Standards and Technology (NIST) says about out-of-band (OOB) text messaging in its latest Digital Authentication Guideline (July 2016):

OOB using SMS is deprecated, and will no longer be allowed in future releases of this guidance.

From DRAFT NIST Special Publication 800-63B

I looked up “deprecated” just to make sure I understood, since I assumed it meant something other than a general disapproval. According to my dictionary: “(chiefly of a software feature) be usable but regarded as obsolete and best avoided, typically because it has been superseded: this feature is deprecated and will be removed in later versions”. So: as of now, no-one should be planning to use SMS for authentication.

The NIST guideline goes on to talk about using push notifications to applications on smartphones, which is how we think it should be done. But how should this work in the mass market? The banks and the telcos and the handset manufacturers and the platforms just do not agree on how it should all work. But surely we all know what the answer is: all handsets should have a Trusted Execution Environment (as iPhones and Samsungs do) and third parties should be allowed access to it on open, transparent and non-discriminatory terms. The mobile operators should use the SIM to offer basic digital identity services (as indeed some are beginning to do with the GSMA’s Mobile Connect). The banks should use standard identity services from the SIM and store virtual identities in the TEE. There you go, sorted.

[Note: there’s no need to read this paragraph if you don’t care what happens under the hood] Now, when the Barclays app loads up on my phone it would bind the digital identity in my SIM to my Barclays identity and use the TEE for secure access to resources (e.g. the screen). Standard authentication services via FIDO should be in place so that Barclays can request appropriate authentication as and when required. Then when Barclays want to send me a message they generate a session key and encrypt the message. Then they encrypt the session key using the public key in my Barclays identity. Then they send the message to the app. The only place in the world where the corresponding private key exists is in my SIM, so the app sends the encrypted session key to the SIM and gets back the key it can then use to decrypt the message itself. In order to effect the use of the private key, the SIM requires authentication, so the TEE takes over the screen and the fingerprint reader and I swipe my finger or enter a PIN or whatever. (You could, of course, in true Apple style simply ignore the SIM and put the private key in the TEE, but I don’t want to get sidetracked.)
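
For the technically minded, here is a sketch of that hybrid scheme using the pyca/cryptography package, with an in-memory RSA key standing in for the SIM-resident key pair – everything here is illustrative, not Barclays’ actual protocol:

    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    # Stand-in for the key pair whose private half would live only in the SIM.
    sim_private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    bank_held_public_key = sim_private_key.public_key()

    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    # Bank side: fresh AES session key; message encrypted under it;
    # session key wrapped with the customer's public key.
    session_key = AESGCM.generate_key(bit_length=256)
    nonce = os.urandom(12)
    ciphertext = AESGCM(session_key).encrypt(nonce, b"Your balance is 100 pounds", None)
    wrapped_key = bank_held_public_key.encrypt(session_key, oaep)

    # App side: only the SIM can unwrap the session key (after the TEE has
    # authenticated the user); the app then decrypts the message itself.
    unwrapped_key = sim_private_key.decrypt(wrapped_key, oaep)
    print(AESGCM(unwrapped_key).decrypt(nonce, ciphertext, None))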

Why is this all so hard? Why don’t I have a secure “Apple Passport” or “Telefonica Passport” or “British e-Passport” on my iPhone right now with secure visas for all the places I want to visit like my bank and Manchester City Football Club and Waitrose?

It seems to me that there is little incentive for the participants to work together so long as each of them thinks that they can win and control the whole process. Apple and Google and Samsung and Verizon and Vodafone all want to charge the bank a dollar per log in (or whatever) and the banks are worried that if they pay up (in what might actually be a reasonable deal at the beginning) then they will be over a barrel in the mass market. Is it possible to find some workable settlement between these stakeholders so that we can all move on? Or a winner?

The internet of blockchains, or something

I’ve said a few times that I think the Internet of Things is where mobile was a couple of decades back. Some of us had mobile phones, and we loved them, but we really didn’t see what they were going to turn into. I mean, I was always bullish about mobile payments, but even so… the iPhone 6s that’s next to me right now, playing “Get Out Of Denver” by Eddie & the Hot Rods through a Bluetooth speaker, is far beyond anything that I might have imagined when dreaming of texting a Coke machine to get a drink. We’re in the same position now: some of us have rudimentary Internet of Things bits and bobs, but the Internet of Things itself will be utterly beyond current comprehension.

Specialized elements of hardware and software, connected by wires, radio waves and infrared, will be so ubiquitous that no one will notice their presence

From The Computer for the 21st Century – Scientific American

That was Mark Weiser’s prediction of the Internet of Things from 1991. It seems pretty accurate, and a pretty good description of where we are headed, with computers and communications vanishing from view, embedded in the warp and weft of everyday life. What I’m not sure Mark would have spent much time thinking about is what a total mess it is. Whether it’s wireless kettles or children’s toys, it’s all being hacked. This is a point that was made by Ken Munro during his epic presentation – of smart TVs that spy on you, doorbells that give access to your home network and connected vibrators with the default password of “0000” – at Consult Hyperion’s 19th annual Tomorrow’s Transactions Forum back in April. I’d listen to Ken about this sort of thing if I were you.

Speaking during a Q&A session for the upcoming CRN Security Summit, Ken Munro, founder of Pen Test Partners, claimed that security standards are being forgotten in the stampede to get IoT devices to market.

From Security standards being forgotten in IoT stampede, says expert | CRN

We’ve gone mad connecting stuff up, just because we can, and we don’t seem concerned about the nightmare in the making. I gave a talk about this at Cards & Payments Australia. The point of my talk was that I’m not sure how financial services can begin to exploit the new technology properly until something gets done about security. There’s no security infrastructure there for us to build on, and until there is I can’t see how financial services organisations can do real business in this new space: allowing my car to buy its own fuel seems a long way away when hackers can p0wn cars through the interweb tubes. I finished my talk with some optimism about new solutions by touching on the world of shared ledgers. I’m not the only one who thinks that there may be a connection between these two categories of new, unexplored and yet to be fully understood technology.

Although I’m a little skeptical of the oft-cited connection between blockchains and the Internet of Things, I think this might be where a strong such synergy lies.

From Four genuine blockchain use cases | MultiChain

The reason for the suspicion that there may be a relationship here is that one of the characteristics of shared ledger technology is that, in an interesting way, it makes the virtual world more like the mundane world. In the mundane world, there is only one of something. There’s only one of the laptop that I’m writing this post on, only one of the chair that I’m sitting on, and only one of the hotel room that I’m sitting in. In the mundane world you can’t clone things. But in the virtual world, you can: if you have a virtual object, it’s just some data, and you can make as many copies of it as you want. A shared ledger, however, can emulate the mundane world in the sense that if there is a ledger entry recording that I have some data, then once I transfer the data to you, it’s yours and no longer mine. The obvious example of this in practice is of course Bitcoin, where this issue of replication is the “double spending problem” well known to electronic money mavens.
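
Here is a toy illustration of the point (everything about it is made up, and real shared ledgers add consensus, signatures and replication): once ownership is a ledger entry, “spending” an asset moves it rather than copying it, which is what Bitcoin’s double-spend machinery enforces at scale.

    class ToyLedger:
        """A toy single-operator ledger demonstrating the ownership rule."""
        def __init__(self):
            self.owner_of = {}  # asset id -> current owner

        def issue(self, asset: str, owner: str):
            self.owner_of[asset] = owner

        def transfer(self, asset: str, sender: str, recipient: str):
            # The double-spend check: you can only spend what the ledger
            # currently records as yours.
            if self.owner_of.get(asset) != sender:
                raise ValueError(f"{sender} does not own {asset}")
            self.owner_of[asset] = recipient

    ledger = ToyLedger()
    ledger.issue("coin-1", "dave")
    ledger.transfer("coin-1", "dave", "alice")  # fine: alice now owns it
    ledger.transfer("coin-1", "dave", "bob")    # raises: dave can't spend it twice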

The idea of applying the blockchain technology to the IoT domain has been around for a while. In fact, blockchain seems to be a suitable solution in at least three aspects of the IoT: Big Data management, security and transparency, as well as facilitation of micro-transactions based on the exchange of services between interconnected smart devices.  

From IoT and blockchain: a match made in heaven? | Coinfox

The idea of shared ledgers as a mechanism to manage the data associated with the thingternet, provide a security infrastructure for the thingternet, and provide “translucent” access for auditing, regulation, control and inspection of the thingternet strikes me as worth exploring. That’s not to say that I know which shared ledger technology might be best for this job, nor that I have any brilliant insight into the attendant business models. It’s just to say that shared ledgers might prove to be a solution to a class of problems a long way away from uncensorable value transfer.

Apple are right and wrong

I’m sure you’ve all seen this story by now.

Thousands of iPhone 6 users claim they have been left holding almost worthless phones because Apple’s latest operating system permanently disables the handset if it detects that a repair has been carried out by a non-Apple technician.

From ‘Error 53’ fury mounts as Apple software update threatens to kill your iPhone 6 | Money | The Guardian

Now, when I first glanced at this story on Twitter, my immediate reaction was to share the natural sense of outrage expressed by other commentators. After all, it seems to be a breach of natural justice that if you have purchased a phone and then had it repaired, it is still your phone and you should still be able to use it.

I have my Volvo fixed by someone who isn’t a Volvo dealer and it works perfectly. The plumber who came round to fix the leak in our bathroom a couple of weeks ago doesn’t work for the company that built the house, nor did he install the original pipes, and he had never fixed anything in our house before. (He did an excellent job, by the way, so hats off to British Gas HomeCare.)

If you read on, however, I’m afraid the situation is not so clear-cut, and I have some sympathy for Apple’s actions, even though I think they chose the wrong way to handle the obvious problem. Obvious problem? Yes.

The issue appears to affect handsets where the home button, which has touch ID fingerprint recognition built-in, has been repaired by a “non-official” company or individual.

From ‘Error 53’ fury mounts as Apple software update threatens to kill your iPhone 6 | Money | The Guardian

Now you can see the obvious problem. If you’re using your phone to make phone calls and the screen is broken, then what does it matter who repairs the screen, as long as they repair it properly? But if you’re using your phone to authenticate access to financial services using Touch ID, then it’s pretty important that no one has messed around with the Touch ID sensor to, for example, store copies of your fingerprint templates for later replay under remote control. The parts of the phone that other organisations depend on as part of their security infrastructure (e.g., the SIM) are not just components like any other, because they feature in somebody else’s risk analysis. In my opinion, Apple is right to be concerned. Charles Arthur has just posted a detailed discussion of what is happening.

TouchID (and so Apple Pay and others) don’t work after a third-party fix that affects TouchID. The pairing there between the Secure Element/Secure Enclave/TouchID, which was set up when the device was manufactured, is lost.

From Explaining the iPhone’s #error53, and why it puts Apple between conspiracy and rock-hard security | The Overspill: when there’s more that I want to say

Bricking people’s phones when they detect an “incorrect” Touch ID device in the phone is the wrong response, though. All Apple has done is make people like me wonder whether they should really stick with Apple for their next phone, because I do not want to run the risk of my phone being rendered useless because I drop it when I’m on holiday and need to get it fixed right away by someone who is not some sort of official repairer.

What Apple should have done is flag the problem to the parties who are relying on the risk analysis (including themselves). These are the people who need to know if there is a potential change in the vulnerability model. So, for example, it would seem to me entirely reasonable in the circumstances to flag the Simple app, tell it that the integrity of the Touch ID system can no longer be guaranteed, and then let the Simple app make its own choice as to whether to continue using Touch ID (which I find very convenient), make me type in my PIN, or use some other kind of strong authentication instead. Apple’s own software could also pick up the flag and stop using Touch ID. After all… so what?
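
In sketch form (every function name here is hypothetical, describing the behaviour I would have preferred rather than any actual Apple API): the platform exposes the integrity flag, and each relying app chooses its own fallback, instead of the phone being bricked.

    def sensor_pairing_intact() -> bool:
        """Stand-in for a platform call reporting whether the Touch ID hardware
        pairing established at manufacture is still intact."""
        return False  # e.g. after a third-party home-button repair

    def verify_fingerprint() -> bool:
        return True  # placeholder for the platform biometric prompt

    def verify_pin() -> bool:
        return input("PIN: ") == "1234"  # placeholder for a real PIN check

    def authenticate_user() -> bool:
        if sensor_pairing_intact():
            return verify_fingerprint()  # the convenient path
        # Integrity not guaranteed: fall back to a factor that does not depend
        # on the possibly tampered sensor, rather than disabling the phone.
        return verify_pin()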

Touch ID, remember, isn’t a security technology. It’s a convenience technology. If Apple software decides that it won’t use Touch ID because it may have been compromised, that’s fine. I can live with entering my PIN instead of using my thumbprint. The same is true for all other applications. I don’t see why apps can’t make their own decision.

Apple is right to take action when it sees evidence that the security of the touch ID subsystem can no longer be guaranteed, but surely the action should be to communicate the situation and let people choose how to adjust their risk analysis?

Putting biometrics in context

Last week, the Biometric Alliance Initiative (BAI), a European-funded project aiming to combine mass-market biometrics with bespoke certification processes, announced the availability of its new evaluation and certification benchmark. To anyone like me who has, at one time or another, been involved deeply enough in biometric implementation to appreciate the Tower of Babel it actually is, the BAI is a stepping stone towards easing the process.

The benchmark, for biometric technologies used in biometric-based non-governmental solutions, aims to enable the evaluation and certification of biometric technologies in a consistent manner that encompasses all their various aspects while establishing a common approach for laboratories.

[From: Biometric Alliance Initiative Press Release – December 2015]

You can see why this is needed. Biometric solutions, like most identification solutions, were not initially engineered for the mass market. They work brilliantly in closed-loop sovereign solutions, where security is of the utmost importance and the convenience parameter can be considered trivial.

The challenge with mass-market biometrics is not just a question of trade-offs between convenience and security, but also managing a range of issues from interoperability to environment. Not to mention the ageing factor, which could well be unkind to mass-market biometrics in the coming years, were the current roll-outs not sufficiently well designed. I thought it would be helpful to set out a few of the issues that might seem obvious but stand as challenges for mass-market biometrics.

Environment factors are complex. In contrast to most other verification methods, biometrics need to be split into a two-parameter equation: the biometric trait, and the biometric device. Both are characterised by behaviours which depend on the environment. Some optical fingerprint sensors, for instance, which technically take a picture of the fingerprint, might give very poor results under direct sunlight. With that in mind, how about an access control system for a building which does not work in spring and autumn, between 8h-9h and 16h-17h? And having dry skin certainly does not help. That is just one simple picture: the underlying technologies of other sensors can make them prone to other conditions, such as moisture or dirt, just as your biometric traits are. With that in mind, if you try to picture a small-scale solution consisting of varying technologies being rolled out by a multinational company across all of its branches, I think you are probably grinning sarcastically by now.

Performance, in terms of both transaction speed and precision (biometric error rates), is implementation-dependent. The specifics of each use case might dictate bespoke operational ranges. The performance of a biometric match, which is inherent not only to the biometric modality but also to the form factor, might be perfectly acceptable for one use case (e.g. payment) and fatal for another (e.g. transit). Think of hypothetical biometric gateways at London Waterloo tube station during peak hours, with some people not managing to get through due to false rejections and others taking ages to match, and you’ll appreciate the need for thorough testing and fine-tuning.
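
To see why the same matcher can suit one use case and not another, here is a small sketch with made-up scores: moving the decision threshold trades false rejections (bad for transit throughput) against false acceptances (bad for payments).

    def error_rates(genuine, impostor, threshold):
        """False rejection rate (genuine scores below the threshold) and
        false acceptance rate (impostor scores at or above it)."""
        frr = sum(s < threshold for s in genuine) / len(genuine)
        far = sum(s >= threshold for s in impostor) / len(impostor)
        return frr, far

    genuine_scores = [0.91, 0.84, 0.78, 0.95, 0.62]   # illustrative only
    impostor_scores = [0.30, 0.55, 0.71, 0.12, 0.44]  # illustrative only

    for t in (0.5, 0.7, 0.9):
        frr, far = error_rates(genuine_scores, impostor_scores, t)
        print(f"threshold={t}: FRR={frr:.0%}, FAR={far:.0%}")
    # Transit might tune for a low FRR to keep the gates moving; payments
    # would tune for a low FAR and accept the occasional retry.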

Interoperability builds mass markets, but biometric data formats – the way the biometric “image” is coded – are not always interoperable. With open ISO standards at one end of the spectrum, and an ever-increasing panel of vendor-specific biometric formats and solution-specific encrypted biometric data formats at the other, the biometric market can be confusing. Incompatibilities between different versions of the ISO standards make things even worse. Rolling out a biometric solution without prior analysis of the supported formats might feel like inserting a video CD into a video tape recorder – there is certainly a film on the video CD, and the video CD can certainly be inserted into the VCR (though you may not get it back), but you wouldn’t be able to watch the movie.

Security, or rather insecurity, in biometrics is not as straightforward as it seems. The brilliance of Tsutomu Matsumoto or the Chaos Computer Club cannot be denied… but, because there is always a but, gelatine cannot fool all types of sensors, nor can it be a threat in all use cases. Fake fingerprints certainly did work for Sean Connery in “Diamonds Are Forever” to get past Tiffany Case’s fingerprint scanner back in 1971, but I highly doubt any particular set of materials would be sufficient to fool all sensors with all types of users (I’m thinking of people like me with an abnormally high number of minutiae). Furthermore, security is not just the ease of fooling the sensor; it also involves other factors linked to the authentication (multi-factor solutions), the configuration chosen (more restrictive at the expense of convenience, or the opposite) or even the setting (assisted or automated).

Context is critical. Buying a cup of coffee and launching nuclear missiles are different contexts. The underlying technologies behind different biometric solutions are sensitive to different settings and to different requirements. And they are interdependent: some fancy solution exposed to an exotic environment could be more prone to security breaches, while being non-interoperable with other systems and slow.

The BAI framework takes a new modus operandi in addressing these specifics. The expertise of well-established players in the field of testing and certification, such as Elitt and Paycert, has helped turn the biometric factor into a feasible, transparent and repeatable testing and certification infrastructure. Other members, coming from varying perspectives ranging from potential end users to regulators, have contributed their respective viewpoints on the feasibility and efficiency of each aspect of this framework, giving it an empirical grounding.

Setting standards for any types of technology can be challenging. Setting the associated certification infrastructure is also challenging as it needs to be transparent, technically sound and of course repeatable – with consistent results when testing. For the payments industry, its major challenges will be technical compatibility – particularly the ability for the certification to adapt to use across all types of cards and payments devices – and security. Cardholder information is incredibly sensitive, and with high consequences for breaches, security will always be a high priority for users.

[Ludovic Verecque, Paycert’s view on the BAI]         

This approach aims at instilling high levels of trust not only amongst the wide spectrum of actors in the biometric market, but also amongst indirect players. The FIDO Alliance, for instance, which delegates the verification method of the authenticator (which could be biometric) to open implementation while focusing on its post-verification protocols, can only be strengthened if the biometric factor has been properly tried, tested and deemed fit for the context. The whole chain of trust could hence be made stronger, right from the biometric device through the whole of the FIDO protocols.

Ensuring context-appropriate implementations is the key to sustainable biometric solutions, and this is what the Biometric Alliance Initiative — to which I have been contributing for the past two years — is all about. I expect this benchmark to lead to a much wider use of a much wider range of biometrics in the mass market in the coming year.


Connecting is getting easier, disconnecting is getting harder

After I’d been blathering on at some event about how connecting things up is really easy but disconnecting them is really hard, someone sent me a link to a story illustrating an amusing case of the unexpected consequences of connectivity. A woman found out her husband was cheating on her with the nanny because he had photos and texts on his iPhone, which was linked by iCloud to her iPad.

Gwen Stefani apparently discovered Gavin Rossdale was cheating on her after discovering some explicit texts and photos on the family’s iPad.

[From A guide on how to not let an Apple device ruin your marriage – NY Daily News]

I didn’t know who Gwen Stefani was, so I went off to goggle her on my skyper (as England’s greatest living poet, John Cooper Clarke, would put it) hoping that she might be a junior minister at the Home Office or an executive at a technology company, but it turns out she’s a pop singer. Oh well. There’s no reason to expect pop stars to understand Apple’s settings any more than I do, so I put the story to one side. Until this morning, that is.

This morning I went through my browser history to try and find a page about a workshop that I was supposed to be going to. I couldn’t remember the name of the workshop, but I knew I’d been to the web site in the last day or two so I opened up my browser history. And found hundreds of web sites dealing with carpet remnants.

My wife and I are a very traditional couple. We share everything. It’s in our marriage vows. The bank account, the speeding tickets, the browser history. And we don’t have a nanny. So I don’t care about my wife seeing my browser history and she doesn’t care about me seeing hers. The reason I mention this episode though is to make a point: connecting things up is getting progressively easier, but working out who should be able to access what and when and under what circumstances is becoming increasingly complicated.

In fact, I’m tempted to say that it’s becoming so complicated that it will soon be beyond human comprehension. When I take a photo with my iPhone, I already have literally no idea where it will end up, and why some photos show up on my laptop and others don’t is completely baffling. (Although I have noticed that when I actually want to find a photo that I can remember taking a few months ago, I can never find it.)

Today it’s your photos, tomorrow it’s your financial transactions, soon it will be your identity that is unpredictably smeared through the interweb tubes with predictably chaotic results. Time for some thinking about identity partitioning and permissioning: more soon.

