KYC at a distance

We live in interesting times. Whatever you think about the Coronavirus situation, social distancing will test our ability to rely on digital services. And one place where digital services continue to struggle is onboarding – establishing who your customer is in the first place.  

One of the main reasons for this is that regulated industries such as financial services are required to perform strict “know your customer” checks when onboarding customers and risk substantial fines in the event of compliance failings. Understandably then, financial service providers need to be cautious in adopting new technology, especially where the risks are not well understood or where regulators are yet to give clear guidance.

Fortunately, a lot of work is being done. This includes the development of new identification solutions and an increasing recognition that this is a problem that needs to be solved.

The Paypers has recently published its “Digital Onboarding and KYC Report 2020”. It is packed full of insights into developments in this space, features several Consult Hyperion friends and is well worth a look.

You can download the report here: https://thepaypers.com/reports/digital-onboarding-and-kyc-report-2020

Fraudsters target loyalty schemes for easier gains

It has become practically impossible to keep up with the number of loyalty-related security breaches. In today’s edition of “Who Got Hit?”, we read that Tesco is sending security warnings to 600,000 Tesco Clubcard loyalty members following fraudulent activity[1]. The breach is suspected to be the work of attackers trying to ‘brute-force’ their way into the loyalty system using stolen credentials, potentially obtained from a different breach. In recent years, fraud associated with loyalty schemes has been on the rise: according to a 2019 report by Forter, loyalty-related fraud increased by 89% over the previous year.

Consult Hyperion’s Live 5 for 2020

At Consult Hyperion we take a certain amount of enjoyment looking back over some of our most interesting projects around the world over the previous year or so, wrapping up thoughts on what we’re hearing in the market and spending some time thinking about the future. Each year we consolidate the themes and bring together our Live Five.

2020 is upon us and so it’s time for some more future gazing! Now, as in previous years, how can you pay any attention to our prognostications without first reviewing our previous attempts? In 2017 we highlighted regtech and PSD2, 2018 was open banking and conversational commerce, and for 2019 it was secure customer authentication and digital wallets — so we’re a pretty good weathervane for the secure transactions world! Now, let’s turn to what we see for this coming year.

Hello 2020

Our Live Five has once again been put together with particular regard to the views of our clients. They are telling us that over the next 12 months retailers, banks, regulators and their suppliers will focus on privacy as a proposition, customer intimacy driven by hyper-personalisation and personalised payment options, underpinned by a focus on cyber-resilience. In the background, they want to do what they can to reduce their impact on the global environment. For our transit clients, there will be a particular focus on bringing these threads together to reduce congestion through flexible fare collection.

So here we go…

1. This year will see privacy as a consumer proposition. This is an easy prediction to make, because serious players are going to push it. We already see this happening with “Sign in with Apple” and more services in this mould are sure to follow. Until quite recently privacy was a hygiene factor that belonged in the “back office”. But with increasing industry and consumer concerns about privacy, regulatory drivers such as GDPR and the potential for a backlash against services that are seen to abuse personal data, privacy will be an integral part of new services. As part of this we expect to see organisations that collect large amounts of personal data looking at ways to monetise this trend by shifting to attribute exchange and anonymised data analytics. Banks are an obvious candidate for this type of innovation, but not the only one – one of our biggest privacy projects is for a mass transit operator, concerned by the amount of additional personal information they are able to collect on travellers as they migrate towards the acceptance of contactless payment cards at the faregate.

2. Underpinning all of this is the urgent need to address cyber-resilience. Not a week goes by without news of some breach or failure by a major organisation putting consumer data and transactions at risk. With the advent of data protection regulations such as GDPR, these issues are major threats to the stability and profitability of companies in all sectors. The first step to addressing this is to identify the threats and vulnerabilities in existing systems before deciding how and where to invest in countermeasures.

Our Structured Risk Analysis (SRA) process is designed to help our customers through this process to ensure that they are prepared for the potential issues that could undermine their businesses.

3. Privacy and Open Data, if correctly implemented and trusted by the consumer, will facilitate the hyper-personalisation of services, which in turn will drive customer intimacy. Many of us are familiar with Google telling us how long it will take us to get home, or to the gym, as we leave the office. Fewer of us will have experienced the pleasure of being pushed new financing options by the first round of Open Banking Fintechs, aimed at helping entrepreneurs to better manage their start-up’s finances.

We have already demonstrated to our clients that it is possible to use new technology in interesting ways to deliver hyper-personalisation in a privacy-enhancing way. Many of these depend on the standardisation of Premium Open Banking APIs, i.e. APIs that extend the data shared by banks beyond that required by the regulators, into areas that can generate additional revenue for the bank. We expect to see the emergence of new lending and insurance services, linked to your current financial circumstances, at the point of service, similar to those provided by Klarna.

4. One particular area where personalisation will have immediate impact is in giving consumers personalised payment options, with new technologies being deployed such as EMVCo’s Secure Remote Commerce (SRC) and the W3C’s Payment Request API. Today, most payment solutions are based around payment cards, but increasingly we will see direct-to-account (D2A) payment options such as the PSD2 payment APIs. Cards themselves will increasingly disappear, to be replaced by tokenised equivalents which can be deployed with enhanced security to a wide range of form factors – watches, smartphones, IoT devices, etc. The availability of D2A and tokenised solutions will vastly expand the range of payment options available to consumers, who will be able to choose the option most suitable for them in specific circumstances. Increasingly we expect to see the awkwardness and friction of the end-of-purchase payment disappear, as consumers select the payment methods that offer them the maximum convenience for the maximum reward. Real-time, cross-border settlement will make many of our commerce transactions completely transparent. Many merchants are confused by the plethora of new payment services and are uncertain about which will bring them more customers and therefore which they should support. Traditionally they have turned to the processors for such advice, but mergers in this field are not necessarily leading to clear direction.

We know how to strategise, design and implement the new payment options to deliver value to all of the stakeholders and our track record in helping global clients to deliver population-scale solutions is a testament to our expertise and experience in this field.

5. In the transit sector, we can see how all of these issues come together. New pay-as-you-go systems based upon cards continue to roll out around the world. The leading edge of Automated Fare Collection (AFC) is, however, advancing. How travellers choose to identify themselves, and how they choose to pay are, in principle, different decisions, and we expect to see more flexibility. Reducing congestion and improving air quality are of concern globally; they are best addressed by providing door-to-door journeys without reliance on private internal combustion engines. This will only prove popular when it is ultra-convenient. That means that payment for a whole journey (or collection of journeys) involving, say, bike/ride share, tram and train must be frictionless and support the young, old and in-between alike.

Moving people on to public transport by making it simple and convenient to pay is how we will help people to take practical steps towards sustainability.

So, there we go. Privacy-enhanced resilient infrastructure will deliver hyper-personalisation and give customers more safe payment choices. AFC will use this infrastructure to both deliver value and help the environment to the great benefit of all of us. It’s an exciting year ahead in our field!



Horizon Brief

I had the pleasure of attending a “Horizon Brief” organised by the Centre for the Study of Financial Innovation for Dentons. The well-informed speakers, ably chaired by Andrew Hilton (Director of the CSFI), were lawyer Dominic Grieve (previously the Attorney General and, until last week, Chair of Parliament’s Intelligence and Security Committee), lawyer Anton Moiseienko from Royal United Services Institute Centre for Financial Crime and Security, lawyer Richard Parlour (Chairman of the EU Task Force on Cybersecurity Policy for the Financial Sector) and lawyer Antonis Patrikos from Dentons’ Privacy and Cybersecurity Practice.

Margot James, the Minister for Digital, was quoted in The Daily Telegraph as saying that the UK must “get over” privacy and cyber security fears and adopt technology such as online identities. While this Minister was advocating online identities, another Minister was ending government funding for the government’s own Verify digital identity service. And more recently another Minister has scrapped the online age verification plan that would have at least bootstrapped digital identity into the mass market.

During the questions, I noted that it might seem the government has no actual strategy. As Mr. Grieve pointed out in response to my question, there is a tension at the heart of government strategy. I will paraphrase, but the issue is that the government wants to accumulate data, yet the accumulation of data raises the likelihood of cyberattack. How do we deal with this tension and make progress? This point was illustrated rather well later in the week, when Parliament’s Joint Human Rights Committee recommended that the Government should “explore the practicality and usefulness of creating a single online registry that would allow people to see, in real time, all the companies that hold personal data on them and what data they hold.”

The Chair of the Committee, the lawyer Harriet Harman, said “It should be simple to know what data is shared about individuals and it must be equally easy to correct or delete data held about us as it was for us to sign up to the service in the first place”. As far as I can see, this completely impractical, expensive and pointless mechanism for logging in to some government website to find out whether you signed up for the Wetherspoons loyalty scheme will be of no benefit whatsoever. The vast majority of the population neither know nor care what the Tesco Clubcard database holds about them, so long as they get money-off vouchers now and then. The Committee’s concerns about privacy are real and valid (and at Consult Hyperion we share them) but their proposed solution will not address them. Apart from anything else, what will stop hackers from getting into the database, finding out that you have an account at Barclays and then using this to phone you up and ask you to transfer your money into a safe account?

I wonder if the lawyers are aware that technologists can help resolve this fundamental paradox. Having had a few years’ experience in delivering highly secure systems to the financial sector, my colleagues at Consult Hyperion are familiar with a number of cryptographic techniques – such as homomorphic encryption, cryptographic blinding, zero-knowledge proofs and verifiable credentials – that can deliver apparently paradoxical results. It is possible to store data and perform computations on it without reading it, it is possible to determine that someone is over 18 without seeing their age and it is possible to find out whether you ate at a certain restaurant without disclosing your name.
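To make that less abstract, here is a minimal sketch – an illustration of the general selective-disclosure idea behind verifiable credentials, not any of the specific techniques named above. An issuer commits to each attribute with a salted hash and signs the set of commitments, so a holder can later reveal just the “over 18” claim, and nothing else, for a verifier to check. A real scheme would use public-key signatures and, for stronger guarantees, zero-knowledge proofs; the HMAC here is just a stand-in for the issuer’s signature.

```python
# Minimal sketch (illustrative, not a production protocol): the issuer commits
# to each attribute with a salted hash and "signs" the commitments (HMAC used
# as a stand-in for a real public-key signature). The holder later discloses
# only the "over_18" attribute and its salt, so the verifier can check that one
# claim against the signed commitments without ever seeing the date of birth.
import hashlib, hmac, json, os

ISSUER_KEY = os.urandom(32)  # stand-in for the issuer's signing key

def commit(name: str, value: str, salt: bytes) -> str:
    return hashlib.sha256(salt + f"{name}={value}".encode()).hexdigest()

def issue(attributes: dict) -> dict:
    # In reality the salts stay with the holder, not in a shared record.
    salts = {k: os.urandom(16) for k in attributes}
    commitments = {k: commit(k, v, salts[k]) for k, v in attributes.items()}
    signature = hmac.new(ISSUER_KEY, json.dumps(commitments, sort_keys=True).encode(),
                         hashlib.sha256).hexdigest()
    return {"commitments": commitments, "salts": salts, "signature": signature}

def verify_disclosure(credential: dict, name: str, value: str, salt: bytes) -> bool:
    expected = hmac.new(ISSUER_KEY, json.dumps(credential["commitments"], sort_keys=True).encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, credential["signature"]):
        return False  # commitments were tampered with
    return credential["commitments"][name] == commit(name, value, salt)

cred = issue({"name": "A. Customer", "date_of_birth": "1980-01-01", "over_18": "true"})
# The holder reveals only the "over_18" claim (plus its salt) - nothing else.
print(verify_disclosure(cred, "over_18", "true", cred["salts"]["over_18"]))  # True
```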

Right now, the use of these technologies is nothing more than a hygiene factor for the companies involved. But as legislation (and social pressure) steadily converts personal information into toxic waste, more and more companies will want to avoid it. Privacy will become part of the overall package that a company offers to its customers and we understand the technologies that can deliver it and how to deploy them at population scale. Give us a call – our number’s not a secret.

Identity Week

The opening keynote at Identity Week in London was given by Oliver Dowden, the Minister for Implementation at the Cabinet Office and therefore the person in charge of the digital transformation of government. At Consult Hyperion we think digital identity is central to the digital transformation of government (and the digital transformation of everything else, for that matter), so I was looking forward to hearing the UK government’s vision for digital identity. I accompanied the Minister on his visit to the IDEMIA stand, where he was shown a range of attractive burgundy passports.

In his keynote, the Minister said that the UK is seen as being at the cutting edge of digital identity and that GOV.UK Verify is at the heart of that success.

(For foreign visitors, perhaps unfamiliar with this cutting-edge position, a spirit of transparency requires me to note that back on 9th October 2018, Mr. Dowden gave written statement HCWS978 to Parliament, announcing that the government was going to stop funding Verify after 18 months, with the private sector responsible for funding after that.)

Given that the government spends around £1.5 billion per annum on “identity, fraud, error, debt, how much identity costs to validate, and how much proprietary hardware and software bought”, it’s obviously important for them to set an effective strategy. Now, members of the public, who don’t really know or care about digital ID, might be saying to themselves, “why can’t we just use ‘sign in with Apple’ to do our taxes?”, and this is a good point. Even if they are not saying it right now, they’ll be saying it soon, as they get used to Apple’s mandate that all apps that allow third-party sign-in must support it.

Right now you can’t use a GOV.UK Verify Identity Provider to log into your bank or any other private sector service provider. But in his speech the Minister said that he looks forward to a time when people can use a single login to “access their state pension and the savings account” and I have to say I agree with him. Obviously you’d want a different single login for gambling and pornography, but that’s already taken care of as, according to Sky News, “thanks to its ill-conceived porn block, the government has quietly blundered into the creation of a digital passport – then outsourced its development to private firms, without setting clear limits on how it is to be used”. One of these firms runs the world’s largest pornography site, Pornhub, so I imagine they know a thing or two about population-scale identity management.

Back to the Minister’s point though. Yes, it would be nice to have some sort of ID app on my phone and it would be great if my bank and the HMRC and Woking Council and LinkedIn would all let me log in with this ID. The interesting question is how you get to this login. Put a PIN in that and we’ll come back to it later.

The Minister made three substantive points in the speech. He talked about:

  • The creation of a new Digital Identity Unit, which is a collaboration between DCMS and the Cabinet Office. The Unit will help foster co-operation between the public and private sectors, ensure the adoption of interoperable standards, specifications and schemes, and deliver on the outcome of the consultation.
  • A consultation to be issued in the coming weeks on how to deliver the effective organisation of the digital identity market. Through this consultation the government will work with industry, particularly with sectors who have frequent user identity interactions, to ensure interoperable ‘rules of the road’ for identity.
  • The start of engagement on the commercial framework for consuming digital identities from the private sector for the period from April 2020 to ensure the continued delivery of public services. The Government Digital Service will continue to ensure alignment of commercial models that are adopted by the developing identity market to build a flourishing ecosystem that delivers value for everyone.

The Minister was taken away on urgent business and therefore unable to stay for my speech, in which I suggested that the idea of a general-purpose digital identity might be quite a big bite to take at the problem. So it would make sense to look at who else might provide the “digital identities from the private sector” used for the delivery of public services. Assuming the current GOV.UK Verify identities fail to gain traction in the private sector, then I think there are two obvious private sector coalitions that might step in to do this for the government: the big banks and the big techs.

For a variety of reasons, I hope that the big banks are able to come together in response to the comments of Mark Carney, the Governor of the Bank of England, on the necessity for a digital identity in the financial sector, and work together to develop some sort of financial services passport. I made some practical suggestions about this earlier in the year and have continued to discuss the concept with potential stakeholders. I think it stacks up, but we’ll have to see how things develop.

On the other hand, if the banks can’t get it together and the big techs come knocking, they are already showing off their solutions. I’ll readily admit that when the Minister first said “private sector identities”, the first thought to flash across my brain was “Apple”. But I wouldn’t be at all surprised to go over to the HMRC web site fairly soon and find a “log in with Amazon” and a “log in with Apple” next to a button with some incomprehensible waffle about eIDAS that I, and I’m sure most other normal consumers, will simply ignore.

How do you use Apple ID to log into the Inland Revenue? Easy: you log in as you do now (after sending off for the password, waiting for it to come in the post and that sort of thing) and then, once you are connected, tell them the Apple ID that you want to use in the future. If you want to be “jackdaniels@me.com” or whatever, it doesn’t matter. It’s just an identifier for the Revenue to recognise you by in the future. Then next time you go to the Inland Revenue, you log in as jackdaniels@me.com, something pops up on your iPhone, you put your thumb on it or look at it, and bingo, you’re logged in to fill out your PAYE.
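For the technically minded, a minimal sketch of that account-linking step is below. It is purely illustrative: in practice the claims would come from validating a signed OpenID Connect ID token from Apple (signature, issuer, audience, expiry), and the taxpayer reference shown is a made-up placeholder.

```python
# Minimal sketch of the account-linking idea described above; everything is
# illustrative. In practice the "claims" would come from validating a signed
# OpenID Connect ID token (signature, issuer, audience, expiry) rather than
# being passed in as a plain dict.
linked_accounts: dict[str, str] = {}  # stable federated identifier -> taxpayer reference

def link_account(taxpayer_ref: str, claims: dict) -> None:
    # The user has already logged in the old way (password sent in the post),
    # so taxpayer_ref is trusted; remember which federated identity they chose.
    linked_accounts[claims["sub"]] = taxpayer_ref

def login(claims: dict) -> str:
    # On later visits, a fresh, validated assertion for a linked identifier is
    # all that is needed - the email address is just a label for the user.
    return linked_accounts[claims["sub"]]

# Placeholder values throughout:
link_account("UTR-0000000000", {"sub": "001234.abcd", "email": "jackdaniels@me.com"})
print(login({"sub": "001234.abcd", "email": "jackdaniels@me.com"}))  # UTR-0000000000
```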

Yet another GDPR article – the story so far

How time flies: GDPR has just had its first birthday!

This past year you will have been inundated with articles and blogs about GDPR and the impact on consumers and businesses alike. According to the UK’s Information Commissioner, Elizabeth Denham, GDPR and its UK implementation, the Data Protection Act (DPA) 2018, have marked a “seismic shift in privacy and information rights”. Individuals are now more aware of their information rights and haven’t been shy about demanding them. In the UK, the ICO received around 14,000 personal data breach reports and over 41,000 data protection concerns from the public between 25 May 2018 and 1 May 2019, compared to around 3,300 breach reports and 21,000 data protection concerns in the preceding year. Beyond Europe, the regulation has had a remarkable influence on other jurisdictions, which have either enacted or are in the process of enacting a ‘GDPR equivalent’ law – something similar is underway in Brazil, Australia, California, Japan and South Korea.

For all the good intentions of GDPR, some of its provisions contradict other, equally well-intentioned, EU laws. Bank secrecy laws, on the one hand, require that customers’ personal data should be protected and used only for the intended purpose(s), except where otherwise consented to by the customer. AMLD4/5, on the other hand, requires that identifying personal data in ‘suspicious transactions’ be passed on to the appropriate national authorities (without, of course, the customer’s consent or knowledge). Then PSD2 requires banks to open up customers’ data to authorised Third Party Providers (TPPs), subject to obtaining the customer’s consent. One issue that arises from this is the apparent incongruity between the explicit consent required by Article 94 of PSD2 and (explicit) consent as defined by GDPR.

Under GDPR, consent is one of the lawful bases for processing personal data, but it is subject to strict requirements for obtaining, recording and managing it; if those requirements are not met, the consent is deemed invalid. In some cases, a poor understanding of these rules has resulted in poor practices around consent processing. That is why organisations like the Kantara Initiative are leading the effort to develop specifications for ‘User Managed Access’ and ‘Consent Receipt’.
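As an illustration of what such a record might capture, here is a minimal sketch of a consent receipt. The fields are loosely inspired by the Kantara work rather than a faithful implementation of its specification, and the example values are invented.

```python
# Minimal sketch of a consent record, loosely inspired by the Kantara Consent
# Receipt work - illustrative fields only, not an implementation of the spec.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
from uuid import uuid4
import json

@dataclass
class ConsentReceipt:
    data_subject: str                 # who gave consent
    data_controller: str              # who collected it
    purposes: list[str]               # what the data will be used for
    data_categories: list[str]        # what personal data is covered
    lawful_basis: str = "consent"     # GDPR Article 6 basis relied upon
    receipt_id: str = field(default_factory=lambda: str(uuid4()))
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())
    withdrawn: bool = False           # consent must be as easy to withdraw as to give

    def withdraw(self) -> None:
        self.withdrawn = True

# Invented example: a customer consenting to share account data with a TPP.
receipt = ConsentReceipt(
    data_subject="customer-123",
    data_controller="Example Bank",
    purposes=["account aggregation via an authorised TPP"],
    data_categories=["account balances", "transaction history"],
)
print(json.dumps(asdict(receipt), indent=2))
```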

In addition, EU regulators have been weighing in to clarify some of the conundrums. For example, the Dutch DPA issued guidance on the interplay of PSD2 and GDPR, which shows that there is no straightforward answer to what might seem like a relatively simple question. The EDPB has also published an opinion on the interplay between GDPR and the slowly but surely evolving ePrivacy regulation. Suffice to say, correctly navigating the compliance requirements of all these laws is challenging, but possible.

What will the second year of GDPR bring?

While regulators are keen to enforce the law, their priority is transparent co-operation, not penalties. The ICO has provided support tools and guidance, including a dedicated help line and chat services to support SMEs. They are also in the process of “establishing a one-stop shop for SMEs, drawing together the expertise from across our regulatory teams to help us better support those organisations without the capacity or obligation to maintain dedicated in-house compliance resources.” However, for those who still choose to ‘wilfully or negligently break the law’, GDPR’s recommended administrative fines may help to focus the mind on what is at stake, in addition to the ‘cleaning up’ costs afterward. Supervisory Authorities require time and resources to investigate and clear the backlog resulting from the EU-wide increase in information rights queries and complaints over the past year. The UK’s ICO and its Dutch and Norwegian counterparts are collaborating to harmonise their approaches and establish a “matrix” for calculating fines. France’s CNIL led the way with the $57 million Google fine earlier in the year; the ICO has confirmed that there will soon be fines for “a couple of very large cases that are in the pipeline”, and the Irish DPC also expects to levy substantial fines this summer.

A new but important principle in GDPR is the ‘accountability principle’, which states that the data controller is responsible for complying with the regulation and must be able to demonstrate compliance. So it is not enough to say ‘we have it’; you must be able to produce ‘appropriate evidence’ on demand to back it up. The ICO states in its ‘GDPR – one year on’ blog that “the focus for the second year of the GDPR must be beyond baseline compliance – organisations need to shift their focus to accountability with a real evidenced understanding of the risks to individuals in the way they process data and how those risks should be mitigated.” By now, one would expect most organisations to have put in the effort required, beyond ticking boxes, to achieve an appropriate level of compliance with the regulation, so that they can reap the reward of continued business growth built on their customers’ trust and loyalty.

One of the methods of demonstrating GDPR accountability is a Data Protection Impact Assessment (DPIA) – a process by which organisations can systematically analyse, identify and minimise the data protection risks of a project or plan before going live. GDPR does not mandate a specific DPIA process, but it expects whichever methodology the data controller chooses to meet the requirements specified in Article 35(7).
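As a sketch of what that can look like in practice, the structure below organises a DPIA record around the four elements Article 35(7) requires. The risk entries and the simple likelihood-times-impact scoring are purely illustrative, not a prescribed methodology.

```python
# Minimal sketch of a DPIA record structured around the four elements that
# GDPR Article 35(7) requires; the example risk and the simple scoring are
# purely illustrative, not a prescribed methodology.
from dataclasses import dataclass, field

@dataclass
class Risk:
    description: str
    likelihood: int      # 1 (rare) .. 5 (almost certain)
    impact: int          # 1 (negligible) .. 5 (severe) impact on data subjects
    mitigations: list[str] = field(default_factory=list)

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

@dataclass
class DPIA:
    processing_description: str   # 35(7)(a): systematic description and purposes
    necessity_assessment: str     # 35(7)(b): necessity and proportionality
    risks: list[Risk]             # 35(7)(c): risks to data subjects' rights
    # 35(7)(d): the measures to address the risks live against each Risk entry

    def high_risks(self, threshold: int = 12) -> list[Risk]:
        return [r for r in self.risks if r.score >= threshold]

dpia = DPIA(
    processing_description="Tokenised loyalty scheme linking purchases to members",
    necessity_assessment="Only the member ID and basket total are retained",
    risks=[Risk("Re-identification of members from purchase history", 3, 4,
                ["pseudonymisation", "retention limits"])],
)
print([r.description for r in dpia.high_risks()])
```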

At Consult Hyperion, we have a long track record of thinking about the risks associated with transactional data, so much so that we published, and continue to use, our own Structured Risk Analysis (SRA) methodology. Our approach, in response to the needs of our customers, has always been to describe the technological risks in a language that allows the business owner, who ultimately owns the risk, to make a judgement. Building on this, we have developed a business-focused approach to GDPR-compliant DPIAs for the products we design, review or develop for our customers.

If you’re interested in finding out more, please contact: sales@chyp.com

Money 20/20 – Digital Identity Day

 

Where better to spend a day talking about digital identity than the Venetian in Vegas with its rather synthetic identity.

In giving the topic a full-day track, the Money 20/20 organisers have recognised its increasing importance. However, it is not a straightforward topic. Andrew Nash from Capital One was right when he said everyone has a different definition of identity. It’s a bit ironic – identity doesn’t have an identity. Here are three questions to summarise what we heard:

Is digital identity just about KYC or the broader sharing of personal data?

There is clearly still a lot of pain with KYC. Idemia explained how, in the US, with its fragmented environment, doing basic things like creating digital driver’s licences that can be used across the country is hard.

But there is a shift of focus from the narrow KYC problem towards the broader issue of helping people to make their personal data portable in a way that removes friction – the “F” word of Identity, as Neil Chapman from ForgeRock put it.

Filip Verley from Airbnb made a useful bridge between these two aspects. It is no surprise that reputation is fundamental to the Airbnb platform. Reputation is where the value is – Airbnb users don’t care what the name of a renter is, but they do want to know that they are reputable. But for that to work well, that reputation needs to be anchored to the real identity that Airbnb has checked – i.e. their KYC.

Who is digital identity for – the person or the organisation?

Quite rightly there is now widespread acceptance that digital identity needs to be person centric. As well as the privacy point, there are practical reasons why it makes sense to put the person at the centre. For example, the person is in the best place to say which of the residential addresses associated with them is the one where they are actually living.

This is not the same as saying people own their identity. The organisations that provide services to people have a stake in digital identity too. That’s why in Canada, as Joni Brennan explained, stakeholders across the economy are collaborating through the DIACC to address a need that is bigger than any one of them.

(Bianca Lopes, Joni Brennan and I talking about Digital Identity in Canada)

What will enable interoperable digital identities?

Unsurprisingly there was good representation from the DLT / blockchain crowd, including Civic and Shyft. Heather Vescent gave a great overview of the standardisation work around Decentralised Identifiers (DIDs) and the desire of that community to create a new identity layer on the internet – perhaps an 8th “user” layer on top of the OSI 7-layer model of old. Whilst this work is being done through the W3C, it is still early days.

In contrast, FIDO2’s WebAuthn specification is now a Candidate Recommendation at the W3C and is already supported by Chrome 70 for Android (released last week), meaning that ubiquitous strong device-based authentication (which includes biometrics) should not be far off. It’s great to see an initiative that, after a lot of hard work, looks like it’s about to become mainstream, providing a real step towards a more secure digital world.

 

 

TLS, DSS, and NCS(C)

As I was scanning my list of security-related posts and articles recently, my eye was drawn by the first sentence of an article on (Google security engineer) Adam Langley’s blog, indicating that Her Majesty’s Government does not understand TLS 1.3. Of course, my first thought was that since HMG doesn’t seem to understand the principles of encryption itself, it’s hardly surprising that they don’t understand TLS. However, these aren’t the thoughts of an understandably non-technical politician but instead those of Ian Levy, the Technical Director of the National Cyber Security Centre at GCHQ – someone you’d hope does understand encryption and TLS. Now normally, I would read this type of article without feeling the need to comment. So what’s different?

Well, the bulk of the article discusses how proxies are currently used by enterprises to examine and control the data leaving their organisation, in effect by masquerading as the intended server and intercepting the TLS connection. It is followed by this throwaway line:

For example, it looks like TLS 1.3 services are probably incompatible with the payment industry standard PCI-DSS…

Could this be true? Why would it be true? The author provided no rationale for this claim. So, again in the spirit of Adam Langley, “it is necessary to write something, if only to have a pointer ready for when people start citing it as evidence.”

Adam’s own response – again following a discussion about how the problem with proxies is their implementation, not with TLS – is that

…the PCI-DSS requirements are general enough to adapt to new versions of TLS and, if TLS 1.2 is sufficient, then TLS 1.3 is better. (Even those misunderstanding aspects of TLS 1.3 are saying it’s stronger than 1.2.)

which would seem to make sense. Not only that, but

[TLS 1.3] is a major improvement in TLS and lets us eliminate session-ticket encryption keys as a mass-decryption threat, which both PCI-DSS- and HIPAA-compliance experts should take great interest in.

In turn, Ian follows up to clarify that it’s not TLS itself that could present problems, but the audit process employed by organisations:

The reference to regulatory standards wasn’t intended to call into question the ability of TLS 1.3 to meet the data protection standards. It was all about the potential to affect (badly) audit regimes that regulated industries have to perform. Right or wrong, many of them rely on TLS proxies as part of this, and this will get harder for them.

So that’s alright. TLS 1.3 is not incompatible with PCI DSS. So what is the problem?  Well, helpfully, Simon Gibson outlined this in 2016:

…regulated industries like healthcare and financial services, which have to comply with HIPAA or PCI-DSS, may face certain challenges when moving to TLS 1.3 if they have controls that say, “None of this data will have X, Y, or Z in it” or “This data will never leave this confine and we can prove it by inspecting it.” In order to prove compliance with those controls, they have to look inside the SSL traffic. However, if their infrastructure can’t see traffic or is not set up to be inline with everything that is out of band in their PCI-DSS, they can’t show that their controls are working. And if they’re out of compliance, they might also be out of business.

So the problem is not that TLS 1.3 is incompatible with PCI DSS. It’s that some organisations may have defined controls with which they will no longer be able to show compliance. They may still be compliant with PCI DSS – especially if the only change is to upgrade to TLS 1.3 and keep all else equal – but cannot demonstrate this. So what’s to be done?

Well, you could redefine the controls if necessary. If your control requires you to potentially degrade, if not break, the very security that you’re using to achieve compliance in the first place, is it really suitable? In the case of the two example controls above, however, neither of them should actually require inspection of SSL traffic.

For the organisation to be compliant in the first place, access to the data must only be possible for authorised personnel on authorised (i.e. controlled) systems. If you control the system, you can stop that data leaving the organisation more effectively by preventing it from being sent to arbitrary machines in the outside world. After all, you have presumably restricted access to any USB and other physical storage connectors, and you hopefully also have controls around visual and other recording devices in the secured area. It is difficult in today’s electronic world to think of a situation where a human (other than the cardholder) absolutely must have access to a full card number without (PCI DSS-compliant) alternatives being available.

So TLS 1.3 is a challenge to organisations who are using faulty proxies and/or inadequate controls already. It certainly doesn’t make you instantly non-compliant with PCI DSS.
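For completeness, here is what “upgrade to TLS 1.3 and keep all else equal” looks like in configuration terms – a minimal sketch using Python’s standard ssl module to refuse anything older than TLS 1.3 on an outbound connection (the host name is just an example):

```python
# Minimal sketch: require TLS 1.3 on an outbound connection using Python's
# standard library, and report the version actually negotiated. Any peer that
# can only offer an older protocol version is simply refused.
import socket
import ssl

HOST = "example.com"  # illustrative host

context = ssl.create_default_context()            # certificate and hostname checks on
context.minimum_version = ssl.TLSVersion.TLSv1_3  # refuse TLS 1.2 and below

with socket.create_connection((HOST, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=HOST) as tls:
        print(tls.version())   # e.g. 'TLSv1.3'
        print(tls.cipher())    # negotiated cipher suite
```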

Given this, we, as humble international payments security consultants, are left puzzled by the NCSC’s line about TLS 1.3 and PCI DSS compatibility. At worst, organisations need to redefine their audit processes to use the enhanced security of TLS 1.3, rather than degrade their security to meet out of date compliance procedures. But, of course, this is the type of problem we deal with all the time, as we’re frequently called in to help payment institutions address security risks and compliance issues. TLS 1.3 is just another tool in a complex security landscape, but it’s a valuable one that we’re adding to our toolkit in order to help our clients proactively manage their cyber defences.

Who would have ex-Spectre-d this?

At Consult Hyperion we’re always interested in the latest news in cyber security and, in case you haven’t heard, 2018 has started with the news that most of the processors found inside current computers, tablets, phones and cloud servers are vulnerable to a new class of attack. These attacks have been named Meltdown and Spectre, and are caused by common optimisations built into modern processors. Processors designed by Intel, AMD and ARM are all affected to varying degrees and, as it is a hardware issue (possibly dating back to 1995 if some reports are correct), it could affect any operating system. It’s likely the machine you’re reading this on is affected – whether it’s running Windows, macOS, iOS or Android, or is in “the cloud”!

At a basic level, these vulnerabilities break down the fundamental security barriers between an application and the operating system (OS). This means that a malicious application running on your processor may be able to read your, or your OS’s, secrets which may include passwords, keys or possibly payment data, present in processor caches or memory.

I’m not going to discuss how the vulnerabilities achieve what they do (there are plenty of sites which attempt to do this); rather, I’d like to consider their impact on people, such as our clients, who may be handling sensitive data on mobile devices – e.g. payments and banking information. If you do want to understand the low-level details of the vulnerabilities and how they work, I suggest looking at https://spectreattack.com/ which has links to the original papers on both Spectre and Meltdown.

So, what can be done about it? The good news is that whilst the current processors cannot be fixed, several operating system patches have already been released to try and mitigate these problems.
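If you want to see what your own system reports, recent Linux kernels expose their view of these hardware vulnerabilities, and any active mitigations, through sysfs. A minimal sketch (Linux only; on other platforms, or older kernels, the directory simply won’t exist):

```python
# Quick check (Linux only, recent kernels): the kernel reports its view of
# these hardware vulnerabilities and any active mitigations under sysfs.
from pathlib import Path

VULN_DIR = Path("/sys/devices/system/cpu/vulnerabilities")

if VULN_DIR.is_dir():
    for entry in sorted(VULN_DIR.iterdir()):
        # Each file holds a short status line, e.g. "Mitigation: PTI".
        print(f"{entry.name}: {entry.read_text().strip()}")
else:
    print("No sysfs vulnerability report available on this system.")
```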

However, my concern is that as this is a new class of attack, Spectre and Meltdown may be the tip of a new iceberg. Even over the last week, the issue has changed from it only affecting Intel processors, to now including AMD and ARM to some extent. I suspect that over the coming weeks and months, as more security researchers (and probably less savoury characters as well) start looking into this class of attack, there may be additional vulnerabilities discovered. Whether they would already be mitigated by the patches coming out now, we’ll have to see.

It should also be understood that for the vulnerability to be exploited, there are a few conditions which must be met:

1. You must have a vulnerable processor (highly likely)
2. You must have a vulnerable OS (i.e. unpatched)
3. An attacker must be able to execute their malicious code on your device

 
For point 1, most modern devices will be vulnerable to some extent, so we can probably assume the condition is always met.

Point 2 highlights two perennial problems: (a) getting people to apply software updates to their devices and (b) getting access to appropriate software updates.

For many devices, software updates are frequent, reliable and easy to install (often automatic) and there are very few legitimate reasons for consumers to not just take the latest updates whenever they are made available. We would always recommend that consumers apply security updates as soon as possible.

A bigger problem for some platforms is the availability of updates in the first place. Within the mobile space, Microsoft, Apple and Google all regularly release software updates; however, many Android OEMs can be slow to release updates for their devices (if they release them at all). Android devices are notorious for not running the latest version of Android – for example, Google’s latest information (https://developer.android.com/about/dashboards/index.html – obtained 5th January 2018, representing devices accessing the Google Play Store in the prior 7 days) shows the following spread, covering the top 81% or so of devices in use:

• 0.5% of devices are running the latest version of Android – Oreo (v8.0, released August 2017)
• 25% are running Nougat (v7.x, released August 2016)
• 30% running Marshmallow (v6.0, released October 2015)
• 26% running Lollipop (v5.x, released November 2014).

 
It should be noted that Google’s Nexus and Pixel devices have a commitment to receiving updates for a set period of time, and Google is very keen to encourage OEMs to improve their support for prompt and frequent updates – for example, the Android One (https://www.android.com/one/) programme highlights that these devices get regular software updates.

If you compare this to iOS, it’s estimated (https://data.apteligent.com/ios/) that less than a month after it was released in December 2017, over 75% of iOS devices were already running iOS 11.

The final requirement is Point 3 – getting malicious code onto your device. This could be via a malicious application installed on the device; however, the malicious code could also come via a website, as it’s been shown that even JavaScript sandboxed in a browser can exploit these vulnerabilities. As it’s not unheard of for legitimate websites to unwittingly serve up third-party adverts which contain malicious code, a user doesn’t have to be visiting malicious websites for the problem to occur. Several browsers are receiving patches to try and prevent Meltdown and Spectre working via this route. Regarding malicious applications, we’d always recommend that applications are only ever installed from legitimate sources; however, malicious apps still regularly appear in legitimate app stores, so this is not fool-proof.

Thinking specifically about mobile banking and HCE payment applications, which are what interest many of our customers, these applications should already include protections to prevent, or at least detect, malicious attacks. These protections typically include numerous measures such as root/jailbreak detection, code obfuscation, data minimisation, white-box cryptography and so on.

If anything, these latest vulnerabilities are a useful reminder that security is not a single task within a project plan, ticked off as complete before moving on to the next sprint or task. Rather, it is an ongoing concern for the lifetime of the system – something that Consult Hyperion quietly helps its customers with. A year ago, few would have considered this class of attack to be possible, let alone something which needs to be actively mitigated.

Password security

The publication by NIST of an updated version of its digital identity guidelines marks a significant change in its approach to identity management. It highlights the importance of implementing digital identity in context, with three different elements replacing the previously monolithic Level of Assurance. These Levels are the Identity Assurance Level for identity proofing, the Authenticator Assurance Level for authentication and the Federation Assurance Level for use in a federated environment. Criteria for each Assurance type run from Level 1 to Level 3. This is intended to provide greater flexibility in implementation, for example combining pseudonymity with strong authentication for privacy purposes. Although optional, federation is positively encouraged for reasons of user experience, cost and privacy.

Risk management features prominently in the guidelines, with risk assessments used to determine appropriate identity choices according to system requirements. Although the requirements are technology agnostic, they are prescriptive regarding the assurance levels required for particular purposes. One area in which the guidelines are particularly refreshing is in their approach to passwords. Drawing on research into passwords exposed during data breaches, the use of unwieldy complexity rules is discouraged. Instead, it is suggested that users should be allowed to make passwords as long as they wish, encouraging the use of pass phrases and excluding very short passwords.

Faced with restrictive rules, many users will select predictable passwords which just meet the system requirements but are easily guessed. It is suggested that passwords should be checked against a blacklist of obvious choices and known compromised passwords, to filter these out. Randomly-generated secrets are therefore preferred to user-generated secrets.
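To make that concrete, here is a minimal sketch of a password check along those lines: length-based, no composition rules, and a blacklist of obvious or known-compromised choices. The tiny in-line blacklist is of course a placeholder for a proper breached-password corpus.

```python
# Minimal sketch of a password check along the lines described above: accept
# long passphrases, impose no composition rules, and reject anything on a
# blacklist of obvious or known-compromised choices. The in-line blacklist is
# a placeholder for a real breached-password list.
MIN_LENGTH = 8
MAX_LENGTH = 256          # long enough to allow passphrases
BLACKLIST = {             # placeholder: use a breached-password corpus in practice
    "password", "12345678", "qwertyuiop", "letmein123",
}

def check_password(candidate: str) -> tuple[bool, str]:
    if len(candidate) < MIN_LENGTH:
        return False, "too short"
    if len(candidate) > MAX_LENGTH:
        return False, "too long"
    if candidate.lower() in BLACKLIST:
        return False, "appears on the blacklist of compromised or obvious passwords"
    return True, "ok"     # note: no complexity rules, per the guidance

for pw in ["Password", "correct horse battery staple", "12345678"]:
    print(pw, "->", check_password(pw))
```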

The guidelines also highlight the importance of usability, supporting the use of password managers and only requiring passwords to be changed when there is evidence of compromise. There is some flexibility regarding displaying passwords on screen, depending on the context. In order to maintain an adequate level of security, a mechanism for limiting the number of possible failed authentication attempts is required.

This new, more person-centric approach from NIST follows on from UK government guidance published by GCHQ in 2016, advising ‘dramatic simplification’ of password management policies. This guidance also focused on achieving security by implementing processes which are easier for people to follow and therefore less susceptible to being undermined by users attempting to take short cuts through the system.

CHYP’s involvement in research has highlighted for us the difference between the way people say they behave and how they actually behave online. This kind of performativity may take the form of people describing how careful they are online (perhaps repeating recent official advice), while doing something conflicting on screen even as they are speaking. A similar effect can be seen when comparing figures produced from a user survey by the Gambling Commission with usage statistics reported by gambling companies. The companies are able to draw statistics directly from their systems, while the survey figures are composed of gamblers’ reports of their own behaviour. These discrepancies highlight the importance of observation when developing policies based on user behaviour.

It is encouraging to see a more effective approach to combining privacy, security and usability in Identity Management being promoted at the highest levels. Even in local hospitals, it is now common to see screens showing simply ‘tap your pass or enter your passphrase’, where previously unpredictable processes were in place. Organisations such as FIDO have done a great deal to promote standardisation.

For a standalone organisation, adopting the new NIST rules would seem both positive and achievable. They are in any case intended to be used within the US government. However, where organisations are already working in partnership and have existing legacy agreements regarding security requirements, it may be necessary to revisit these and agree a new set of password rules to replace existing, outdated approaches. Standardisation and education can go a long way towards supporting this process, although for larger organisations and those with multiple partners, it may take longer.

Publications such as ‘Why Johnny can’t encrypt’ and ‘Users are not the enemy’ have long been recognised for highlighting enduring issues with implementing security software. While education is important, attempts to fundamentally change people will inevitably fail, resulting in escalating support costs and unpredictable security risks. People are simply not equipped to adjust that quickly. In comparison, machines are generally designed by people and comparatively easily modified. Even with the advent of AI, machines are likely to remain reasonably malleable.

Where most user interaction involves people and machines, security tends also to involve mathematics. The NIST guidelines prescribe the use of appropriate cryptography at every stage. This is essential to securing the system but does not of itself guarantee that the system will remain secure. Appropriate system design and implementation are crucial to ensuring secure operations. This is exemplified by the recent flaw discovered in the WPA2 WiFi protocol. A mathematical proof is available for the security of the protocol but there is a vulnerability in the key management, which is not covered by the proof.

As in any system, a mathematical proof has to be ‘situated’ to be useful. Effective risk modelling will take into account the wider context of the system, focusing in on the most critical areas for greater attention. This process may have to be revisited over time, as the surrounding environment evolves. The increasing interconnectedness of the Internet of Things will require greater attention to disconnection technologies to preserve system integrity over time.

