Early on in the pandemic my colleagues at Consult Hyperion and I did a lot of research to explore how it might impact our customers and our customers’ customers, just as I am sure every other organisation in the payments sector did. We looked at a lot of speculative forecasts, we looked at research and analysis from quite a wide range of organisations in the financial sector and beyond, we spoke to a number of people in the industry and we took part in a fair few discussions and debates on the topic. As a result of this, we identified a number of strategic areas where stakeholders in the payment space should be developing or at least preparing their strategies and where they should be planning for some changes to take them through and beyond the COVID-19 crisis.
The ongoing COVID-19 crisis has been ruthlessly exposing fragile business models and weak balance sheets across a whole range of industries, but perhaps never more so than in the travel business. In fairness, no one could have anticipated a global, government-dictated total shutdown, and no business model could ever be flexible enough to support such an improbable scenario. Still, it’s become clear that many travel industry companies are effectively broke and that the payments model they rely on is broken. Going forward, we need a better and more sustainable approach to payments in the industry.
Most travel industry payments rely on payment cards, so it’s worth starting by recapping how most card payment models work. When a cardholder makes a payment to a merchant – either in store or, increasingly, online – the request is routed to the merchant’s card acquirer. The acquirer has a direct relationship with the merchant in the same way that a card issuer has a direct relationship with cardholders, and the acquirer will route the payment request to the relevant issuer – usually by sending the request to a payment scheme, which uses the card number to identify the correct issuer. If the issuer approves the transaction then the response is routed back through the same path and the purchase is completed. This is no different from any other card payment, although there are hidden complexities where the merchant is an online travel agent sourcing flights, hotels and so on from multiple underlying vendors. However, that’s a detail.
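The authorisation path described above can be sketched in a few lines of Python. To be clear, every name here (Issuer, Scheme, Acquirer, the BIN-prefix routing) is illustrative only, not any real scheme’s API – it just shows the merchant → acquirer → scheme → issuer round trip:

```python
from dataclasses import dataclass


@dataclass
class AuthRequest:
    pan: str          # card number; its prefix (BIN) identifies the issuer
    amount: int       # minor units, e.g. pence
    merchant_id: str


class Issuer:
    """Holds the cardholder relationship; approves or declines requests."""
    def __init__(self, bin_prefix: str, open_to_buy: int):
        self.bin_prefix = bin_prefix
        self.open_to_buy = open_to_buy

    def authorise(self, req: AuthRequest) -> bool:
        if req.amount <= self.open_to_buy:
            self.open_to_buy -= req.amount
            return True
        return False


class Scheme:
    """Routes each request to the correct issuer using the card number."""
    def __init__(self, issuers: list[Issuer]):
        self.issuers = issuers

    def route(self, req: AuthRequest) -> bool:
        for issuer in self.issuers:
            if req.pan.startswith(issuer.bin_prefix):
                return issuer.authorise(req)
        return False  # no issuer recognises this BIN


class Acquirer:
    """Holds the merchant relationship; forwards requests to the scheme."""
    def __init__(self, scheme: Scheme):
        self.scheme = scheme

    def process(self, req: AuthRequest) -> bool:
        # The approve/decline response travels back along the same path.
        return self.scheme.route(req)
```

A merchant’s request then becomes `Acquirer(Scheme([...])).process(AuthRequest(...))` – the online travel agent case simply layers several such merchants behind one storefront.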
Using mobile devices for securing payments has been, and continues to be, a key area of interest for Consult Hyperion and our customers. We have helped many clients in this space in a variety of ways: providing advice on the market landscape, advising on and testing security, developing security architectures, and building solutions. Apple’s purchase of Mobeewave a couple of weeks ago caught our, and everyone else’s, attention, and it gives us cause to reflect on what the deal means for the SoftPOS industry and its ecosystems.
As I was scanning my list of security-related posts and articles recently, my eye was drawn by the first sentence of an article on (Google security engineer) Adam Langley’s blog, indicating that Her Majesty’s Government does not understand TLS 1.3. Of course, my first thought was that since HMG doesn’t seem to understand the principles of encryption itself, it’s hardly surprising that they don’t understand TLS. However, these aren’t the thoughts of an understandably non-technical politician but instead those of Ian Levy, the Technical Director of the National Cyber Security Centre at GCHQ – someone you’d hope does understand encryption and TLS. Now normally, I would read this type of article without feeling the need to comment. So what’s different?
Well, the bulk of the article discusses how proxies are currently used by enterprises to examine and control the data leaving their organisations by, in effect, masquerading as the intended server and intercepting the TLS connection. It is followed by this throwaway line:
For example, it looks like TLS 1.3 services are probably incompatible with the payment industry standard PCI-DSS…
Could this be true? Why would it be true? The author provided no rationale for this claim. So, again in the spirit of Adam Langley, “it is necessary to write something, if only to have a pointer ready for when people start citing it as evidence.”
Adam’s own response – again following a discussion about how the problem with proxies is their implementation, not TLS itself – is that
…the PCI-DSS requirements are general enough to adapt to new versions of TLS and, if TLS 1.2 is sufficient, then TLS 1.3 is better. (Even those misunderstanding aspects of TLS 1.3 are saying it’s stronger than 1.2.)
which would seem to make sense. Not only that, but
[TLS 1.3] is a major improvement in TLS and lets us eliminate session-ticket encryption keys as a mass-decryption threat, which both PCI-DSS- and HIPAA-compliance experts should take great interest in.
In turn, Ian follows up to clarify that it’s not TLS itself that could present problems, but the audit processes employed by organisations:
The reference to regulatory standards wasn’t intended to call into question the ability of TLS 1.3 to meet the data protection standards. It was all about the potential to affect (badly) audit regimes that regulated industries have to perform. Right or wrong, many of them rely on TLS proxies as part of this, and this will get harder for them.
So that’s alright. TLS 1.3 is not incompatible with PCI DSS. So what is the problem? Well, helpfully, Simon Gibson outlined this in 2016:
…regulated industries like healthcare and financial services, which have to comply with HIPAA or PCI-DSS, may face certain challenges when moving to TLS 1.3 if they have controls that say, “None of this data will have X, Y, or Z in it” or “This data will never leave this confine and we can prove it by inspecting it.” In order to prove compliance with those controls, they have to look inside the SSL traffic. However, if their infrastructure can’t see traffic or is not set up to be inline with everything that is out of band in their PCI-DSS, they can’t show that their controls are working. And if they’re out of compliance, they might also be out of business.
So the problem is not that TLS 1.3 is incompatible with PCI DSS. It’s that some organisations may have defined controls with which they will no longer be able to show compliance. They may still be compliant with PCI DSS – especially if the only change is to upgrade to TLS 1.3 and keep all else equal – but cannot demonstrate this. So what’s to be done?
Well, you could redefine the controls if necessary. If your control requires you to potentially degrade, if not break, the very security that you’re using to achieve compliance in the first place, is it really suitable? In the case of the two example controls above, however, neither of them should actually require inspection of SSL traffic.
For the organisation to be compliant in the first place, access to the data must only be possible for authorised personnel on authorised (i.e. controlled) systems. If you control the system, you can stop that data leaving the organisation more effectively by prohibiting its transfer to arbitrary machines in the external world. After all, you have presumably restricted access to any USB and other physical storage connectors, and you hopefully also have controls around visual and other recording devices in the secured area. It is difficult in today’s electronic world to think of a situation where a human (other than the cardholder) absolutely must have access to a full card number without (PCI DSS-compliant) alternatives being available.
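The data-minimisation point is easy to make concrete. A sketch of PAN masking in the style PCI DSS permits for display – at most the first six and last four digits visible – looks like this (the function name is ours, not a library API):

```python
def mask_pan(pan: str) -> str:
    """Return a masked card number: at most the first six and last four
    digits visible, everything in between replaced with '*'.

    Separators (spaces, dashes) are stripped before masking.
    """
    digits = "".join(ch for ch in pan if ch.isdigit())
    if len(digits) <= 10:
        # Too short to safely expose six + four; mask everything.
        return "*" * len(digits)
    return digits[:6] + "*" * (len(digits) - 10) + digits[-4:]
```

If systems and humans only ever handle masked (or tokenised) values outside the card data environment, there is simply nothing sensitive in the traffic for a control to inspect.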
So TLS 1.3 is a challenge to organisations who are using faulty proxies and/or inadequate controls already. It certainly doesn’t make you instantly non-compliant with PCI DSS.
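For an organisation upgrading, insisting on TLS 1.3 for its own outbound connections is a one-line change in most stacks. A minimal sketch using Python’s standard `ssl` module (the function name is ours):

```python
import ssl


def tls13_client_context() -> ssl.SSLContext:
    """A client-side context that refuses anything below TLS 1.3.

    create_default_context() gives sensible defaults, including
    certificate verification and hostname checking; we then raise
    the floor so every connection gets TLS 1.3's forward secrecy.
    """
    ctx = ssl.create_default_context()
    ctx.minimum_version = ssl.TLSVersion.TLSv1_3
    return ctx
```

Any attempt to negotiate TLS 1.2 or lower against such a context fails the handshake, which is exactly the behaviour a well-written control should be asserting.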
Given this, we, as humble international payments security consultants, are left puzzled by the NCSC’s line about TLS 1.3 and PCI DSS compatibility. At worst, organisations need to redefine their audit processes to use the enhanced security of TLS 1.3, rather than degrade their security to meet out-of-date compliance procedures. But, of course, this is the type of problem we deal with all the time, as we’re frequently called in to help payment institutions address security risks and compliance issues. TLS 1.3 is just another tool in a complex security landscape, but it’s a valuable one that we’re adding to our toolkit in order to help our clients proactively manage their cyber defences.
At Consult Hyperion we’re always interested in the latest news in cyber security and, in case you haven’t heard, 2018 has started with the news that most processors found inside current computers, tablets, phones and cloud servers are vulnerable to a new class of attack. These attacks have been named Meltdown and Spectre, and are caused by common optimisations built into modern processors. Processors designed by Intel, AMD and ARM are all affected to varying degrees and, as it is a hardware issue (possibly dating back to 1995 if some reports are correct), it can affect any operating system. It’s likely the machine you’re reading this on is affected – whether it’s running Windows, macOS, iOS, Android or is in “the cloud”!
At a basic level, these vulnerabilities break down the fundamental security barriers between an application and the operating system (OS). This means that a malicious application running on your processor may be able to read your, or your OS’s, secrets which may include passwords, keys or possibly payment data, present in processor caches or memory.
I’m not going to discuss how the vulnerabilities achieve what they do (there are plenty of sites which attempt to do this); rather, I’d like to consider their impact on people, such as our clients, who may be handling sensitive data on mobile devices – e.g. payments and banking information. If you do want to understand the low-level details of the vulnerabilities and how they work, I suggest looking at https://spectreattack.com/ which has links to the original papers on both Spectre and Meltdown.
So, what can be done about it? The good news is that whilst the current processors cannot be fixed, several operating system patches have already been released to try and mitigate these problems.
However, my concern is that as this is a new class of attack, Spectre and Meltdown may be the tip of a new iceberg. Even over the last week, the issue has changed from it only affecting Intel processors, to now including AMD and ARM to some extent. I suspect that over the coming weeks and months, as more security researchers (and probably less savoury characters as well) start looking into this class of attack, there may be additional vulnerabilities discovered. Whether they would already be mitigated by the patches coming out now, we’ll have to see.
It should also be understood that for the vulnerability to be exploited, there are a few conditions which must be met:
1. You must have a vulnerable processor
2. You must have a vulnerable OS (i.e. unpatched)
3. An attacker must be able to execute their malicious code on your device
For point 1, most modern devices will be vulnerable to some extent, so we can probably assume the condition is always met.
Point 2 highlights two perennial problems, a.) getting people to apply software updates to their devices and b.) getting access to appropriate software updates.
For many devices, software updates are frequent, reliable and easy to install (often automatic) and there are very few legitimate reasons for consumers to not just take the latest updates whenever they are made available. We would always recommend that consumers apply security updates as soon as possible.
A bigger problem for some platforms is the availability of updates in the first place. Within the mobile space, Microsoft, Apple and Google all regularly release software updates; however, many Android OEMs can be slow to release updates for their devices (if they release them at all). Android devices are notorious for not running the latest version of Android – for example, Google’s latest information (https://developer.android.com/about/dashboards/index.html – obtained 5th January 2018 and represents devices accessing the Google Play Store in the prior 7 days) shows that for the top 81% of devices in use:
• 25% are running Nougat (v7.x, released August 2016)
• 30% are running Marshmallow (v6.0, released October 2015)
• 26% are running Lollipop (v5.x, released November 2014).
It should be noted that Google’s Nexus and Pixel devices have a commitment to receiving updates for a set period of time, and Google is very keen to encourage OEMs to improve their support for prompt and frequent updates – for example, the Android One (https://www.android.com/one/) programme highlights that these devices get regular software updates.
Compare this to iOS: it’s estimated (https://data.apteligent.com/ios/) that, less than a month after it was released in December 2017, over 75% of iOS devices were already running iOS 11.
Thinking specifically about mobile banking and HCE payment applications – which are what interest many of our customers – these applications should already include protections to prevent, or at least detect, malicious attacks. These protections typically include measures such as root/jailbreak detection, code obfuscation, data minimisation, white-box cryptography and so on.
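To give a flavour of one of those measures, here is a heuristic sketch of root detection: probing for well-known `su` binary locations. Real products combine many such signals (and are usually written in the platform’s native language); the paths and function name here are purely illustrative, and the filesystem check is injectable so the logic can be exercised off-device:

```python
import os

# Common locations of the 'su' binary on rooted Android devices.
# Illustrative list only; production checks use many more signals.
SU_PATHS = [
    "/system/bin/su",
    "/system/xbin/su",
    "/sbin/su",
    "/data/local/bin/su",
]


def device_looks_rooted(exists=os.path.exists, su_paths=SU_PATHS) -> bool:
    """Return True if any well-known su binary path is present.

    `exists` defaults to the real filesystem check but can be
    replaced with a stub for testing.
    """
    return any(exists(path) for path in su_paths)
```

A payment application would treat a positive result as one input to a risk decision – refusing to run, degrading functionality, or reporting back to the issuer – rather than as proof on its own.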
If anything, these latest vulnerabilities are a useful reminder that security is not a single task within a project plan, ticked off as complete before moving on to the next sprint or task. Rather, it is an ongoing concern for the lifetime of the system – something that Consult Hyperion quietly helps its customers with. A year ago, few would have considered this class of attack to be possible, let alone something which needs to be actively mitigated.
The last few months have been exciting if, like Consult Hyperion, you are attracted by the mobile POS (mPOS) sector. We’ve seen significant announcements from Mastercard and Worldpay and heard interesting rumours about the current work within the PCI Security Council, suggesting that the use of off-the-shelf mobile devices as card acceptance devices is likely to happen in the near future.
Targeted at small to medium-sized and mobile merchants who do most of their business in cash or cheques but have the occasional customer who prefers to transact by card, the mPOS dongle (card-reading device) has been seen by these merchants as their first venture into the “expensive” world of credit and debit cards. However, the cost of the dongle and the power required to run it are often cited as barriers to the adoption of mPOS services.
Magnetic stripe dongles are effectively given away: their cost is refunded through reductions in the fees levied on the initial transactions, and their power is drawn from the phone via the audio port. Chip & PIN dongles are more complex and so more expensive, requiring their own power supply or battery. The business case for subsidising the additional cost of these devices through reductions in transaction fees is more challenging.
The higher cost and more power-hungry elements of a Chip & PIN dongle are the display and keypad. If we can replace these components with the capabilities of an off-the-shelf smartphone, can we bring down the cost and power requirements of the Chip & PIN dongle closer to that of the magnetic stripe version? If we can deliver the service entirely through a mobile application, can we simplify our distribution channels? These are the sort of questions that get the team at Consult Hyperion excited as they present big information security challenges, which we like.
Generic, off-the-shelf mobile devices have none of the physical and electronic countermeasures designed into a payment terminal to secure the personal and account information in the payment transaction. Nor do they have the specific assets required by the payment scheme such as the secure PIN entry capabilities. Equally, the Acquirer doesn’t have any control over the other applications loaded onto the phone or tablet, which could include malware designed to impact the performance of their mPOS service or monitor any communications to or from it.
So, the challenge is: can we develop applications for generic off-the-shelf mobile devices that deliver, as far as practical, similar levels of security to the hardware in a payment terminal, whilst withstanding repeated attack from hackers interested in capturing assets that they could use to attack the payment schemes’ international networks?
There are many companies delivering solutions which could protect the mPOS application against some of these threats and/or give the Acquirer a level of assurance about the identity of the individuals involved in the transaction. However, no one solution is likely to deliver against all of the PCI’s security standards, should they be published, and not every solution works on every mobile device.
So, the team designing your mPOS solution for off-the-shelf mobile devices must understand in detail the threats to which the application will be exposed, the most cost-effective countermeasures against those threats, how they work together and how they need to evolve in response to new fraudulent attacks. Experience would suggest that they will need to understand in detail the operation of the EMV payment application, transaction security and the smartphone operating system, whilst having considerable experience of implementing the best-of-breed information security tools.
People with such experience are few and far between. Many are my friends and colleagues, which makes my job interesting, exciting and rewarding. It looks like a busy end to the year!
The new administrations in the UK and USA are apparently planning to work together to create a new transatlantic America First / Buy British trade alliance. This will, it seems, include financial services.
A deal to reduce barriers between American and British banks through a new “passporting” system was being considered by Mr Trump’s team
Now what this passporting might mean is anyone’s guess, since this is just a newspaper story based on gossip, but I think it might be a little more complex to arrange than it seems at first because of the nature of banking regulation in the United States. If a British bank were to get a US banking passport, this would presumably be equivalent to the implicit granting of a national bank charter, and state regulators do not seem enthusiastic about the granting of more national bank charters. We know this because at the end of 2016 the US Office of the Comptroller of the Currency (OCC) said that it was going to provide a new national bank charter for fintech companies.
“The OCC will move forward with chartering financial technology companies that offer bank products and services and meet our high standards and chartering requirements,” said Comptroller of the Currency Thomas Curry
The reason for wanting to do this is obvious: right now, if I want to create a competitor to Venmo or Zelle, I either have to be regulated as a payment processor and have regulated banks involved, or go and get regulated by 50 different state regulators under 50 different regulatory regimes, most of which remain rooted in a previous, pre-internet age. This seems anachronistic. Surely an American company should be able to get a licence and get going. Well, the OCC’s proposal is attracting a lot of negative comment.
A turf war is brewing between US state and federal regulators over oversight of the financial technology sector after New York’s top watchdog sent a stinging letter to the Office of the Comptroller of the Currency (OCC), telling it to back off plans for a national bank charter for fintech firms.
Now, I saw a few comments about this and other responses from state regulators that cast them in the role of Luddites standing in the way of progress, but I have to say I agree with them. I mean, I am not a lawyer or anything, I don’t really understand US banking regulation and I couldn’t make any sensible comments on the proposals myself, but I think that the US regulatory environment is broadly speaking unfit for purpose and might benefit from at least a cursory examination of the direction of regulation in one or two other jurisdictions, such as Europe and India.
The fundamental problem with the OCC proposals to my mind is that they are about a national charter for banking as a whole. They do not distinguish between the payments business and other parts of the banking business. Hence the charter means extending systemically risky credit creation activities in new directions. I don’t see any immediate problem that this solves. And the state regulators may well be right that it potentially makes the problems associated with banking regulation much worse.
Connected to this is the worry that a national charter would encourage large ‘too big to fail’ institutions – a small number of tech-savvy firms that dominate different types of financial services simply because they are able to get a national charter.
Whatever you think about Facebook, they are not too big to fail. If Facebook screw up and lose a ton of money and go out of business, then that is tough luck on their employees and their shareholders, but it’s nobody else’s problem. That’s how capitalism is supposed to work. But if Facebook obtained a national banking charter they would immediately become too big to fail and, no matter the greed or incompetence of their management, the government would be on the hook to bail them out, just as the Roman senate was forced to bail out the banks there two millennia ago.
(In case you are curious, in 33CE the emperor had to create 100 million sesterces of credit (a trifling couple of billion dollars in today’s money) through the banks to save them from collapse. Plus ça change, as they didn’t say in Ancient Rome.)
If you look at what is happening in other jurisdictions, what you see is a separation of payments and banking, so that the systemically less risky payment activities, which many people see as somewhat less than optimal in the world’s largest economy, can be reinvigorated, while the systemically more risky credit and investment banking businesses are left alone. In the European Union there is the regulatory category of the payment institution (PI). In Europe, Facebook is therefore a payment institution and not a bank. They don’t want to lend people money; they want to facilitate buying and selling, and for that they need access to core payment systems – and that’s all well and good. Similarly, in India, the regulator created the new category of payment bank (PB) so that mobile operators and others could start providing electronic payment services to what will soon be the world’s most populous nation.
The reasons for going down this path are entirely logical. If you leave innovation to the banking system then you end up in the situation of India as it was, or Nigeria as it is: a huge population, phones everywhere, talented and entrepreneurial people, huge and unfulfilled demand and… nothing happening. I’m sure you’re all utterly bored with me reminding you, but the key innovations in banking technology do not originate in banks. That’s the nature of the beast. The four-digit PIN code was invented by a Scottish engineer. The payment card was invented by a New York lawyer. M-PESA was invented by a telco. Bitcoin was invented by… well, for all I know, it may well have been the head of Citibank or programmer number 2216 in the North Korean army, but you get my point.
This is why I think that the OCC should leave the regulation of credit institutions where it is now and propose instead a new national charter for payment institutions amalgamating the European PI and Electronic Money Institution (ELMI). Allow these American Payment Institutions (let’s shorten this to APIs to avoid confusion) to issue electronic money but not to provide credit, allow membership of payment schemes (e.g., the UK’s Faster Payment Service, Visa and so on), ensure customer balances are held in Tier 1 capital and so on. This way, Apple and Verizon can apply for a national charter and start providing competitive payment services that will benefit businesses and consumers and the existing banks will just have to suck up the loss of payment revenues for the greater good.
The passporting of such institutions should be much less controversial than the passporting of credit institutions. Surely it will be to everyone’s benefit if the “fintech” passporting agreements give UK and EU payment institutions the right to operate nationally in the United States, in return giving recipients of my proposed American Payment Institution charter the right to operate in the UK and EU? This would allow innovation and competition in the fintech space without creating yet another financial time bomb that bankers will inevitably trigger.
The wonderful people at Payments NZ invited me around the globe to their conference “The Point” in Auckland this year and flattered me by asking me to:
• give a keynote talk on the topic of “Cardmageddon” (the day when cards are no longer more than half of non-cash payments) and
• be the prize in their raffle.
Naturally, I accepted both offers.
It was a terrific event (you can download the presentations from the event here) and I thoroughly enjoyed both roles. I made a big deal about APIs and XS2A in my presentation because I wanted the audience to understand just what a range of organisations consumers are likely to give access to their bank accounts to. In particular, I said that I thought that retailers would be quick to take advantage of the possibilities here, but I also mentioned messaging and social networks. This latter case is one that I have discussed a couple of times before. Here’s where I came back to it a couple of years ago:
I can remember discussing with some clients at the time what sort of services they might be able to offer to Facebook or other social networks that were empowered through an Electronic Money Institution (ELMI) licence and a Payment Institution (PI) licence.
In work for one of our clients at around the same time, I firmly predicted that Facebook would do just this, because the advantages of being able to instruct transfers without the regulatory overhead of being a bank were so great. These were hardly Nostradamus-style prognostications, merely rather obvious interpolations of technology and regulatory trends. And, frankly, the cost of obtaining and maintaining these licences is so trivial to a Facebook or a Google or an Apple that it was a no-brainer to assume that they would apply. Well, guess what…
The Sunday Business Post reports that Facebook has received a licence from the Central Bank to operate a financial payments service, two years after applying for authorisation. A subsidiary of the social media giant can now act as a payments provider and electronic money issuer, as well as provide credit transfers and remittance services across the EU, as a result of the regulatory approval.
Interesting phrasing. They can “provide credit transfers”. So the day when my teenage son’s dreams at last come true is not far off. I’ll be able to send you a tenner in WhatsApp just as easily as I can send you my location, and neither of us will need a bank account to do it. This means real, and really serious, competition coming into the payments space. This is great, because competition will drive new services for consumers. But it does make me wonder whether some more regulatory intervention is on the horizon.
To see why I think this, reflect on the Second Payment Services Directive (PSD2) — the home of the aforementioned XS2A — and why it is going to have a major impact on banks. This has been clear for some time and, indeed, I have been droning on about it for years. Let’s just recap the principle for a moment. The point is that because banks occupy a privileged place in society, they are required to provide some services that are for society’s good rather than for their own. XS2A is an example. In return for their privileges, banks have to deliver on certain responsibilities. So the regulator’s argument is that banks have to open up their APIs to third parties in order to allow those third parties to create new products and services that otherwise would not exist. The result of all of this is that society as a whole is better off.
Note that the banks themselves are not prevented from creating new products or services using these APIs either. I have written before about the “Amazonisation of banking” and, on a number of different engagements for financial services clients, my colleagues at Consult Hyperion have looked at the possibilities of opening up in this field. But back to The Point, where the very clear-thinking Victoria Richardson, General Manager Payments Direction at the Australian Payments and Clearing Association (APCA), set the meme of the event when she talked about banks having to shift their perspective from “API horror” to “API opportunity” – and I genuinely think that, in the UK at least, some banks have started to do this.
So now the dust has settled, the banks are opening up their APIs and are seeing new opportunities from accessing data. This is not because banks wanted to do this, but because they were given no choice. But if this argument applies to banks – that they are required to open up their APIs because they have a special responsibility to society – then why shouldn’t the principle also apply to Facebook? You may be aware that Facebook recently blocked an insurance company from having access to customers’ Facebook data, which the insurance company wanted to use in order to provide better quotes, special offers and so on.
Facebook will allow people to use their accounts to log in to the Admiral app, and for verification purposes, but will not allow the insurer to view users’ posts to work out discounts.
It seems to me that these issues are equivalent. On the one hand we are saying that banks cannot stop other regulated institutions from having access to customers’ accounts, provided that they obtain the customers’ permission first and use strong authentication and so on and so forth; so why, on the other hand, shouldn’t the same apply to Facebook? Why shouldn’t a regulated institution such as an insurance company obtain access to customers’ data, provided those customers give consent for them to do so? If I want to give GEICO access to my LinkedIn account on the grounds that I think it will get me a better deal on car insurance, why shouldn’t I? If an insurer decides to up my life insurance premium because they see me in a hot-dog-eating competition on Facebook, why shouldn’t they? After all, the more information insurers have, the more accurately they can price the risks. And if I don’t want to pay a higher premium, then I should stop smoking, bungee-jumping and eating Scotch eggs before breakfast. This is, by the way, hardly a new idea.
Startup Lenddo has launched a ‘social network’ credit card in Colombia that will see applicants approved or declined based on their reputations on Facebook and Twitter.
You can see the obvious benefits for financial services organisations if they can have access to social media accounts, almost as great as the benefits that social media platforms will obtain from having access to bank accounts. Come to that, why shouldn’t all regulated institutions have access to LinkedIn or Twitter or whatever else given the informed consent of customers? These platforms are crucial to the way that society functions nowadays so why should they not be required to be open platforms just as banks are? That would be a level playing field, wouldn’t it?
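The mechanism underpinning all of this – whether it is XS2A at a bank or an insurer reading a social media profile – is consent-gated data access: the third party sees only what the customer has explicitly authorised. A minimal sketch of the idea (ConsentStore, grant, fetch and the scope names are all hypothetical, not any real API):

```python
class ConsentStore:
    """Records which scopes a customer has granted to which third party."""

    def __init__(self):
        # (customer, third_party) -> set of granted scopes
        self._grants = {}

    def grant(self, customer: str, third_party: str, scopes: set[str]) -> None:
        self._grants.setdefault((customer, third_party), set()).update(scopes)

    def allows(self, customer: str, third_party: str, scope: str) -> bool:
        return scope in self._grants.get((customer, third_party), set())


def fetch(data: dict, store: ConsentStore, customer: str,
          third_party: str, scope: str):
    """Release one item of customer data only if consent covers it."""
    if not store.allows(customer, third_party, scope):
        raise PermissionError(
            f"{third_party} has no '{scope}' consent from {customer}")
    return data[scope]
```

In the real world the grant step is an authenticated consent journey (OAuth-style redirects and strong customer authentication) rather than a method call, but the access decision reduces to exactly this check.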
When people say “blockchain” they mean different things, and some of the things they mean are absolutely, categorically different. The implications of public, open blockchain designs and private blockchain designs vary drastically. I emphasise this distinction because it is key – the different designs assume and imply totally different things.
Both types are important, but for different reasons, different markets and different use cases. I think we have passed the time when “Bitcoin bad – blockchain good” seemed an eye-opener. What that kind of argument did was draw the attention of financial incumbents away from the Bitcoin-like permissionless space and towards the private, permissioned space, which makes sense for their business models. But I think they are not paying enough attention to the permissionless space. I suspect you are not either!
I bet you hadn’t anticipated such a steep rise in Ethereum (the price of Ethereum’s native currency has increased tenfold since the beginning of 2015 and Ethereum’s market cap has reached 1.5 billion dollars). You may even have missed the creation of the first human-free organisation. Even if you try to keep an eye on the public blockchain world, you only get reminded of its existence when the Bitcoin price surges to a two-year high (it now trades at over $700) and all the mainstream media cover it.
Both public and private shared ledgers (blockchains) are essentially shared book-keeping (and computing) systems: one class is open for everyone to use (public), the other is restricted to a certain group of members (private). And that is it. Open for everyone to use means lower entry barriers; it means identity-free and regulation-free shared book-keeping (and computing). What can be restricted by identity policies and financial regulations sits around this. You can, say, restrict a person from buying bitcoins by setting high KYC requirements for online exchanges (so that users cannot exchange dollars for bitcoins unless they are KYC’d). You can even cut his or her internet connection. You can issue a court order to close a business that accepts bitcoins as money. And so on and so forth.
A lot of this effort looks similar to trying to stop the Internet, but I suppose the regulators can dream!
Public technology service and native digital rights
“Proof-of-work is inefficient.” So what? Let it go! Think about the idea behind it and what it tries to achieve, regardless of this inefficiency. Regardless – because even if proof-of-work is not ideal, there are other permissionless technologies already developed and many more that are works in progress. Some of the best minds in the world are looking to provide the benefits of a permissionless shared ledger environment without the drawbacks of Bitcoin’s original proof-of-work. Just assume that they will solve that problem and move your thinking on.
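For readers who have never looked under the bonnet, the puzzle at the heart of proof-of-work is simple to sketch. The snippet below is a toy hashcash-style miner in Python, not Bitcoin’s actual scheme (Bitcoin uses double SHA-256 against a numeric difficulty target); the function names and the leading-zeros difficulty convention are my own illustrative choices.

```python
import hashlib

def mine(block_data: str, difficulty: int) -> int:
    """Search for a nonce such that SHA-256(block_data + nonce) starts
    with `difficulty` hex zeros. Finding it takes brute-force effort --
    that effort is what makes the ledger expensive to rewrite."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

def verify(block_data: str, nonce: int, difficulty: int) -> bool:
    # Verification is a single hash: cheap for everyone, unlike mining.
    digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)

nonce = mine("some transactions", 4)
assert verify("some transactions", nonce, 4)
```

The asymmetry is the point: anyone can check a block in microseconds, but producing one costs real work, which is exactly the “inefficiency” the passage above argues is the price of permissionlessness.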
What the blockchain delivers is a permissionless public book-keeping (and computing) technology service (with the immutable and transparent transaction history as an incredibly valuable side effect). When I say “public service”, I do not mean that a company or public organisation provides it; I mean that the technology itself and collaborative user effort provide it. In a sense – everyone and no one. The protocol acts as the service provider.
And this is crucial. In the traditional financial world, the basic value transfer layer that cryptocurrencies (i.e. everyone and no one) provide as a public technology service is provided by companies – service providers – and is not accessible to everyone. For example, PayPal provides a digital value transfer service.
Here I want to make the point that permissionless cryptocurrency systems hold the promise of a digital environment in which value transfer is intrinsic, embedded at the protocol level – and so, for users, the ability to make a transfer could become what I call a native digital right. Just to give you an analogy (it’s not a very accurate analogy but you’ll like it!) – take a guess at what you see in the picture below. Well, it’s a standard residential elevator in my mother country Georgia, where you need to pay every time you use it! Up and down. Every time up, every time down!
So maybe we all (all internet users) live in our own kind of Georgia, where every time we want to make a deal (an economic agreement) in the online world we have to go through a cumbersome process and pay an unreasonable fee (each time!) for it. We need to get our card out and fill in its details, the merchant’s acquirer needs to send a request, the card issuer needs to approve the transaction, and so on (and if it’s a peer transfer rather than a merchant purchase, there are even more obstacles). Our economic life online today is based on this very complex e-commerce domain, and to me it looks a lot like that Georgian elevator. Think about it: on top of the obvious, that elevator only accepts certain denominations of Georgian coins – very specific – and it breaks down every once in a while, so even if you want to use a paid elevator, sometimes you just can’t. So familiar.
How great would it be if we had a native digital right to make a value transfer online that no one could take from us (or grant us!), at the protocol level. How many applications could be built on top (at Consult Hyperion we call them SLAPPs – shared ledger applications)!
Persistence of permissionless
At the heart of public shared ledgers is value transfer. This is because, in order to assure the liveness and self-sufficiency of the system while providing unrestricted access to it, there needs to be an intrinsic economic incentive for those who maintain it. In other words, there should be a positive value to maintaining consensus. For this reason, most public shared ledgers can be described as currencies (decentralised cryptocurrencies), because they provide this incentive as a reward on the ledger in the ledger’s own “money”.
The canonical example of such a decentralised cryptocurrency is, of course, Bitcoin (remember, there are hundreds of them though!). As Bitcoin was intended to exist and evolve out of the reach of regulatory, corporate or any other centralised command, the technology includes mechanisms that ensure it persistently “survives” and proves its robustness and self-sufficiency. (Disclaimer: I’m not a Bitcoin maximalist)
This persistence is a differentiating characteristic of a public shared ledger system. The technology does not need people at tables making decisions in order to survive; it is “permissionless” (nevertheless, the way it evolves is, to an extent, influenced by “people at the tables” – just different people).
Potentially the principal implication of this persistence is the permissionless ascent of an alternative virtual economy on top of decentralised protocols. Cryptocurrencies are not just a new form of payment – they are a potential foundation for a new virtual economy, with new forms of economic interaction coming into place. When I say “new”, I don’t mean substitutive – I mean additional.
Virtual economic activity could become something fundamental to the Internet. Just as the ability to communicate transformed into the ability to communicate over the Internet, it could grow into the ability to make frictionless economic arrangements (to “economically” communicate) in the virtual world.
Thanks to shared ledger technology and the “smart contracts” innovation, not only is the emergence of this alternative economy permissionless (and so unstoppable), but if it happens at a certain scale, the very nature of economic relationships in this economy could be drastically different from what we are used to. A good depiction of such a transformation is content monetisation on the web through the use of “invisible” micropayments. Another good example is seamless online payments in video games:
Breakout Coin provides for seamless in-game payments anywhere in the world, while the blockchain technology behind it, Breakout Chain, uses smart contracts and sidechains to enforce these financial agreements between parties.
Shared ledger technology could even turn our things (as in “Internet of Things”) into active economic agents through smart contracts.
Public shared ledger technology may help to turn a big part of our (seemingly) non-economic life into economic activity.
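To illustrate what “things as economic agents” might mean in practice, here is a deliberately simplified sketch in Python: a toy ledger and a pay-per-unit metering “contract” that settles between two devices with no intermediary. Everything here (the `Ledger` class, the `metering_contract` function, the device names) is a hypothetical illustration, not real smart-contract code.

```python
from dataclasses import dataclass, field

@dataclass
class Ledger:
    """Toy shared ledger: balances plus an append-only transfer history."""
    balances: dict = field(default_factory=dict)
    history: list = field(default_factory=list)

    def transfer(self, sender: str, receiver: str, amount: int) -> None:
        if self.balances.get(sender, 0) < amount:
            raise ValueError("insufficient funds")
        self.balances[sender] -= amount
        self.balances[receiver] = self.balances.get(receiver, 0) + amount
        self.history.append((sender, receiver, amount))

def metering_contract(ledger: Ledger, buyer: str, seller: str,
                      price_per_unit: int, units_delivered: int) -> None:
    """Pay-per-unit logic executed directly against the ledger -- no
    company in the middle; the protocol acts as the service provider."""
    ledger.transfer(buyer, seller, price_per_unit * units_delivered)

# A car autonomously pays a rooftop solar array for five units of energy.
ledger = Ledger(balances={"charging-car": 100, "solar-roof": 0})
metering_contract(ledger, "charging-car", "solar-roof",
                  price_per_unit=3, units_delivered=5)
```

On a real public ledger the metering logic would run as on-chain code and the history would be replicated and immutable, but the shape of the interaction – device pays device, per unit, with settlement intrinsic to the protocol – is the same.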
Although there are many “ifs” in all this, we should not dismiss the possibility quite yet, and we should keep an eye on the permissionless space. You can observe or get involved, but it would be a mistake to put your head in the sand and deny that something incredible is happening.
I’m in Frankfurt for the annual PayComm MEETS Europe, my chance to catch up with practitioners from the continental card markets. It’s really hard to keep up with all of the change in the market, driven primarily by regulation rather than by new technology. The pace of regulatory change seems relentless. A few years ago, I took part in a panel discussion about payments, regulation and innovation in Brussels. I remember it quite well because of my excellent fellow panellists and because of the nature of the discussions that followed. The panellists and topics, not that they are terribly relevant to the rest of this post, were:
Dave Birch, Consult Hyperion: Conditions of consumers acceptance of e- and m-payments
Roy Vella, Mobile services advisor: Potential of mobile technology in the area of payments
Alice Enders, Enders Analysis: Monetising digital content: electronic and mobile payments as means to reach the consumer
Katarzyna Lasota Heller, EDiMA (European Digital Media Association): Online retailers view on consumer expectations towards e- and m-payments
Stacy Feuer, US Federal Trade Commission. US regulatory perspective
In those discussions, I put forward a suggestion, taken from Norbert Bielfeld’s superb December 2011 working paper “SEPA or payments innovation: a policy and business dilemma” [PDF], for a five-year “legislative holiday” around payments to let the effects of the Payment Services Directive (PSD) and so forth settle down, so that nothing would change until around now. That never happened, of course, and the Commission pressed forward with a regulatory agenda, one significant part of which was the reform of the card payment market in Europe. Now, setting to one side that I have always favoured a competition agenda and regard interchange caps as inappropriate and counterproductive price-fixing, these reforms are beginning to impact the market.
How? Well, I remember that when he was speaking at this event last year, Peter Jones from PSE gave an excellent presentation on the impact of the new European card regulations on the different players in the payments game. You won’t be surprised to hear that I agree with his fundamental conclusion that the regulations represent a victory for merchants over banks and demonstrate the importance of having a concerted and coordinated lobby. He went on to say, and I hope my scribbled notes on this are accurate, that the Commission doesn’t fully understand the impact of the changes it has made. (I might be tempted to add that I’m not sure any of us really do, because of the chaotic nature of the changes.) These changes will inevitably have some unexpected consequences, and part of the fun in the industry at the moment is trying to guess what those consequences might be. I had not, for example, realised at the time that the reform of licensing on a pan-European basis means that Amex and Diners will have to restructure their franchise models.
I won’t take you through a detailed analysis of the changes that occurred last year and the final set of changes coming into place this month, except to say that they have started to change the structure of the European cards industry. This is not inherently bad for everyone, of course. Chaos is a ladder, as they say, and Peter’s presentation alluded to opportunities that might arise from the enormous changes that will take place. Peter, for example, said that he could see two pan-European “common carrier” real-time networks evolving from the impending separation of brand and processing for the international card schemes, and suggested that with good strategies the debit portion could evolve into a pan-European immediate settlement system.
However, this year I want to focus on two of his conclusions that I think were both correct and of tremendous importance. I think that they might not have been recognised as such by some stakeholders, who were focusing on the reorganisation of the card business rather than the larger context. These conclusions are entirely congruent with the strategic perspectives that we shared with our clients and, as I head down to PayComm 2016, I think it’s worth opening them up for discussion again.
As we have long advised our clients, a working push payment infrastructure (i.e., smart devices and an immediate settlement network) means that a lot of day-to-day payments will shift to that infrastructure.
The first is that the heavy regulation of interchange-based card products will mean energy, investment and imagination being directed into non-card credit products, a driver that I have referred to before as the “push to push”. It is hardly a surprising prediction that banks and others will want to develop businesses that offer higher margins. As the margins on the card business are regulated down, the ability to offer rewards, cashback, loyalty and other services is necessarily restricted. If mobile-centric account-to-account payment services can deliver better functionality and more attractive propositions to customers, then merchants will have to accept them, even if they cost more than card products.
The second is that the Payment Services Directive provisions on open access to payment accounts, which we have discussed several times before on the blog, will mean that (unless they are totally insane) banks will compete to offer what our Australian cousins call “overlay services” in order to compete with non-bank overlay services. Such value-added service providers will use the account information service provider (AISP) APIs and the payment initiation service provider (PISP) APIs to deliver services to their customers. Now this gives banks a number of strategic problems to wrestle with. Banks are naturally concerned about third-party access to accounts relegating them to the role of commoditised, utility pipes for money, because “over the top” players such as Facebook, Apple, Google and the other usual suspects will form a layer between the banks and their customers. But of course some banks might move aggressively to form the layer between other banks and their customers by providing better API services to those over-the-top players, or they might decide to specialise in particular areas of the business and make themselves more attractive to customers in those niches.
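To make the PISP role a little more concrete, here is a minimal sketch of the kind of payload a payment initiation service might assemble when instructing a credit transfer on a customer’s behalf. The field names and the `build_payment_initiation` helper are illustrative assumptions, loosely modelled on common open-banking request shapes rather than any particular bank’s API specification.

```python
import uuid
from datetime import datetime, timezone

def build_payment_initiation(debtor_iban: str, creditor_iban: str,
                             amount: str, currency: str = "EUR") -> dict:
    """Assemble an illustrative payment-initiation request. A real PISP
    would POST something like this to a bank's API after obtaining the
    customer's consent and strong authentication."""
    return {
        "paymentId": str(uuid.uuid4()),                       # idempotency key
        "createdAt": datetime.now(timezone.utc).isoformat(),  # request timestamp
        "debtorAccount": {"iban": debtor_iban},               # payer
        "creditorAccount": {"iban": creditor_iban},           # payee (merchant)
        "instructedAmount": {"amount": amount, "currency": currency},
    }

request = build_payment_initiation("DE89370400440532013000",
                                   "GB33BUKB20201555555555", "42.00")
```

The point of the sketch is the thinness of the layer: the overlay service holds the customer relationship and the bank executes the push payment, which is precisely the disintermediation risk (and opportunity) described above.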
During the excellent PayComm workshop on instant payments, led by Andy Makkinje from Equens, a couple of people touched on the combined impact of these two trends (i.e., the push to push together with API access for PSPs). A working instant payment infrastructure, opened up because of API access to banks, is very likely to become the dominant retail payment system, certainly for e-commerce (which is where all of the card fraud is in Europe). It will be the simplest, cheapest and most pervasive solution to the payments problem, and it’s nearly here. If you look around Europe, the trend is unstoppable. The reform of the card market may well be the end of the card market as we know it.