TLS, DSS, and NCS(C)

As I was scanning my list of security-related posts and articles recently, my eye was drawn by the first sentence of an article on (Google security engineer) Adam Langley’s blog, indicating that Her Majesty’s Government does not understand TLS 1.3. Of course, my first thought was that since HMG doesn’t seem to understand the principles of encryption itself, it’s hardly surprising that they don’t understand TLS. However, these aren’t the thoughts of an understandably non-technical politician but instead those of Ian Levy, the Technical Director of the National Cyber Security Centre at GCHQ – someone you’d hope does understand encryption and TLS. Now normally, I would read this type of article without feeling the need to comment. So what’s different?

Well, following the bulk of the article, which discusses how proxies are currently used by enterprises to examine and control the data leaving their organisations (in effect masquerading as the intended server and intercepting the TLS connection), comes the following throwaway line:

For example, it looks like TLS 1.3 services are probably incompatible with the payment industry standard PCI-DSS…

Could this be true? Why would it be true? The author provided no rationale for this claim. So, again in the spirit of Adam Langley, “it is necessary to write something, if only to have a pointer ready for when people start citing it as evidence.”

Adam’s own response – again following a discussion about how the problem with proxies is their implementation, not with TLS – is that

…the PCI-DSS requirements are general enough to adapt to new versions of TLS and, if TLS 1.2 is sufficient, then TLS 1.3 is better. (Even those misunderstanding aspects of TLS 1.3 are saying it’s stronger than 1.2.)

which would seem to make sense. Not only that, but

[TLS 1.3] is a major improvement in TLS and lets us eliminate session-ticket encryption keys as a mass-decryption threat, which both PCI-DSS- and HIPAA-compliance experts should take great interest in.

In turn, Ian follows up to clarify that it’s not TLS itself that could present problems, but the audit process employed by organisations:

The reference to regulatory standards wasn’t intended to call into question the ability of TLS 1.3 to meet the data protection standards. It was all about the potential to affect (badly) audit regimes that regulated industries have to perform. Right or wrong, many of them rely on TLS proxies as part of this, and this will get harder for them.

So that’s alright. TLS 1.3 is not incompatible with PCI DSS. So what is the problem? Well, helpfully, Simon Gibson outlined this in 2016:

…regulated industries like healthcare and financial services, which have to comply with HIPAA or PCI-DSS, may face certain challenges when moving to TLS 1.3 if they have controls that say, “None of this data will have X, Y, or Z in it” or “This data will never leave this confine and we can prove it by inspecting it.” In order to prove compliance with those controls, they have to look inside the SSL traffic. However, if their infrastructure can’t see traffic or is not set up to be inline with everything that is out of band in their PCI-DSS, they can’t show that their controls are working. And if they’re out of compliance, they might also be out of business.

So the problem is not that TLS 1.3 is incompatible with PCI DSS. It’s that some organisations may have defined controls with which they will no longer be able to show compliance. They may still be compliant with PCI DSS – especially if the only change is to upgrade to TLS 1.3 and keep all else equal – but cannot demonstrate this. So what’s to be done?

Well, you could redefine the controls if necessary. If your control requires you to potentially degrade, if not break, the very security that you’re using to achieve compliance in the first place, is it really suitable? In the case of the two example controls above, however, neither of them should actually require inspection of SSL traffic.

For the organisation to be compliant in the first place, access to the data must only be possible for authorised personnel on authorised (i.e. controlled) systems. If you control the system, you can stop that data leaving the organisation more effectively by preventing its transfer to arbitrary machines in the external world. After all, you have presumably restricted access to any USB and other physical storage connectors, and you hopefully also have controls around visual and other recording devices in the secured area. It is difficult in today’s electronic world to think of a situation where a human (other than the cardholder) absolutely must have access to a full card number without (PCI DSS-compliant) alternatives being available.
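By way of illustration, PCI DSS permits a truncated PAN – at most the first six and last four digits – to be displayed, which is usually all a human ever needs to see. A minimal masking sketch (purely illustrative, not a compliance tool):

```python
import re

def mask_pan(pan: str) -> str:
    """Mask a card number, keeping only the first six and last four
    digits (the maximum PCI DSS permits for display)."""
    digits = re.sub(r"\D", "", pan)          # strip spaces and dashes
    if not 12 <= len(digits) <= 19:
        raise ValueError("not a plausible card number")
    return digits[:6] + "*" * (len(digits) - 10) + digits[-4:]

print(mask_pan("4111 1111 1111 1111"))       # 411111******1111
```

If staff only ever see masked PANs, a control that says “nobody outside the cardholder environment sees a full card number” can be demonstrated at the application layer, without intercepting TLS traffic at all.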

So TLS 1.3 is a challenge to organisations that are already using faulty proxies and/or inadequate controls. It certainly doesn’t make you instantly non-compliant with PCI DSS.

Given this, we, as humble international payments security consultants, are left puzzled by the NCSC’s line about TLS 1.3 and PCI DSS compatibility. At worst, organisations need to redefine their audit processes to use the enhanced security of TLS 1.3, rather than degrade their security to meet out-of-date compliance procedures. But, of course, this is the type of problem we deal with all the time, as we’re frequently called in to help payment institutions address security risks and compliance issues. TLS 1.3 is just another tool in a complex security landscape, but it’s a valuable one that we’re adding to our toolkit in order to help our clients proactively manage their cyber defences.

Who would have ex-Spectre-d this?

At Consult Hyperion we’re always interested in the latest news in cyber security and, in case you haven’t heard, 2018 has started with the news that most processors found inside current computers, tablets, phones and cloud servers are vulnerable to a new class of attack. These attacks have been named Meltdown and Spectre, and are caused by common optimisations built into modern processors. Processors designed by Intel, AMD and ARM are all affected to varying degrees and, as it is a hardware issue (possibly dating back to 1995 if some reports are correct), it could affect any operating system. It’s likely the machine you’re reading this on is affected – whether it’s running Windows, macOS, iOS or Android, or is in “the cloud”!

At a basic level, these vulnerabilities break down the fundamental security barriers between an application and the operating system (OS). This means that a malicious application running on your processor may be able to read your, or your OS’s, secrets – which may include passwords, keys or possibly payment data – present in processor caches or memory.

I’m not going to discuss how the vulnerabilities achieve what they do (there are plenty of sites which attempt to do this); rather, I’d like to consider their impact on people, such as our clients, who may be handling sensitive data on mobile devices – e.g. payments, banking information. If you do want to understand the low-level details of the vulnerabilities and how they work, I suggest reading the original papers on both Spectre and Meltdown.

So, what can be done about it? The good news is that whilst the current processors cannot be fixed, several operating system patches have already been released to try and mitigate these problems.

However, my concern is that, as this is a new class of attack, Spectre and Meltdown may be the tip of a new iceberg. Even over the last week, the issue has changed from only affecting Intel processors to also including AMD and ARM to some extent. I suspect that over the coming weeks and months, as more security researchers (and probably less savoury characters as well) start looking into this class of attack, additional vulnerabilities may be discovered. Whether they would already be mitigated by the patches coming out now, we’ll have to see.

It should also be understood that for the vulnerability to be exploited, there are a few conditions which must be met:

1. You must have a vulnerable processor (highly likely)
2. You must have a vulnerable OS (i.e. unpatched)
3. An attacker must be able to execute their malicious code on your device

For point 1, most modern devices will be vulnerable to some extent, so we can probably assume the condition is always met.

Point 2 highlights two perennial problems: a) getting people to apply software updates to their devices and b) getting access to appropriate software updates.

For many devices, software updates are frequent, reliable and easy to install (often automatic) and there are very few legitimate reasons for consumers to not just take the latest updates whenever they are made available. We would always recommend that consumers apply security updates as soon as possible.
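For the technically curious on Linux machines, kernels patched in response to these issues began exposing their view of the mitigation status through sysfs, so you can check whether your updates have taken effect. A small, Linux-specific sketch (on other platforms, or on unpatched kernels, it simply reports nothing):

```python
from pathlib import Path

def mitigation_status() -> dict:
    """Report the kernel's view of CPU vulnerability mitigations.

    Patched Linux kernels expose one file per known issue under
    /sys/devices/system/cpu/vulnerabilities; elsewhere this
    directory does not exist and we return an empty dict.
    """
    base = Path("/sys/devices/system/cpu/vulnerabilities")
    if not base.is_dir():
        return {}
    return {p.name: p.read_text().strip() for p in sorted(base.iterdir())}

for name, status in mitigation_status().items():
    print(f"{name}: {status}")
```

On a patched system you would expect to see entries such as `meltdown` and the Spectre variants, each annotated with the mitigation in use (or “Vulnerable”).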

A bigger problem for some platforms is the availability of updates in the first place. Within the mobile space, Microsoft, Apple and Google all regularly release software updates; however, many Android OEMs can be slow to release updates for their devices (if they release them at all). Android devices are notorious for not running the latest version of Android – for example, Google’s latest information (obtained 5th January 2018, covering devices that accessed the Google Play Store in the prior 7 days) shows that, for the roughly 81% of devices running the four most recent major versions:

• 0.5% of devices are running the latest version of Android – Oreo (v8.0, released August 2017)
• 25% are running Nougat (v7.x, released August 2016)
• 30% running Marshmallow (v6.0, released October 2015)
• 26% running Lollipop (v5.x, released November 2014).

It should be noted that Google’s Nexus and Pixel devices come with a commitment to receive updates for a set period of time, and Google is very keen to encourage OEMs to improve their support for prompt and frequent updates – for example, the Android One programme highlights that these devices get regular software updates.

If you compare this to iOS, it’s estimated that, as of December 2017, over 75% of iOS devices were already running iOS 11.

The final requirement is Point 3 – getting malicious code onto your device. This could be via a malicious application installed on the device; however, the malicious code could also come via a website, as it’s been shown that even JavaScript sandboxed in a browser can exploit these vulnerabilities. As it’s not unheard of for legitimate websites to unwittingly serve up third-party adverts containing malicious code, a user doesn’t have to be accessing malicious websites for the problem to occur. Several browsers are receiving patches to try and prevent Meltdown and Spectre working via this route. Regarding malicious applications, we’d always recommend that applications are only ever installed from legitimate sources; however, malicious apps still regularly appear in legitimate app stores, so this is not fool-proof.

Thinking specifically about mobile banking and HCE payment applications – which is what interests many of our customers – these applications should already include protections to prevent, or at least detect, malicious attacks. These protections typically include numerous measures such as root/jailbreak detection, code obfuscation, data minimisation, white-box cryptography and so on.
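In practice these protections are implemented natively inside the mobile app, but as a rough illustration of the principle, a (very naive) root-detection check simply looks for tell-tale artefacts on the device – the paths below are illustrative, not exhaustive:

```python
import os

# Locations where a `su` binary or root-management app commonly
# appears on rooted Android devices (illustrative, not exhaustive).
SUSPICIOUS_PATHS = [
    "/system/bin/su",
    "/system/xbin/su",
    "/sbin/su",
    "/system/app/Superuser.apk",
]

def looks_rooted() -> bool:
    """Very naive root check: real products combine many signals
    (build tags, writable system partitions, hooking frameworks,
    emulator detection) and respond in layered, tamper-resistant ways."""
    return any(os.path.exists(p) for p in SUSPICIOUS_PATHS)

print("rooted?", looks_rooted())
```

The point is not the specific check – attackers can defeat any single signal – but that layered detection raises the cost of running a payment application in a compromised environment.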

If anything, these latest vulnerabilities are a useful reminder that security is not a single task within a project plan, ticked off when complete before moving onto the next sprint or task. Rather, it is an ongoing concern for the lifetime of the system – something that Consult Hyperion quietly helps its customers with. A year ago, few would have considered this class of attack to be possible, let alone something which needed to be actively mitigated.

The Challenge of Delivering mPOS Services through Off-The-Shelf Mobile Devices


The last few months have been exciting if, like Consult Hyperion, you are attracted by the mobile POS (mPOS) sector. We’ve seen significant announcements from Mastercard and Worldpay and heard interesting rumours about the current work within the PCI Security Council, suggesting that the use of off-the-shelf mobile devices as card acceptance devices is likely to happen in the near future.

Targeted at small-to-medium-sized and mobile merchants who do most of their business in cash or cheques but have the occasional customer who prefers to pay by card, the mPOS dongle (card-reading device) has been seen by these merchants as their first venture into the “expensive” world of credit and debit cards. However, the cost of the dongle and the power required to run it are often cited as barriers to the adoption of mPOS services.

Magnetic stripe dongles are effectively given away: their cost is refunded through reductions in the fees levied on the initial transactions, and their power is drawn from the phone when inserted in the audio port. Chip & PIN dongles are more complex and so more expensive, requiring their own power supply or battery. The business case to subsidise the additional cost of these devices through reductions in transaction fees is more challenging.

The higher-cost and more power-hungry elements of a Chip & PIN dongle are the display and keypad. If we can replace these components with the capabilities of an off-the-shelf smartphone, can we bring the cost and power requirements of the Chip & PIN dongle down closer to those of the magnetic stripe version? If we can deliver the service entirely through a mobile application, can we simplify our distribution channels? These are the sorts of questions that get the team at Consult Hyperion excited, as they present big information security challenges, which we like.

Generic, off-the-shelf mobile devices have none of the physical and electronic countermeasures designed into a payment terminal to secure the personal and account information in the payment transaction. Nor do they have the specific assets required by the payment scheme such as the secure PIN entry capabilities. Equally, the Acquirer doesn’t have any control over the other applications loaded onto the phone or tablet, which could include malware designed to impact the performance of their mPOS service or monitor any communications to or from it.

So, the challenge is: can we develop applications for generic off-the-shelf mobile devices that deliver, as far as practical, similar levels of security to the hardware in the payment terminal, whilst withstanding repeated attacks from hackers interested in capturing assets that they could use to attack the payment schemes’ international networks?

There are many companies delivering solutions which could protect the mPOS application against some of these threats and/or give the Acquirer a level of assurance about the identity of the individuals involved in the transaction. However, no one solution is likely to deliver against all of the PCI’s security standards, should they be published, and not every solution works on every mobile device.

So, the team designing your mPOS solution for off-the-shelf mobile devices must understand in detail the threats to which the application will be exposed, the most cost-effective countermeasures against those threats, how they work together and how they need to evolve in response to new fraudulent attacks. Experience would suggest that they will need to understand in detail the operation of the EMV payment application, transaction security and the smartphone operating system, whilst having considerable experience of implementing the best-of-breed information security tools.

People with such experience are few and far between. Many are my friends and colleagues, which makes my job interesting, exciting and rewarding. It looks like a busy end to the year!

Payments and passports

The new administrations in the UK and USA are apparently planning to work together to create a new transatlantic America First / Buy British trade alliance. This will, it seems, include financial services. 

A deal to reduce barriers between American and British banks through a new “passporting” system was being considered by Mr Trump’s team

From Donald Trump plans new deal for Britain as Theresa May becomes first foreign leader to meet new president since inauguration

Now what this passporting might mean is anyone’s guess, since this is just a newspaper story based on gossip, but I think it might be a little more complex to arrange than it seems at first because of the nature of banking regulation in the United States. If a British bank were to get a US banking passport, this would presumably be equivalent to the implicit granting of a national bank charter, and state regulators do not seem enthusiastic about the granting of more national bank charters. We know this because, at the end of 2016, the US Office of the Comptroller of the Currency (OCC) said that it was going to provide a new national bank charter for fintech companies.

“The OCC will move forward with chartering financial technology companies that offer bank products and services and meet our high standards and chartering requirements,” said Comptroller of the Currency Thomas Curry.

From OCC Grants New Charter to Fintech Firms — with Strings Attached | American Banker

The reason for wanting to do this is obvious: right now, if I want to create a competitor to Venmo or Zelle, I either have to be regulated as a payment processor and have regulated banks involved, or go and get regulated by 50 different state regulators under 50 different regulatory regimes, most of which remain rooted in a previous, pre-internet age. This seems anachronistic. Surely an American company should be able to get a licence and get going. Well, the OCC’s proposal is attracting a lot of negative comment.

A turf war is brewing between US state and federal regulators over oversight of the financial technology sector after New York’s top watchdog sent a stinging letter to the Office of the Comptroller of the Currency (OCC), telling it to back off plans for a national bank charter for fintech firms.

From New York regulator blasts OCC over bank charter plan for fintech fi…

Now, I saw a few comments about this and other responses from state regulators that cast them in the role of Luddites standing in the way of progress, but I have to say I agree with them. I mean, I am not a lawyer or anything, I don’t really understand US banking regulation and I couldn’t make any sensible comments on the proposals myself, but I think that the US regulatory environment is, broadly speaking, unfit for purpose and might benefit from at least a cursory examination of the direction of regulation in one or two other jurisdictions – Europe and India, for example.


The fundamental problem with the OCC proposals to my mind is that they are about a national charter for banking as a whole. They do not distinguish between the payments business and other parts of the banking business. Hence the charter means extending systemically risky credit creation activities in new directions. I don’t see any immediate problem that this solves. And the state regulators may well be right that it potentially makes the problems associated with banking regulation much worse.

Connected to this is the worry that a national charter would encourage large ‘too big to fail’ institutions – a small number of tech-savvy firms that dominate different types of financial services simply because they are able to get a national charter.

From New York regulator blasts OCC over bank charter plan for fintech fi…

Whatever you think about Facebook, they are not too big to fail. If Facebook screw up and lose a ton of money and go out of business, then that is tough luck on their employees and their shareholders, but it’s nobody else’s problem. That’s how capitalism is supposed to work. But if Facebook obtained a national banking charter they would immediately become too big to fail and, no matter the greed or incompetence of their management, the government would be on the hook to bail them out, just as the Roman senate was forced to bail out the banks there two millennia ago.


(In case you are curious, in 33 CE the emperor had to create 100 million sesterces of credit (a trifling couple of billion dollars in today’s money) through the banks to save them from collapse. Plus ça change, as they didn’t say in Ancient Rome.)

If you look at what is happening in other jurisdictions, what you see is a separation of payments and banking, so that the systemically less risky payment activities (which many people see as somewhat less than optimal in the world’s largest economy) can be reinvigorated while the systemically more risky credit and investment banking businesses are left alone. In the European Union there is the regulatory category of the payment institution (PI). In Europe, Facebook is therefore a payment institution and not a bank. They don’t want to lend people money; they want to facilitate buying and selling, and for that they need access to core payment systems – and that’s all well and good. Similarly, in India, the regulator created the new category of payment bank (PB) so that mobile operators and others could start providing electronic payment services to what will soon be the world’s most populous nation.

The reasons for going down this path are entirely logical. If you leave innovation to the banking system then you end up in the situation of India as it was, or Nigeria as it is: a huge population, phones everywhere, talented and entrepreneurial people, huge and unfulfilled demand and… nothing happening. I’m sure you’re all utterly bored with me reminding you, but the key innovations in banking technology do not originate in banks. That’s the nature of the beast. The four-digit PIN code was invented by a Scottish engineer. The payment card was invented by a New York lawyer. M-PESA was invented by a telco. Bitcoin was invented by… well, for all I know, it may well have been the head of Citibank or programmer number 2216 in the North Korean army, but you get my point.

This is why I think that the OCC should leave the regulation of credit institutions where it is now and propose instead a new national charter for payment institutions, amalgamating the European Payment Institution (PI) and Electronic Money Institution (ELMI) categories. Allow these American Payment Institutions (let’s shorten this to APIs to avoid confusion) to issue electronic money but not to provide credit, allow them membership of payment schemes (e.g. the UK’s Faster Payments Service, Visa and so on), ensure customer balances are held in Tier 1 capital and so on. This way, Apple and Verizon can apply for a national charter and start providing competitive payment services that will benefit businesses and consumers, and the existing banks will just have to suck up the loss of payment revenues for the greater good.
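To make the core constraint concrete – an API may issue e-money only against safeguarded funds and may never extend credit – here is a toy sketch (all names hypothetical, not any real regulatory scheme):

```python
class PaymentInstitution:
    """Toy e-money ledger: balances are fully backed by safeguarded
    funds, and no account may go overdrawn (i.e. no credit creation)."""

    def __init__(self):
        self.safeguarded = 0   # customer funds held in safeguarded assets
        self.balances = {}     # e-money liabilities to customers

    def issue(self, customer: str, amount: int) -> None:
        """Issue e-money only when matching real funds are received."""
        assert amount > 0
        self.safeguarded += amount
        self.balances[customer] = self.balances.get(customer, 0) + amount

    def transfer(self, payer: str, payee: str, amount: int) -> None:
        if self.balances.get(payer, 0) < amount:
            raise ValueError("insufficient funds: no credit allowed")
        self.balances[payer] -= amount
        self.balances[payee] = self.balances.get(payee, 0) + amount
        # Invariant: liabilities never exceed safeguarded funds.
        assert sum(self.balances.values()) == self.safeguarded

pi = PaymentInstitution()
pi.issue("alice", 100)
pi.transfer("alice", "bob", 40)
print(pi.balances)   # {'alice': 60, 'bob': 40}
```

Because transfers can never exceed issued balances, the institution moves money around but never creates it – which is exactly why such entities are systemically far less risky than credit institutions.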

The passporting of such institutions should be much less controversial than the passporting of credit institutions. Surely it will be to everyone’s benefit if the “fintech” passporting agreements give UK and EU payment institutions the right to operate nationally in the United States, in return giving recipients of my proposed American Payment Institution charter the right to operate in the UK and EU? This would allow innovation and competition in the fintech space without creating yet another financial time bomb that bankers will inevitably trigger.


Facebook, APIs and cardmageddon

The wonderful people at Payments NZ invited me around the globe to their conference “The Point” in Auckland this year and flattered me by asking me to

    1. give a keynote talk on the topic of “Cardmageddon” (the day when cards are no longer more than half of non-cash payments) and

    2. be the prize in their raffle.

Naturally, I accepted both offers.

Getting The Point (yuk yuk)


It was a terrific event (you can download the presentations from the event here) and I thoroughly enjoyed both roles. I made a big deal about APIs and XS2A in my presentation because I wanted the audience to understand just how wide a range of organisations consumers are likely to give access to their bank accounts. In particular, I said that I thought that retailers would be quick to take advantage of the possibilities here, but I also mentioned messaging and social networks. The latter case is one that I have discussed a couple of times before. Here’s where I came back to it a couple of years ago:

I can remember discussing with some clients at the time what sort of services they might be able to offer to Facebook or other social networks that were empowered through an Electronic Money Institution (ELMI) licence and a Payment Institution (PI) licence.

From Facebook money is overdue | Consult Hyperion

In work for one of our clients at around the same time, I firmly predicted that Facebook would do just this, because the advantages of being able to instruct transfers without the regulatory overhead of being a bank were so great. These were hardly Nostradamus-style prognostications, merely rather obvious interpolations of technology and regulatory trends. And, frankly, the cost of obtaining and maintaining these licences is so trivial to a Facebook or a Google or an Apple that it was a no-brainer to assume that they would apply. Well, guess what…

The Sunday Business Post reports that Facebook has received a licence from the Central Bank to operate a financial payments service, two years after applying for authorisation. A subsidiary of the social media giant can now act as a payments provider and electronic money issuer, as well as provide credit transfers and remittance services across the EU, as a result of the regulatory approval.

From Seen and Heard: Facebook secures payments services licence

Interesting phrasing. They can “provide credit transfers”. So the day when my teenage son’s dreams will at last come true is not far off. I’ll be able to send you a tenner in WhatsApp just as easily as I can send you my location, and neither of us will need a bank account to do it. This means real, and really serious, competition coming into the payments space. This is great, because competition will drive new services for consumers. But it does make me wonder whether some more regulatory intervention is on the horizon.

To see why I think this, reflect on the Second Payment Services Directive (PSD2) — the home of the aforementioned XS2A — and why it is going to have a major impact on banks. This has been clear for some time and, indeed, I have been droning on about it for years. Let’s just recap the principle for a moment. The point is that because banks occupy a privileged place in society, they are required to provide some services that are for society’s good rather than for their own. XS2A is an example. In return for their privileges, banks have to deliver on certain responsibilities. So the regulator’s argument is that banks have to open up their APIs to third parties in order to allow those third parties to create new products and services that otherwise would not exist. The result of all of this is that society as a whole is better off.
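The XS2A principle – third-party access only with the customer’s explicit consent – can be sketched in miniature as follows (the classes and methods are hypothetical, not any real bank’s API):

```python
import secrets

class Bank:
    """Toy account-information API: access requires a consent token
    that the customer has explicitly granted to a named third party."""

    def __init__(self):
        self.accounts = {}   # customer -> balance
        self.consents = {}   # token -> (customer, third_party)

    def grant_consent(self, customer: str, third_party: str) -> str:
        """The customer authorises a specific third party; the bank
        issues a token scoped to that relationship."""
        token = secrets.token_hex(8)
        self.consents[token] = (customer, third_party)
        return token

    def get_balance(self, token: str, third_party: str) -> int:
        customer, holder = self.consents.get(token, (None, None))
        if holder != third_party:
            raise PermissionError("no valid consent for this third party")
        return self.accounts[customer]

bank = Bank()
bank.accounts["alice"] = 250
token = bank.grant_consent("alice", "BudgetApp")
print(bank.get_balance(token, "BudgetApp"))   # 250
```

Real XS2A implementations add strong customer authentication, consent expiry and regulated-TPP registration on top, but the shape is the same: the bank cannot refuse access, yet nothing moves without the customer’s say-so.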

Note that the banks themselves are not prevented from creating new products or services using these APIs either. I’ve written before about the “Amazonisation of banking” and, on a number of different engagements for financial services clients, my colleagues at Consult Hyperion have looked at the possibilities of opening up in this field. But back to The Point, where the very clear-thinking Victoria Richardson, General Manager Payments Direction at the Australian Payments and Clearing Association (APCA), set the meme of the event when she talked about banks having to shift their perspective from “API horror” to “API opportunity” – and I genuinely think that, in the UK at least, some banks have started to do this.

Victoria from APCA

So now the dust has settled, the banks are opening up their APIs and are seeing new opportunities from accessing data. This is not because banks wanted to do this, but because they were given no choice. But if this argument applies to banks – that they are required to open up their APIs because they have a special responsibility to society – then why shouldn’t the principle also apply to Facebook? You may be aware that Facebook recently blocked an insurance company from having access to customers’ Facebook data, which the insurance company wanted in order to provide better quotes, special offers and so on.

Facebook will allow people to use their accounts to log in to the Admiral app, and for verification purposes, but will not allow the insurer to view users’ posts to work out discounts.

From Facebook blocks Admiral’s car insurance discount plan – BBC News

It seems to me that these issues are equivalent. On the one hand, we are saying that banks cannot stop other regulated institutions from having access to customers’ accounts, provided that they obtain the customers’ permission first and use strong authentication and so on. So why, on the other hand, shouldn’t the same apply to Facebook? Why shouldn’t a regulated institution such as an insurance company obtain access to customers’ data, provided those customers give consent for them to do so? If I want to give GEICO access to my LinkedIn account on the grounds that I think it will get me a better deal on car insurance, why shouldn’t I? If an insurer decides to up my life insurance premium because they see me in a hot-dog-eating competition on Facebook, why shouldn’t they? After all, the more information insurers have, the more accurately they can price the risks. And if I don’t want to pay a higher premium, then I should stop smoking, bungee-jumping and eating Scotch eggs before breakfast. This is, by the way, hardly a new idea.

Startup Lenddo has launched a ‘social network’ credit card in Colombia that will see applicants approved or declined based on their reputations on Facebook and Twitter.

From Finextra: Lenddo delves into credit card applicants’ social media data

You can see the obvious benefits for financial services organisations if they can have access to social media accounts – almost as great as the benefits that social media platforms will obtain from having access to bank accounts. Come to that, why shouldn’t all regulated institutions have access to LinkedIn or Twitter or whatever else, given the informed consent of customers? These platforms are crucial to the way that society functions nowadays, so why should they not be required to be open platforms just as banks are? That would be a level playing field, wouldn’t it?

Blockchain as a public technology service

When people say “blockchain” they mean different things, and some of those things are absolutely, categorically different. The implications of public, open blockchain designs and private blockchain designs differ drastically. I emphasise this distinction because it is key – the different designs assume and imply totally different things.

Both types are important, but for different reasons, different markets and different use cases. I think we have passed the time when “Bitcoin bad – Blockchain good” seemed an eye-opener. What that kind of argument did was draw the attention of financial incumbents from the Bitcoin-like permissionless space to the private, permissioned space, which makes sense for their business models. But I think they are not paying enough attention to the permissionless space – and I suspect you are not either!

A brave slide from the Consensus conference in New York this year (unfortunately, I can’t remember the name of the speaker – let me know and I’ll update this), where I chaired the panel on post-trade and my colleague Dave Birch chaired the panels on identity. It illustrates that “Bitcoin bad, Blockchain good” is not set in stone.

I bet you hadn’t anticipated such a steep rise in Ethereum (the price of Ethereum’s native currency has soared tenfold since the beginning of 2015, and Ethereum’s market cap has reached 1.5 billion dollars). You may even have missed the creation of the first human-free organisation. Even if you try to keep an eye on the public blockchain world, you only get reminded of its existence when the Bitcoin price surges to a two-year high (it now trades at over $700) and all the mainstream media cover it.

Both public and private shared ledgers (blockchains) are essentially shared book-keeping (and computing) systems: one class is open for everyone to use (public), the other is restricted to a certain group of members (private). And that is it. Open for everyone to use means lower entry barriers; it means identity-free and regulation-free shared book-keeping (and computing). Identity policies and financial regulations can only work around the edges of this. You can, say, restrict a person from buying bitcoins by setting high KYC requirements for online exchanges (so that users cannot change dollars for bitcoins unless they are KYC’d). You can even cut his or her internet connection. You can issue a court order to close a business that accepts bitcoins as money. And so on and so forth.

A lot of this effort looks similar to trying to stop the Internet, but I suppose the regulators can dream!

Public technology service and native digital rights

“Proof-of-work is inefficient”. So what? Let it go! Think about the idea behind it and what it tries to achieve, regardless of this inefficiency. Regardless – because even if proof-of-work is not ideal, there are other permissionless technologies already developed and many more that are works in progress. Some of the best minds in the world are looking to provide the benefits of a permissionless shared ledger environment without the drawbacks of Bitcoin’s original proof-of-work. Just assume that they will solve that problem and move your thinking on.
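To make the trade-off concrete, here is a toy sketch of the proof-of-work idea (purely illustrative – real Bitcoin mining hashes block headers with double SHA-256 at vastly higher difficulty): finding the answer is deliberately expensive, while checking it costs a single hash.

```python
import hashlib

def mine(block_data: str, difficulty: int) -> int:
    """Find a nonce such that the block hash starts with `difficulty` zero hex digits."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}:{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

nonce = mine("some transactions", 3)
# Anyone can verify the work with one cheap hash:
digest = hashlib.sha256(f"some transactions:{nonce}".encode()).hexdigest()
```

The “inefficiency” is the point: the expected thousands of failed attempts are what make it costly to rewrite history, and that cost is what the block reward compensates.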

What the blockchain delivers is a permissionless public book-keeping (and computing) technology service (with the unchangeable and transparent transaction history as an incredibly valuable side effect). When I say “public service”, I do not mean that a company or public organisation provides it; I mean that the technology itself and collaborative user effort provide it. In a sense – everyone and no one. The protocol acts as the service provider.

And this is crucial. In the traditional financial world, the basic value transfer layer that cryptocurrencies (i.e. everyone and no one) provide as a public technology service is provided by companies – service providers – and is not accessible to everyone. For example, PayPal provides a digital value transfer service.

Here I want to make the point that permissionless cryptocurrency systems hold the promise of a digital environment in which value transfer is intrinsic, embedded at the protocol level – and so, for users, the ability to make a transfer could become what I call a native digital right. Just to give you an analogy (it’s not a very accurate analogy but you’ll like it!) – take a guess at what you see in the picture below. Well, it’s a standard residential elevator in my mother country, Georgia, where you need to pay every time you use it! Up and down. Every time up, every time down!

Georgian elevator. Each time you go up and down, you need to pay!

So maybe we all (all internet users) live in our own kind of Georgia, where every time we want to make a deal (an economic agreement) in the online world we have to go through a cumbersome process and pay an unreasonable fee (each time!) for it. We need to get our bag out and fill in our card details, the merchant’s acquirer needs to send a request (and if it’s not a merchant, there are even more obstacles with peer transfers), the card issuer needs to approve the transaction, and so on. Our economic life online today is based on this very complex e-commerce domain, and to me it looks a lot like a Georgian elevator. Think about it: on top of the obvious, that elevator only accepts certain denominations of Georgian coins – very specific – and it breaks down every once in a while, so even if you want to use a paid elevator, sometimes you just can’t. So familiar.

How great would it be if we had a native digital right, at the protocol level, to make a value transfer online that no one could take from us (or grant us!). How many applications could be built on top (at Consult Hyperion we call them SLAPPs – shared ledger applications)!

Persistence of permissionless

At the heart of public shared ledgers is value transfer. This is because, in order to assure the liveness and self-sufficiency of the system while providing unrestricted access to it, there needs to be an intrinsic economic incentive for those who maintain it. In other words, there should be a positive value to maintaining consensus. For this reason most public shared ledgers can be described as currencies (decentralised cryptocurrencies), because they provide this incentive as a reward on the ledger in the ledger’s own “money”.

The canonical example of such a decentralised cryptocurrency is, of course, Bitcoin (remember, there are hundreds of them though!). As Bitcoin was intended to exist and evolve out of the reach of regulatory, corporate or any other centralised command, the technology includes mechanisms that ensure it persistently “survives” and proves its robustness and self-sufficiency. (Disclaimer: I’m not a Bitcoin maximalist.)

This persistence is a differentiating characteristic of a public shared ledger system. The technology does not need people at tables making decisions in order to survive; it is “permissionless” (nevertheless, the way it evolves is, to an extent, influenced by “people at the tables” – just different people).

Virtual economy

Potentially the principal implication of this persistence is the permissionless ascent of an alternative virtual economy on top of decentralised protocols. Cryptocurrencies are not just a new form of payment – they are a potential foundation for a new virtual economy, with new forms of economic interaction coming into place. When I say “new”, I don’t mean substitutive – I mean additional.

Virtual economic activity could become something fundamental to the Internet. Just as the ability to communicate transformed into the ability to communicate over the Internet, it could grow into the ability to make frictionless economic arrangements (to “economically” communicate) in the virtual world.

Thanks to shared ledger technology and the “smart contracts” innovation, not only is the emergence of an alternative economy permissionless (and so unstoppable), but if it happens at a certain scale, the very nature of economic relationships in this economy could be drastically different from what we are used to. A good depiction of such a transformation is content monetisation on the web through the use of “invisible” micropayments. Another good example is seamless online payments in video games:

Breakout Coin provides for seamless in-game payments anywhere in the world, while the blockchain technology behind it, Breakout Chain, uses smart contracts and sidechains to enforce these financial agreements between parties.


Shared ledger technology could even turn our things (as in “Internet of Things”) into active economic agents through smart contracts.
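To make that idea concrete, here is a deliberately simplified sketch – plain Python standing in for an on-chain contract, with all the names and numbers invented – of a “thing” acting as an economic agent under a code-enforced spending agreement:

```python
from dataclasses import dataclass

@dataclass
class SpendingContract:
    """A toy 'smart contract': code, not a counterparty, enforces the agreement."""
    balance: float
    per_use_price: float
    daily_cap: float
    spent_today: float = 0.0

    def pay_for_use(self) -> bool:
        # The rule is enforced by code: no payment beyond the cap or the balance.
        if self.spent_today + self.per_use_price > self.daily_cap:
            return False
        if self.per_use_price > self.balance:
            return False
        self.balance -= self.per_use_price
        self.spent_today += self.per_use_price
        return True

# A connected appliance buying electricity per cycle, up to a daily cap.
meter = SpendingContract(balance=5.0, per_use_price=0.6, daily_cap=2.0)
results = [meter.pay_for_use() for _ in range(5)]
# The first three payments fit under the cap; the fourth and fifth are refused.
```

The point of the sketch is only that the device transacts autonomously within rules its owner set once, in code, rather than asking a person (or a bank) each time.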

Public shared ledger technology may help to turn a big part of our (seemingly) non-economic life into economic activity.

Although there are many “ifs” in that, we should not dismiss this possibility quite yet, and we should keep an eye on the permissionless space. You can observe or get involved, but it would be a mistake to put your head in the sand and deny that something incredible is happening.

Card market reform means non-card opportunities

I’m in Frankfurt for the annual PayComm MEETS Europe, my chance to catch up with practitioners from the continental card markets. It’s really hard to keep up with all of the change in the market, driven primarily by regulation rather than by new technology. The pace of regulatory change seems relentless. A few years ago, I took part in a panel discussion about payments, regulation and innovation in Brussels. I remember it quite well because of my excellent fellow panelists and because of the nature of the discussions that followed. The panelists and topics, not that they are terribly relevant to the rest of this post, were:

Dave Birch, Consult Hyperion: Conditions of consumers acceptance of e- and m-payments
Roy Vella, Mobile services advisor: Potential of mobile technology in the area of payments
Alice Enders, Enders Analysis: Monetising digital content: electronic and mobile payments as means to reach the consumer
Katarzyna Lasota Heller, EDiMA (European Digital Media Association): Online retailers view on consumer expectations towards e- and m-payments
Stacy Feuer, US Federal Trade Commission: US regulatory perspective

[From e-Commerce – Digital Agenda Assembly 2012]

In those discussions, I put forward a suggestion, taken from Norbert Bielfeld’s superb December 2011 working paper “SEPA or payments innovation: a policy and business dilemma” [PDF], for a five-year “legislative holiday” around payments to let the effects of the Payment Services Directive (PSD) and so forth settle down, so that nothing would change until around now. That never happened, of course, and the Commission pressed forward with a regulatory agenda, one significant part of which was the reform of the card payment market in Europe. Now, setting to one side that I have always favoured a competition agenda and regard interchange caps as inappropriate and counterproductive price-fixing, these reforms are beginning to impact the market.

How? Well, I remember that when he spoke at this event last year, Peter Jones from PSE gave an excellent presentation on the impact of the new European card regulations on the different players in the payments game. You won’t be surprised to hear that I agree with his fundamental conclusion that the regulations represent a victory for merchants over banks and demonstrate the importance of having a concerted and coordinated lobby. He went on to say, and I hope my scribbled notes on this are accurate, that the Commission doesn’t fully understand the impact of the changes it has made. (I might be tempted to add that I’m not sure any of us really do, because of the chaotic nature of the changes.) These changes will inevitably have some unexpected consequences, and part of the fun in the industry at the moment is trying to guess what those consequences might be. I had not, for example, realised at that time that the reform of licensing on a pan-European basis means that Amex and Diners will have to restructure their franchise models.

I won’t take you through a detailed analysis of the changes that occurred last year and the final set of changes coming into place this month, except to say that they have started to change the structure of the European cards industry. This is not inherently bad for everyone, of course. Chaos is a ladder, as they say, and Peter’s presentation alluded to opportunities that might arise from the enormous changes that will take place. Peter, for example, said that he could see two pan-European “common carrier” real-time networks evolving from the impending separation of brand and processing for the international card schemes, and suggested that with good strategies the debit portion could evolve into a pan-European immediate settlement system.

Speaking at PayComm


However, this year I want to focus on two of his conclusions that I think were both correct and of tremendous importance. I think that they might not have been recognised as such by some stakeholders, who were focusing on the reorganisation of the card business rather than the larger context. These conclusions are entirely congruent with the strategic perspectives that we shared with our clients and, as I head down to PayComm 2016, I think it’s worth opening them up for discussion again.

As we have long advised our clients, a working push payment infrastructure (ie, smart devices and an immediate settlement network) means that a lot of day-to-day payments will shift to that infrastructure.

From Push payments are a win-win (and a lose) | Consult Hyperion

The first is that the heavy regulation of interchange-based card products will mean energy, investment and imagination being directed into non-card credit products, a driver that I have referred to before as the “push to push”. It is hardly a surprising prediction that banks and others will want to develop businesses that offer higher margins. As the margins on the card business are regulated down, the ability to offer rewards, cashback, loyalty and other services is necessarily restricted. If mobile-centric account-to-account payment services can deliver better functionality and more attractive propositions to customers, then merchants will have to take them, and pay more for them than for card products.

The second is that the Payment Services Directive provisions on open access to payment accounts, which we have discussed several times before on this blog, will mean that (unless they are totally insane) banks will compete to offer what our Australian cousins call “overlay services” in order to compete with non-bank overlay services. Such value-added service providers will use the account information service provider (AISP) APIs and the payment initiation service provider (PISP) APIs to deliver services to their customers. Now this gives banks a number of strategic problems to wrestle with. Banks are naturally concerned about third-party access to accounts relegating them to the role of commoditised, utility pipes for money, because “over the top” players such as Facebook, Apple, Google and the other usual suspects will form a layer between the banks and their customers. But of course some banks might move aggressively to form the layer between other banks and their customers by providing better API services to those over-the-top players, or they might decide to specialise in particular areas of the business and make themselves more attractive to customers in those niches.
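To give a flavour of what a PISP service involves (and only a flavour – the field names and structure below are invented for illustration and do not come from any real bank’s PSD2 API), a third party initiating a payment on a customer’s behalf would send the bank something shaped roughly like this:

```python
# Hypothetical request body only: endpoint names, fields and values are
# illustrative, not any real bank's API.
def build_pisp_payment(debtor_iban: str, creditor_iban: str,
                       amount: str, currency: str, reference: str) -> dict:
    """Shape of a payment-initiation (PISP) request a third party might send."""
    return {
        "debtorAccount": {"iban": debtor_iban},
        "creditorAccount": {"iban": creditor_iban},
        "instructedAmount": {"amount": amount, "currency": currency},
        "remittanceInformation": reference,
    }

payment = build_pisp_payment("GB00BANK00000000000001",
                             "GB00BANK00000000000002",
                             "12.50", "EUR", "Invoice 42")
```

The strategic point is in who sends this message: today it would be an “over the top” player, but a bank could just as easily be the overlay provider sitting in front of other banks’ accounts.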

During the excellent PayComm workshop on instant payments, led by Andy Makkinje from Equens, a couple of people touched on the combined impact of these two trends (i.e., the push to push together with API access for PSPs). A working instant payment infrastructure, opened up by API access to the banks, is very likely to become the dominant retail payment system, certainly for e-commerce (which is where all of the card fraud is in Europe). It will be the simplest, cheapest and most pervasive solution to the payments problem, and it’s nearly here. If you look around Europe, the trend is unstoppable. The reform of the card market may well be the end of the card market as we know it.

How can we reshape retail without reshaping payments?

Well, the circus came to town again. Barcelona. It’s 100,000 people and non-stop meetings and basically no fun whatsoever. But it’s in Barcelona. The calendar is jammed from first thing in the morning until the evening, and then it’s out for dinner and drinks with customers and suppliers. Man, that Catalan pasta was delicious. It’s absolutely exhausting. My feet are killing me by coffee time and I’m not in heels. Loved that lemon beer though, never had that before. The communist traitors down the metro are on strike so we have to queue for buses. It’s lovely and sunny here. Eight halls!  Still, let’s take a deep breath and get on with it.

I’ve been interested in mobile payments for 20 years. A decade ago, Consult Hyperion was lucky enough to be chosen by Vodafone to carry out the feasibility study on M-PESA. I can remember seeing the first Nokia with a contactless chip (Mastercard) embedded in it and being blown away by the convenience. I am the archetype for the stereotype in mobile futurists’ presentations, the person who often leaves the house with a phone but no wallet. Last year at MWC I gave a presentation about the impending shift to in-app payments. So, you can imagine how downhearted I was to see this vista before me on arriving in the host city.


Yep. Twenty years of mobile payments, twenty years of presentations about mobile payments at MWC, twenty years of pilots and trials and tests and MoUs, twenty years of arguing about SIM vs. embedded vs. SE, twenty years of closed-loop and open-loop and three-party and four-party, and there’s a queue a mile long for the ATM because you can’t use your phone to buy a metro ticket or ride the bus into town. Where did it all go wrong?

Why aren’t there mobile payments everywhere? In a sane world, as we landed in Barcelona our phones would automatically fire up a Barcelona app that we could use to pay for the trains and taxis, restaurants and hotels. How long would it take for your bank to issue a four-day, Barcelona-merchant-only token to the handset? Five seconds? Why can’t I pay in-app for my hotel? Karen Webster wrote about this too.
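As a sketch of what such a constrained token might look like – a toy scheme with invented names and fields, not the EMVCo payment tokenisation framework that real issuers use – the essential idea is a credential that is useless outside its time window and merchant population:

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"issuer-demo-key"  # stand-in for the issuer's token-service key

def issue_token(pan_ref: str, merchant_region: str, valid_days: int) -> str:
    """Mint a signed, time-limited, region-restricted payment token (toy scheme)."""
    claims = {"ref": pan_ref, "region": merchant_region,
              "exp": int(time.time()) + valid_days * 86400}
    body = json.dumps(claims, sort_keys=True).encode()
    sig = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(body).decode() + "." + sig

def accept_token(token: str, merchant_region: str) -> bool:
    """Check the signature, the merchant restriction and the expiry."""
    body_b64, sig = token.rsplit(".", 1)
    body = base64.urlsafe_b64decode(body_b64)
    expected = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False
    claims = json.loads(body)
    return claims["region"] == merchant_region and claims["exp"] > time.time()

token = issue_token("card-on-file-123", "Barcelona", valid_days=4)
ok_here = accept_token(token, "Barcelona")   # right region, not expired
ok_elsewhere = accept_token(token, "London")  # wrong region: refused
```

Nothing about this is hard; the constraints are a few fields and a signature, which is why “five seconds” is not a rhetorical exaggeration.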

…when it comes to commerce and payments, well, we’re still very much making our way to first base. And that’s more than two decades after the launch of the commercial Internet and nearly a decade after the introduction of the iPhone…

From Mobile Is Everything, But Where’s The Progress? |

Karen points to the role of the carriers as a fundamental problem, and she is certainly right to note that their attempts to be toll collectors for the superhighway have been a boat anchor on progress in mobile commerce, just as they will be for IoT commerce, but I wonder if there’s something more fundamental going on. What if the attempts to shoehorn in the existing infrastructure (of PANs and acquirers and networks and schemes and issuers and authorisation and all the rest of it) are themselves responsible for the drag? What if we should have started again? What if we should have just said that the mobile phone gives us a mechanism to establish (and verify) the identity of everyone, and once you know who the counterparties are, payments are easy? What if we should have started with mobile ID instead of taking a 60+ year-old way of doing a payment?

MWC16: Digital Identity for Connected Societies

I was lucky enough to be asked to chair the MWC conference session on “Digital Identity for Connected Societies”. During this discussion, it became very clear to me (and, I hope, the rest of the audience) that we already have all of the building blocks that we need to create a strong identity infrastructure based on the mobile phone. If we take that architecture as a given, then what “payments layer” should be put on top of it? You know where my sympathies lie: in the “push to push”. Karen correctly, in my opinion, talks about the reshaping of retailing.

Mobile and online – together — is creatively destroying the retail model that’s been in place for millennia – a model that used to rely only on consumers and merchants coming together face-to-face to do business.

From Mobile Is Everything, But Where’s The Progress? |

Why do we think that we can reshape retail without reshaping payments? Here’s just one example: why do you give card details to the merchant? It makes no sense: it’s only because you used to hand your card to merchants in shops. Surely it would make more sense to send the _invoice_ to the bank, have the bank pay it and send back the _paid invoice_ to the merchant. Why should the merchant ever see your card, tokenised or otherwise? Since merchants are installing BLE anyway, why not just transmit the invoice over BLE to your phone and have your phone send it to the bank for payment? I’m just giving a random example, but you see my point.
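The “send the invoice to the bank” flow can be sketched as three messages – merchant to phone, phone to bank, bank back to merchant – with every name and message shape here invented purely for illustration:

```python
import uuid

def merchant_create_invoice(merchant_id: str, amount_cents: int) -> dict:
    """The merchant broadcasts an invoice (e.g. over BLE); no card data involved."""
    return {"invoice_id": str(uuid.uuid4()), "merchant": merchant_id,
            "amount_cents": amount_cents, "status": "unpaid"}

def phone_forward_to_bank(invoice: dict, customer_account: str) -> dict:
    """The phone passes the invoice, plus who should pay it, to the bank."""
    return {"invoice": invoice, "pay_from": customer_account}

def bank_settle(instruction: dict, balances: dict) -> dict:
    """The bank debits the customer and returns the paid invoice to the merchant;
    the merchant never sees any card or account details."""
    inv = dict(instruction["invoice"])
    acct = instruction["pay_from"]
    if balances[acct] >= inv["amount_cents"]:
        balances[acct] -= inv["amount_cents"]
        inv["status"] = "paid"
    return inv

balances = {"alice": 10_000}
invoice = merchant_create_invoice("metro-bcn", 220)  # a 2.20 metro ticket
receipt = bank_settle(phone_forward_to_bank(invoice, "alice"), balances)
```

Notice what is absent from the merchant’s side of this sketch: PANs, tokens, acquirers. The only thing that crosses the counter is the invoice and, later, proof that it was paid.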

Here’s what’s gone wrong: we took amazing new technologies (smart cards, mobile phones, biometrics) and used them to emulate some cardboard hack from 1949. Time to scrub off the whiteboard and start again. I make this vow here and now: if you cannot use your phone to pay for the airport bus in Barcelona at Mobile World Congress 2017, then I will never go again.

Key trends in digital payments for 2016

The nice people at Westminster eForum invited me along to give one of the talks at their Digital Payments event at the Royal Society in London. I gave a talk about key trends in digital payments for 2016 and since I got some very nice feedback on it, I thought I would put a few of the key points on Tomorrow’s Transactions. I don’t normally show what goes on behind the curtain, but here’s the mind map I drew for the talk. It’s a bit scruffy because I meant to do it on the train on the way up, but there was nowhere to sit, so I couldn’t do it until I got to the event, and then I only had ten minutes to get it together. And yes, I know I put Atom Bank in the wrong place. And the figure of 8% should have been 7%, but you get the general drift of it.

Key Trends for Digital Payments for 2016

As you can see, I chose four themes to talk about: regulatory, business, social and technical. Here’s (roughly) what I said. On the technology roadmap, I don’t think there’s terribly much uncertainty over any reasonable planning horizon. Mobile is going to remain the central and dominant technology trend. We are shifting to mobile-centric interaction for all aspects of finance, and it’s hardly a controversial statement to say that this looks like continuing, so we won’t talk about it terribly much any more. We are mindful of the ECB strong customer authentication (SCA) effort, which is going to be a big thing for the coming year, and that has obviously started to verge into biometrics. Apple made the breakthrough of establishing biometrics in the mass market as a convenience technology rather than a security technology; that’s been tremendously successful, and it’s going to continue. So I expect we will see a jump in mass-market consumer biometrics.

If you want to look out of the corner of your eye (William Gibson style) for the new technology that’s unevenly distributed, if you’re looking to start off your skunk works for next year, you could do worse than look at blockchain technology. I hate to bring up the dreaded word, I know everyone is sick of hearing about it, but… Consult Hyperion is a small company (there are only 50 of us) and so hardly a statistical cross-section of IT spending in the world economy, but given the nature of the work that we do, we are a useful barometer for clients, and I’m surprised by the amount of blockchain consulting work we have already. I think that is an indication that there’s a lot going on under the surface. The sector is developing more quickly than many people think, so if you want one new thing to play with: blockchain (or, as I insist on calling it on Twitter, the replicated distributed shared ledger formerly known as blockchain) is a very interesting area.

When it comes to business, I think we can see some key trends in this space, but obviously they are a little fuzzier than the technology roadmap. If you focus on banking (which is where digital payments are located now) and look at the breakdown of European bank income, you see pressure in all areas (except one). For European banks, non-interest income is less than half of their income. Net interest income is more than half, but that’s obviously under pressure because of low interest rates, and it also contains the profit pools that are most at risk from the new players in that space (P2P lenders and people like that). Of the non-interest income, trading income is under pressure because the banks are being forced to reduce those activities, and transaction fee income is under pressure too – in the digital payments space it is asymptotic to zero.

Euro Bank Income 2014

European Bank Income (Source: Deutsche Bank 2015).

The one tiny little bit of that pie chart that I’m going to come back to a bit later on is that “other” segment, which currently accounts for only 7% of European bank income, but that’s the segment that I think our clients have to grow. They have to find new products and services, and later on I’ll make a suggestion as to what I think one category of new product and service might be. So that’s banks, but there are other categories of organisation.

There are the challenger banks, for example. I’m somewhat negative about the challengers. Remember, I’m looking at this from a technological perspective, and the challenger banks have by and large bought the same core systems as the incumbents and are offering essentially the same services. That’s hardly surprising, as they’re heavily regulated and there’s not much you can do with those services, but if you’re using the same systems and delivering the same services, essentially you’re the same. To my mind a genuine challenger would be something more along the lines of Fidor (which has a UK licence now), which is “amazonised” (API-centric) and has been from the start.

There are two other categories that I’ve chosen to label non-banks and near-banks.

Near-banks are things that look like banks to consumers but aren’t actually banks. When I’m working out of our US office I use a Simple account (now owned by BBVA, which actually is a bank), but Simple wasn’t a bank: Simple was a prepaid card and an app, though to me, the consumer, it looked like a bank and was more than adequate for my purposes. Near-banks that take specific niches in the marketplace and look like banks to consumers within those niches seem to me to be growing (you’ve got Holvi and Moven and people like that in that space) and, as an aside, they are being given a boost by the renewed emphasis on financial inclusion.

And then you have the non-banks. People tend to focus on what they call the OTT players – the Facebooks, Googles and Amazons of that space – and what they are going to do. There are plenty of rumours about who has and hasn’t got a payments licence, but I think one underplayed angle is that of the retailers. Retailers haven’t really moved in that space yet, but because retailers dominate the contacts with the customer, what they choose to do there, if they have any strategy toward it, might turn out to be more significant in many ways than what other people do.

On the social side we have, for the sake of simplicity, a couple of groups to look at and respond to with new kinds of payment systems. The millennials are happy with Venmo and so on. But what about the rest of us?

First and most importantly, I think, for many people is the ageing population. We recently examined a scheme using mobile phones to log into bank accounts via a face recognition system. It proved to be rather popular – and not, as you might have thought, with toy-obsessed millennials: it was actually older people, who can never remember passwords or PINs or anything like that, with whom the idea of just looking at the phone was much more popular! We tend to think of the technology stuff as being to do with millennials, but actually there’s a fantastic opportunity to use the new technology to serve other groups. At Consult Hyperion we’ve been involved in a fair few projects using mobile (and digital television) to deliver financial services to excluded groups, and I think these will grow and develop in the short term.

Then you’ve got the squeezed middle: people like me who have to catch the 7.52 from Woking in the morning and barely make it alive into Waterloo, scrabbling to keep up with things, sliding further and further into poverty as we are taxed to the hilt. We are very time-poor, so we need services that are delivered on the spot and with context, and you will see that word “context” appearing more and more frequently in the services and products coming to the market in 2016. So a lot of the stuff that’s in R&D at the moment, a little bit underdeveloped, is about context and delivering better services in the right context, as my good friend Brett King has long predicted.

And finally there are the regulatory trends. Our clients tend to be at the larger end of the scale, so for our clients these are by far the most important trends. It doesn’t matter if people come up with a super duper clever blockchain, it doesn’t matter if they come up with a niche to sell it to millennials, it doesn’t matter if they come up with a terrific near-bank structure to make it all work, because actually this is a regulated world and regulatory trends set the envelope that we can work within, and I just want to point out two key regulatory trends which I know are going to be covered in a lot more detail later on.

The first one is the trend to what people have started to call instant payments. FPS, the Faster Payments Service, is well established in the UK. Regulators made the banks implement an instant payment service, and that service has been fantastically successful and has already enabled things on top of it, like Pingit and Paym, with more to come in that space. Other countries have started to go down that route, and many already have instant payments in place. The one great exception was always the US, because the Federal Reserve has no regulatory power to make banks implement an instant payment system, so we all thought it would be some time before instant payments appeared in the US. That gave a space for people like Venmo and so on to play in, but actually in the last couple of months we’ve seen a raft of announcements coming out of the US – The Clearing House with VocaLink, as a prime example. So even in the US you see this shift to instant payments. You might call it RegTech rather than FinTech! The technology hasn’t changed, but the regulatory space has opened up for instant payments and there’s going to be a raft of start-ups taking advantage of that.

The second is PSD2. You can’t get away from it. For most of the people we work with, PSD2 in general and the open API space in particular are a great focus of attention at the moment, and this is where the opportunity to grow that “other” segment comes from. PSD2 mandates that banks have to provide open APIs to third parties by 2018. The EBA has set out three categories of these APIs: the mandatory payment APIs, which all banks will have to implement so that regulated third parties can have direct access for account information and for transaction initiation; the non-mandatory payment APIs, which banks will be allowed to offer so that they can provide some unique special services; and the non-mandatory non-payment APIs. I think that that “other” segment in digital payments maps to these non-mandatory, non-payment APIs, so if you want to look for an opportunity to make something really interesting, really different, really special in digital payments in Europe in 2016, then you should be looking not at the digital payment services but at the services that go around them, the value-added stuff that goes around payments that banks could deliver. A non-mandatory, non-payment API to create a new business? How about identity, which, as we all know, is the new money.

So: in summary the technical, business, social and regulatory changes around digital payments are coming together to make it an interesting year and there is plenty of opportunity for new entrants as well as incumbents to exploit these changes to take the digital payments sector forward in a real win-win-win (for consumers, service providers and regulators).

Location, location, location and financial innovation

In his super book “Money Tales”, Alessandro Giraudo looks at the emergence of paper financial instruments and tells how the great medieval trade fairs of Europe were gradually replaced by financial fairs where no actual trade took place except in money.

Even after trade routes had shifted away from the north-south axis that depended on the Champagne commodities fairs, the fairs continued to function as an international clearing house for paper debts and credits, as they had built up a system of commercial law, regulated by private judges separate from the feudal social order and the requirements of scrupulously maintaining a “good name”, prior to the third-party enforcement of legal codes by the nation-state.

[From Champagne fairs – Wikipedia, the free encyclopedia]

Hello. New instruments but no new institutions, new technology beyond traditional law enforcement, so a private reputation-based scheme grew up to facilitate commerce where previously gold and silver had been the oil that greased the wagon wheels. It’s almost as if identity had become the new… no, let’s not get distracted…

Giraudo tells how over time the power of the Genoese bankers rose and they shifted the fairs from France down to Piacenza, near Milan. The Genoese had established the function of the banker as a money merchant and separated this function from that of the “merchant banker” with holdings. Imagine how the idea of paper replacing gold and jewels and spices must have seemed to the institutions of the time! Finance and wealth based only on paper astonished the traditional Italian bankers, and those of the rest of northern Europe too, but they all had to adapt to the new reality so as not to be swept aside by this technological revolution.

Those Piacenza money market fairs became the largest in Europe from the end of the 16th century and into the 17th, with bankers from Flanders, Germany, England, France and the Iberian peninsula converging four times a year to meet with the Genoese, Milanese and Florentine clearing houses. The clearing houses put down a significant deposit in order to participate in the fairs and in return they fixed the exchange and interest rates on the third day of the fair (this was when interest rate fixing wasn’t the thing it is today). In addition to the bankers there were money changers, who had to put down a (smaller) deposit to present letters of exchange, and there were also representatives of firms, and brokers who participated in the trading.

During the fair, the participants tried to clear all of the transactions in such a way as to limit the exchange of actual coins, so it was a net settlement system. Any outstanding amounts were either settled in gold or carried forward to the next fair with interest. This was the first structured clearing system in international finance and it lasted until 1627, when the Spanish Empire went bankrupt (again), causing serious losses to the Genoese bankers who were its principal financiers (and who, sadly for them, had no access to a taxpayer-funded bailout). As a result the financial centre of Europe shifted to Amsterdam, which had a central bank for efficient inter-merchant transfers (more on this in another post) and was developing newer instruments including futures and options, and then on to London. Spain’s gold and silver (from the Americas) never translated into a strong financial services sector and trade-led economic growth.
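The netting that the fair participants were doing can be sketched in a few lines of modern code. This is a toy illustration only: the clearing houses and amounts below are invented, but the mechanism is the one described above, with mutual obligations cancelled out so that only residual balances need to change hands in coin (or roll over to the next fair).

```python
# A minimal sketch of multilateral net settlement, Piacenza-fair style.
# Participants and amounts are invented for illustration.

from collections import defaultdict

def net_positions(obligations):
    """obligations: list of (debtor, creditor, amount) tuples.
    Returns each participant's net position: positive means they are
    owed coin at settlement, negative means they must pay."""
    net = defaultdict(float)
    for debtor, creditor, amount in obligations:
        net[debtor] -= amount
        net[creditor] += amount
    return dict(net)

# Gross obligations built up during the fair.
gross = [
    ("Genoa", "Florence", 100),
    ("Florence", "Milan", 80),
    ("Milan", "Genoa", 90),
]

positions = net_positions(gross)
# Gross turnover is 270, but after netting, Florence is owed 20 while
# Genoa and Milan each owe 10 -- far less actual coin moves than if
# every obligation were settled individually.
```

The positions always sum to zero, which is the clearing house’s sanity check: every florin owed by one participant is owed to another.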

When Philip III became King of Spain and Portugal in 1598, Spanish commentators were complaining that instead of being used to stimulate industry and business, the treasure from the Americas had created an attitude that held productive work in contempt, while foreigners – Genoese, Dutch, Germans – ran Spain’s trade and finance to their own profit.

[From Spanish Bankruptcy | History Today]

Giraudo observes that while geography and politics have a strong influence on the location of financial centres, the deciding element has always been the capacity to invent and use new financial techniques, and above all to create a dynamic sense of innovation. This is where, in my opinion, London and New York excel and why they remain powerful financial centres. But what if that capacity to invent new financial techniques is in the future better exploited in Kenya or the Far East or on the Internet? What if financial innovation slips its mundane anchors and begins to float free on the tides of cyberspace? In London, in the UK and in Europe we have to make sure that we have a regulatory climate that supports innovation in financial services in the new economy, not one that attempts to prop up the old one.

Well, anyway, that’s what I told the Parliamentary Office of Science and Technology when they interviewed me about fintech this week and it’s what I’m going to tell the chaps from the European Parliament when they interview me about fintech next week.
