[Dave Birch] A thought experiment. Suppose you found a flaw in a widely-used payment scheme, such as EMV. Suppose the flaw had come about because of a mistaken interpretation of a specification and would take some time to fix. Would you keep the flaw secret and hope that the criminals didn't find it? Would you tell the banks? Or would you tell the banks about the flaw and tell them that it will be made public in six months? I'm genuinely curious: what would you do? I'm sure that the first option is the most wrong: not exploring how to break a payment scheme means that the criminals will break it and you won't know what to do about it. Consider the recent example of SIM card cloning in India, which the police apparently had difficulty responding to:

The experts said no one has actually done any research on SIM card cloning because the activity is illegal in the country.

If the good guys can’t even participate, the bad guys will always win.

[From Schneier on Security: The Ill Effects of Banning Security Research]

Bruce is, as is generally the case, right. Banning research means that only the bad guys will do the research. Hoping that the bad guys won’t find the flaw is a ridiculous strategy: it’s much better to come clean, bite the bullet and then fix it. What does “fix” mean though?

In an odd sort of way, knowing that a system has vulnerabilities means that those vulnerabilities don't render the system useless, because you can build in countermeasures in other areas. That's no excuse for forgetting all about security, though.

The unique identity numbers used to identify the FasTrak wireless transponders carried in cars can be copied or overwritten with relative ease. This means that fraudsters could clone transponders, says Lawson, by copying the ID of another driver onto their device. As a result, they could travel for free while others unwittingly foot the bill. “It’s trivial to clone a device,” Lawson says. “In fact, I have several clones with my own ID already.”

[From Technology Review: Road Tolls Hacked]

A system like this will only collapse when a simple vulnerability is exploited if the designers had been so dumb as to invest all of the security in a single factor. No-one would build a payment system like that. For example, it is possible, in fact trivial, to create counterfeits in a closed-loop contactless payment system that I am familiar with, and has been for years. Yet the system has not collapsed (it has not even been damaged, frankly), because the back-end authorisation system is rather clever and can easily spot, and generally decline, the duplicates.
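To make that concrete, here is a minimal sketch of the kind of back-end check involved. The real scheme isn't public, so this illustrates the general technique rather than the actual system: many card schemes maintain a per-card application transaction counter (ATC) that must strictly increase with every genuine transaction, so a clone sooner or later presents a counter value the host has already seen and gets declined.

```python
# Minimal sketch of back-end duplicate detection via a transaction
# counter. Hypothetical: names and structure are illustrative only.

from dataclasses import dataclass


@dataclass
class CardState:
    last_atc: int = -1          # highest counter value seen so far
    suspected_clone: bool = False


class Authoriser:
    def __init__(self):
        self.cards: dict[str, CardState] = {}

    def authorise(self, card_id: str, atc: int) -> bool:
        state = self.cards.setdefault(card_id, CardState())
        if atc <= state.last_atc:
            # Counter repeated or went backwards: a replay, or a clone
            # diverging from the genuine card. Decline and flag.
            state.suspected_clone = True
            return False
        state.last_atc = atc
        return True


if __name__ == "__main__":
    host = Authoriser()
    print(host.authorise("card-42", 1))   # True  - genuine
    print(host.authorise("card-42", 2))   # True  - genuine
    print(host.authorise("card-42", 2))   # False - duplicate counter, declined
```

The point is that the counterfeit card can be a perfect copy at the point of sale and still be caught, because the security lives partly in the back office rather than entirely in the token.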

You don’t have to close all vulnerabilities, at potentially infinite expense, to make a system work. Conversely, the existence of vulnerabilities does not mean a system doesn’t work. How do you know what to do then? Well, that’s what risk analysis is all about.
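As a toy illustration of that trade-off, here is a back-of-the-envelope calculation in the style of an annualised loss expectancy. All of the figures are invented for the example:

```python
# Toy risk calculation: fix a vulnerability only if the expected loss
# it causes exceeds the cost of closing it. Figures are made up.

attack_probability_per_year = 0.02   # chance the flaw is exploited in a year
loss_per_incident = 250_000          # cost of one successful exploit
mitigation_cost_per_year = 40_000    # annual cost of closing the vulnerability

expected_annual_loss = attack_probability_per_year * loss_per_incident
print(expected_annual_loss)  # 5000.0

if mitigation_cost_per_year > expected_annual_loss:
    # The countermeasure costs more than the risk it removes, so a
    # rational operator accepts the vulnerability and monitors instead.
    print("Accept the risk; rely on back-end controls and monitoring.")
else:
    print("Fix the flaw.")
```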

These opinions are my own (I think) and presented solely in my capacity as an interested member of the general public.
