Now that the US has (finally) migrated from magnetic stripe to chip payments, and signature will soon be going too, the time has come to think about where the fraud will go next. This was the topic of a great discussion at Money 20/20 involving amongst others EMVCo, Capital One and USAA.

Obviously the first place fraud will jump to will be card-not-present transactions such as e-commerce. This is well understood by those of us who went through the EMV chip migration over a decade ago. Brian Byrne outlined the various EMVCo initiatives to secure these transactions – payment tokenisation, 3DS 2.0 (3-D Secure, with live solutions imminent) and SRC (Secure Remote Commerce, currently open for public comment).
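To make the tokenisation idea concrete, here is a deliberately simplified sketch (the `TokenVault` class and `tok_` prefix are invented for illustration): the merchant keeps only an opaque token, and the real card number lives in a token vault. Real EMVCo payment tokenisation adds much more – token domain restrictions, cryptograms and issuer integration – but the core substitution looks like this:

```python
import secrets

class TokenVault:
    """Toy illustration of payment tokenisation: the merchant stores
    only an opaque token; the real PAN lives in the vault. A sketch
    only -- not a real token service."""

    def __init__(self):
        self._pan_by_token = {}

    def tokenise(self, pan):
        # Issue a random, meaningless token in place of the card number.
        token = "tok_" + secrets.token_hex(8)
        self._pan_by_token[token] = pan
        return token

    def detokenise(self, token):
        # Only the vault can map the token back to the real PAN.
        return self._pan_by_token[token]

vault = TokenVault()
token = vault.tokenise("4111111111111111")
print(token)  # the merchant's systems see only this opaque value
```

The point being that a breach of the merchant's database yields tokens that are useless outside their intended domain, rather than card numbers that can be replayed anywhere.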

Increasingly, though, it is an identity problem. Identity theft and synthetic identities are being used to attack payments in a number of ways.

Because EMV chip cards are much harder to counterfeit than magnetic stripe cards, fraudsters will instead try to get their hands on genuine cards – whether by opening a fraudulent account or by taking over an existing account and ordering a replacement card.

Identity fraud will be a big issue in faster payments too, with a need for good authentication on both ends of the transaction.

Synthetic identities are a particular challenge. Detecting them is tough: it means spotting the subtle clues that indicate an identity record which looks legitimate has actually been cultivated over time by a fraudster. And this is big business, with criminals using the latest machine learning and ready access to data (thanks to all of those breaches) to launch well-organised attacks at scale.

In the following session, Professor Pedro Domingos (author of “The Master Algorithm”) offered the great quote “if you try to fight machine learning with code you are doomed”. But it is not simply a case of implementing machine learning and moving on. As the Professor explained, the characteristics of fraud are constantly changing, so any machine learning system will need to be constantly tuned and re-trained to keep up.
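The re-training point can be sketched in a few lines. Everything here is invented for illustration – a single amount threshold stands in for a real fraud model, and the transaction amounts are toy data – but it shows the drift problem: a model trained on yesterday's fraud pattern collapses when fraudsters change behaviour, and recovers only when re-trained on recent data.

```python
import random

random.seed(42)

def make_transactions(n, fraud_amounts):
    """Toy data: legitimate spend is 1-50; fraud falls in a range
    the fraudsters choose (and will change)."""
    data = []
    for _ in range(n):
        if random.random() < 0.5:
            data.append((random.uniform(1, 50), 0))           # legitimate
        else:
            data.append((random.uniform(*fraud_amounts), 1))  # fraud
    return data

def train_threshold(data):
    """'Train' the simplest possible model: a single amount threshold
    halfway between the largest legitimate and smallest fraudulent amount."""
    legit = [amt for amt, label in data if label == 0]
    fraud = [amt for amt, label in data if label == 1]
    return (max(legit) + min(fraud)) / 2

def accuracy(model, data):
    return sum((amt > model) == bool(label) for amt, label in data) / len(data)

# Yesterday's fraud: big-ticket amounts. The model learns to catch it.
model = train_threshold(make_transactions(1000, (200, 500)))

# Today the fraudsters adapt, shifting their amounts to look more like
# ordinary spend. The unchanged model's accuracy collapses...
drifted = make_transactions(1000, (60, 120))
acc_before = accuracy(model, drifted)

# ...until the model is re-trained on recent transactions.
retrained = train_threshold(drifted)
acc_after = accuracy(retrained, drifted)
print(acc_before, acc_after)
```

A real system faces the same dynamic with thousands of features instead of one threshold, which is why the tuning and re-training never stops.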

Definitely a case of whack-a-mole.

2 comments

  1. I can see where this may be going. Current validation processes use static attributes/artefacts to establish the authenticity of an ID – for example date of birth, credit history, presence on an electoral roll, etc. Blockchainers will argue that if you can incorporate behaviour into an identity then you introduce an element which makes identity checking more in tune with the human world. Criminals may construct fake identities, but can they fake the historical behaviours of a real person? Even if they can, will machine learning not be able to detect the difference?

    1. The point was that current machine learning spots fixed/static patterns, but criminals will constantly change their behaviour, so the ML will need constant re-tuning.
