Defining Digital Identity

Our friends at Smartex challenged their readership the other day to define Digital Identity, with a bottle of wine on offer for the best definition. I’m pleased to say that the bottle was won by Consult Hyperion, who submitted a couple of entries.

DIACC announces launch of the Pan-Canadian Trust Framework


The Digital ID & Authentication Council of Canada (“DIACC”) announced the launch of the Pan-Canadian Trust Framework™ (“PCTF”) this week, a set of digital ID and authentication industry standards that will define how digital ID rolls out across Canada. The launch marks the shift from the framework’s development into official operation, beginning with alpha testing by public and private sector members in Canada. The alpha testing will inform DIACC’s PCTF Voila Verified Trustmark Assurance Program (“Voila Verified”), set to launch next year.

The tension in facial recognition


The rise of facial recognition technology and the erosion of privacy

In the 2002 movie Minority Report, Tom Cruise’s character has his eyes surgically replaced so he can avoid being identified by the all-pervasive retina-scanning system that the state uses to track people… and, of course, to show them targeted ads. It is a rather dystopian view of the broad application of biometric technology. However, judging by a lawsuit targeting Macy’s for their use of Clearview AI’s facial recognition technology in their stores, it seems that staying anonymous in the bricks-and-mortar world is becoming a little more like the movie. Whilst you may not require surgery, you may soon require something akin to glasses and a fake beard to avoid being tracked.

The issue here is that Clearview AI has been scraping images from publicly viewable sources on the web for a while, enabling them to build a database of facial biometrics against which captured facial images can be matched. Amongst the sources of this data are Facebook, Twitter, LinkedIn, YouTube and Vimeo, some of which have sent cease-and-desist letters to Clearview AI for breach of their terms of service. The aim, it seems, is for Clearview AI to create a one-to-many facial recognition solution that can identify anyone who appears in a photo or video on the web from a single image of their face. According to a BuzzFeed report, they were working with over 2,000 companies as of February 2020, and they are probably not alone, so perhaps we should be concerned.

Leveraging the payment networks for immunity passports

COVID-19

As if lockdown were not bad enough, many of us now face spending the next year with children unable to spend their gap year travelling the more exotic parts of the world. The traditional jobs in the entertainment and leisure sectors that could keep them busy, and pay for their travel, are no longer available. The opportunity to spend time with elderly relatives depends on the results of their last COVID-19 test.

I recognize that we are a lucky family to have such ‘problems’. However, they are representative of the issues we all face as we work hard to bring our families, companies and organizations out of lockdown. When can we open up our facilities to our employees, customers and visitors? What protection should we offer those employees who must, or choose to, work away from home? What is the impact of the CEO travelling abroad to meet new employees or customers, sign that large deal, or deliver the keynote at that trade fair in Las Vegas?

Identity – Customer-Centric Design

The team put on an excellent webinar this Thursday (May 21st, 2020) in the Tomorrow’s Transactions series. The focus was on Trust over IP, although digital identity and privacy were covered in the round.

The panellists were Joni Brennan of the DIACC (Digital ID & Authentication Council of Canada—full disclosure: a valued customer), long-time collaborator Andy Tobin of Evernym and our own Steve Pannifer and Justin Gage. Each of the panellists is steeped in expertise on the subject, gained from hard-won experience.

Joni and Andy presented, respectively, the DIACC and ToIP layered architectural models (largely congruent) for implementing digital identification services. The panellists agreed that no service could work without fully defined technical, business and governance structures. Another key point was that the problems of identification and privacy merge into one another. People need to make themselves known, but are reserved about making available a slew of personal information to organisations with whom they may seek no persistent relationship or do not fully trust.

At one point, it was mentioned that practical progress has been slow, even though the basic problem (to put one aspect crudely, why do I need so many passwords?) of establishing trust over digital networks has been defined for 20 years at least. It could be argued that Consult Hyperion has earned its living by designing, developing and deploying point solutions to the problem. I began to wonder why a general solution has been slow to arise, and speculated (to myself) that it was because the end-user has been ill-served. In particular, the user sign-up and sign-in experiences are inconsistent and usually horrible.

Therefore, I posed the question “What is the panel’s vision for how people will gain access to personalised digital services in 2030?” The responses were interesting (after momentary intakes of breath!) but time was short and no conclusions were reached.

I slept on the problem and came up with some tentative ideas. Firstly, when we are transacting with an organisation (from getting past a registration barrier to download some info, through buying things, to filing tax returns), everything on our screens is about the organisation (much of it irrelevant for our purposes) and nothing is about us. Why can’t our platforms present a prominent avatar representing us, clickable to view and edit the information we’ve recorded, and draggable onto register, sign-in or authorise fields in apps or browsers?

Now, there could be infinite variations of ‘me’, depending on how much personal information I want to give away and the degree of assurance the organisation needs to conduct business with me (of course, it’s entirely possible there could be no overlap). I reckon I could get by with three variations, represented by three personas (sketched in code after this list):

  • A pseudonym (I get tired of typing flintstone@bedrock.com just to access a café’s wifi; there are some guilty parties registering for our webinars too!)
  • Basic personal information (name, age, sex, address) for organisations I trust, with a need-to-know
  • All of the above, maybe more, but (at least, partly) attested by some trusted third party.
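To make the idea concrete, here is a minimal sketch of those three personas as data. Everything in it (the Persona type, the field names, the attesting party) is hypothetical, just one way of writing the idea down:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Persona:
    """One variation of 'me': the attributes I am willing to share."""
    label: str
    attributes: dict = field(default_factory=dict)
    attested_by: Optional[str] = None  # trusted third party, if any

# Hypothetical instances of the three variations described above.
pseudonym = Persona("cafe-wifi", {"email": "flintstone@bedrock.com"})
basic = Persona("trusted-org",
                {"name": "Fred Flintstone", "age": 42, "sex": "M",
                 "address": "301 Cobblestone Way, Bedrock"})
attested = Persona("high-assurance",
                   {**basic.attributes, "date_of_birth": "1980-02-02"},
                   attested_by="Bedrock Passport Office")
```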

Obsessives could be given the ability to define as many options, with as many nuances, as they like; but complexity should be easily ignorable to avoid clutter for the average user.

I think it’s the major operating system providers that need to make this happen: essentially Apple, Google (for Android) and Microsoft, preferably in a standard and portable way. For each, we would set up an ordered list of our preferred authentication methods (PIN, facial recognition, etc.) and organisations would declare what is acceptable to them. The system would work out what works for both of us, as the sketch below illustrates. If the organisation wants anything extra, say some kind of challenge/response, that would be up to them. Hopefully, that would be rare.
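A minimal sketch of that negotiation, assuming nothing more than an ordered user list and an organisation’s acceptable set (all the method names and the helper function here are made up, not any platform’s actual API):

```python
from typing import Optional

def negotiate_auth(user_preferences: list, org_accepts: set) -> Optional[str]:
    """Return the first method the user prefers that the organisation
    also accepts, or None if there is no overlap."""
    for method in user_preferences:
        if method in org_accepts:
            return method
    return None

# My ordered preferences, strongest preference first (hypothetical).
my_preferences = ["face", "fingerprint", "pin", "password"]

# What two hypothetical organisations declare acceptable.
bank_accepts = {"fingerprint", "pin"}
cafe_accepts = {"password", "pin", "face"}

print(negotiate_auth(my_preferences, bank_accepts))  # -> "fingerprint"
print(negotiate_auth(my_preferences, cafe_accepts))  # -> "face"
```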

The Apple Pay and Google Pay wallets go some way towards providing a solution. But sitting above the payment cards and boarding passes there needs to be the concept of a persona. At the moment, Apple and Google may be too invested in promulgating their own single customer views to see the need to take this extra step.

I sensed frustration from the panellists that, although everything is solvable (certainly technically), governance (e.g. who is liable for what when it all goes wrong?) remains a sticking point. True, but I think we need to put the average user front and centre. Focus groups with mocked-up user experiences would be a good start; we’d be happy to help with that!

Would you use the NHSX app?

I listened with interest to yesterday’s parliamentary committee on the proposed NHSX contact tracing app, which is being trialled on the Isle of Wight from today. You can see the recording here.

Much of the discussion concerned the decision to follow a centralised approach, in contrast to several other countries such as Germany, Switzerland and Ireland. Two key concerns were raised:

1. Can a centralised system be privacy-respecting?
Of course the answer to this question is yes, but it depends on how data is collected and stored. Techniques such as differential privacy are designed to allow data to be de-identified so that it can be analysed anonymously (e.g. for medical research), although there was no suggestion that NHSX is actually doing this.
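To give a flavour of the technique, here is a minimal sketch of a differentially private count query (my own illustration, nothing to do with the actual NHSX design): calibrated Laplace noise is added to the true answer, so that no single individual’s presence or absence changes the result in a detectable way.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Sample from Laplace(0, scale) via inverse-CDF sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(math.log(1 - 2 * abs(u)), u)

def private_count(records: list, predicate, epsilon: float = 0.1) -> float:
    """Answer 'how many records match?' with epsilon-differential privacy.
    A count query has sensitivity 1, so the noise scale is 1/epsilon."""
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Hypothetical example: how many app users reported symptoms today?
users = [{"id": i, "symptomatic": (i % 7 == 0)} for i in range(1000)]
print(private_count(users, lambda u: u["symptomatic"]))
```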

The precise details of the NHSX app are not clear at this stage, but it seems that the approach will involve identifiers being shared between mobile devices when they come into close proximity. These identifiers will then be uploaded to a central service, both to support studying the epidemiology of COVID-19 and to facilitate notifying people who may be at risk, having been in close proximity to an infected person. Whilst the stated intention is for those identifiers to be anonymous, the parliamentary debate clearly showed there are a number of ways in which the identifiers could become more identifiable over time. Because the identifiers are persistent, they are likely to be pseudonymous at best.

By way of contrast, a large team of academics has developed an approach called DP-3T, which has apparently influenced designs in Germany and elsewhere. It uses ephemeral (short-lived) identifiers. The approach is not fully decentralised, however. When a user reports that they have COVID-19 symptoms, the ephemeral identifiers that user’s device has been broadcasting, when coming into close proximity to other devices, are shared via a centralised service. In fact, they are distributed to every device in the system, so that each device can check locally whether it has heard any of them: risk decisioning is made at the edges, not in the middle. This means that no central database of contact identifiers is needed (but presumably there will be a database of registered devices).
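To give a flavour of how ephemeral identifiers work, here is a toy sketch loosely modelled on the published DP-3T design; the key sizes, rotation schedule and matching logic are simplified illustrations, not the real protocol:

```python
import hmac
import hashlib
import secrets

def ephemeral_ids(daily_key: bytes, n: int = 96) -> list:
    """Derive n short-lived broadcast identifiers from a daily secret key
    (e.g. one per 15-minute window), using HMAC-SHA256 as a PRF."""
    return [hmac.new(daily_key, i.to_bytes(4, "big"), hashlib.sha256).digest()[:16]
            for i in range(n)]

# Alice's phone derives and broadcasts ephemeral IDs; nobody can link
# them to her, or to each other, without the daily key.
alice_key = secrets.token_bytes(32)
alice_ids = ephemeral_ids(alice_key)

# Bob's phone records the IDs it hears nearby (a simulated encounter).
bob_heard = {alice_ids[42], secrets.token_bytes(16)}

# If Alice reports symptoms, only her daily key is published. Every
# phone re-derives her ephemeral IDs and checks locally for a match:
# the risk decision is made at the edge, not in a central database.
published_keys = [alice_key]
at_risk = any(eid in bob_heard
              for key in published_keys
              for eid in ephemeral_ids(key))
print(at_risk)  # -> True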

It also means there will be less scope for epidemiological research.

All of this is way beyond the understanding of most people, including those tasked with providing parliamentary scrutiny. So how can the average person on the street, or the average peer in Westminster, be confident in the NHSX app? Well, apparently the NHSX app is going to be open-sourced, and that is probably our greatest protection. It means you won’t need to rely on what NHSX says: inevitably there will be universities, hackers, enthusiasts and others lining up to pick it apart.

2. Can a centralised system interoperate with the decentralised systems in other countries to allow cross-border contact tracing?
It seems to us that “centralised versus decentralised” is a gross simplification of the potential interoperability issues. True, the primary issue does seem to be the way that identifiers are generated, shared and used in risk decisioning, but for cross-border contact tracing to be possible there will need to be alignment on a whole range of other things, including technical standards, legal requirements and perhaps even, dare I say it, liability. Of course, if the DP-3T model is adopted by many countries then it could become the de facto standard, which could leave the NHSX app isolated.

Will the NHSX app be an effective tool to help us get back to normal? This will depend entirely on how widely it is adopted, which in turn will require people to see that the benefits outweigh the costs. That’s a value exchange calculation that most people will not be able to make. How can they make a value judgment on the potential risks to their civil liberties of such a system? The average user is probably more likely to notice the impact on their phone’s battery life or when their Bluetooth headphones stop working.

There’s a lot more that could be said and I’ll be discussing the topic further with Edgar Whitley, Nicky Hickman and Justin Gage on Thursday during our weekly webinar.

Counterintuitive Cryptography

There was a post on Twitter in the midst of the COVID-19 pandemic news this week that caught my eye. It quoted an emergency room doctor in Los Angeles asking for help from the technology community, saying “we need a platform for frontline doctors to share information quickly and anonymously”. It went on to state the obvious requirement that “I need a platform where doctors can join, have their credentials validated and then ask questions of other frontline doctors”.

This is an interesting requirement that tells us something about the kind of digital identity that we should be building for the modern world, instead of trying to find ways to copy passport data around the web. The requirement, to know what someone is without knowing who they are, is fundamental to the operation of a digital identity infrastructure in the kind of open democracy that we (ie, the West) espouse. The information sharing platform needs to know that the person answering a question has relevant qualifications and experience. Who that person is, is not important.

Now, in the physical world this is an extremely difficult problem to solve. Suppose there was a meeting of frontline doctors to discuss different approaches and treatments, but the doctors wanted to remain anonymous for whatever reason (for example, they may not want to compromise the identity of their patients). I suppose the doctors could all dress up as ghosts, cover themselves in bedsheets and enter the room by presenting their hospital identity cards (through a slit in the sheet) with their names covered up by black pen. But then how would you know that the identity card belongs to the “doctor” presenting it? After all, the picture on every identity card would be the same (someone dressed as a ghost), and you would have no way of knowing whether the cards were really theirs or whether the wearers were agents of foreign powers, infiltrators hellbent on spreading false information to ensure the maximum number of deaths. The real-world problem of demonstrating that you hold some particular credential, or that you are the “owner” of a reputation, without disclosing personal information is a very difficult problem indeed.

(It also illustrates the difficulty of trying to create large-scale identity infrastructure by using identification methods rather than authenticating to a digital identity infrastructure. Consider the example of James Bond, one of my favourite case studies. James Bond is masquerading as a COVID-19 treatment physician in order to obtain the very latest knowledge on the topic. He walks up to the door of the hospital where the meeting is being held and puts his finger on the fingerprint scanner at the door… at which point the door loudly says “hello Mr Bond, welcome back to the infectious diseases unit”. Oooops.)

In the virtual world this is quite a straightforward problem to solve. Let’s imagine I go to the doctors’ information sharing platform and attempt to log in. The system will demand to see some form of credential proving that I am a doctor. So I take my digital hospital identity card out of my digital wallet (this is a thought experiment, remember; none of these things actually exist yet) and send the relevant credential to the platform.

The credential is an attribute (in this case, IS_A_DOCTOR), together with an identifier for the holder (in this case, a public key), together with the digital signature of someone who can attest to the credential (in this case, the hospital that employs the doctor). Now, the information sharing platform can easily check the digital signature of the credential, because it has the public keys of all of the hospitals, and can extract the relevant attribute.

But how do they know that this IS_A_DOCTOR attribute applies to me, and that I haven’t copied it from somebody else’s mobile phone? That’s also easy to determine in the virtual world, using the public key of the associated digital identity. The platform can simply encrypt some data (anything will do) using this public key and send it to me. Since the only person in the entire world who can decrypt this message is the person with the corresponding private key, which is in my mobile phone’s secure, tamper-resistant memory (eg, the SIM or the Secure Enclave or Secure Element), I must be the person associated with the attribute. The phone will not allow the private key to be used to decrypt this message without strong authentication (in this case, let’s say a fingerprint or a facial biometric), so the whole process works smoothly and almost invisibly: the doctor runs the information sharing platform app, the app invisibly talks to the digital wallet app to get the credential, the digital wallet app asks for the fingerprint, the doctor puts his or her finger on the phone and away we go.
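Here is a toy sketch of that flow using the pyca/cryptography library. The credential format, the IS_A_DOCTOR encoding and the challenge step are my own illustration of the idea in the text, not any production wallet protocol:

```python
import json, os
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa, padding

sig_pad = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                      salt_length=padding.PSS.MAX_LENGTH)
enc_pad = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                       algorithm=hashes.SHA256(), label=None)

# Key pairs for the hospital (issuer) and the doctor (credential holder).
hospital_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
doctor_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
doctor_pub = doctor_key.public_key().public_bytes(
    serialization.Encoding.PEM,
    serialization.PublicFormat.SubjectPublicKeyInfo).decode()

# The credential: an attribute plus the holder's public key, hospital-signed.
credential = json.dumps({"attribute": "IS_A_DOCTOR",
                         "holder_key": doctor_pub}).encode()
signature = hospital_key.sign(credential, sig_pad, hashes.SHA256())

# The platform checks the hospital's signature (raises InvalidSignature
# if the credential has been forged or tampered with)...
hospital_key.public_key().verify(signature, credential, sig_pad, hashes.SHA256())

# ...then checks the presenter actually holds the matching private key,
# by encrypting a random challenge to the public key in the credential.
holder_key = serialization.load_pem_public_key(
    json.loads(credential)["holder_key"].encode())
challenge = os.urandom(32)
ciphertext = holder_key.encrypt(challenge, enc_pad)

# Only the doctor's wallet (after strong authentication) can decrypt it.
assert doctor_key.decrypt(ciphertext, enc_pad) == challenge
print("Attribute accepted: IS_A_DOCTOR, with no idea who the doctor is")
```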

Now the platform knows that I am a doctor, but it does not have any personally identifiable information about me and has no idea who I am. It does, however, have the public key. Since the hospital has signed a digital certificate containing this public key, if I should subsequently turn out to be engaged in dangerous behaviour, giving out information that I know to be incorrect, or whatever else doctors can do to get themselves disbarred from being doctors, then a court order against the hospital will result in them disclosing who I am. I can’t do bad stuff.

This is a good example of how cryptography can deliver some amazing but counterintuitive solutions to serious real-world problems. I know from my personal experience, and the experiences of colleagues at Consult Hyperion, that it can sometimes be difficult to communicate just what can be done in the world of digital identity by using what you might call counterintuitive cryptography, but it’s what we will need to make a digital identity infrastructure that works for everybody in the future. And, crucially, all of the technology exists and is tried and tested so if you really want to solve problems like this one, we can help right away.

The “isRecovered?” attribute

So far the tech giants seem to be the coronavirus winners, with a massive surge in digital communications and online orders. The impact on lift sharing companies is less clear.

The guidance from both Uber and Lyft says that if they are notified (by a public health authority) that a driver has COVID-19 they may temporarily suspend the driver’s account. It is not exactly clear how this would work.

That got us wondering whether the digital identity systems that we spend so much time talking about could help. It seems to me there are two potential identity questions here:

1. Is the driver who Uber or Lyft thinks it is?

2. Does the driver have coronavirus?

The first question should be important to Uber and Lyft at any time. OK, for the moment they want to be sure they know who is driving, to give them a better chance of knowing if the driver has the disease, but there are all sorts of other reasons why they might want to be sure that the driver is who they think it is: whether the person can legally drive, for one.

The second question is harder. Just because the driver doesn’t have the virus today doesn’t mean he or she won’t have it tomorrow. Perhaps the ability to share an isRecovered? attribute that says “I’ve recovered from the illness” would be useful when we start to see the light at the end of this tunnel we are entering. And the ability to share that anonymously, providing assurance to both driver and passenger, might be helpful too.

All this to one side, the guidance from both Uber and Lyft outlines the financial measures they are putting in place to provide security to drivers who self-isolate. That is a great example of corporate responsibility: providing the incentive and support required to allow their drivers to do the right thing.

KYC at a distance

We live in interesting times. Whatever you think about the Coronavirus situation, social distancing will test our ability to rely on digital services. And one place where digital services continue to struggle is onboarding – establishing who your customer is in the first place.  

One of the main reasons for this is that regulated industries such as financial services are required to perform strict “know your customer” checks when onboarding customers, and risk substantial fines in the event of compliance failings. Understandably then, financial service providers need to be cautious in adopting new technology, especially where the risks are not well understood or where regulators are yet to give clear guidance.

Fortunately, a lot of work is being done. This includes the development of new identification solutions and an increasing recognition that this is a problem that needs to be solved.

The Paypers has recently published its “Digital Onboarding and KYC Report 2020”. It is packed full of insights into developments in this space, features several Consult Hyperion friends and is well worth a look.

You can download the report here: https://thepaypers.com/reports/digital-onboarding-and-kyc-report-2020

Technology and Trust @ Money20/20

Online trust is a pretty serious issue, but it’s not always easy to quantify. We all understand that it is important, but what exactly is its value in pounds, shillings and pence (or whatever we will be using after Brexit), and how can we use that value to develop some business cases? It’s one thing to say (as you will often hear at conferences) that some technology or other can increase trust, but how do we know whether that means it is worth spending the money on it? At Consult Hyperion we have a very well-developed methodology, known as Structured Risk Analysis (SRA), for managing risk and directing countermeasure expenditures, but we need reasonable, informed estimates to make it work.

The specific case of online reviews might be one area where trust technologies can be assessed in a practical way. In the UK, the Competition and Markets Authority (CMA) estimates that a staggering £23bn a year of UK consumer spending is now influenced by online customer reviews, and the consumer organisation Which? has begun a campaign to stop fake reviews from misdirecting this spending. According to their press office (“Revealed: Amazon plagued with thousands of fake five-star reviews”, https://press.which.co.uk/whichpressreleases/revealed-amazon-plagued-with-thousands-of-fake-five-star-reviews/), fake reviews are a very serious problem.

Unscrupulous businesses undoubtedly find fake reviews an incredibly useful tool. There are millions of examples we could use to illustrate this, but here is just one: “Asad Malik, 38, used fake reviews and photographs of secure car parks hundreds of miles away to trick customers into leaving their vehicles with him when they flew from Gatwick” [Airport parking boss jailed for dumping cars in muddy fields].

So how can we use technology to make a difference here? When you read a review of an airport parking service, or a restaurant, or a Bluetooth speaker, how can you even be sure (to choose the simplest example) that the reviewer purchased the product? Well, one possibility might be to co-opt the payment system, and this can be done in a privacy-enhancing way. Suppose that when you pay the bill at a restaurant, having told your credit card provider that you are happy to be a reviewer, your credit card company sends you an unforgeable cryptographic token that proves you ate at the restaurant. Then, when you go to Tripadvisor or wherever, if you want to post a review of the restaurant, you have to provide such a token. The token would be cryptographically blinded so that the restaurant and review-readers would not know who you are: you could be honest, but they could be sure that you’ve eaten there.
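For the curious, here is a toy sketch of the blinding idea using textbook RSA blind signatures; a real deployment would need a hardened scheme, and every name here is illustrative:

```python
import hashlib
import secrets
from cryptography.hazmat.primitives.asymmetric import rsa

# The card issuer's RSA key; we extract the raw numbers for textbook maths.
issuer = rsa.generate_private_key(public_exponent=65537, key_size=2048)
n = issuer.private_numbers().public_numbers.n
e = issuer.private_numbers().public_numbers.e
d = issuer.private_numbers().d

# The customer creates a review token and hashes it to an integer.
token = b"review-token:restaurant-1234:" + secrets.token_bytes(16)
m = int.from_bytes(hashlib.sha256(token).digest(), "big") % n

# The customer blinds the token with a random factor r before sending it,
# so the issuer signs without ever seeing the token itself.
r = secrets.randbelow(n - 2) + 2  # assumed coprime to n (overwhelmingly likely)
blinded = (m * pow(r, e, n)) % n
blind_sig = pow(blinded, d, n)

# The customer unblinds the signature; it now verifies against the token,
# but the issuer cannot link it back to the card transaction.
sig = (blind_sig * pow(r, -1, n)) % n
assert pow(sig, e, n) == m
print("Valid review token, unlinkable to the payment")
```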

Such “review tokens” are an obvious thing to store in digital wallets. You could easily imagine Calibra, to choose an obvious case study, storing these tokens and automatically presenting them when you log in to review sites. This would be a simple first step toward a reputation economy that would benefit consumers and honest service providers alike.

This is one of the cross-overs between payments and identity that we expect to be much discussed at Money20/20 in Las Vegas this week. I’ll be there with the rest of the Consult Hyperion team, so do come along to the great, great Digital Trust Track on Tuesday 29th and join in the discussions.

