While attending Money Live in London this year, I had the pleasure of participating in some really insightful sessions around one of my favourite topics – battling fraud. It was great to hear that in financial services we are seeing a decrease in ATO (Account Takeover), but unfortunately scams and APP fraud are on the rise, having spiked in 2021 and 2022.
Authorised Push Payment (APP)
Long story short – it’s a scam. It works on the same principle as many other deception attacks (phishing, for example). A bad actor impersonates a trusted party and then manipulates the victim into making payments, transferring funds, purchasing goods or disclosing sensitive, personal information.
With the dawn of caller ID spoofing it’s particularly dangerous, as the attacker can substitute their phone number and make the call appear on your mobile as coming from your bank or any other organisation you do (or don’t!) business with. ‘Just look at my number’ – it has to be your bank if I am calling from the number published on their website, right? Wrong. While it’s illegal in the UK, and Apple’s App Store tries to protect us by disallowing tools that facilitate such attacks, if you dig long enough you can still find apps that let you call from any number at a bargain price of £5.
One of the most common scripts is to tell you that your bank has discovered suspicious activity on your account – but there’s nothing to worry about, as they have already stopped the transaction. Your bank has helped you already! At the same time rapport has just been built, because let’s be fair here – the person is trying to help me, which lowers my suspicion levels – that’s just human nature. So the next step is to protect the money from any potential further attempts. And for that we need to move it to a ‘secure’ account – just temporarily, while we identify the culprit. And the poor victim plays ball, transferring the money to a rogue account, thinking: aren’t I lucky, I saved my money from being stolen, all thanks to modern technology and my bank.
The current defence model
It is incredibly hard and complex to detect this type of fraud. After all, it is the actual, genuine person performing actions on their own accounts. And I get it, it’s easy to classify moving all funds to another account as risky, but if we break it down into smaller transfers and trickle the money out, we might just about be able to sneak under the radar. So, what do we have in the anti-fraud controls inventory that we could use to protect the victims? We apply context, maybe a pinch of behavioural biometrics, AI and ML to tell us whether this is something a person would normally do or whether it seems unusual. We may use other intelligence sources to check if the funds are funnelled into accounts that we already know are fraudulent, and probably a few other techniques. But… none of those even remotely touch the fundamental issue and root cause of APP scams – the betrayal of trust.
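To make the ‘trickle out’ point concrete, here is a minimal sketch of the kind of aggregation rule a fraud engine might apply to catch money being drained in small chunks. Every threshold, field name and parameter below is an illustrative assumption on my part, not any bank’s real logic:

```python
from datetime import datetime, timedelta

# Hypothetical rule: flag when several smaller transfers to newly added
# payees drain a large share of the balance within a short window.
WINDOW = timedelta(hours=24)   # illustrative look-back window
DRAIN_RATIO = 0.5              # illustrative: >50% of the balance moved
MIN_TRANSFERS = 3              # illustrative: split into several payments

def looks_like_trickle_out(transfers, balance, now=None):
    """transfers: dicts with 'amount', 'timestamp', 'payee_is_new'."""
    now = now or datetime.utcnow()
    recent = [t for t in transfers
              if now - t["timestamp"] <= WINDOW and t["payee_is_new"]]
    total = sum(t["amount"] for t in recent)
    return len(recent) >= MIN_TRANSFERS and total >= DRAIN_RATIO * balance
```

Even a rule like this only scores behaviour; it says nothing about whether the person was tricked into it, which is exactly the gap described above.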
Control is the highest form of (zero) trust
And of course, I am being sarcastic here. Identity and Access Management is built on trust; it is IAM’s foundation. Having said that, TRUST is a STATE that we need to build up to. It is not to be taken for granted. No trust relationship is established without technical security controls that allow for verification of the partner. For example, in OIDC (OpenID Connect) flows we use a client_id and client_secret to establish trust between the identity provider and the application. If the application knows its credentials, the IdP (Identity Provider) will issue the token. We have a trust relationship based on credentials. This trust is traditionally established in a unidirectional way. The application presents itself to the IdP and performs authentication in order to exchange a code for a token. But somewhere along the way we realised that this simply isn’t sufficient. We may want to establish mutual trust, not only verifying the client side for the server, but also verifying the server side for the benefit of the client. We have a standard for this in secure network communications; it’s called Mutual Transport Layer Security, or mTLS for short.
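For illustration, here is roughly what that unidirectional credential check looks like in practice – a minimal sketch of the OIDC authorization-code exchange, with a placeholder endpoint, client_id and client_secret rather than any real provider:

```python
import requests

# Minimal sketch of the OIDC authorization-code exchange: the client
# proves its identity to the IdP with client_id/client_secret.
# Endpoint and credentials are placeholders, not a real provider.
TOKEN_ENDPOINT = "https://idp.example.com/oauth2/token"

def exchange_code_for_token(code: str) -> dict:
    resp = requests.post(
        TOKEN_ENDPOINT,
        data={
            "grant_type": "authorization_code",
            "code": code,
            "redirect_uri": "https://app.example.com/callback",
            "client_id": "my-app",       # who the client claims to be
            "client_secret": "s3cr3t",   # proof of that claim
        },
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()  # access_token / id_token on success
```

Notice the asymmetry: the client hands over its credentials, while the only thing vouching for the server is the TLS certificate the user never sees.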
The ‘oppression’ of digital society
Once again, a little bit of sarcasm here, but what effectively happened is that we have been forced (note I didn’t use the word bullied) to conform with security protocols that the technology or regulators set for us. A good example is PSD2 (the second Payment Services Directive), which defines SCA (Strong Customer Authentication) or, if I wanted to simplify, MFA (Multi-Factor Authentication). We are required to enter a username and password and an OTP from a text message received on our mobile phone (for the purpose of illustration, as this may vary). We need to make sure we (strongly) authenticate ourselves when we interact with the bank. But what does the bank do for us in return? With the exception of passkeys (which don’t cover the issue I am about to talk about), little to nothing – we think we’re interacting with the bank, but unfortunately that may NOT be the case. To make things worse, attackers may take us to phishing websites that they no longer even need to build to resemble the original. Evilginx is an example of an openly available platform for building Man-in-the-Middle (MiTM) attacks. And it comes in handy with ‘phishlets’ (ready-made configurations) for LinkedIn, Facebook, HSBC… oops! Once again, passkeys are resistant to this type of attack, but last time I checked my bank wasn’t using them.
Mutual Trust via UIPA (User Initiated Peer Authentication)
If I log into my bank account using passkeys, I can be sure that it’s the bank’s website (or app, for the sake of the argument). But what happens if I receive a phone call from a number that is known to belong to my bank?
We have had CIBA (Client Initiated Backchannel Authentication) for a while. You call your bank and identify yourself, maybe answer some security questions, and then the support person, the bank clerk, initiates CIBA. That sends a push notification to your mobile application, which you open with FaceID and tap confirm. That gives the clerk confidence that you are who you say you are. The proposal is to use this phenomenon in reverse. I didn’t want to call it reverse CIBA, hence a poor attempt at naming it UIPA (User Initiated Peer Authentication).
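For context, the clerk-initiated side looks roughly like this – a rough sketch of CIBA’s poll mode as per the OpenID spec, with the URLs, client credentials and binding_message value all assumed for illustration:

```python
import time
import requests

# Rough sketch of CIBA poll mode: the clerk's system requests
# authentication for the customer, the customer approves via push
# notification, and the system polls for the token.
# URLs and client credentials below are placeholders.
BC_AUTH_ENDPOINT = "https://idp.example-bank.com/bc-authorize"
TOKEN_ENDPOINT = "https://idp.example-bank.com/token"
CLIENT_AUTH = ("bank-support-desk", "s3cr3t")  # hypothetical client

def authenticate_customer(customer_hint: str) -> dict:
    # Step 1: clerk initiates – this triggers the push notification.
    start = requests.post(BC_AUTH_ENDPOINT, auth=CLIENT_AUTH, data={
        "scope": "openid",
        "login_hint": customer_hint,
        "binding_message": "Support call #4711",  # shown to the customer
    }, timeout=10).json()

    # Step 2: poll until the customer confirms (e.g. with FaceID).
    while True:
        time.sleep(start.get("interval", 5))
        resp = requests.post(TOKEN_ENDPOINT, auth=CLIENT_AUTH, data={
            "grant_type": "urn:openid:params:grant-type:ciba",
            "auth_req_id": start["auth_req_id"],
        }, timeout=10)
        if resp.status_code == 200:
            return resp.json()  # tokens: the customer is verified
        if resp.json().get("error") != "authorization_pending":
            raise RuntimeError("customer declined or request expired")
```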
The user (the subject of the potential transaction, the customer) initiates a verification flow with some kind of challenge that would be included in the response and tied to a specific action, for example an interaction or transaction ID (I am making this up as I go). The bank clerk receives the challenge and needs to authenticate themselves to verify their identity and send the confirmation, in the form of a token, through a push notification, just like CIBA does. Only a genuine bank officer on the other side would be able to complete it. That way we have established trust mutually – the bank knows they’re dealing with their customer and the customer KNOWS it is the bank. If the caller cannot verify their identity, the message is simple – it’s a scam.
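Since UIPA is only a proposal, here is one way the customer-side verification could be sketched. Every endpoint, claim name and helper below is hypothetical, invented purely to show the shape of the challenge–response:

```python
import secrets
import jwt  # PyJWT; the whole flow below is a hypothetical UIPA sketch

# The customer's app generates a one-time challenge bound to the
# specific interaction (e.g. the support call in progress).
def create_challenge(interaction_id: str) -> dict:
    return {"interaction_id": interaction_id,
            "nonce": secrets.token_urlsafe(16)}

# The bank signs the challenge only after the clerk has authenticated;
# the app verifies the response against the bank's published public key
# (obtainable, say, from a well-known JWKS endpoint).
def verify_bank_response(signed_response: str, challenge: dict,
                         bank_public_key: str) -> bool:
    try:
        claims = jwt.decode(signed_response, bank_public_key,
                            algorithms=["RS256"],
                            audience="uipa-customer-app")
        return (claims.get("nonce") == challenge["nonce"]
                and claims.get("interaction_id")
                == challenge["interaction_id"])
    except jwt.InvalidTokenError:
        return False  # cannot verify the bank => treat the call as a scam
```

The key property is that the signature can only be produced by someone who has authenticated against the bank’s own IdP – a scammer spoofing the bank’s number has no way to complete the round trip.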
Scaling Up
The above works well for the banks’ ecosystem and more or less remediates scammers impersonating the bank. But what if they try to impersonate another business? Another very popular script is the investment scam. Long story short, I convince you to purchase bogus shares, cryptocurrency (or sometimes just services). Since the financial sector is heavily regulated, only businesses that have permission to operate can offer those types of services. If the regulator wanted to go the extra mile, they’d offer this reverse verification service as an open ecosystem.
If I want to hire a car, I sometimes need to provide information about how many points I have on my licence. Some hire companies will not allow me to use their services if I exceed a certain threshold. I am too risky a driver and it’s beyond their appetite. Fair. In order to do that (in the UK) we log into the DVLA (Driver and Vehicle Licensing Agency) using the Government Gateway and provide consent, which in turn generates a shareable code. We can pass this code to the car hire company and voilà, they can see the details we shared – driving licence plus endorsements, if any.
We could use the same principle to authenticate ourselves in business transactions carried out over the phone or over a digital channel like chat. This article is not the place for the detailed design of such a service, and undoubtedly it would require at least a few guardrails, but you get my point in principle. If the caller can successfully authenticate using biometrics (server side, of course) and provide consent to share their limited details – for example, their name and current employer’s information, together with maybe a brief business description – then the potential victim can easily make the decision. No mutual authentication, no business. This doesn’t need to be operated by the regulator; after all, credit score providers are private businesses with no direct links to financial governing bodies. An opportunity for business – just saying.
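Borrowing the DVLA share-code mechanics, the core of such a service could be as simple as this sketch – the in-memory registry, field names and expiry policy are all assumptions on my part, not a design:

```python
import secrets
from datetime import datetime, timedelta

# Hypothetical registry sketch (Python 3.10+): a verified caller
# consents to share limited details and receives a short-lived code;
# the person they are calling redeems it to see who they are really
# talking to. All names and fields are invented for illustration.
_codes: dict[str, dict] = {}

def issue_share_code(verified_caller: dict) -> str:
    """Called only after the caller has authenticated (e.g. server-side
    biometrics) and consented to share these fields."""
    code = secrets.token_hex(4).upper()  # e.g. '9F3A01BC'
    _codes[code] = {
        "name": verified_caller["name"],
        "employer": verified_caller["employer"],  # a licensed firm
        "purpose": verified_caller["purpose"],
        "expires": datetime.utcnow() + timedelta(minutes=15),
    }
    return code

def redeem_share_code(code: str) -> dict | None:
    entry = _codes.pop(code, None)  # single use
    if entry and entry["expires"] > datetime.utcnow():
        return {k: entry[k] for k in ("name", "employer", "purpose")}
    return None  # expired or unknown: no mutual authentication, no business
```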
Going beyond financial services
If we take another step back, we can utilise the principle beyond the financial vertical. Without a shadow of a doubt this would require time, resources and investment, but it could be a very successful way to limit the number of scams around us.
The root cause of APP
The misuse of trust is the root cause of most, if not all, scams. If we can apply mutual authentication to our non-digital channels (funnily enough, by using digital channels), we could reduce scam and fraud numbers and at the very least stop the most devastating ones, like the loss of life savings or entire retirement funds. It just shows the importance of not only using the latest gizmos like passkeys or CIBA with push notifications, but also extending the authentication protocols to hybrid architectures, where traditional channels are complemented by their digital counterparts. Surely we’re missing a trick here, would you not agree?