Fraud is everywhere, and the scale of the problem is staggering. In 2024, the global cost of online fraud and related financial crime exceeded $9 trillion.¹ To put that figure in perspective, it represents growth of over 40% since 2019 — more than eleven times the rate of the UK's entire economic growth over the same period.²
This raises a critical question: as we are forced through ever more complex security processes, why is it seemingly easier than ever to industrialise fraud and identity crime? Are the measures designed to protect us simply not working?
The problem is profoundly complex, with roots stretching back centuries. It isn’t the result of a single failing, but the unintended consequence of decades of decisions made in good faith. Layer upon layer, those decisions have combined to create a systemic vulnerability that touches everything from personal identity to information security.
Let's start with a simple analogy. Securing an online system should be like securing a bank vault. If you don't open the door, criminals can't get in to steal the valuables. Simple. "But," you might argue, "the digital world isn't a physical bank vault."
You're right. It’s not the same — yet we secure our digital world using the principles of a physical one, and that is the root of the problem.
All our modern security processes — the keys, codes, credentials, and even biometrics — were originally designed to secure tangible assets. These physical things were kept in locations where human oversight was a given. A security guard could spot fake documents, notice when someone tried too many key combinations, and best of all, recognise the right person because they knew them.
Online, none of those human checks apply. You have no idea who truly holds the password. Software can try billions of combinations in the blink of an eye. A biometric scan can't tell you if the person on the other end is the legitimate user under duress. Ultimately, your security is left to a game of chance. While it’s mathematically unlikely that any given user is a criminal, 100% of criminals are users. And we aren't talking about a few bad apples. It is estimated that hundreds of thousands of people are being forced to work in criminal compounds dedicated to industrial-scale fraud, with at least 120,000 in the Thai-Myanmar border region alone.³
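To make "the blink of an eye" concrete, here is a minimal back-of-the-envelope sketch in Python. The guess rate is an assumption (roughly what a dedicated offline cracking rig might manage against a stolen database of password hashes), so treat the numbers as orders of magnitude rather than benchmarks.

```python
# Illustrative arithmetic only: how long every password of a given length
# survives, assuming a hypothetical offline rig testing 10 billion guesses
# per second against stolen password hashes.
GUESSES_PER_SECOND = 10_000_000_000  # assumed throughput, not a measured benchmark

def seconds_to_exhaust(alphabet_size: int, length: int) -> float:
    """Time needed to try every combination of the given length."""
    return (alphabet_size ** length) / GUESSES_PER_SECOND

print(f"8 lowercase letters:   {seconds_to_exhaust(26, 8):.0f} seconds")       # ~21 seconds
print(f"8 mixed alphanumerics: {seconds_to_exhaust(62, 8) / 3600:.1f} hours")  # ~6.1 hours
```

Even the "stronger" eight-character password survives only a few hours against a single assumed machine; the guard counting failed attempts at the vault door has no digital equivalent here.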
If you think this is hyperbole, consider that the $9 trillion cost of fraud is a figure only eclipsed by the national GDPs of the United States and China. This is very big business.¹
The seeds of this crisis were sown long ago. In the 1760s, Carl Linnaeus's invention of the card index revolutionised the storage and classification of data. Modern databases, in essence, are merely hyper-efficient versions of that same system. More often than not, the data within them is categorised by the very things we use to identify ourselves: a name, an address, an account number. This is Personally Identifiable Information (PII), the same data that regulators in the EU and California are so desperate to protect.
The trouble arises from what is known as data correlation. Criminals combine countless records about an individual — some stolen from corporate hacks, others gathered legitimately every time a user gets annoyed with a cookie pop-up and clicks "accept." Each bit of data on its own might seem harmless. But if a criminal knows Mrs Jones has shopped at four online pet stores, bought several cat-themed items of clothing, taken out vet insurance, and belongs to eleven cat-themed social media groups, they know exactly how to craft a phishing email that will hook her into becoming a victim of fraud.
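As a rough illustration of how correlation works, the sketch below (Python, with entirely invented records) joins three seemingly innocuous datasets on a shared email address. No single record gives much away; the merged profile all but writes the phishing email.

```python
# A toy model of data correlation. Three "harmless" datasets, keyed on the
# same email address, are merged into a single targeting profile. Every name
# and record here is invented for illustration.
from collections import defaultdict

breach_dump   = [{"email": "mrs.jones@example.com", "password_hint": "pet's name"}]
loyalty_cards = [{"email": "mrs.jones@example.com", "purchases": ["cat jumper", "cat mug"]}]
social_groups = [{"email": "mrs.jones@example.com", "groups": ["Cat Lovers UK", "Tabby Appreciation"]}]

profiles = defaultdict(dict)
for dataset in (breach_dump, loyalty_cards, social_groups):
    for record in dataset:
        # The shared email address is the correlation key that stitches
        # otherwise unrelated records into a single identity.
        profiles[record["email"]].update(record)

print(profiles["mrs.jones@example.com"])
# One merged record: the hint, the purchases and the group memberships together
# reveal exactly which lure this person is most likely to click.
```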
These things have a way of becoming circular. Mrs Jones, finding passwords annoying, might use her cat's name, "T1ddles," for her login — believing the number makes it secure. She has just become the open door to the vault.
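Her reasoning fails because attackers do not guess at random. The sketch below is a toy version of the dictionary-plus-substitution approach that real password-cracking tools automate: start from likely words (pet names harvested from those same social media groups) and apply the usual letter-for-digit swaps. The wordlist is invented; the technique is standard.

```python
# A toy dictionary attack with "mangling rules": take likely words and apply
# common letter-for-digit substitutions. The handful of variants produced
# covers passwords people believe the digit has made strong.
from itertools import product

SWAPS = {"i": ["i", "1"], "e": ["e", "3"], "o": ["o", "0"], "a": ["a", "4"]}

def mangle(word: str):
    """Yield every variant of the word under the substitution table."""
    options = [SWAPS.get(ch, [ch]) for ch in word.lower()]
    for combo in product(*options):
        yield "".join(combo).capitalize()

candidates = {variant for pet in ("tiddles", "felix") for variant in mangle(pet)}
print(len(candidates), "candidate passwords")  # 8
print("T1ddles" in candidates)                 # True
```

Eight guesses, and the digit Mrs Jones trusted to keep her safe made no measurable difference.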
Fraud is easy because our approach has made it easy. We use concepts designed for the physical world to secure a digital one for which they are entirely inappropriate. We’ve made it worse by telling consumers that two-factor authentication (2FA), know-your-customer (KYC) checks, and firewalls will keep them safe. They won't. Nor will digital identity systems, whether issued by a company or a government.
Instead, they risk making the problem exponentially worse by centralising data, making correlation easier, and perpetuating the flawed, asymmetric nature of identity online, where individuals must constantly prove who they are, while taking it on trust that the organisation on the other end is genuine.