The Internet isn't a Vault

Why is it seemingly so easy to industrialise fraud and identity crime? The problem is incredibly complex, with roots reaching back to the 18th century. Decisions taken in good faith over decades have layered one upon another to create the mother of all unintended consequences.

June 2, 2025

Fraud is everywhere, and the scale of the problem is staggering. In 2024, the global cost of online fraud and related financial crime exceeded 9 trillion dollars.¹ To put that figure in perspective, it represents growth of around 80% since 2019, a period in which the UK's entire economy grew by roughly half a percent.²

This raises a critical question: as we are forced through ever more complex security processes, why is it seemingly easier than ever to industrialise fraud and identity crime? Are the measures designed to protect us simply not working?

The problem is profoundly complex, with roots stretching back centuries. It isn’t the result of a single failing, but rather the unintended consequence of decades of decisions made in good faith. These layers have combined to create a systemic vulnerability that touches everything from personal identity to information security.

Let's start with a simple analogy. Securing an online system should be like securing a bank vault. If you don't open the door, criminals can't get in to steal the valuables. Simple. "But," you might argue, "the digital world isn't a physical bank vault."

You're right. It’s not the same — yet we secure our digital world using the principles of a physical one, and that is the root of the problem.

All our modern security processes — the keys, codes, credentials, and even biometrics — were originally designed to secure tangible assets. These physical things were kept in locations where human oversight was a given. A security guard could spot fake documents, notice when someone tried too many key combinations, and best of all, recognise the right person because they knew them.

Online, none of those human checks apply. You have no idea who truly holds the password. Software can try billions of combinations in the blink of an eye. A biometric scan can't tell you if the person on the other end is the legitimate user under duress. Ultimately, your security is left to a game of chance. While it’s mathematically unlikely that any given user is a criminal, 100% of criminals are users. And we aren't talking about a few bad apples. It is estimated that hundreds of thousands of people are being forced to work in criminal compounds dedicated to industrial-scale fraud, with at least 120,000 in the Thai-Myanmar border region alone.³
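To make "billions of combinations" concrete, here is a minimal back-of-the-envelope sketch. The guess rate and the password formats are illustrative assumptions (roughly what offline cracking of a fast hash manages on commodity hardware), not figures from this article.

```python
# Rough arithmetic: how long does exhaustive guessing take at machine speed?
# The guess rate below is an assumed, illustrative figure, not a number
# taken from this article.

GUESSES_PER_SECOND = 10_000_000_000  # assumed: ten billion attempts per second

def seconds_to_exhaust(alphabet_size: int, length: int) -> float:
    """Time to try every possible password of this alphabet and length."""
    return (alphabet_size ** length) / GUESSES_PER_SECOND

scenarios = {
    "8 lowercase letters": (26, 8),
    "8 letters + digits, mixed case": (62, 8),
    "12 letters + digits, mixed case": (62, 12),
}

for label, (alphabet, length) in scenarios.items():
    secs = seconds_to_exhaust(alphabet, length)
    print(f"{label}: {secs:,.0f} seconds (~{secs / 31_557_600:,.4f} years)")
```

Under these assumptions, eight lowercase letters fall in about twenty seconds and eight mixed characters in a few hours; only genuine length pushes the search into years. No security guard is counting the attempts.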

If you think this is hyperbole, consider that the $9 trillion cost of fraud is a figure only eclipsed by the national GDPs of the United States and China. This is very big business.¹

The seeds of this crisis were sown long ago. In 1760, Carl Linnaeus's invention of the card index revolutionised the storage and classification of data. Modern databases, in essence, are merely hyper-efficient versions of that same system. More often than not, the data within them is categorised by the very things we use to identify ourselves: a name, an address, an account number. This is Personally Identifiable Information (PII), the same data that regulators in the EU and California are so desperate to protect.

The trouble arises from what is known as data correlation. Criminals combine countless records about an individual: some stolen from corporate hacks, others gathered legitimately every time a user gets annoyed with a cookie pop-up and clicks "accept." Each bit of data on its own might seem harmless. But if a criminal knows Mrs Jones has shopped at four online pet stores, bought several cat-themed items of clothing, taken out vet insurance, and joined eleven cat-themed social media groups, they know exactly how to craft a phishing email that will hook her into becoming a victim of fraud.
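As a loose illustration of how little machinery correlation needs, the sketch below joins a few invented records (the shops, groups and email address are all made up for the example) on a shared identifier. Each source is harmless on its own; the combined profile is a phishing brief.

```python
from collections import defaultdict

# Invented example data: each source looks harmless in isolation.
breached_shop_orders = [
    {"email": "mrs.jones@example.com", "item": "cat-print scarf", "shop": "PetThreads"},
    {"email": "mrs.jones@example.com", "item": "catnip toys", "shop": "WhiskerWorld"},
]
leaked_insurance_quotes = [
    {"email": "mrs.jones@example.com", "product": "vet insurance", "pet": "cat"},
]
scraped_social_groups = [
    {"email": "mrs.jones@example.com", "group": "Cats of Surrey"},
    {"email": "mrs.jones@example.com", "group": "Rescue Cat Owners"},
]

# Correlation is just a join on whatever identifier the sources share.
profile = defaultdict(list)
for record in breached_shop_orders + leaked_insurance_quotes + scraped_social_groups:
    key = record.pop("email")
    profile[key].append(record)

# The combined profile tells an attacker exactly which lure will work.
for email, facts in profile.items():
    print(email, facts)
```

Real datasets are far messier, but an attacker only needs one reliable join key, and an email address or phone number is usually enough.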

These things have a way of becoming circular. Mrs Jones, finding passwords annoying, might use her cat's name, "T1ddles," for her login — believing the number makes it secure. She has just become the open door to the vault.
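To see why the digit in "T1ddles" buys nothing, here is a sketch of the rule-based guessing that cracking tools automate. The seed words and substitution table are invented for the example; in practice they would come straight from a correlated profile like the one above.

```python
from itertools import product

# Invented seed words an attacker might pull from Mrs Jones's correlated profile.
seed_words = ["tiddles", "jones", "cat"]

# The handful of substitutions people rely on to make a word feel "secure".
substitutions = {"i": ["i", "1", "!"], "a": ["a", "4", "@"], "e": ["e", "3"], "o": ["o", "0"]}

def mangled(word):
    """Yield every variant of the word under the substitutions, in three case forms."""
    options = [substitutions.get(ch, [ch]) for ch in word]
    for combo in product(*options):
        variant = "".join(combo)
        yield variant
        yield variant.capitalize()
        yield variant.upper()

target = "T1ddles"  # Mrs Jones's "secure" password
attempts = 0
for word in seed_words:
    for guess in mangled(word):
        attempts += 1
        if guess == target:
            print(f"Guessed '{target}' on attempt {attempts}.")
            break
```

The password falls within a handful of guesses because, structurally, it is still just the cat's name.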

Fraud is easy because our approach has made it easy. We use concepts designed for the physical world to secure a digital one for which they are entirely inappropriate. We’ve made it worse by telling consumers that 2FA, KYC, and firewalls will keep them safe. They won't. Nor will digital identity systems, whether issued by a company or a government.

Instead, they risk making the problem exponentially worse by centralising data, making correlation easier, and perpetuating the flawed, asymmetric nature of identity online, where individuals must constantly prove who they are, while taking it on trust that the organisation on the other end is genuine.

Footnotes

¹ Forbes: Morgan, S. (2023). ‘Cybercrime To Cost The World $8 Trillion In 2023’.

² Office for National Statistics (UK): 'GDP, actual and estimated, chained volume measures, seasonally adjusted £m'. ons.gov.uk. To calculate the comparison, UK GDP at the end of 2019 (Q4) was £569,634 million; by the end of 2024 (Q4) it was £572,561 million, a growth of approximately 0.5%. The growth of fraud from roughly $5 trillion in 2019 to $9 trillion in 2024 is approximately 80%, vastly outpacing the UK's GDP growth over the same period.

³ United Nations Human Rights Office (2023). 'Online scams and human trafficking: A downward spiral of criminality and abuse'.
