Caroline Bimson, Practice Manager – Business and Digital Consulting, Atkins, explores how the government is tackling fraud and highlights the three key technological developments that can help with their efforts
Do you know how much personal information you're sharing online? There are the details you choose to hand over, for example, your debit or credit card number, which you exchange for goods and services. And then there's the data you may not realise you're giving others access to (think of cookies on websites) – or even data that's been stolen.
The amount of data we're generating has grown at an extraordinary rate, along with the computing power that enables organisations to process and make use of that information. There are real benefits – it can increase a company's or department's efficiency and improve our customer experience. But these systems need to be designed carefully. Cyber-attacks are on the rise. Fraud plays a big part in this, and the UK Government estimates that £31–£53 billion of public money is lost through fraud each year.
Organisations of all sizes and across sectors now have the difficult job of weighing up the potential benefits of their data gathering and sharing techniques against the need to keep people's data safe. Organisations have a weight of responsibility to collect the 'right' data, to process it and to share it fairly. Throw into this mix data that is wrong, stolen or misleading, and, more than ever before, fraud is a challenge to every organisation.
However, the power of data, used well, has vast potential to detect and prevent fraud. Major government departments, including HM Revenue and Customs and the National Economic Crime Centre in the National Crime Agency, have set out to find fraud and fight it. The National Fraud Initiative identified over £300 million of fraud between 2016 and 2018 by matching local authorities' data with that of other bodies. In the private sector, the Insurance Fraud Bureau brings insurers together, pooling claims data to flag suspicious patterns and networks of behaviour. As well as leading to over 650 convictions, this ability to analyse large sets of shared data helps protect the general public from deliberately staged vehicle collisions used by fraudsters to make fraudulent claims.
Technology has a key part to play in this. Established technologies now have the computing power behind them to be used at scale, while emerging technologies are showing their potential.
Here are three key developments:
Application programming interfaces (APIs)

A core technology for making data sharing a reality is the API (application programming interface). APIs give different technologies a common language in which to communicate. Instead of a human needing to input data at one end of a conversation, multiple technologies can "talk" to each other, or to a central hub, in technology-to-technology conversations.
They can make the transfer of data unnecessary in some cases: one version of the truth can remain, with APIs querying a trusted, up-to-date data source while leaving the freedom to build the front-end design in different ways. For example, Atkins supported the Cabinet Office in creating the Counter Fraud Data Alliance, setting up data sharing technologies between the public sector, banks and insurers. Government and industry work in partnership to securely share known fraud data for the prevention, detection and reduction of fraud. Each has very different systems, but the back-end technologies can use APIs to create a single conversation.
Designed and implemented well – perhaps alongside their front-end cousin, RPA (Robotic Process Automation), with its rule-based steps and clicks – APIs can reduce the need for multiple versions of the truth, increasing data quality while allowing data to be shared faster.
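The "one version of the truth" idea can be sketched in a few lines. This is a hypothetical, in-memory illustration only – `FraudDataHub`, its method names and the record fields are invented for the example, and bear no relation to the Counter Fraud Data Alliance's actual systems:

```python
class FraudDataHub:
    """Hypothetical central hub holding one trusted copy of known-fraud records.

    Member systems query it through a small API instead of each keeping
    their own copy of the data.
    """

    def __init__(self):
        self._records = {}  # case_id -> record

    def report(self, case_id, record):
        """API endpoint: a member system reports a known fraud case."""
        self._records[case_id] = dict(record)

    def query(self, case_id):
        """API endpoint: any member system queries the single source of truth."""
        record = self._records.get(case_id)
        return dict(record) if record else None


# Two very different front-ends (say, a bank and an insurer) share one back-end truth.
hub = FraudDataHub()
hub.report("CF-1001", {"type": "invoice fraud", "status": "confirmed"})

bank_view = hub.query("CF-1001")     # the bank's system asks the hub
insurer_view = hub.query("CF-1001")  # the insurer's system asks the same hub
assert bank_view == insurer_view     # both see the same, current record
```

In a real deployment the method calls would be HTTP requests against a secured service, but the design point is the same: the data stays in one trusted place, and each organisation builds its own front end over the shared API.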
Artificial Intelligence (AI)

The rapid pace of development in Artificial Intelligence (AI) represents the maturation of a technology that has existed for over 50 years, and it is set to bring further opportunities to identify and counter fraud. The convergence of large data sets, powerful hardware and advanced algorithms has made AI increasingly capable, for example, through faster data analysis. AI technologies can search through vast amounts of data to look for patterns and identify potentially fraudulent transactions, predict behaviour, make recommendations and classify information.
Machine learning algorithms are not yet as good at understanding complex unstructured data, such as images, or at undertaking non-deterministic analysis. However, machines increasingly outperform humans at aspects of these challenging tasks, including image recognition, bulk data analysis and generating decision options.
Certainly, AI promises quicker decisions examining a broader range of information, which will be particularly relevant as human capacity is challenged by the deluge of data. But replacing people with AI is not as simple as a straight switch – optimising the capabilities of humans and AI in teams to mitigate weaknesses will be essential.
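The pattern-spotting described above can be illustrated with a deliberately simple statistical sketch. This is not a production fraud model – real systems use far richer features and learned models – just a z-score check, with invented function and variable names, that flags transactions far outside the normal range:

```python
from statistics import mean, stdev


def flag_anomalies(amounts, threshold=3.0):
    """Return the indices of amounts that deviate strongly from the rest.

    A toy stand-in for the anomaly detection described in the text:
    an amount is flagged when it sits more than `threshold` standard
    deviations from the mean of the batch.
    """
    mu = mean(amounts)
    sigma = stdev(amounts)
    if sigma == 0:
        return []  # all amounts identical: nothing stands out
    return [i for i, a in enumerate(amounts)
            if abs(a - mu) / sigma > threshold]


# Mostly routine payments, with one wildly out-of-pattern transaction.
payments = [42.0, 51.5, 39.9, 47.2, 44.1, 50.3, 9_800.0, 45.6]
suspicious = flag_anomalies(payments, threshold=2.0)  # flags the £9,800 payment
```

A human investigator would still review each flagged case – the algorithm narrows the deluge of data down to candidates, which is exactly the human-and-AI teaming the paragraph above describes.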
Distributed ledger technology (or blockchain)
Distributed ledger technologies take data sharing another step forward. Instead of a centralised authority, network members exchange data securely across a distributed ledger and the data must be synchronised, which means there can only be one version of the truth.
It's impossible to say where this will go next. In the context of counter fraud, potentially when a department or agency updates its own database, other members are notified. Notifications mean that every organisation or database that needs to know about that change does know, and instantly. This could mean a distributed set of authorised accounts with different permissions able to share data seamlessly. However, as for all socio-technical security systems, the users are the weak link no matter how cutting-edge the cryptography is, and managing this risk remains paramount. Cost will also be a limiting factor.
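The core mechanism behind a distributed ledger – records chained together by hashes so that tampering with history is detectable – can be sketched briefly. This is a minimal, single-machine illustration with invented record contents, not any real government system, and it omits the distribution, consensus and permissioning that production ledgers require:

```python
import hashlib
import json


def block_hash(block):
    """Hash a block's contents, including its link to the previous block."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()


def append_block(chain, record):
    """Append a record, linking it to the hash of the previous block."""
    prev = block_hash(chain[-1]) if chain else "genesis"
    chain.append({"prev_hash": prev, "record": record})


def verify(chain):
    """Re-check every link: tampering with any block breaks all links after it."""
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))


ledger = []
append_block(ledger, {"agency": "Department A", "update": "flagged account X"})
append_block(ledger, {"agency": "Department B", "update": "linked account Y"})
assert verify(ledger)  # the shared history checks out

ledger[0]["record"]["update"] = "nothing to see here"  # tamper with history
assert not verify(ledger)  # the broken hash chain exposes the change
```

Because each member holds a synchronised copy and can re-run `verify` independently, a quietly altered record in one copy no longer matches the others – which is what gives the ledger its "one version of the truth" property.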
People or machines?
These three technologies can help organisations tackle fraud and reduce human error by automating repetitive tasks and identifying patterns or anomalies. But when we're dealing with information that needs to be interpreted, another layer of complexity arises. Cognitive bias may be a well-known factor in people's decision-making, but there are also concerns about bias in AI. If an algorithm decides someone is more likely to be guilty of fraud, how can we check what led to that decision? Can AI be charged with making decisions at all? Even if an algorithm only makes recommendations, is that still a step too far? And how can we be certain that AI that learns in an unsupervised way is not just repeating and amplifying inherent prejudices?
It’s imperative that we see these digital tools as just that – tools to be employed by people. It’s also important for organisations to treat data properly and ensure the thoughtful and complete application of data protection principles that go beyond the obvious and restrictive rules of how long we can store data and for what purposes. Instead, they should be embedded in the ways we design our software and gather and share data, so fraud and errors can be detected from the get-go.