The benefits of AI in Defence outweigh the costs

There is a revolution taking place in the world of Artificial Intelligence (AI) and Machine Learning (ML). But it might not be the revolution you think it is.

When people think of this technological revolution, they think of big AI breakthroughs that are inherently transformational, completely changing the way things have been done.

But the truth is, it is often just as transformational, if not more so, when existing techniques are applied to new problems in different ways. It has never been easier to engage with specialist AI/ML companies and access a wealth of talent and expertise, whether they build you a bespoke model or provide a SaaS-style platform to make sense of, and draw insight from, your own or external data. Many companies and organisations have also built in-house teams to accelerate (with mixed success) their access to specialist knowledge.

As an institution, the defence sector instinctively understands – and can articulate in grand strategic terms – how AI could revolutionise the speed at which data is processed. Commanders and strategic thinkers across the sector (rightly) identify data as a significant potential source of value, one that is currently at best inefficiently realised and at worst left criminally untapped.

While large technology companies like Google and Amazon are organised to extract maximum value from big data, the defence sector still has a long way to go. Most analysis and data processing today, for example, is slow and resource-intensive, relying too heavily on people when much of it could be off-loaded more effectively to AI, so long as that AI is developed and implemented responsibly.

Using emerging technologies to enhance defence

Though the implementation of AI in the defence space has lagged behind other sectors, the appetite for innovation is clear. In 2020, countries globally spent $1.9 trillion on defence, and this figure is set to pass the $2 trillion mark by the end of 2022. In the EU, member states invest an average of 1.2 percent of their annual GDP in defence. Recent geopolitical events in Ukraine have further increased this drive, with Germany, for example, doubling its defence spending almost overnight. Defence has historically been an incubator for cutting-edge and next-generation technology; many of the technologies we use every day were developed in the crucible of conflict. This presents a significant opportunity to direct that innovation budget toward the responsible development of data-driven systems.

Risks for AI in Defence

There are significant opportunities for AI in defence, but realising them is not easy. In many lower-risk industries, the default is to use complex models trained on vast quantities of data so that they learn the inner workings of a problem. When this approach works, as it often does, it is great for companies and users. But when it doesn’t, because of some unknown “edge case” or novel situation, the failure is inexplicable, and the result is a system that risks failing in ways no one can predict or diagnose.

In lower-risk industries, these emergent behaviours can be found and fixed after the fact, and the resulting fallout contained. In defence, this is not an option. Stories such as pedestrian deaths caused by autonomous vehicles would have been even more horrific, and had far wider-reaching consequences, had they played out in a defence setting. Instead, there must be a distinct approach to the design, build and deployment of AI, specific to the high-stakes applications of critical systems in defence.
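To make that concrete, one standard safeguard such an approach might include is checking whether a new input resembles anything the model saw in training before trusting the output. The sketch below is purely illustrative: the per-feature z-score test, the threshold, and the function names are assumptions of this example, not anything specified in this article.

```python
import numpy as np

# Minimal sketch: flag inputs that fall far outside the training
# distribution so the system can defer rather than fail silently
# on an unseen "edge case". Feature choice and the z-score
# threshold are illustrative assumptions, not a fielded design.

def fit_reference(train_features: np.ndarray):
    """Record per-feature mean and spread of the training data."""
    mean = train_features.mean(axis=0)
    std = train_features.std(axis=0) + 1e-9  # avoid division by zero
    return mean, std

def is_out_of_distribution(x: np.ndarray, mean, std, z_threshold=4.0):
    """True if any feature deviates strongly from the training data."""
    z_scores = np.abs((x - mean) / std)
    return bool(z_scores.max() > z_threshold)

# Usage: mean, std = fit_reference(train_X)
# If is_out_of_distribution(new_x, mean, std) is True,
# route the case to human review instead of acting on the model.
```

The design choice matters more than the statistics: a system that knows when it is outside its competence can fail safely, which is precisely what an after-the-fact fix cannot guarantee in defence.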

From the outset, it is vital to secure understanding of, and buy-in to, the scope and capability of AI within the system from all relevant stakeholders. Individuals must be given the responsibility, accountability, and resources required to deploy AI. Equally, AI must be designed so that it can be understood by users of all technical backgrounds.

Additionally, defence cannot over-rely on automated systems; it must be able to defer to human experts who have a greater understanding of the situation. Designing these interactions is a knife edge to walk: it is tempting simply to put a human in the loop and require verification of every decision. That approach is impractical, however. It slows the system down and frustrates users, who will rapidly find themselves unable to complete their tasks to the required standard.
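A common compromise, offered here only as an illustration of that design space, is selective deferral: the system acts autonomously when its confidence is high and routes only uncertain cases to a human expert. The threshold and names below are assumptions of this sketch, not a description of any real deployment.

```python
# Illustrative sketch of selective deferral: act automatically on
# high-confidence predictions; route only uncertain cases to a human.
# The 0.9 threshold and all names here are assumptions, not a real API.

def decide(probabilities: dict, threshold: float = 0.9):
    """Return (action, needs_human) for one prediction."""
    best_action = max(probabilities, key=probabilities.get)
    confidence = probabilities[best_action]
    return best_action, confidence < threshold

# Example: decide({"track": 0.55, "ignore": 0.45}) -> ("track", True)
# Only the uncertain call reaches the human; confident calls proceed.
```

Tuning that threshold is where the knife edge lies: set it too high and the human verifies everything, set it too low and the system over-relies on automation.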

Do it well, or don’t do it at all: implementing effective AI defence

The issues outlined above hint at the size of the task at hand. Robustness, reliability, and ethical considerations must be given serious thought when implementing AI in such high-stakes areas as defence. Those in defence may naively consider that the easy option is simply to put off or avoid embracing this technology, waiting until the time is right. But the use of AI in defence is not a question of ‘if’, but ‘when’.

Adversaries are already harnessing some of the potential these technologies hold, and may in some places be unhindered by the problem of “getting it right” or by any ethics framework. Starting now, and committing serious resources to development, testing and integration, will ensure there is scope to get it right the first time, rather than chasing an accelerating adversarial target down a path that leads us further away from the values we hold dear.

This piece was written and provided by Al Bowman, Director of Government and Defence at Oxford University spinout Mind Foundry
