MONS talks to Raytheon about using artificial intelligence to plan interventions
On 6 December, Raytheon announced that, under an award from the Defense Advanced Research Projects Agency’s (DARPA) Causal Exploration of Complex Operational Environments programme, it would explore artificial intelligence and machine learning techniques to develop tools that may enable military planners to understand how cultural and other factors combine to cause conflicts.
“Machine learning works by giving the computer information that has been characterised beforehand,” says David Lintz, Vice President of Raytheon BBN Technologies. “Based on that characterisation, one can then ask the computer to characterise different situations, scoring it each time based on whether its answer was right or wrong, so that the machine progressively learns how to make that characterisation itself with different data sets.”
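The learn-by-scoring loop Lintz describes can be sketched as a minimal supervised learner: it labels pre-characterised examples, is scored right or wrong, and adjusts when it errs. The features, labels, and numbers below are invented for illustration; they are not from Raytheon's system.

```python
# Minimal perceptron-style sketch of "score it, then adjust" learning.
# Examples are (features, label) pairs, label +1 ('stable') or -1 ('unstable').

def train(examples, epochs=20, lr=0.1):
    weights = [0.0] * len(examples[0][0])
    bias = 0.0
    for _ in range(epochs):
        for features, label in examples:
            score = sum(w * x for w, x in zip(weights, features)) + bias
            guess = 1 if score >= 0 else -1
            if guess != label:  # scored "wrong": nudge toward the right answer
                weights = [w + lr * label * x for w, x in zip(weights, features)]
                bias += lr * label
    return weights, bias

def predict(model, features):
    weights, bias = model
    score = sum(w * x for w, x in zip(weights, features)) + bias
    return "stable" if score >= 0 else "unstable"

# Hypothetical pre-characterised situations: (unrest_index, services_index).
examples = [
    ((0.1, 0.9), 1), ((0.2, 0.8), 1),    # characterised as stable
    ((0.9, 0.2), -1), ((0.8, 0.1), -1),  # characterised as unstable
]
model = train(examples)
```

After a few passes over the scored examples, the model characterises new situations on its own, which is the behaviour Lintz describes carrying over to different data sets.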
In this case, Raytheon is working to develop a technology that would help military planners operating in post-conflict environments characterise situations so as to minimise the risk of a relapse into conflict when choosing between different courses of action.
To do this, the first step is to define the key terms on which the machine will base its statistical analysis. “The military customer is involved from day one in this process,” continues Lintz. “We sit with them during their planning sessions and listen to the way they use terms such as ‘causality’, ‘impact’ or ‘precondition’ in their discussions; once we get a sense of what these terms mean for the customer, we work with them on definitions of ‘causality’, ‘impact’ or ‘precondition’ that will serve as the framework through which the machine will process the information to suggest different outcomes.”
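One plausible way such an agreed vocabulary could frame the machine's input is as a typed causal graph, where each edge carries one of the customer's terms. The node names and relations below are hypothetical, chosen only to illustrate the structure.

```python
# A tiny causal graph typed with the agreed vocabulary
# ('precondition', 'causality', 'impact'). All entries are invented.

edges = [
    # (source, relation, target)
    ("drought",         "precondition", "food_insecurity"),
    ("food_insecurity", "causality",    "unrest"),
    ("unrest",          "impact",       "displacement"),
]

def effects_of(node):
    """Everything the framework records this node as leading to, with the
    agreed term describing how."""
    return [(rel, tgt) for src, rel, tgt in edges if src == node]
```

Keeping the relation types explicit means the machine's suggestions can always be traced back to the definitions the customer signed off on.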
The second step is feeding the machine the datasets it needs to recognise causal links and provide different possible outcomes to a course of action in a given context. This involves taking texts, or audio files that can be converted to text, from open sources and classified intelligence analysts’ articles and reports, and inputting the relevant data into the machine. Lintz adds: “Of course, the quality of your model is no better than the quality of the sources, so the idea is to put in a wide range of sources to make sure the machine has a spectrum of different perspectives it can use to work out the outcome of a given situation.”
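A toy version of this ingestion step might pull causal assertions out of text from several sources and tally how many independent sources support each link, so that no single perspective dominates. The sources, extraction pattern, and assertions below are invented for illustration.

```python
# Extract (cause, effect) assertions from multiple text sources and count
# how many sources support each causal link. All content is invented.
import re
from collections import Counter

PATTERN = re.compile(r"(\w[\w ]*?) leads to (\w[\w ]*)")

sources = {
    "open_source_report": "fuel scarcity leads to protests. drought leads to migration.",
    "analyst_article":    "fuel scarcity leads to protests.",
    "field_report":       "drought leads to migration. curfew leads to unrest.",
}

support = Counter()
for name, text in sources.items():
    links = set(PATTERN.findall(text))  # de-duplicate within one source
    for cause, effect in links:
        support[(cause, effect)] += 1   # one vote per source

# Links asserted by more independent sources rank as better supported.
ranked = support.most_common()
```

Counting one vote per source, rather than per mention, is a simple way to reflect Lintz's point that a spectrum of perspectives matters more than sheer volume from one outlet.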
As such, Lintz confirms that humans remain central to the decision-making process, primarily because they are the ones feeding information to the machine and adjusting it to the context. For example, military planners will feed different reports to the machine depending on whether a mission will take place in Iraq or in Cambodia; this will ensure that the cultural context is also always taken into consideration. Just as importantly, Lintz adds: “The model does not make a decision, it only gives planners the likelihood of different outcomes according to the courses of action they are considering. The political problems inherent to decision-making in these contexts, however, remain because even if you know the right decision you might still be constrained by political reasons.”
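The division of labour Lintz describes, where the model reports likelihoods and the planner decides, can be sketched as a function that returns a probability for each observed outcome of a course of action. The records and outcome names below are entirely hypothetical.

```python
# Outcome likelihoods per course of action; the choice stays with the planner.
# The historical records are invented for illustration.
from collections import Counter

# Hypothetical records: (course_of_action, observed_outcome).
history = [
    ("deploy_patrols", "stability"), ("deploy_patrols", "stability"),
    ("deploy_patrols", "relapse"),
    ("fund_services", "stability"), ("fund_services", "stability"),
    ("fund_services", "stability"), ("fund_services", "relapse"),
]

def outcome_likelihoods(course):
    """Empirical probability of each outcome seen after this course of action."""
    outcomes = Counter(o for c, o in history if c == course)
    total = sum(outcomes.values())
    return {o: n / total for o, n in outcomes.items()}
```

Note that nothing here selects a course of action: the output is a distribution per option, and weighing those distributions against political constraints remains a human judgement.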
The advantage of this technology is not to remove the human from the decision-making process, but rather to provide them with a tool that works around the biases inherent in human decision-making. “As a species we are just not really well designed for taking in additional information and adjusting our view of the probability of something happening,” says Lintz. “The advantage of having a programme is that the computer uses arithmetic to work out how a situation will evolve, where analysts would have read the same datasets magnifying the information they find relevant; the computer removes that bias.”
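The “arithmetic” Lintz contrasts with human judgement is essentially Bayesian updating: revising a probability mechanically as new evidence arrives, rather than overweighting the reports one happens to find salient. The scenario and all numbers below are invented for illustration.

```python
# Bayes' rule: revise P(unrest) after a new report arrives.
# All probabilities are invented for illustration.

def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """P(H | evidence) given P(H), P(evidence | H) and P(evidence | not H)."""
    numerator = likelihood_if_true * prior
    evidence = numerator + likelihood_if_false * (1 - prior)
    return numerator / evidence

# Prior belief: 20% chance of unrest. A report of fuel shortages arrives;
# suppose such reports precede 70% of unrest episodes but only 10% of calm
# periods. The arithmetic gives the revised probability directly:
posterior = bayes_update(prior=0.2, likelihood_if_true=0.7, likelihood_if_false=0.1)
# 0.7 * 0.2 / (0.7 * 0.2 + 0.1 * 0.8) = 0.14 / 0.22, roughly 0.64
```

The same update applied to the same report by every analyst yields the same posterior, which is the bias-removal property the quote points at.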