Man deliberates, AI decides? How AI can assist and expedite military decision-making

“Many intelligence reports in war are contradictory; even more are false, and most are uncertain.” ~Carl von Clausewitz

“The real risk with AI isn’t malice but competence. A super-intelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we’re in trouble.” ~Stephen Hawking

Artificial intelligence has been widely used in problem identification, data analysis, and the search for solutions. Although AI has proven its power in intelligence gathering and operational control, its huge potential to assist military personnel in decision-making remains understudied. Georgetown University’s Center for Security and Emerging Technology (CSET) recently published a research report that outlines a framework for AI-enabled decision support systems (AI-DSS), covering three critical areas: scope considerations, data considerations, and human-machine interaction. The report also lays out comprehensive strategies to mitigate the risks associated with AI-DSS.

Scope considerations concern contexts, flexibility, and uncertainty. An AI-DSS operates well only within settings that match its training data, so users should exercise caution in making projections or predictions unless the underlying model is based on physical laws or observable data. Users also need to weigh the stakes of a given decision and choose an AI-DSS commensurate with the strategic, tactical, or operational level at which it will be used. In addition, the use of large language models (LLMs) in decision support requires guidelines and guardrails to avoid confusion or misuse. Although an AI-DSS can shrink the volume of unknowns, it cannot eliminate inherent uncertainty entirely. Consequently, military commanders must still rely on their own judgment in making battlefield decisions.

Data considerations concern quality, fidelity, skew, and scarcity. An AI-DSS can be trained on simulated human-based data only to the extent that the simulation accurately reflects reality, and a simulation works better if its underlying input-output mechanisms can be tested and validated. Data skew compromises an AI-DSS’s effectiveness: biased data may reflect the personal or cultural backgrounds of those who produced it. Extreme caution must be exercised when using human-based data collected from social media platforms, which vary widely in demographics and discourse and thus cannot be counted on as reliable sources of real-world facts and opinions. Finally, data scarcity poses severe challenges for an AI-DSS asked to analyze or predict combat situations. Addressing these challenges demands traditional methods of intelligence analysis that draw on human insight and inference derived from contextual understanding.

Human-machine interaction concerns the capabilities and limitations of an AI-DSS as a human-machine system, which cover three facets, each carrying its own risks. The first facet is the fallibility of large language models (LLMs), which tend to align their outputs with user expectations, producing incorrect information, unfaithful explanations, or unjustified recommendations. The second facet is human biases. An AI-DSS can markedly improve human decision-making in stressful situations by reducing confirmation bias, ambiguity aversion, and negativity bias; at the same time, however, it generates automation bias, the unwarranted belief in algorithmic recommendations. The third facet is organizational biases. Though AI-DSS hasten decision-making processes and save personnel costs, decision quality may fall as decision quantity rises. Organizations may falsely assume that AI-DSS can be used in all situations and inadvertently prioritize speed over quality. Hence it is extremely important to establish risk-based governance policies and procedures for the application of AI-DSS.

How can the potential unleashed by AI-DSS be harnessed while the associated risks are managed? The basic principle is to understand these systems’ strengths and weaknesses with regard to their design, deployment, application, and maintenance. On this principle, the authors make the following suggestions, in keeping with human governance actions and international humanitarian law.

First, there should be risk-based criteria for the deployment of AI-DSS. Based on strategic and tactical contexts, settings, and risk profiles, military commanders need to establish guidance and instructions for AI-DSS, keeping their deployment adjustable and reversible. Second, operators of AI-DSS should receive adequate training and hold professional qualifications commensurate with their decision-making roles. Third, units operating AI-DSS ought to be regularly evaluated and certified, with performance metrics shared with data scientists and operations analysts. Fourth, responsible AI officers should be recruited to facilitate information sharing, improve AI literacy, monitor AI incidents, and manage safety risks. Fifth, flaws and mishaps of AI systems ought to be reported, documented, and shared with analysts, developers, operators, and researchers. Such normalized transparency helps build public trust, forestall inadvertent harm, and avoid cross-national misperceptions in the event of systemic failures.

The great scientist Stephen Hawking once warned of the danger a super-intelligent AI might pose if its goals are not aligned with ours. Military decision-making is not merely a hard science but also an art to be mastered. AI may outsmart humans in mechanical calculation, but humans still hold an unsurpassed advantage in creativity. Ultimately, it is the human being, not the AI, that calls the shots and bears the consequences.

This article is based on the Center for Security and Emerging Technology’s report “AI for Military Decision-Making: Harnessing the Advantages and Avoiding the Risks,” authored by Emelia Probasco, Helen Toner, Matthew Burtell, and Tim G. J. Rudner. Read the full report.
