Decision-Forcing Under Generative Coherence

Atkinson, David (ORCID: https://orcid.org/0000-0002-2179-1652) (2026) Decision-Forcing Under Generative Coherence. In: NATO STO IST-238 Research Specialists’ Meeting (RSM) on Military Applications of Generative AI 2026, 24-26 March 2026, Fraunhofer FKIE, Fraunhoferstraße 20, 53343 Wachtberg, Germany. (Submitted)

D Atkinson Presentation IST-238-Takeaway.pdf - Presentation
Abstract

Generative AI systems are increasingly proposed as decision aids in military contexts, including wargaming, cognitive operations, and command-and-control support. Much of this work focuses on improving prediction, optimisation, or autonomous tasking. The proposed presentation examines a different and under-addressed risk: the tendency of large language models to produce highly coherent narratives that prematurely close down uncertainty, displacing rather than supporting human judgement.
The work to be presented introduces a lightweight approach to AI-mediated Hypothetical Decision-Forcing Cases (HDFCs) for influence and hybrid environments. Rather than using generative AI to recommend actions or optimise outcomes, it uses AI to generate plausible coherence without resolving the central dilemma. Human participants are therefore forced to exercise judgement under ambiguity, with responsibility remaining non-delegable.
The presentation is grounded in a series of rapidly generated influence wargame scenarios, constructed in minutes from open-source reporting (e.g. news sweeps) and structured around recognised civil-military and influence frameworks (ASCOPE/PMESII, narrative policy framing, and CIMIC-relevant civil factors). Scenarios are adjudicated through a Game-Master (GM) architecture in which the generative AI controls narrative development, surfaces and escalates consequences, and runs player skill checks. The architecture explicitly constrains the production and suggestion of optimal or authoritative solutions. Second-order effects are surfaced through dynamic metrics (for example: public support, escalation tension, narrative dominance, alliance cohesion).
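The adjudication loop described above can be sketched in miniature. Everything here is a hypothetical illustration, not the presentation's actual implementation: the metric names are taken from the abstract, but the 0-100 scale, the delta-based update rule, and the second-order coupling between escalation tension and alliance cohesion are assumptions made for the sake of the example.

```python
from dataclasses import dataclass

# Hypothetical scenario state holding the four dynamic metrics named in
# the abstract. The 0-100 scale and starting values are assumptions.
@dataclass
class ScenarioMetrics:
    public_support: float = 50.0
    escalation_tension: float = 50.0
    narrative_dominance: float = 50.0
    alliance_cohesion: float = 50.0

    def apply(self, deltas: dict) -> None:
        """Apply a set of metric deltas, clamping each metric to [0, 100]."""
        for name, delta in deltas.items():
            current = getattr(self, name)
            setattr(self, name, max(0.0, min(100.0, current + delta)))

def adjudicate(metrics: ScenarioMetrics, move_deltas: dict) -> ScenarioMetrics:
    """GM-style adjudication of one player move (illustrative only).

    Applies the move's first-order effects, then surfaces a second-order
    consequence instead of recommending a course of action.
    """
    metrics.apply(move_deltas)
    # Second-order effect (assumed rule): sustained high tension erodes
    # alliance cohesion. The GM escalates consequences; it does not
    # resolve the dilemma for the player.
    if metrics.escalation_tension > 60.0:
        metrics.apply({"alliance_cohesion": -5.0})
    return metrics

# A provocative move raises tension and costs public support; the
# second-order rule then erodes alliance cohesion.
state = adjudicate(ScenarioMetrics(),
                   {"escalation_tension": +15.0, "public_support": -10.0})
print(state.escalation_tension, state.alliance_cohesion, state.public_support)
```

Note that the loop only updates state and surfaces consequences; there is no scoring or optimisation step, consistent with the abstract's constraint against producing optimal or authoritative solutions.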
The design logic underpinning the approach draws on recent work in Applied Negative Dialectics as a practical design methodology. This is treated not as a philosophical overlay but as an operational principle: contradiction and tension are preserved as features of the decision environment rather than treated as problems to be eliminated. In this sense, the HDFC method treats coherence itself as a potential failure mode in AI-supported decision-making. This is particularly relevant for influence and information environments, where legitimacy, perception, and escalation dynamics are central.
The contribution of the proposed presentation is threefold. First, it reframes hallucination and narrative fluency as operational risks, not technical defects. Second, it offers a practical design framework for human–AI teaming that strengthens human judgement by refusing premature closure. Third, it demonstrates how generative AI can be used safely in influence wargaming and experimentation contexts ahead of any optimisation, where uncertainty is highest and decision-making is most vulnerable.
The approach offered in the presentation is positioned as experimental, unclassified, and dual-use. It is suitable for research, professional military education, and early-stage concept development rather than operational decision support.

Item Type: Conference or Workshop Item (Other)
Status: Submitted
Subjects: H Social Sciences > H Social Sciences (General)
T Technology > T Technology (General)
U Military Science > U Military Science (General)
School/Department: York Business School
URI: https://ray.yorksj.ac.uk/id/eprint/14371
