Intuitively, we understand that a human-emulation cell, no matter how expert, is not the same as the real actor. When we move from a human assessment of the self and the other to a machine's assessment, we add a further layer of divergence and risk false precision in trusting simulated actions. Much as maps distort reality by playing with our perceptions, we risk the same bias with AI.
From the authors’ perspective, much can be learned from observing both the outcomes of games and the interactions among players. Yet we learn not just from game results or discussions, but also from the interactions between players and game designers. For this form of knowledge-generation, which is perhaps entirely experiential, the ways in which AI can be used to generate knowledge are even more unclear. What do we learn about the problem at hand? What do we learn about the game itself? What do we learn about wargaming as an enterprise? If the purpose of wargaming is to generate learning, we must interrogate what can be learned from the interaction between the human designer, the AI player, and the game itself.
Wargames and novel technologies have a troubled history. Still, in settings where we have clear measures of effectiveness based on other tools and can cross-check results, AI can help the wargaming community expand its reach. If, however, AI is used to replace human judgement—for example, to “impersonate” an adversary decision-maker or to replace human interaction—we risk a dangerous mirage of knowledge, and there be dragons.
Stephen M. Worman is director of the RAND Center for Gaming, a political scientist at RAND, and a professor at the Pardee RAND Graduate School.
Bryan Rooney is a political scientist at RAND.