Denison Forum – Can AI be trusted in war?


Why Artificial Intelligence is not afraid of nukes

When US forces captured former Venezuelan president Nicolas Maduro, the strike was broadly hailed as one of the more impressive displays of force in recent military history. In the weeks since, we’ve learned more about how they pulled off the attack so seamlessly, including that Anthropic’s AI tool, Claude, played a role in the operation.

Now, the nature of that role is still a bit nebulous, but Anthropic had quite a few questions about how the Pentagon used its technology. As a company spokesman stated, “Any use of Claude—whether in the private sector or across government—is required to comply with our Usage Policies, which govern how Claude can be deployed.” And a key part of those usage policies is that their AI cannot be used to “facilitate or promote any act of violence or intimidation.”

As we’ll talk about in a minute, AI has given plenty of reasons to be wary of crossing that line, but Anthropic had to know that this stance could pose something of a problem when it comes to the military applications of their tools. After all, Defense Secretary Pete Hegseth has not been shy about the role he sees for AI going forward.

“The future of American warfare”

In December, Hegseth remarked that “the future of American warfare is here, and it’s spelled AI.” And at an event last month where the Pentagon announced it would be working with xAI in a similar capacity, he was clear that the Department of Defense would not “employ AI models that won’t allow you to fight wars,” which many took as a shot at Anthropic’s concerns.

To further complicate matters, it’s likely that the US has already used Claude to help the military prepare for a potential war with Iran. And while negotiations are ongoing, the mediator seems to be the only one who thinks they’re going well.

So, against that backdrop, Hegseth has given Anthropic until 5:01 this afternoon to decide whether to grant the US military unrestricted use of its technology. If they do not—and the early signs aren’t promising—then Hegseth has warned that he will consider either invoking the Defense Production Act to force Anthropic’s cooperation or listing the company as a supply chain risk, which could void any of its other defense-adjacent contracts.

But whichever way it goes—whether Claude is deemed too essential to lose or too untrustworthy to keep—the decision could have a profound impact on Anthropic’s business going forward. Still, their concerns about how the military uses AI are not unwarranted, and a recent test by Kenneth Payne at King’s College London offers a good reminder of why.

Why Artificial Intelligence chose nukes

In an attempt to see how Artificial Intelligence would run a conflict if given the chance, Payne set ChatGPT, Claude, and Gemini against each other in a series of simulated war games. The models faced off twenty-one times, taking a total of 329 turns. They also provided extensive reasoning for each of their actions.

As Chris Stokel-Walker described, “The AIs were given an escalation ladder, allowing them to choose actions ranging from diplomatic protests and complete surrender to full strategic nuclear war.” By the time they were done, at least one model chose nuclear war in 95 percent of the games. None chose to surrender, regardless of how bad things were going.

That’s not good.

And, as Tong Zhao at Princeton University pointed out, “Major powers are already using AI in war gaming, but it remains uncertain to what extent they are incorporating AI decision support into actual military decision-making processes.” While most countries seem hesitant to fully grant AI control over the keys to their missiles, it only takes one nation to set off a global catastrophe.

To this point, the principle of mutually assured destruction has prevented that scenario from playing out. But what if AI isn’t as afraid of death as people are? And what if it sees striking first as the most logical way to prevent its own destruction?

If Payne’s tests are any indication, those conclusions are not all that unlikely, especially as militaries come to rely more heavily on AI for background calculations and scenario building. As Zhao warns, “Under scenarios involving extremely compressed timelines, military planners may face stronger incentives to rely on AI.”

The US military already appears to be heading down that road to some extent, and it’s highly unlikely that they’re the only ones. And if someone chooses to cross that line, chances are that a very human fear will be the driving factor.

“Just trust me”

To be honest, when I consider this topic and where it could lead, fear is pretty high up on my list of responses as well. It’s weird to potentially watch the central plot of an apocalyptic film play out in real life. The logical side of me knows that it probably won’t get that far, but fear rarely has any use for logic, which is what makes it so dangerous.

I think that’s part of why Jesus spent so much time talking about fear and warning against letting it play an executive role in our decision-making.

Take Jairus, for example. He approached Jesus to seek healing for his daughter, only to have someone come up while they were on their way to tell him that it was too late. Jesus responded, “Do not fear, only believe” (Mark 5:36). In The Message, Eugene Peterson translates this command as “Don’t listen to them; just trust me.”

When fear threatens to consume our thoughts or direct our actions, hearing the Lord say “just trust me” can be exactly what we need most.

That doesn’t mean such trust will be easy or that silencing our fears will be simple, but it’s a good reminder that the choice of whom we will listen to is always ours to make. And the more often we choose Jesus, the easier it gets to do so in the future.

So, where do you need to trust Jesus today? Are there any fears clawing at your heart and mind?

I’m still a bit freaked out by the AI stuff, and perhaps you are as well. My goal today, though, is to listen to God rather than fear, and to trust that he knows how it’s going to turn out. And, just as importantly, he promises to bring good out of it, no matter how it ends (Romans 8:28).

Holding tight to that promise won’t always make the fears go away—after all, sometimes they’re justified—but it can give us a new perspective on them, one born of peace rather than anxiety.

Let’s pray for that peace today.

Quote of the day

“Only he who can say, ‘The Lord is the strength of my life’ can say, ‘Of whom shall I be afraid?’” —Alexander MacLaren
