
highplainsdem

(55,433 posts)
Wed Mar 5, 2025, 11:38 AM

Pentagon signs AI deal to help commanders plan military maneuvers

Source: Washington Post

U.S. military commanders will use artificial intelligence tools to plan and help execute movements of ships, planes and other assets under a contract called “Thunderforge” led by start-up Scale AI, the company said Wednesday.

The deal comes as the Defense Department and the U.S. tech industry are becoming more closely entwined. Scale will use AI tools from Microsoft and Google to help build Thunderforge, which is also being integrated into start-up weapons developer Anduril’s systems.

The Thunderforge project aims to find ways to use AI to speed up military decision-making during peace and wartime. Commanders’ roles have become more challenging as military operations and equipment have become more complex and technology-centric, with missions involving drones and conventional forces spanning land, sea and air, as well as cyberattacks.

Under the new contract, Scale will develop AI programs that commanders could ask for recommendations about how to most efficiently move resources throughout a region. The technology could combine data from intelligence sources and battlefield sensors with information on the positions of friendly and enemy planes and ships.

-snip-

Read more: https://www.washingtonpost.com/technology/2025/03/05/pentagon-ai-military-scale/

9 replies

lapfog_1

(30,798 posts)
1. sigh
Wed Mar 5, 2025, 11:43 AM

Very likely an "AI" scam (slap AI on the front label of some pile-of-shit software and presto, now you're the leading edge)... much like "cloud" 15 years ago.

There isn't enough captured historical data on how this is done to train an LLM that could learn it and advise how to do it "right."

IronLionZion

(48,488 posts)
2. AI can make the hard decisions that humans are uncomfortable making.
Wed Mar 5, 2025, 12:03 PM

with deadly results. Machines are not held back by basic human decency, empathy, perspective, nuance, consequences, or any other complex factors involved in these decisions.

Go ahead and ask AI if it's worried about lawsuits or public perception or losing elections.

AZJonnie

(700 posts)
6. I know what you're saying but it sounds like this AI is more about logistics than trigger-pulling
Wed Mar 5, 2025, 01:14 PM

Though I could imagine AI generally being used to help commanders decide how large/what type of bomb is optimal to drop on a given target, for example. Computers should be pretty good at these sorts of things as it's mostly number-crunching. In fact, I'm sure they already are used heavily for that purpose.
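To give a concrete sense of what that number-crunching looks like at its very simplest, here's a toy sketch of assigning assets to destinations so total travel time is minimized. Every asset name, destination, and travel time is invented for illustration, and real planning software would use far more capable solvers than brute force:

```python
from itertools import permutations

# Hypothetical travel times in hours from each asset's current position
# to each destination; every name and number here is invented.
travel_hours = {
    "cargo_ship_A":      {"port_1": 30, "port_2": 48, "port_3": 22},
    "cargo_ship_B":      {"port_1": 18, "port_2": 36, "port_3": 40},
    "transport_plane_C": {"port_1": 4,  "port_2": 7,  "port_3": 5},
}

def best_assignment(times):
    """Brute-force every one-to-one pairing of assets to destinations
    and return the pairing with the smallest total travel time."""
    assets = list(times)
    destinations = list(next(iter(times.values())))
    best_plan, best_total = None, float("inf")
    for perm in permutations(destinations, len(assets)):
        total = sum(times[a][d] for a, d in zip(assets, perm))
        if total < best_total:
            best_plan, best_total = list(zip(assets, perm)), total
    return best_plan, best_total

plan, hours = best_assignment(travel_hours)
print(plan)   # [('cargo_ship_A', 'port_3'), ('cargo_ship_B', 'port_1'), ('transport_plane_C', 'port_2')]
print(hours)  # 47
```

Real logistics problems are of course vastly larger and messier than three ships and three ports, which is exactly why this is optimization-software territory rather than something a staff grinds out by hand.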

AI in itself is just a tool and it's well capable of understanding what 'international laws' are and integrating them into its recommendations for actions. Sure, programmers could 'turn off any regard for international law' if they wanted, but that would be on the people, not on the machine. AI wasn't telling Blackwater they should torture prisoners in Iraq, for example, but if asked, and programmed correctly, it might well have correctly calculated that torturing those prisoners was the wrong decision. Just as it could calculate when attacks should be made in order to minimize civilian casualties, or calculate that bombing a school while it's in session is the wrong decision.

Point being, yes, you can make an AI that's totally lawless and immoral ... but you can also make one that knows every international law, and the tenets that underlie the concept of morality, and tell it to always consider them in its recommendations. It comes down to the people who program the AI, and its inherent capabilities.
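As a toy illustration of that last point (not a claim about how Thunderforge or any real system actually works), a hard-coded rules layer can sit between whatever generates the recommendations and the commander, so that options tripping an encoded constraint get dropped and flagged for human review. Every rule, threshold, and candidate action below is an invented placeholder:

```python
# Invented, grossly simplified stand-ins for encoded legal constraints.
def avoids_protected_sites(action):
    return not action["near_protected_site"]

def civilian_risk_acceptable(action):
    return action["estimated_civilian_risk"] <= 0.01

CONSTRAINTS = [avoids_protected_sites, civilian_risk_acceptable]

def screen(candidate_actions):
    """Split candidate actions into allowed and rejected, recording which
    encoded rule each rejected action violated so a human can review it."""
    allowed, rejected = [], []
    for action in candidate_actions:
        violations = [rule.__name__ for rule in CONSTRAINTS if not rule(action)]
        if violations:
            rejected.append((action["name"], violations))
        else:
            allowed.append(action["name"])
    return allowed, rejected

candidates = [
    {"name": "resupply_route_north", "near_protected_site": False,
     "estimated_civilian_risk": 0.0},
    {"name": "strike_option_2", "near_protected_site": True,
     "estimated_civilian_risk": 0.2},
]

print(screen(candidates))
# (['resupply_route_north'],
#  [('strike_option_2', ['avoids_protected_sites', 'civilian_risk_acceptable'])])
```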

One should also consider that most every other military in the world will be implementing AI to help with their logistics and decision making, and the ones that don't do so could well be the ones who end up losing on the battlefield.

IronLionZion

(48,488 posts)
7. We're putting a lot of trust in the people who program and train it to do all that
Wed Mar 5, 2025, 03:03 PM

They would have to input all those complex factors for AI to calculate. Like any tool, it can vary tremendously based on how it's implemented.

AZJonnie

(700 posts)
9. You can just tell it to go to websites where the international laws on war/combat
Wed Mar 5, 2025, 04:05 PM

such as the Geneva Conventions are posted and say 'learn this'. But yeah, totally agree using AI in this way could either be very useful or very harmful, depending on implementation. Just like it depends on whether a general giving combat orders a) knows all the laws and/or b) cares about following them. You can be pretty damn sure AI is going to know all the laws and not forget any once it's learned them, so if used in that way, as basically a library, it could definitely help with following the laws properly. My opinion of its potential reliability, however, goes WAY down if you start talking about letting it autonomously make decisions re: pulling the triggers or dropping the bombs. AI can be a very good 'informational resource' but I would NOT trust it as a 'commander'.
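Worth noting that today's models don't literally "go to a website and learn it" on command; the usual pattern is retrieval: grab the treaty text ahead of time, pull out the passages relevant to the question at hand, and paste them into the prompt so the model is working from the actual law rather than memory. A rough sketch of that idea, using a tiny invented stand-in for the treaty text and no real model call:

```python
import re

# Invented two-article stand-in for treaty text obtained ahead of time;
# the real conventions run to hundreds of articles.
treaty_text = """
Article 1 The parties undertake to respect the present rules in all circumstances.
Article 2 Prisoners of war must at all times be humanely treated and protected.
"""

def split_articles(text):
    """Naively split treaty text into articles keyed by 'Article N' headings."""
    parts = re.split(r"(Article \d+)", text)
    return {parts[i]: parts[i + 1].strip() for i in range(1, len(parts) - 1, 2)}

def most_relevant(articles, question, top_n=1):
    """Rank articles by crude keyword overlap with the question."""
    words = set(question.lower().split())
    return sorted(articles.items(),
                  key=lambda kv: -len(words & set(kv[1].lower().split())))[:top_n]

question = "What protections apply to prisoners of war during interrogation?"
passages = most_relevant(split_articles(treaty_text), question)
context = "\n\n".join(f"{ref}: {text}" for ref, text in passages)
prompt = f"Using only the treaty passages below, answer: {question}\n\n{context}"
print(prompt)  # this prompt would then go to whatever model is actually in use
```

That's the "library" use you describe: the relevant text is handed to the model every time, rather than hoping it remembers and respects it on its own.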

LudwigPastorius

(12,197 posts)
8. "Scale will use AI tools from Microsoft...to help build Thunderforge"
Wed Mar 5, 2025, 03:25 PM

Time to start taking the phrase 'Blue Screen Of Death' literally.
