AI emerges as key player in modern warfare

Artificial intelligence (AI) has emerged at the center of modern warfare, playing an operational support role in the recent U.S.-Israeli strike on Iran. Anthropic's Claude and Palantir's Gotham were used for intelligence assessments and target identification. Experts predict further expansion of AI in military applications.

In the massive joint U.S.-Israeli strikes on Iran, artificial intelligence (AI) functioned as an operational support layer that compressed the time between intelligence gathering and battlefield execution. According to U.S. media reports, the U.S. military used Anthropic’s AI model Claude for “intelligence assessments, target identification and simulating battle scenarios.” Palantir’s Gotham data platform played a key role in pinpointing key military facilities of Iran’s Islamic Revolutionary Guard Corps and its leadership hideouts. In practice, Gotham organized and summarized vast volumes of defense-related data from satellites, signals intelligence and other classified sources, while Claude supported commanders by comparing and analyzing different operational scenarios based on that information.

Kim Gi-il, professor of military studies at Sangji University, said, “The recent case shows that AI has become so central to modern warfare that it is no exaggeration to call this an ‘AI war.’” Choi Byoung-ho, a collaboration professor at Korea University’s Human-Inspired AI Research, noted that AI technology is likely to be adopted across the full spectrum of military operations, from intelligence analysis to direct combat. He added that Claude was most likely used primarily to analyze information, process and summarize data, and report up to the stage right before a decision is made.

Choi foresaw a future in which, on a human's order, an agentic AI could draw up an operations plan on its own, select appropriate weapons, choose specific targets and carry out weapons deployment, a capability Anthropic appears to have declined to provide in this case. Technically, he said, this is already possible, though error margins remain large, and the technology will eventually reach that point.

For Korea, the U.S. case highlights structural gaps, with domestic defense companies arguing that standards defining “defense AI” remain ambiguous and access to sensitive military data is limited. The military seeks systems ready for immediate use, creating friction. Kim said, “(The military) tends to have little real understanding of the maturity of private sector technology or the constraints companies are facing, and that disconnect is creating serious friction. Expanding points of contact and closing that gap in speed and expectations is one of the biggest challenges for Korea’s defense AI today.”

The Iran strike previews choices Korea will face in building its own foundation models for defense. Choi said, “The fact that a foundation model was used in a war means it is really efficient. Thus, (Korea) will probably adapt its models to be used in war as well.” Experts warned that military adoption has outpaced global governance. Kim noted, “Military and ethical positions, values and even ideological perspectives are now colliding. There needs to be an international agreement, some kind of normative framework or protocol, governing the military use of defense AI, but at present, such standards are virtually nonexistent.” Choi added that preventing harm from foundation models built by big tech firms in the U.S., China and elsewhere would require U.N.-style conventions, but Donald Trump’s dismantling of international frameworks has left such solidarity absent.

Related articles


The Pentagon challenges the limits Anthropic has placed on military use of Claude as contract negotiations grow tense


After Anthropic CEO Dario Amodei said in late February that the company would not allow its Claude model to be used for mass domestic surveillance or fully autonomous weapons, senior Pentagon officials said they had no intention of using AI for domestic surveillance and insisted that private companies cannot set binding limits on how the U.S. military uses AI tools.

André Loesekrug-Pietri, director of the European innovation agency Jedi, says in an interview with Handelsblatt that the United States is using artificial intelligence at scale for the first time in the Gulf conflict with Iran. The "Maven" platform, developed by Palantir, is fundamentally changing the nature of the conflict. He urges Europe to close its technological gap.


The Pentagon is considering ending its relationship with AI firm Anthropic due to disagreements over safeguards. Anthropic, the maker of the Claude AI model, has insisted on hard limits against fully autonomous weapons and mass domestic surveillance. The dispute stems from the Pentagon's desire to apply AI models in warfighting scenarios, which Anthropic has declined to support.

Anthropic CEO Dario Amodei said the company will not bow to the Pentagon's demand to remove the safeguards from its AI models, despite threats of exclusion from defense systems. The dispute centers on preventing the use of AI in autonomous weapons and domestic surveillance. The company, which holds a $200 million contract with the Department of Defense, emphasizes its commitment to the ethical use of AI.


Ukraine has announced that it will share its battlefield data with allies to help train AI models for drone software. The initiative aims to strengthen technological cooperation amid the ongoing war with Russia. Defense Minister Mykhailo Fedorov described it as a step toward win-win partnerships.

South Korea's leading defense systems company Hanwha Aerospace and game publishing giant Krafton have agreed to jointly develop physical artificial intelligence (AI) technologies and establish a joint venture for commercialization. The partnership combines Hanwha's defense and manufacturing infrastructure with Krafton's AI research and software expertise to build a mid- to long-term cooperation framework. The collaboration is expected to expand into space and aviation sectors over time.


Experts argue that physical AI, involving robots and autonomous machines that interact with the real world, could offer a direct path to artificial general intelligence. Elon Musk's comments on Tesla's Optimus robots highlight this potential amid growing investment in related technologies. The year 2026 is seen as a key inflection point for the field.
