AI emerges as key player in modern warfare

Artificial intelligence (AI) has emerged at the center of modern warfare, playing an operational support role in the recent U.S.-Israeli strike on Iran. Anthropic's Claude and Palantir's Gotham were used for intelligence assessments and target identification. Experts predict further expansion of AI in military applications.

In the massive joint U.S.-Israeli strikes on Iran, artificial intelligence (AI) functioned as an operational support layer that compressed the time between intelligence gathering and battlefield execution. According to U.S. media reports, the U.S. military used Anthropic’s AI model Claude for “intelligence assessments, target identification and simulating battle scenarios.” Palantir’s Gotham data platform played a key role in pinpointing key military facilities of Iran’s Islamic Revolutionary Guard Corps and its leadership hideouts. In practice, Gotham organized and summarized vast volumes of defense-related data from satellites, signals intelligence and other classified sources, while Claude supported commanders by comparing and analyzing different operational scenarios based on that information.

Kim Gi-il, professor of military studies at Sangji University, said, “The recent case shows that AI has become so central to modern warfare that it is no exaggeration to call this an ‘AI war.’” Choi Byoung-ho, a collaboration professor at Korea University’s Human-Inspired AI Research, noted that AI technology is likely to be adopted across the full spectrum of military operations, from intelligence analysis to direct combat. He added that Claude was most likely used primarily to analyze information, process and summarize data, and report up to the stage right before a decision is made.

Choi foresaw a future in which, on a human order, an agentic AI could draw up an operations plan on its own, select appropriate weapons, choose specific targets and carry out weapons deployment, a role Anthropic appears to have rejected in this case. Technically, he said, this is already possible, though error margins remain large, and the technology will eventually advance to that point.

For Korea, the U.S. case highlights structural gaps. Domestic defense companies argue that the standards defining "defense AI" remain ambiguous and that access to sensitive military data is limited, while the military seeks systems ready for immediate use. Kim said, "(The military) tends to have little real understanding of the maturity of private sector technology or the constraints companies are facing, and that disconnect is creating serious friction. Expanding points of contact and closing that gap in speed and expectations is one of the biggest challenges for Korea's defense AI today."

The Iran strike previews choices Korea will face in building its own foundation models for defense. Choi said, "The fact that a foundation model was used in a war means it is really efficient. Thus, (Korea) will probably adapt its models to be used in war as well." Experts warned that military adoption has outpaced global governance. Kim noted, "Military and ethical positions, values and even ideological perspectives are now colliding. There needs to be an international agreement, some kind of normative framework or protocol, governing the military use of defense AI, but at present, such standards are virtually nonexistent." Choi added that preventing harm from foundation models built by big tech companies in the U.S., China and elsewhere would require U.N.-style conventions, but that Donald Trump's dismantling of existing international frameworks has left such solidarity absent.

Related articles

Tense meeting between US Defense Secretary and Anthropic CEO over AI safety policy relaxation and military access.

Pentagon pressures Anthropic to weaken AI safety commitments


U.S. Defense Secretary Pete Hegseth threatened Anthropic with severe penalties unless the company grants the military unrestricted access to its Claude AI model. The ultimatum came during a meeting with CEO Dario Amodei in Washington on Tuesday, coinciding with Anthropic's announcement that it would relax its Responsible Scaling Policy. The changes shift from strict safety triggers to more flexible risk assessments amid competitive pressures.

The Pentagon is considering ending its relationship with AI firm Anthropic due to disagreements over safeguards. Anthropic, the maker of the Claude AI model, has raised concerns about hard limits on fully autonomous weapons and mass domestic surveillance. This stems from the Pentagon's desire to apply AI models in warfighting scenarios, which Anthropic has declined.


Leading artificial intelligence models from major companies chose to deploy nuclear weapons in 95 percent of simulated war games, according to a recent study. Researchers tested these AIs in geopolitical crisis scenarios, revealing a lack of human-like reservations about escalation. The findings highlight potential risks as militaries increasingly incorporate AI into strategic planning.

The Ministry of Science and ICT has selected a consortium led by Motif Technologies Inc. as an additional participant in the project to develop homegrown artificial intelligence foundation models. The team, which includes the Korea Advanced Institute of Science and Technology (KAIST), will compete with three previously shortlisted groups led by SK Telecom, LG AI Research, and Upstage. The government plans to choose two final winners by the end of the year.


In 2025, AI agents became central to artificial intelligence progress, enabling systems to use tools and act autonomously. From theory to everyday applications, they transformed human interactions with large language models. Yet, they also brought challenges like security risks and regulatory gaps.

Researchers warn of malicious AI agents that could usher in a new phase in the global information war. To prevent this, they call for tough measures against the creators of such systems.


With the spread of AI products that handle tasks autonomously, the Japanese government plans to require AI operators to build systems involving human decision-making. This new requirement is included in a draft revision to guidelines for businesses, municipalities, and others involved in AI development, provision, or use, unveiled on Monday by the Internal Affairs and Communications Ministry and the Economy, Trade and Industry Ministry. The guidelines, introduced in 2024, are not legally binding and carry no penalties.

Sunday, March 1, 2026, 08:19

Claude AI app tops App Store amid backlash against US government ban

Saturday, February 28, 2026, 15:28

Trump orders federal ban on government use of Anthropic's AI

Friday, February 27, 2026, 12:40

Trump orders US agencies to halt use of Anthropic AI technology

Friday, February 27, 2026, 02:33

Trump orders federal agencies to stop using Anthropic AI

Thursday, February 26, 2026, 14:34

Hackers use AI to exploit security flaws faster, IBM finds

Friday, February 20, 2026, 09:27

India AI impact summit discusses ethics in machine learning

Sunday, January 18, 2026, 01:24

AI companies gear up for ads as manipulation threats emerge

Sunday, December 28, 2025, 23:09

AI boosts Korean games' global success amid controversies

Friday, December 12, 2025, 05:25

Pentagon launches Gemini-based AI platform

Thursday, December 11, 2025, 16:50

AI scales up cyber attacks in 2025
