WEB DESK: In 2024, the deployment of advanced artificial intelligence (AI) systems in military operations marked a new chapter in modern warfare, with reports indicating that AI-powered weapons have been instrumental in recent conflicts worldwide.
One notable example is the use of Anthropic’s Claude AI by the U.S. Department of Defense during Operation Epic Fury against Iran. According to sources, this AI technology played a critical role in coordinating nearly 900 strikes within the first half-day of the operation. Similar AI integration has been observed in Ukraine’s ongoing conflict with Russia, particularly in drone warfare and intelligence gathering.
Experts voice concerns that the rapid adoption of AI weapons could diminish human involvement in critical decision-making processes. The “kill chain” (the sequence of target identification, approval, and strike execution) is being compressed by AI systems, allowing for faster responses than ever before.
Military analysts warn that this “decision compression” risks reducing human oversight to mere rubber-stamping, raising ethical questions about accountability.
David Leslie, a professor specializing in ethics and technology at Queen Mary University of London, said the development marks the dawn of a new era in military strategy but warned that over-reliance on machines could lead to cognitive offloading among commanders.
The controversy extends beyond operational concerns. Anthropic, a San Francisco-based AI firm, publicly opposed the use of its models in autonomous weapons systems without human control. The company also raised alarms about the Pentagon’s plans to employ its technology for extensive domestic surveillance. As a result, Anthropic was blacklisted from federal contracts after refusing to comply with certain military demands.
With AI increasingly shaping the future of warfare, questions about ethics, oversight, and the rules governing armed conflict are more urgent than ever.