In recent years, the concept of autonomy in weapon systems has shifted from science fiction to operational reality. From swarming drones to AI-assisted decision loops, modern armies are investing heavily in new forms of battlefield autonomy, not to replace humans, but to enhance their effectiveness and survivability.
Defining autonomy
Autonomy exists on a spectrum. The U.S. Department of Defense defines autonomous systems as those that, once activated, can “perform assigned tasks without further human input.”
In practice, most systems are semi-autonomous, combining automation with intermittent human control.
Three categories emerge:
- Human-in-the-loop: the system waits for human approval before acting, like the MQ-9 Reaper
- Human-on-the-loop: the system can act but remains supervised, such as Iron Dome
- Human-out-of-the-loop: the system acts independently within predefined parameters, like Israel’s Harpy loitering munition, which can autonomously detect and attack radar emitters once launched
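To make the distinction concrete, here is a minimal sketch in Python that models the three categories as a simple decision gate. The ControlMode enum, the may_engage function, and its parameters are invented for illustration; they do not describe the control logic of any fielded system.

```python
from enum import Enum, auto

class ControlMode(Enum):
    HUMAN_IN_THE_LOOP = auto()      # waits for human approval before acting
    HUMAN_ON_THE_LOOP = auto()      # acts, but a supervising human can veto
    HUMAN_OUT_OF_THE_LOOP = auto()  # acts independently within preset bounds

def may_engage(mode: ControlMode,
               operator_approved: bool,
               operator_vetoed: bool,
               within_parameters: bool) -> bool:
    """Return True only if an engagement is permitted under the given mode.
    Purely illustrative decision gate, not any real system's logic."""
    if not within_parameters:           # never act outside predefined parameters
        return False
    if mode is ControlMode.HUMAN_IN_THE_LOOP:
        return operator_approved        # explicit go-ahead required every time
    if mode is ControlMode.HUMAN_ON_THE_LOOP:
        return not operator_vetoed      # proceeds unless the supervisor intervenes
    return True                         # out-of-the-loop: bounded autonomy

# An on-the-loop interceptor proceeds because no veto was issued in time.
assert may_engage(ControlMode.HUMAN_ON_THE_LOOP, False, False, True)
```

The point of the sketch is that the category describes where the human sits in the decision path, not how sophisticated the underlying automation is.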
Collaborative autonomy
The shift in 2025–2026 is not just about autonomy but about collaborative autonomy: multiple unmanned systems operating together, often with minimal human input.
It’s already visible in programs such as:
- DARPA ACE (Air Combat Evolution): using AI agents to fly air-combat maneuvers and team with human pilots
- UK Mosquito: a loyal-wingman project, now cancelled, whose technology is feeding into other initiatives such as GCAP
- Israeli swarming systems: integrating AI-driven drones for ISR and strike
Swarming relies on real-time data sharing, edge AI, and decentralized decision-making. The goal is to penetrate defenses, saturate airspace, or extend tactical reach.
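As a rough illustration of decentralized decision-making, the sketch below shows how platforms sharing the same tactical picture can each compute an identical target allocation locally, with no central controller. The Track class, the assign_targets function, and the greedy nearest-first rule are assumptions made for this example; real swarm allocation must also cope with latency, lost links, and a degraded shared picture.

```python
from dataclasses import dataclass
from math import dist

@dataclass(frozen=True)
class Track:
    track_id: int
    x: float
    y: float

def assign_targets(peers: dict[int, tuple[float, float]],
                   tracks: list[Track]) -> dict[int, int]:
    """Greedy, deterministic target assignment that every platform can compute
    on its own from the same shared picture -- no central controller needed."""
    assignment: dict[int, int] = {}
    unclaimed = {t.track_id: t for t in tracks}
    # Iterate platforms in a fixed order so every node reaches the same answer.
    for drone_id in sorted(peers):
        if not unclaimed:
            break
        px, py = peers[drone_id]
        nearest = min(unclaimed.values(), key=lambda t: dist((px, py), (t.x, t.y)))
        assignment[drone_id] = nearest.track_id
        del unclaimed[nearest.track_id]
    return assignment

# Each drone runs the same function on the same shared data and gets the same plan.
peers = {1: (0.0, 0.0), 2: (10.0, 0.0)}
tracks = [Track(101, 1.0, 1.0), Track(102, 9.0, 2.0)]
print(assign_targets(peers, tracks))  # {1: 101, 2: 102}
```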
Autonomy in conflict: Ukraine and Gaza
These capabilities are no longer confined to labs. In Ukraine, Russian Lancet drones use vision-based algorithms to identify and home in on pre-defined targets. Ukraine is integrating AI into FPV drones to enhance autonomy during terminal guidance, though most missions remain manually piloted.
In Gaza, Israel’s IDF reportedly used AI systems like Habsora (“The Gospel”) to prioritize targets and support UAV strike planning. These tools automate parts of the targeting workflow, but final strike decisions reportedly remain with human operators.
Why now? The tech behind the shift
Four key factors are accelerating today’s shift toward more autonomous functions on the battlefield. First, cutting-edge AI can run real-time processing directly on small drones, reducing the need for continuous reach-back links. Second, machine-to-machine links enable near-instant coordination between platforms, an essential condition for effective swarming. Third, multi-sensor fusion significantly improves detection and object classification by reducing ambiguity and strengthening threat identification. Finally, synthetic environments provide a safe, scalable way to train, test, and validate autonomous behaviors quickly before exposing them to real-world conditions.
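As a small illustration of the third factor, the sketch below fuses two independent sensor estimates with the textbook inverse-variance weighting rule: the combined estimate carries less uncertainty than either sensor alone, which is why fusion sharpens detection and classification. The fuse_estimates function and the radar and EO/IR numbers are illustrative assumptions, not a description of any specific fielded fusion stack.

```python
def fuse_estimates(estimates: list[tuple[float, float]]) -> tuple[float, float]:
    """Inverse-variance weighted fusion of independent estimates.
    Each input is (measured_value, variance); returns (fused_value, fused_variance)."""
    weights = [1.0 / var for _, var in estimates]
    fused_value = sum(w * v for (v, _), w in zip(estimates, weights)) / sum(weights)
    fused_variance = 1.0 / sum(weights)
    return fused_value, fused_variance

# A radar range estimate (1200 m, variance 400) fused with an EO/IR-derived
# estimate (1180 m, variance 100): the fused variance (80) is lower than either.
value, variance = fuse_estimates([(1200.0, 400.0), (1180.0, 100.0)])
print(round(value, 1), round(variance, 1))  # 1184.0 80.0
```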
Defense autonomy isn’t synonymous with “killer robots.” It’s first and foremost about very concrete imperatives: gaining speed of action, precision, and survivability against denser, faster-moving threats. The real issue is AI-enabled warfare, where humans retain final decision authority, but where that decision must be made in time, based on better-fused information and more responsive systems. That trajectory only makes sense if doctrine, the legal framework, and technology advance together in a synchronized way, to avoid a dangerous gap between what systems can do, what forces want to do, and what the law permits.