AI employed by the U.S. military “has piloted pint-sized surveillance drones in special operations forces’ missions and helped Ukraine in its war against Russia,” reports the Associated Press.

But that’s only the beginning. AI also “tracks soldiers’ fitness, predicts when Air Force planes need maintenance and helps keep tabs on rivals in space.”

Now, the Pentagon is intent on fielding many thousands of relatively inexpensive, expendable AI-enabled autonomous vehicles by 2026 to keep pace with China. The ambitious initiative — dubbed Replicator — seeks to “galvanize progress in the too-slow shift of U.S. military innovation to leverage platforms that are small, smart, cheap, and many,” Deputy Secretary of Defense Kathleen Hicks said in August. While its funding is uncertain and details vague, Replicator is expected to accelerate hard decisions on what AI tech is mature and trustworthy enough to deploy — including on weaponized systems.

There is little dispute among scientists, industry experts and Pentagon officials that the U.S. will have fully autonomous lethal weapons within the next few years. And though officials insist humans will always be in control, experts say advances in data-processing speed and machine-to-machine communications will inevitably relegate people to supervisory roles. That’s especially true if, as expected, lethal weapons are deployed en masse in drone swarms. Many countries are working on them — and none of China, Russia, Iran, India or Pakistan has signed a U.S.-initiated pledge to use military AI responsibly.