The US Department of Defense has deployed machine learning algorithms to identify targets in over 85 air strikes in Iraq and Syria this year.
The Pentagon has done this sort of thing since at least 2017, when it launched Project Maven, which sought suppliers capable of developing object recognition software for footage captured by drones. Google pulled out of the project when its own employees revolted against using AI for warfare, but other tech firms have been happy to help out.
There’s really no reason to think this technology will fall victim to the Jevons paradox. These strikes already happen remotely, and if AI/ML can better distinguish targets from civilians, there’s no reason to think civilian casualties will increase because of it.
That’s like saying using AI/ML to screen for cancer will result in more people dying from cancer.
You’re trying to apply an economic theory about the consumption of finite resources to a completely unrelated sector.