The US Department of Defense has deployed machine learning algorithms to identify targets in more than 85 air strikes in Iraq and Syria this year.
The Pentagon has done this sort of thing since at least 2017, when it launched Project Maven, which sought suppliers capable of developing object recognition software for footage captured by drones. Google pulled out of the project when its own employees revolted against using AI for warfare, but other tech firms have been happy to help out.
I thought this had been going on for a while now, with computers identifying potential targets:
“The object recognition algorithms are used to identify potential targets. Humans then operate weapons systems. The US has reportedly used the software to identify enemy rockets, missiles, drones, and militia facilities.”
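To picture that division of labor, here’s a minimal sketch of a human-in-the-loop pipeline. Purely illustrative: the Detection class, labels, and threshold are invented for the example, not anything from the article or actual DoD software.

```python
# Illustrative human-in-the-loop targeting loop. The Detection class,
# labels, and 0.9 threshold are hypothetical; none of this reflects
# real military software.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # e.g. "rocket", "launch_site" (made-up labels)
    confidence: float  # model score in [0, 1]

def propose_targets(detections, threshold=0.9):
    """The algorithm's role ends here: it only nominates candidates."""
    return [d for d in detections if d.confidence >= threshold]

def human_confirms(candidate):
    """The 'human intervention': nothing proceeds without an explicit yes."""
    answer = input(f"Engage {candidate.label} ({candidate.confidence:.0%})? [y/N] ")
    return answer.strip().lower() == "y"

for candidate in propose_targets([Detection("rocket", 0.94),
                                  Detection("truck", 0.61)]):
    if human_confirms(candidate):
        print(f"Handed {candidate.label} to the weapons operator")
    else:
        print(f"Rejected {candidate.label}; nothing fired")
```

Delete the human_confirms call and the loop still runs end to end, which is exactly the change being argued about below.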
I suppose it was the human intervention that made them consistently mistake unarmed civilians for enemy combatants. What could possibly go wrong with this approach?
I was going to ask who gets charged with the war crimes when a computer bombs a wedding, but that’s not likely to change when the current answer is “nobody” or perhaps “the journalists who reported on it.”
Finally, did the biggest AI vendor’s primary product inexplicably shit the bed like a week ago? Yes? Oh no…
The human (really, the military and government entity that employs them) who pulled the trigger, not the computer that identified it. You see, the human was just given a possible target; they did not actually need to fire.
Yep, that’d be the human intervention I mentioned, which is now being removed. Clearly it was the people who were shifting the targeting away from legitimate military targets to civilians; AI wouldn’t regularly get things wrong, right?
The humans are not being removed. Didn’t you read the quote I pasted from the article that you replied to in this chain??? I mean, I went to the trouble of reading the article and copying and pasting the relevant part. I’m not saying I’m a hero, but…