Sitting in a U.N. committee session on Lethal Autonomous Weapons Systems (a.k.a. killer robots) in Geneva earlier this year, I was shocked to hear the American delegates claim that AI-powered automated warfare could be safe and reliable. Not only are they wrong, but their thinking endangers us all. It's the same logic that led to the Cold War, the nuclear arms race and the Doomsday Clock.
I quit my job at a young, promising tech company in January in protest precisely because I was concerned about how the Pentagon might use AI in warfare and how the business I was part of might contribute to it. I have seen close up the perils of this unreliable but powerful technology, and I have since joined the International Committee for Robot Arms Control (ICRAC) and the Campaign to Stop Killer Robots to make sure that AI is used responsibly, even in cases of war. It was because of this protest and my objection to autonomous weaponry that I attended the U.N. conference as part of the campaign's delegation.
Lethal Autonomous Weapons Systems are by their very nature unsafe, and if we allow nations — especially undemocratic ones — to make them, we all stand to lose from their mistakes.