Tuesday, October 22, 2019

Should AI Be Able to Kill in War?


In this episode, Byron talks about the kill decision and autonomous weapons. For more on Artificial Intelligence: https://voicesinai.com https://gigaom.com https://byronreese.com https://ift.tt/31WftGA...

Transcript: In another one of these videos, I looked at the issues around building robots that could make autonomous kill decisions. Now I want to continue that by asking, “Should we build such weapons?” The threats these systems would be built to counter are very real. In 2014, the UN held a meeting on what it calls Lethal Autonomous Weapon Systems. The report that came out of that meeting maintains that these weapons are being sought by terrorists, who will likely get their hands on them. Additionally, there is no shortage of weapons systems currently in development around the world that utilize AI to varying degrees. Russia, as an example, is developing a robot that can detect and shoot a human four miles away, using a combination of radar, thermal imaging, and video cameras. A South Korean company is already selling a $40 million automatic turret which, in accordance with international law, shouts, “Turn around and leave, or we will shoot” to any potential target within two miles. It requires a human to OK the kill decision, but that feature was only added later at customers’ request. It turns out virtually every country on the planet with a sizeable military budget, probably about two dozen, is working on developing AI-powered weapons.
