A former Google engineer has warned that killer robots could pose a genuine threat in the future.
Laura Nolan resigned from Google last year in protest after being assigned to a project intended to enhance US military drone technology. She fears autonomous AI killing machines that could one day be set loose.
She is less worried about drones themselves, since they are piloted by human operators. Autonomous robots, however, could act in ways their programmers never intended. There is no evidence that Google is building weapons of this kind, but Nolan felt she needed to take a stand given what she knows about the technology.
The central aim of the protest is to ensure that autonomous robots and weapons systems remain under meaningful human control. Otherwise, we are setting ourselves up for a possible catastrophe of epic proportions.
Nolan became concerned about her assignment: rather than reviewing hours of drone video footage to locate potential enemy targets herself, she and others were asked to build a system in which artificial intelligence could distinguish humans from objects at a much faster rate. Around 3,000 Google employees signed a petition against participating in this type of work.
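To make the task concrete, here is a minimal sketch of the kind of system being described: running a pretrained object detector over video frames to flag possible people faster than a human reviewer could. This is purely illustrative and is not the project's actual code; the model, confidence threshold, frame-sampling rate, and video filename are all assumptions.

```python
# Illustrative only: flag frames that may contain people in a video clip.
# NOT the actual system described in the article; model, threshold, and
# filename ("surveillance_clip.mp4") are assumptions for this sketch.
import torch
import torchvision
from torchvision.io import read_video
from torchvision.transforms.functional import convert_image_dtype

PERSON_LABEL = 1  # COCO class index for "person"

# Generic pretrained detector standing in for a purpose-built model.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

frames, _, _ = read_video("surveillance_clip.mp4", output_format="TCHW")

with torch.no_grad():
    for i, frame in enumerate(frames[::30]):  # sample roughly one frame per second
        img = convert_image_dtype(frame, torch.float)
        detections = model([img])[0]
        people = (detections["labels"] == PERSON_LABEL) & (detections["scores"] > 0.8)
        if people.any():
            print(f"frame {i * 30}: {int(people.sum())} possible person(s) detected")
```

Even this toy version hints at Nolan's worry: the detector emits confident-looking flags with no understanding of context, and everything downstream depends on a threshold a human chose.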
Employees felt they had become part of a kill chain, helping to target and kill people at the direction of the US military in Afghanistan. That is not what they signed up for when they joined Google. Nolan also warns that with autonomous robots, external factors such as unexpected weather, or situations a machine lacks the capacity to reason through, could throw the systems off and cause them to behave in dangerous ways.
These robots can also only be meaningfully tested in real-life situations: it is reckless to field something capable of killing innocent people without first proving it in a safe environment. And how could a robot reliably tell the difference between an enemy and an ally?
Replacing U.S. soldiers with robots could genuinely save soldiers' lives, but a single malfunction could trigger larger global problems that we may not be equipped to handle.