
Attack of the Killer Robots

When a robot kills, is it an industrial accident or is it murder?

The killer robots are coming after you. While this is a common plot theme in science fiction, it isn’t just fiction. Robots have killed before and will kill again.

Three things have gotten me thinking about killer robots. The first was an actual killer robot. The second was the movie Ex Machina, which was science fiction. The third is Google’s self-driving car (which hasn’t killed anyone as far as I know). The basic question I’ve been worried about is both very simple and rather complex. When a robot kills, how should we think about it? Is it an industrial accident or is it murder?

Let’s start with the real killer robot. Earlier this year, a robot in a VW automobile plant crushed and killed a man. From the reports I’ve read, workers were installing the robot when it grabbed one of them and crushed him against a metal plate. But here is an important aspect of the story: prosecutors were trying to decide whether to bring charges and, if so, against whom. I assume they wouldn’t charge the robot with murder. But they might consider charging someone else as responsible for the accident – bad programming, leaving the robot powered on during installation, something that makes a person accountable for the death. No one was talking about charging the robot, imprisoning it, or taking it apart in retribution. No one seems to think the robot acted intentionally to kill the man. So this robot killing seems to be an industrial accident.

If you haven’t seen the movie Ex Machina, I recommend it. It is beautiful and chilling. The basic setup is simple. A genius has created a robot that seems incredibly human. An employee who is a computer programmer is invited to give the robot a Turing test: seeing whether a machine can convince someone that they are interacting with another person rather than with the machine. As the movie unfolds, the robot (a beautiful female bot, of course) is utterly compelling, and the computer programmer starts to act as though the robot is human – human enough to have feelings and deserve to be freed. Spoiler alert: freeing the robot is always a bad idea in a science fiction movie. Of course, the robot imprisons the guy who frees her, kills her creator, and escapes. But what I loved about the movie is that the real Turing test is for the audience. Do we accept the robot as essentially human in capacity, thought, and feeling? How the audience answers determines how they see the killing. If the robot passes the Turing test, then the robot has murdered her creator (you’ve got to love these Frankenstein stories, especially with a beautiful monster). If the robot doesn’t pass, then it is an industrial accident – and justice for the evil scientist building crazy machines.

Finally, let’s consider Google’s self-driving cars. I admit to being excited about seeing them on the road soon. Humans are awful drivers, and we are getting worse thanks to cell phones and other distractions. So self-driving cars don’t have to be that good to be an improvement. Let the car drive itself while you play with your phone. This is bound to make commutes faster and the world safer for bicycle commuters like me. But no matter how good the Google cars are, sooner or later one will cause a serious accident in which someone dies. When that happens, what will your response be? Was it an accident? Was it the fault of the person who wrote the computer code? Was it the fault of that particular car having developed some bad pattern? Or was it murder? Did the car develop awareness and decide to start killing humans? By the way, there’s a great scene in another science fiction movie, I, Robot, starring Will Smith, in which robots trying to kill Smith’s character attack him while he is driving. I don’t see the Google cars ever doing this, but it was a cool scene.

Someday the robots may develop awareness, see us as competitors, and decide to eliminate the humans. The point at which robots and artificial intelligences take over is called the technological singularity, and people worry about how close it is. It may start when one Google car gains awareness. Some people worry that it may start with drones that aren’t directly controlled by human operators (there is a group, the Campaign to Stop Killer Robots, worried about exactly this).

Consciousness is the key to how we decide between accidental killing and murder. It really is about the Turing test. Do you think the robot has awareness and makes decisions? If so, then you might suspect that a killer robot is a murderer. Of course, this isn’t just about robots. We make decisions about culpability based on consciousness, control, planning, and awareness all the time. We don’t blame you if your car is parked, the brake releases, the car rolls down a hill, and it crushes someone. We do hold you responsible if you make a plan, get behind the steering wheel, and choose to run over someone. One is an accident. The other is murder.

The question is much the same with robots. Is there someone behind the wheel driving the robot? The someone doing the driving, though, might be the robot itself. Is the robot a thinking being that decided to kill? That becomes a question of awareness on the part of the robot. But really it is a question of whether we think the robot is aware. Does the robot pass the Turing test?

Beware beautiful robots who seem to be aware. Don’t let them out.
