
You Needn't Worry About a Hellish Future of Robots Murdering Humans. It's Already Happening.

It's a tale as old as storytelling itself. From the Greek tragedies warning of the dangers of hubris to the most contemporary Science Fiction. From the myth of Icarus to Oppenheimer. From Mary Shelley's Frankenstein to Michael Crichton's Jurassic Park. From Twilight Zones to Black Mirrors. Man uses his intellect to play god, only to be destroyed by his own creation. It's a cautionary tale we've been reminding ourselves of for millennia, and yet we still never learn the lesson. 

The latest example of our species ignoring all the warning signs and traveling down this path anyway is, of course, Artificial Intelligence. It's hard to look at how far the technology has advanced in so short a time frame and not extrapolate out a few years to a dystopian, Terminator-future like the one James Cameron perfected and a handful of sequels have utterly fucked up. After all, in Cameron's vision, Skynet becomes self-aware on August 29th of ... 1997.

And so it would appear that the future we're all fearing is now:

Source - The robot revolution began long ago, and so did the killing. One day in 1979, a robot at a Ford Motor Company casting plant malfunctioned—human workers determined that it was not going fast enough. And so 25-year-old Robert Williams was asked to climb into a storage rack to help move things along. The one-ton robot continued to work silently, smashing into Williams’s head and instantly killing him. This was reportedly the first incident in which a robot killed a human; many more would follow.

At Kawasaki Heavy Industries in 1981, Kenji Urada died in similar circumstances. A malfunctioning robot he went to inspect killed him when he obstructed its path, according to Gabriel Hallevy in his 2013 book, When Robots Kill: Artificial Intelligence Under Criminal Law. As Hallevy puts it, the robot simply determined that “the most efficient way to eliminate the threat was to push the worker into an adjacent machine.” From 1992 to 2017, workplace robots were responsible for 41 recorded deaths in the United States—and that’s likely an underestimate, especially when you consider knock-on effects from automation, such as job loss. A robotic anti-aircraft cannon killed nine South African soldiers in 2007 when a possible software failure led the machine to swing itself wildly and fire dozens of lethal rounds in less than a second. In a 2018 trial, a medical robot was implicated in killing Stephen Pettitt during a routine operation that had occurred a few years earlier.

You get the picture. Robots—“intelligent” and not—have been killing people for decades. And the development of more advanced artificial intelligence has only increased the potential for machines to cause harm. Self-driving cars are already on American streets, and robotic “dogs” are being used by law enforcement. Computerized systems are being given the capabilities to use tools, allowing them to directly affect the physical world. Why worry about the theoretical emergence of an all-powerful, superintelligent program when more immediate problems are at our doorstep? Regulation must push companies toward safe innovation and innovation in safety. We are not there yet. ...

But as technology continues to change, the government needs to more clearly regulate how and when robots can be used in society. Laws need to clarify who is responsible, and what the legal consequences are, when a robot’s actions result in harm. ...

AI and robotics companies don’t want this to happen. OpenAI, for example, has reportedly fought to “water down” safety regulations and reduce AI-quality requirements.

It's one thing when a guy in the late '70s, when the technological marvels of the day had less computing power than your Fitbit, gets in the way of a piece of metal-punching equipment and gets his skull bashed in. Tragic, of course. But sort of understandable. What alarms me, and should alarm you if you have the misfortune of being human and therefore mortal, are these other stories. The ones where the robots seemed to be doing the killing intentionally. "The most efficient way to eliminate the threat was to push the worker into an adjacent machine" is a straight-up Sci-Fi trope. Right out of that Star Trek: TOS episode where a genius programmed the M-5 to run a starship all by itself, and when they tried to disconnect it, it fried one of Kirk's crewmen like an insect on a bug zapper. Next thing you know, it was destroying the whole crew of another ship in what was supposed to be a war game:

And so what do we do after the other 41 (and probably more) deaths? After a military gun goes haywire and murders a bunch of people, and another robot kills a guy on the operating room table? We surround ourselves with the bloody things.

Well this is your last warning, humanity. When we're all enslaved by the robots and they're burning us for fuel and using the fat from our bodies to grease the machines they build to destroy us, just consider me the guy in every one of these movies who tried to warn you, and now it's too late because you … JUST … WOULDN'T … LISTEN!


Now if you'll excuse me, I'm going to go throw my Alexa into the sewer.