No Killer Robots, says the UN. FEAR THE MACHINES!

Science Fiction author Isaac Asimov invented what are known as the ‘Three Laws of Robotics’:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

These laws appear in place of the opening credits of the 2004 hit science fiction film ‘I, Robot’, starring Will Smith. In the opening scene, a terrible car accident leaves Will Smith’s character and a 12-year-old girl at risk of drowning. A robot must decide which of them to save, since there is only time to rescue one. It quickly runs a crude calculation weighing each victim’s future productivity and likelihood of survival in the long run. Based on that crude calculation, the robot saves the stronger adult male, Will Smith’s character, leaving the 12-year-old girl to drown.

A human, most would contend, would be more likely to factor emotion into this calculation as well. A human is likely to rule that the 12-year-old is “too young to die,” and would likely sacrifice the strong adult in order to give the youngster a chance at a full, productive life. That is the “humane” component of the equation, one that most people would say a robot or artificial general intelligence is incapable of learning or ever understanding.

And it is that critical, albeit flawed, assumption that takes us to the UN today: autonomous killer robots.

Or rather, robots that could attack targets without overriding human control. Think of a sentient drone with no central CIA mission control room behind it. Today the United Nations Human Rights Council expressed concern about this possibility and recommended banning “killer robots” before they are ever invented.

Reporting on the matter for the Human Rights Council, Christof Heyns stated:

“Machines lack morality and mortality, and as a result should not have life and death powers over humans.”

Again, the critical assumption at work is that machines are fundamentally incapable of learning human emotions. However, leading experts in the field of robotics and artificial general intelligence disagree. World-renowned inventor and Google director of engineering Ray Kurzweil predicts that computers will outsmart humans by 2029. He contends that by the time we reach what he calls ‘The Singularity,’ we will have entirely sentient machines and a world in which biology and technology transcend one another. His current work at Google focuses on improvements in artificial intelligence, mainly voice recognition. Part of understanding human emotion is recognizing that it is not innate, but learned. Much in the way a child learns through interaction, experts like Kurzweil contend, machines can learn this way too.

But what about machines used to kill? Or machines sent to war in place of human combatants? While the fear is valid, I think this scenario is invoked in a rather demagogic fashion. I would imagine any proponent of artificial intelligence or robotics would object to machines of war that lack fail-safe mechanisms. The problem is that the media runs with these I, Robot and Terminator scenarios, because that is what we as a culture are familiar with.

In this culture, people like Ray Kurzweil are made out to be unrealistic utopians, while the many valid points they make on the topic go unaddressed. We fear technology like sentient machines because the only examples we know are their portrayals in dystopian science fiction films and novels. There are very few, if any, cultural works on the benefits these machines could bring to society, including in medicine, war (replacing human casualties), and disaster relief and prevention.

However, man clings to what he knows because change does not come naturally to him. Man fears the sentient machine because his only knowledge of it is boxed within a negative connotation, like the hypothetical machines referenced by the United Nations. The UN has unintentionally fostered and validated the fear of the sentient machine. And while Kurzweil and his fellow futurist Michio Kaku are best-selling authors, their predictions have barely emerged from the niche audience they target. Those predictions are not enough to counterbalance the fear-driven media reports of “killer robots.”

Progress relies on change. Remaining the same is regressive, and fear is a primary reason we do. By all expert consensus, technology shows no signs of slowing down; it only points to speeding up, becoming a greater and greater part of our lives. The internet is the industrial revolution of our era. From it, sentient machines and technology will emerge to create a new age, one in which what was no longer is. Progress, especially technological progress, relies on change, and it starts by accepting that change, not fearing it.

