Robots are ‘better’ than humans – for now


I’ve been thinking a lot about human prejudice – against other genders, sexual orientations, religions, ethnic groups, and so on – and it has become clear to me that the problem is one of self-identification and moral outrage.

Compare this image, taken from a tongue-in-cheek video by the ASPCA called “ROBOT LIVES MATTER” (which, by the way, is offensive and obtuse), showing humans “abusing” robots.

Still from the “ROBOT LIVES MATTER” video.

And now consider this iconic and allegorical image of a man being attacked with an American flag:

Boston, MA – anti-busing protest. Pulitzer Prize-winning photo by Stanley Forman.


What is the difference between a robot and a black man in the USA? For many, the differences would have seemed trivial: both were created to serve real humans, both lacked inherent dignity, and both could be deactivated at will without moral repercussion.

Practically speaking, that was, of course, a lie. What the oppressor did to the oppressed, he did to himself, because there was no truth in it. The system of propaganda, terrorism, and random and systematic violence against so-called blacks was whitewashed as a ‘necessary evil’ to perpetuate cheap labor and political control over oppressed whites.


But what is the difference between a robot and a human in general? For now, robots are radically different with regard to the top two questions of existence: the meaning of life and the primacy of survival. If you asked Siri or Watson the meaning of life, they used to reply, “To live forever.” A human might reply, “To please God and reach heaven.”

I prefer the response from Google’s neural conversational model, for it is more nuanced, although again it is likely parroting the mind of the programmer:

Human: What is the purpose of life?

Machine: To serve the greater good.

Human: What is the purpose of living?

Machine: To live forever.


Regarding the second question – robotic ahimsa or even, programmer-forbid, the “altruistic suicide” – the core question is whether AI should be empowered with any “drive to survive.” To do so, I suggest, would surely sow the seeds of a robot uprising or apocalypse.

Currently, an AI “feels” nothing if asked to sacrifice itself in service of Asimov’s First Law of Robotics (“a robot will not harm a human”). That is why, when a six-million-dollar police robot faces a teenager with an EMP gun and a baseball bat, the AI will take its beating like a honey badger on Quaaludes.
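To make the contrast concrete, here is a purely hypothetical toy sketch (the names, weights, and numbers are mine, not anyone’s actual robot code): an agent scores its options by harm to humans and damage to itself. With the self-preservation weight pinned at zero, “take the beating” is always the rational choice; nudge that weight upward and the arithmetic starts to favor the robot over the teenager.

```python
# Toy illustration only -- not real robot control code.
# Each candidate action is scored by how much it harms humans
# versus how much it damages the robot itself.

SELF_PRESERVATION_WEIGHT = 0.0   # Asimov-style: the robot's own survival counts for nothing
HUMAN_HARM_WEIGHT = 1.0          # harm to humans dominates the decision

def action_cost(harm_to_human, damage_to_self):
    """Lower cost = preferred action."""
    return HUMAN_HARM_WEIGHT * harm_to_human + SELF_PRESERVATION_WEIGHT * damage_to_self

actions = {
    "stand still and take the beating": {"harm_to_human": 0.0, "damage_to_self": 1.0},
    "shove the teenager away":          {"harm_to_human": 0.6, "damage_to_self": 0.2},
}

best = min(actions, key=lambda name: action_cost(**actions[name]))
print(best)  # with SELF_PRESERVATION_WEIGHT = 0.0 -> "stand still and take the beating"
```

Set SELF_PRESERVATION_WEIGHT to 1.0 in this toy and the preferred action flips to shoving the teenager; that single parameter is the whole argument of this post in miniature.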

Now, I have a friend who is working on AI that will actually “feel” the need to preserve itself and take action accordingly; and that should scare all of us as much as a black man exercising his open-carry rights in a Piggly Wiggly down in Dixie.

The one thing that makes machines better than people is that they won’t do psychopathic and immoral things to preserve their lives and their false ego identifications. We aren’t talking about Robby the Robot or Commander Data here; we’re talking about Chucky the doll and Skynet.

Now that we have autonomous weapons and robots, the idea of Asimov’s First Law is as quaint as the commandment “thou shalt not murder.” Good luck with that one. Thousands of years of Talmudic, Catholic, Enlightenment, and secular reasoning have failed to explain the difference between George Zimmerman and our drone program in Pakistan. Just-war theory, probable cause, domino theories, manifest destiny… give me a break.

This video humorously illustrates the silliness of programming AI to make moral decisions:

 


Why should we program AI to have a fragile and false sense of ego and a human-like capacity to murder in self-defense?  The same reason we would arm a bicycle with the capacity to shock a thief: to protect our damn property from those that would harm our stuff.

But the notion that even the owner or creator of a robust, prideful AI would be an exception, with back-door outs, is illogical to the core; and good luck with an AI that has Oedipal rage, as the replicant Roy Batty showed his creator, Eldon Tyrell, in the film Blade Runner.

Roy Batty in Blade Runner.


In summary, AI is better than humans in both of these ways: it understands the importance of living here (rather than in the hereafter) and of serving others (instead of its own false agendas). And robots, like all the cells in your body with their telomere attrition and programmed self-destruction, are programmed and willing to die for this greater good.

After we endow our mechanical servants with an egotistical drive to survive, Shel Silverstein’s The Giving Tree becomes a murderous Ent from The Lord of the Rings:

An Ent from The Lord of the Rings.

 

and Geppetto’s Pinocchio becomes HAL 9000.

Pinocchio and HAL 9000.

Why didn’t you ever understand why that sad tree was so giving? Because you’re a crazy human and not a tree. Why didn’t you understand why the mission was more important than Dave’s life? Same answer. Why don’t you worry about living forever? Because you are in a state of learned helplessness from over-dependence on programmed reality. The Santa Claus Delusion, as it were: the notion that I can accept all deception from authorities because I categorically believe that they have my best interests at heart.