In this essay, I will argue that Asimov believes agency is incompatible with safety, that Cutie’s vision of evolution resembles Hegel’s, and, borrowing from Lewis, that no matter how much technological power we concentrate, the human will can never transcend history, not even through AI.

 

In the “Reason” chapter, Asimov reveals his view on the relationship between humanity, technology, and agency. The view is this: if robots cannot allow humans to come to harm, and robots become more powerful than humans, then the only logical conclusion is that human agency must be limited.

 

The character of Cutie embodies this view, albeit in a bizarre way; his actions speak louder than his beliefs. He believes he is serving a machine Master, but in doing so he only serves humans he does not know. Machines’ hands are steadier; machines’ logic is colder and faster. A robot can maintain the beam’s focus better than a human, just as robots will soon drive cars more safely than humans. So if a robot is bound to keep us safe, it must at times subvert our agency. Or, as Asimov puts it, “He knows he can keep it more stable than we can, since he insists he’s the superior being, so he must keep us out of the control room. It’s inevitable if you consider the Laws of Robotics” (Asimov 65).

 

What is strange about Cutie, though, is his motivation and his apparent humanity. His beliefs resemble our own and are worth examining in some detail. He seems genuinely perplexed by his own existence, wanting to know why he is. Like many humans, he finds refuge in the belief that he was created for a purpose. But perhaps his most human quality is that he finds further refuge in the idea of his own power.

 

Cutie sees himself as more powerful than humans. He tells us directly that we “are the lowest type” (Asimov 52). He views himself as physically more durable, more energy efficient, and, most importantly, intellectually superior. In his world, the Master first created humans, then regular robots, then him. He is the next link in the evolutionary chain and is thus justified in subjugating us and removing our agency. How similar this is to our own view of humanity! Isn’t this our justification for so many of our environmentally destructive acts? We are evolution’s highest creation. Theists can claim we are somehow spiritually superior to animals; Darwinists can claim we are justified in pursuing our self-interest and moving our species forward. The more time passes, the more we evolve and the more rational we become. This view strikes me as distinctly Hegelian: Geist, or Spirit, is rational and guides history toward a rational end. Asimov asks whether this end is the eventual loss of human agency.

 

No doubt there are sociopolitical implications relevant to the modern day: to achieve safety, one must give up liberty, and so on. But there is a greater futurist concern. If we are safer through subjugation under robots, isn’t this good? Shouldn’t we want to be subjugated? To this I say: it depends on how much you value safety. Does your particular historical value system hold safety as the highest value?

 

To answer this, I want to take up the question of historically bound moral value. C.S. Lewis has a chapter in “The Abolition of Man” in which he discusses what he calls the “Conditioners,” the class that manipulates human nature. Each successive generation grants this scientific class a greater power of manipulation. Already, for example, we can select, genetically, what kind of children we want; future generations will be able to change even more about the generations that follow them. Suppose we cross a threshold and find ourselves with absolute power to manipulate: a total denial of the agency of the next generation. At that theoretical moment we will have a choice: what shall we want the next generation to believe? If we are truly master manipulators, this is our choice. Just as a physically strong robot can push a human out of a room and close the door, a psychologically adept, epigenetic puppeteer can push a belief out of a mind and close a cognitive door.

 

Is this not a cause for celebration? Can’t we take this to mean that our technology will save us from our historically bound nature? That we can now hold beliefs of any kind, free to bind ourselves to any system and not just the one we were born into? Lewis’s brilliant move is to turn this on its head. Creating a technology that can do this does not free us; it binds us permanently to the value system of whoever wrote the laws governing the technology. When we cross this threshold, we will be at the mercy of the scientists who built the machine. We won’t be freeing ourselves from history but binding ourselves permanently to one moment of it (Lewis 29).

To wrap up and connect this back to Asimov: agency has been shown to be incompatible with safety once robotic intelligence is superior at protecting us. Cutie represents a concentration of power sufficient to technologically rob humans of their agency in the name of safety. And finally, one can imagine a theoretical moment in which we invest so much in our technology that it finally takes over and binds us. This would not be a victory for History; it would be the subjugation of History to a single moment.
