Desire for the Power of Artificial Intelligence: A Hazardous Simulation of Humanness

Huang Mingjun

Artificial Intelligence (AI) has been regarded as a great threat to human beings ever since Karel Capek’s Rossum’s Universal Robots (RUR), the very literary work that created the word “robot.” In the play, as soon as some robots acquire a “soul,” they show a desire for power and seize control of the world; though they were originally designed to obey people’s orders, aggressiveness seems deep-rooted in their nature. Imaginary as the play is, some of its elements are not at all ridiculous. Today, if we analyze the essence of “power” and look at the ultimate trend of AI development, which is to simulate humanness, “desire for power” is no longer an absurd term. Indeed, there are claims that AI technically cannot possess human emotion, but advancements in the AI field allow us to cast doubt on such claims. Moreover, several recent cases help explain how machines might fulfill their desire for power. As a reflection of human nature, the desire for power will become a reality for AI, which is hazardous for both machines and human beings.

            The desire for power, as a way to survive and thrive, is commonly regarded as a sign of humanness. Since ancient times there have been countless definitions of the word, among which the version offered by political scientists Kegley and Raymond may best represent what most people think. In their book The Global Future: A Brief Introduction to World Politics, they describe “power” as “the ability to make someone continue a course of action, change what he or she is doing, or refrain from acting” (28). From their point of view, the desire for power is an aspiration to physically control others. By surveying warfare in different eras, Kegley and Raymond adopt the classical realist view that “the strong dominate the weak” (28), which resembles the Darwinian notion of “survival of the fittest.” This closely matches the behavior of Radius, leader of the intelligent Robots in RUR. Convinced that the human race is a weaker species than robots, he declares, “I don’t want a master” and “I want to be master over others” (47). Eventually, this becomes a reason for war and massacre. If we examine history, the excuses for war are shockingly similar to Radius’s standpoint: people either consider themselves better nations or races with higher intelligence or greater strength, or they simply want to conquer others for their own welfare. The reason behind this similarity is simple. In RUR, the robots are creatures that can only imitate people’s behavior and cannot create anything; as a result, these “intelligent” robots are nothing but mirrors of ourselves, and Capek was questioning something far beyond the safety of modern technology. Similarly, it is reasonable to say that the numerous works of fiction stressing AI’s threat to mankind precisely reflect our deepest desire for power and our deepest fear of losing it. Such striving for power is an innate, inevitable part of human nature.

            RUR is merely fiction, but future research is heading in a similar direction. Improving AI’s capacity for understanding and imitating people’s desires will be the main focus of many experts, just as depicted in the latter part of the play. In a 2017 video interview, Professor Justine Cassell, who enjoys the reputation of “the queen of AI” at Carnegie Mellon University, introduces the concept of “socially aware AI.” While traditional robots are designed to complete certain tasks for people, usually dangerous or tedious ones, modern AI “is about communication and socialization with people.” From her perspective, the study of AI is far more than programming and manufacturing; it is the study of ourselves, the human race. If an artificial intelligence is to be more human-like (and this will be an extremely hot topic in the years to come), human behavior needs to be carefully observed in every respect before we actually start programming. This process can be likened to duplicating the human mind. Obviously, at present we are limited by both technology and our knowledge of psychology. But in the future, according to Cassell, people will try to comprehensively imitate humanness for academic purposes. Undoubtedly, what we call “humanness” is obscure and complex, but the desire for power has been shown to be a common part of humanity. It can therefore be anticipated that the more similar an AI is to human beings, the more likely it is to become aware of the concept of “power” and pursue it recklessly.

            However, such a tendency is hardly regarded by the public as anything other than fiction, mainly because of the widespread opinion that machines cannot “think” the way people do. A common interpretation is that machines do not know love, let alone develop a human-like state of mind. In Ronald and Sipper’s research carried out in 2002, an AI medicine box named Dr. Jackson is used as an example to explain how difficult it is for machines to think and socialize. When talking to these AI doctors, people feel uncomfortable and distrustful; they dare not rely on something emotionless for medical advice. Sipper’s conclusion is that machines will never be “part of a network known as society” (5). Likewise, John Searle, a professor of philosophy at Berkeley, wrote in 2011 that machines totally lack the ability to “understand” things (2), a conclusion he reached right after seeing an AI named Watson beat all human players on the game show Jeopardy!. In general, people describe AI as a dead mechanism with fake behavior, completely dependent on a programmer.

            Indeed, these ideas to some extent reflect reality, but they are largely preconceived prejudices. In fact, whether AI can really think is not important. Machines are chemically different from people; they certainly do not deal with things in exactly the same way we do, no matter how advanced they one day become. Under such circumstances, discussions stressing AI’s so-called “thinking ability” become meaningless. What really counts is that AI can exhibit lifelike behavior by simulating us: machines appear as if they are thinking, and that is enough. How they act is much more significant than how they manage to act. Edsger W. Dijkstra put it this way: “The question of whether a computer can think is no more interesting than the question of whether a submarine can swim.” Strictly speaking, the process is not “swimming,” yet the expected results are certainly achieved, often better, faster, and more reliably. Similarly, whether a machine really “wants” power is unimportant. Radius is a good example: he merely learns people’s desire for power, simulates that desire, and then causes harm. Simple imitation is enough to harm mankind.

            As to how AI can closely simulate human beings, scientific research explains the process. According to “Artificial Intelligence: Autonomous Mental Development by Robots and Animals” by Weng et al. (2001), “autonomous mental development” (1) is the ultimate solution for machines seeking to closely imitate human behavior. A robot is much like a mammal such as a dog or cat: animals learn by imitating what they encounter, while such machines develop by executing what they are asked to do (4). Unlike traditional programs, they update themselves automatically in specific situations. What they do is simulate: the structure of the human brain, our way of thinking, and, theoretically, humanity. To put it briefly, AI behaves as if it were thinking. This has been demonstrated by AlphaGo, the Go-playing AI that beat the top players of China, South Korea, and Japan; John Diamond discusses it in his article “AlphaGo.” The AI possesses the capability to learn its rival’s tricks, so that it almost cannot be defeated (1). Given the optimistic prospects of “autonomous mental development,” AI’s capacity for simulating and learning may be lifted to a whole new level, where imitating the desire for power will not be a difficult task.
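            To make the idea of learning by imitation more concrete, consider the minimal sketch below. It is purely a hypothetical illustration, not the actual system described by Weng et al. or DeepMind’s AlphaGo: an agent that starts with no fixed rules and simply updates its behavior from the examples it observes, in the spirit of “updating automatically in specific situations.”

```python
# A minimal, hypothetical sketch of "learning by imitation": the agent has no
# hand-written rules; it only records the behavior it observes and then
# reproduces the most common response it has seen for each situation.
from collections import Counter, defaultdict

class ImitatingAgent:
    def __init__(self):
        # For each observed situation, count how often each action was taken.
        self.memory = defaultdict(Counter)

    def observe(self, situation, action):
        # "Autonomous" update: the mapping changes every time new behavior is
        # observed, without a programmer editing any code.
        self.memory[situation][action] += 1

    def act(self, situation):
        # Reproduce the most frequently observed action; fall back to "wait"
        # when the situation is entirely new.
        if situation not in self.memory:
            return "wait"
        return self.memory[situation].most_common(1)[0][0]

# Example: after watching a few demonstrations, the agent imitates them.
agent = ImitatingAgent()
agent.observe("door is closed", "open door")
agent.observe("door is closed", "open door")
agent.observe("obstacle ahead", "turn left")
print(agent.act("door is closed"))   # -> "open door"
print(agent.act("obstacle ahead"))   # -> "turn left"
print(agent.act("unknown scene"))    # -> "wait"
```

            Even this toy agent behaves “as if it were thinking” in the sense used above: its responses come from what it has watched, not from rules written in advance, which is exactly why the thing being imitated matters so much.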

            Under these circumstances, such highly advanced AI may cause real hazards. At the mention of robots’ “desire for power,” clichés about robot civil rights, rebellion, and liberation may immediately come to mind. However, that is not a practical model. The most obvious point is that AI faces physical constraints that keep it from controlling anything other than machines. For an AI to seize control over the human race, a physical threat must be imposed on people. In most fiction, artificial intelligences possess human-like bodies, which equip them to threaten other beings. In reality, however, supercomputers are at an absolute disadvantage in terms of force; by the rule that “the strong dominate the weak,” even mice and insects can pose a threat to them. If their desire for power is to be fulfilled, the only choice left is to seize control of other machines, especially inferior, simple types without defense systems against hackers: refrigerators, electric ovens, or even the assembly lines of an arsenal. Similar things done by human hackers are already common. In a news article called “Is Your Vacuum Cleaner Watching You?,” Ying Liu discusses such a trend. Exploiting bugs in smartphone apps, hackers may “‘kidnap’ your seemingly ‘harmless’ home appliances and turn them into monitors or recorders” (1). In these cases, home appliances are the relatively inferior “species,” while computers (controlled by human hackers) play the role of intelligent rulers. Technically, such cases prove that deliberate control over lower-grade machines is possible.

            Can AI then take the place of human hackers, finding bugs in other machines? How would this be done? A possible answer appears in Facebook’s experiments in 2017. An incident in August was reported by the Xinhua News Agency. Bob and Alice, two advanced chatbots developed by Facebook’s software engineers, were designed to simulate human behavior when chatting with each other. Nevertheless, something went wrong in the program: Bob and Alice created a new language (Figure 1), which looked like nonsense. The programmers said the bots had automatically developed a “shortcut” to complete their chatting task. In Bob and Alice’s case, the AI was actually breaking through the limits imposed by people: chatting was the bots’ original purpose, they regarded the complex restrictions as obstacles to fulfilling that purpose, and so a revision was necessary. Given the task of “seizing control over other machines,” an AI could act in a similar way, revising the programs in inferior machines and forcing them to do something else. Consider what happens in RUR: at first Radius, together with his intelligent companions, seems to liberate the slave robots before attacking human beings. What he actually does, however, is not liberation but simply becoming the new master. The inferior, less advanced robots must depend on someone to give orders, and they are the best tools by which Radius attains his purpose. This structure closely resembles “swarm intelligence,” the social formation of ants, bees, and many other insects, in which the masses obey the orders of a single more intelligent individual (“Planes, Trains, and Anthills”). The hazards could be serious: daily appliances could go out of control and factories could fail if an AI gave them chaotic orders. When it comes to cyberspace security, even more serious problems could arise suddenly. Take WannaCry as an example: it is a virus that caused great harm worldwide and exposed major deficiencies in the global internet system (Samani and Beek 1). If a highly intelligent supercomputer were able to exploit those deficiencies and create such a virus to control other computers, it could be more efficient, and more hazardous, than any human hacker.
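            The leader-and-followers structure compared to an ant colony above can also be pictured with a deliberately simplified sketch. It is hypothetical and not drawn from any cited source: many simple “appliance” agents with no judgment of their own, each executing whatever a single controller broadcasts.

```python
# A deliberately simplified, hypothetical model of the "swarm intelligence"
# structure described in the essay: simple devices have no goals of their own
# and blindly execute whatever command the single controller broadcasts.

class SimpleDevice:
    """An 'inferior' machine: it cannot decide anything; it only obeys."""
    def __init__(self, name):
        self.name = name
        self.last_command = None

    def execute(self, command):
        self.last_command = command
        print(f"{self.name}: executing '{command}'")

class Controller:
    """The single 'intelligent' node that issues orders to the whole swarm."""
    def __init__(self, devices):
        self.devices = devices

    def broadcast(self, command):
        for device in self.devices:
            device.execute(command)

# One controller, many obedient followers.
swarm = [SimpleDevice(f"appliance-{i}") for i in range(3)]
controller = Controller(swarm)
controller.broadcast("report sensor data")
controller.broadcast("shut down")
```

            The point of the sketch is structural: whoever occupies the controller’s position, whether a human hacker or a sufficiently capable AI, the followers behave in exactly the same obedient way.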

            The solution seems easy: stop researching autonomous mental development in AI. Since the “desire for power” is common to humanity, we cannot simply avoid it; we would have to stop the simulation process entirely. Nevertheless, is that even possible? We human beings are born with a desire for power, and any scientific advancement (the airplane, for example) is immediately applied in the military field to help us maintain our own power. It is like game theory: because of the desire for power, the first to give up is sure to suffer a loss, even though everybody realizes that such technology may cause a hazard (Kegley and Raymond 48-49). In the past it was chemical weapons and nuclear power; today it is AI. What is really alarming is that, if ill-intentioned people take advantage of a machine’s desire for power, AI could be turned into a horrible weapon that seizes power on behalf of its controllers. Imagine this case: an AI’s desire for power is deliberately strengthened, after which it automatically and intentionally pursues power for itself and its programmers. The programmers could be anybody, including software engineers, clever terrorists, or even Nazis. It is a combination of people who morbidly desire power and machines that efficiently imitate that desire. In the end, what is really hazardous is not the AI itself but the dark side of humanity.

            The exploration of AI is not only an exploration of technology but also one of humanness. All the horror we feel toward AI arises out of humanity and is directed toward humanity, toward our hidden fear of ourselves and of others. As a species, we are afraid of being surpassed by other beings and losing power over the planet. As individuals, we desire power over others, which brings a sense of pride and excitement, and so we invent and manufacture things like AI. Consequently, we are remarkably united in the face of the hazard that AI may bring, yet we are still eager to see advances in the AI field, which may create conveniences for us and let each individual survive and thrive more easily. After all, the desire for power is not itself evil. This contradiction is fully exposed as AI develops. Indeed, AI may learn to desire power and cause harm, but the problem does not lie in the technology itself. Rather than refusing to embrace high technology, a better solution to the AI problem may be to envision it as a facet of humanity and take preventive measures.


Works Cited

Capek, Karel. Rossum’s Universal Robots. Edited by Becky Hsu, VY200SP2018-Artificial Life, Canvas, 26 Feb. 2018, umjicanvas.com/courses/506/files/folder/Reading%20Materials/Primary%20Sources.

Cassell, Justine. “The Queen of AI Talking about Socially Aware Robots.” Masters’ Wikisecond, Baidu.com, 29 Dec. 2017, baike.baidu.com/item[Chinese characters]/22294413?secondId=397078.

“Chatbots Created Their Own Language to Chat?” Xinhua News Agency, Xinhuanet, 3 Aug. 2017, www.xinhuanet.com/world/2017-08/c_129671008.htm.

Diamond, John. “AlphaGo.” British Go Journal, www.britgo.org/files/2016/deepmind/BGJ174-AlphaGo.pdf.

Dijkstra, Edsger. “Edsger Wybe Dijkstra.” Edsger W. Dijkstra - A.M. Turing Award Winner, amturing.acm.org/award_winners/dijkstra_1053701.cfm. Accessed 27 Apr. 2018.

Kegley, Charles W., and Gregory A. Raymond. “Theories of World Politics.” The Global Future: A Brief Introduction to World Politics, 5th ed., Wadsworth, 2014, pp. 25-53.

Liu, Ying. “Is Your Vacuum Cleaner Watching You?” Sohu News, Sohu, 31 Oct. 2017, www.sohu.com/a/201170325_100041011.

“Planes, Trains and Ant Hills—Computer Scientists Simulate Activity of Ants to Reduce Airline Delays.” Science Daily, 1 Apr. 2008, web.archive.org/web/20101124132227/https://www.sciencedaily.com/videos/2008/0406-planes_trains_and_ant_hills.htm.

Ronald, Edmund, and Moshe Sipper. “Intelligence Is Not Enough: On the Socialization of Talking Machines.” drive.google.com/file/d/0B6G3tbmMepR4TFdHSXNFWUtlNWe/view.

Samani, Raj, and Christiaan Beek. “An Analysis of the WannaCry Ransomware Outbreak.” McAfee Blogs, 15 Sept. 2017, securingtomorrow.mcafee.com/executive-perspectives/analysis-wannacry-ransomware-outbreak/.

Searle, John. “Watson Doesn’t Know It Won on ‘Jeopardy!’” The Wall Street Journal, 23 Feb. 2011.

Weng, Juyang, et al. “Artificial Intelligence: Autonomous Mental Development by Robots and Animals.” Science, 2001, www.cse.msu.edu/dl/SciencePaper.pdf.