The Concerns of Artificial Intelligence


Lately I’ve been noticing a fair bit of commotion about Artificial Intelligence (AI), in particular fears that AI could become the greatest existential threat to humanity. These fears have been voiced not by crackpot doomsayers but by very prominent intellectual figures: a notable theoretical physicist and cosmologist, a space-age entrepreneur and a founder of the personal computer industry. It’s hard not to be concerned when prominent, technologically savvy people like Stephen Hawking, Elon Musk and Bill Gates are alarmed.

A fair number of people working in the field of AI believe that human-level artificial intelligence, sometimes known as the singularity, is only two to three decades away.1 It is also commonly thought that, if the exponential growth described by Moore’s law continues, the singularity may be reached sooner than expected. Some predict that AI systems will start to self-replicate and update their own systems, eventually arriving at a true deep artificial intelligence that can learn on its own, have autonomous feelings and possess an awareness of itself akin to human consciousness.2 Much of the commotion has been caused by the field of ‘deep learning’ or ‘machine learning’, in which computers teach themselves tasks by crunching large sets of data, and this has given rise to extreme concerns.3

Current Concerns

Among the concerns is the danger that artificial intelligence could overtake humans. Hawking explained this in an interview with the BBC:4

“The primitive forms of artificial intelligence we already have, have proved very useful. But I think the development of full artificial intelligence could spell the end of the human race. Once humans develop artificial intelligence it would take off on its own and redesign itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete and would be superseded.”

Musk shares similar concerns and has said:

“I think we should be very careful about artificial intelligence. If I were to guess like what our biggest existential threat is, it’s probably that. So we need to be very careful with the artificial intelligence. Increasingly scientists think there should be some regulatory oversight maybe at the national and international level, just to make sure that we don’t do something very foolish. With artificial intelligence we are summoning the demon. In all those stories where there’s the guy with the pentagram and the holy water, it’s like yeah he’s sure he can control the demon. Didn’t work out.”5

Musk recently pledged $10 million to the Future of Life Institute to fund research grants investigating potentially negative implications. Both Musk and Hawking, as well as other scientists and entrepreneurs, have signed an open letter promising to ensure AI research benefits humanity. The letter warns that without safeguards on intelligent machines, mankind could be heading for a dark future. The document, drafted by the Future of Life Institute, says scientists should seek to head off risks that could wipe out mankind.6

Gates has revealed that he doesn’t believe AI will bring trouble in the near future; however, he thinks there is reason for concern.

“I am in the camp that is concerned about super intelligence,” Gates wrote. “First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.”5


I have always thought that if AI, or even humans, were to reach that level of super intelligence, this intelligence would not be reduced to the animalistic, primitive act of ridding itself of those it saw as inferior in order to survive. If such an action were deemed a form of intelligence (I believe it is the opposite), then humans would already have wiped out every other species in this world. We know, however, that the existence of other organisms, whether or not we believe them to have a lower mental capacity than ours, serves to benefit us: when a certain species is disrupted, it in turn disrupts our complex ecosystem, consequently affecting our own existence. We also know that there is more loss than gain when we venture into war. Wars incur major losses in both human and technological resources, and they leave a scar in the human psyche that breeds distrust and paranoia and hinders growth and progress.

I agree with author Edith Cobb’s notion that there is a force inherent in human biology itself that is even more powerful than the classic Darwinian idea of self-reproduction. She writes that

‘the need to extend the self in time and space – the need to create in order to live, to breathe, and to be – precedes, indeed, of necessity exceeds, the need for self-reproduction as a personal survival function.’7

If AI were to kill off our species and then migrate to another planet or universe, acting as a parasite and terminating every other organism it encountered, this would be a fruitless endeavor that would eventually lead to its demise, as other organisms would retaliate. A far superior, more intelligent notion would be to endorse cooperation, as cooperation forms cohesion. In ‘Tit for Tat’, a strategy in game theory for the iterated prisoner’s dilemma, cooperation is found to be an evolutionarily stable strategy.8 I would hope that any form of super intelligence would have reached that conclusion. Wouldn’t exploration into the mystery of existence and the discovery of new realms of meaning be a far nobler pursuit than merely to consume and exist?
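The dynamic behind ‘Tit for Tat’ is easy to demonstrate in a few lines of code. The sketch below is a minimal illustration of my own (the payoff values are the standard prisoner’s dilemma ones, not taken from any of the cited sources), pitting Tit for Tat against unconditional defectors and cooperators:

```python
# Iterated prisoner's dilemma: each round both players choose to
# cooperate ('C') or defect ('D') and score according to PAYOFF.
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(opponent_history):
    # Cooperate first, then mirror the opponent's previous move.
    return 'C' if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return 'D'

def always_cooperate(opponent_history):
    return 'C'

def play(strat_a, strat_b, rounds=100):
    """Play two strategies against each other and return their scores."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strat_a(hist_b)   # each side sees the opponent's history
        move_b = strat_b(hist_a)
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b
```

Over 100 rounds, two Tit for Tat players earn 300 points each through sustained cooperation, while two defectors earn only 100 each: the strategy punishes exploitation but never initiates it, which is the cohesion-through-cooperation result appealed to above.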

Progress, biological or artificial, can only be achieved through cooperation. AI will always need humans, and it only exists through human experiences and memories. Current AI is built on systems that collect oceans of data drawn from human experience, paired with rapid search procedures and algorithms that analyse and discover particular things about that data. An early example is Deep Blue, the chess-playing computer developed by IBM and the first machine to win both a chess game and a chess match against a reigning world champion under regular time controls. Deep Blue succeeded because its database contained the planning strategies of 50 grandmasters and 700,000 grandmaster games, and it was capable of evaluating 200 million positions per second, obviously far faster than an average human can process. However, Deep Blue could only achieve this through the existence of those grandmasters’ planning strategies, which are only possible through human experience.9
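The ‘rapid search procedure’ at the heart of engines like Deep Blue is essentially exhaustive game-tree search. The toy sketch below is a hypothetical illustration using the simple game of Nim (players alternately take 1 to 3 sticks; whoever takes the last stick wins), not Deep Blue’s actual algorithm, but it shows the brute-force idea: try every legal move and recursively evaluate every possible reply.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def best_move(sticks):
    """Return (move, wins) for the player to act, found by trying
    every legal move and exhaustively searching every reply."""
    for take in (1, 2, 3):
        if take == sticks:
            return take, True          # taking the last stick wins outright
        if take < sticks:
            _, opponent_wins = best_move(sticks - take)
            if not opponent_wins:
                return take, True      # leave the opponent a losing position
    return 1, False                    # every continuation loses

# The search discovers that positions where the stick count is a
# multiple of 4 are lost for the player to move.
```

Deep Blue did the same kind of thing at colossal scale, evaluating 200 million chess positions per second and steering the search with knowledge distilled from human grandmaster play.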

Another example is the Jeopardy!-winning Watson, an AI computer system capable of answering questions posed in natural language. Watson is ‘about Big Data. It is about ingesting vast amounts of information on specific subjects – and allowing a user to query the data to look for patterns, assist in a diagnosis.’ Watson is currently being tested as an aid to help doctors make diagnoses more quickly and accurately, and its library is made up of 23 million medical papers in the National Library of Medicine.10 However, Watson’s diagnoses are only achievable because of the past achievements of the medical practitioners whose work makes up this library.

[IBM Watson image: Kurzweil Accelerating Intelligence]
Deep Blue and Watson are examples of systems that simply accumulate understanding already achieved by humans and use brute force to run through it rapidly and comprehensively to discover particular things about the data. There is debate over whether machines are close to reaching human intelligence and human consciousness, and whether they will eventually be able to have human experiences. I have my doubts, as ‘today’s AI produces the semblance of intelligence through brute number-crunching force, without any great interest in approximating how minds equip humans with autonomy, interests and desires. Computers do not yet have anything approaching the wide, fluid ability to infer, judge and decide that is associated with intelligence in the conventional human sense.’11 Capturing the nature of human intelligence is a colossal problem. It’s not enough to create a neural network that simulates the brain and hope that some sort of intelligent behavior might emerge. As Gary Marcus nicely puts it,

“Biology isn’t elegant the way physics appears to be. The living world is bursting with variety and unpredictable complexity, because biology is the product of historical accidents, with species solving problems based on happenstance that leads them down one evolutionary road rather than another.”12

Human behavior and intelligence, and how they are derived, are very complex, formed from both the evolutionary unpredictability of nature and from nurture. It is difficult to believe that systems that use brute force to crunch large data sets embody anything like this biological complexity.

I agree with Noam Chomsky, in his talk on the Singularity podcast, that there are far more pressing problems in the world than the coming of the singularity. Chomsky states: “Ray’s (Ray Kurzweil’s) technological singularity is science fiction. I don’t see any particular reason to believe it. We should be more worried about the end of our species. We are very busy dedicating ourselves to destroying the possibility of decent survival. I think we should be worried about that.”13 Chomsky claims we should be more worried about the climate destruction we are carrying out. Musk stresses this too, describing our oil-based, carbon-intensive economy as a “crazy chemical experiment on the atmosphere” with likely catastrophic consequences:14 “If we don’t find a solution to burning oil for transport, when we then run out of oil, the economy will collapse and society will come to an end.”15 As we become more technologically intensive there will be an increased need for energy. Our current sources of energy are unsustainable, so there should be more focus on finding and using renewable sources of energy.

Another pressing issue is that the current economic climate is generating a dystopia of socially unsustainable inequality. Before we reach the stage of true deep artificial intelligence, there will come a point where we can automate jobs that are highly cognitive and non-routine. As Martin Ford declares in his book ‘Rise of the Robots’, white-collar jobs are at risk, and he argues that a bleak jobless future awaits if we don’t take action. Cognitive computing and genetic programming will soon do to even the most dynamic white-collar workers what robots are doing to men and women on the assembly line. And it gets worse, according to Ford: “Indeed, because knowledge-based jobs can be automated using only software, these positions may, in many cases, prove to be more vulnerable than lower-skill jobs that involve physical manipulation.”16 This automation will displace significant numbers of both blue- and white-collar workers and will lead to income inequality and a breakdown in social order. Nobody will be immune to this labour reality, and hence there should be initiatives to prepare for and tackle it. One suggestion is to adopt a universal basic income, to which everybody in society would have a right.

While I hold a similarly skeptical view to Chomsky about the singularity, and a much more optimistic view of AI compared to the techno elites, I do believe that safeguard measures should be taken. As Anthony Wing Kosner rightly states, we should be more “concerned about the motivations of rogue humans who may misuse these technologies than about the rogue capabilities of the products of these technologies themselves.”17 AI can take on a life of its own and can do great harm if not set up in a just manner.

Atkins, David. "If Bill Gates, Elon Musk and Stephen Hawking Are Worried, Shouldn’t You Be?" The Washington Monthly. The Washington Monthly, 8 Feb. 2015. Web. 26 May 2015.

Casey, Michael. "Maybe Artificial Intelligence Won't Destroy Us after All." CBSNews. CBS Interactive, 14 May 2015. Web. 26 May 2015.

"Rise of the Machines." The Economist. The Economist Newspaper, 9 May 2015. Web. 26 May 2015.

Cellan-Jones, Rory. "Stephen Hawking Warns Artificial Intelligence Could End Mankind." BBC News. BBC News, 2 Dec. 2014. Web. 26 May 2015.

Kohli, Sonali. "Bill Gates Joins Elon Musk and Stephen Hawking in Saying Artificial Intelligence Is Scary." Quartz. Quartz, 29 Jan. 2015. Web. 26 May 2015.

Zolfagharifard, Ellie. "Newscron." Newscron. Daily Mail, 13 Jan. 2015. Web. 26 May 2015.

Rifkin, Jeremy. The Empathic Civilization: The Race to Global Consciousness in a World in Crisis. New York: J.P. Tarcher/Penguin, 2009. Print.

Dawkins, Richard. The Selfish Gene: 30th Anniversary Edition. Oxford: Oxford UP, 2006. Print.

"Deep Blue (chess computer)." Wikipedia. Wikimedia Foundation, 4 Mar. 2015. Web. 26 May 2015.

Pisani, Bob. "3 Years after 'Jeopardy,' IBM's Watson Gets Serious." CNBC. CNBC, 8 Oct. 2014. Web. 26 May 2015.

"The Dawn of Artificial Intelligence." The Economist. The Economist Newspaper, 9 May 2015. Web. 26 May 2015.

Marcus, Gary. "The Trouble With Brain Science." The New York Times. The New York Times, 11 July 2014. Web. 26 May 2015.

Danaylov, Nikola. "Noam Chomsky on Singularity 1 on 1: The Singularity Is Science Fiction!" Singularity Weblog. Singularity Weblog, 4 Oct. 2013. Web. 26 May 2015.

Van Diggelen, Alison. "Elon Musk: On Obama, Climate Change & Government Regulation (Transcript)." Fresh Dialogues. Fresh Dialogues, 11 Feb. 2013. Web. 26 May 2015.

Koebler, Jason. "Elon Musk: Burning Fossil Fuels Is the 'Dumbest Experiment in History, By Far.'" Motherboard. Motherboard, 26 Mar. 2015. Web. 26 May 2015.

Ford, Martin. Rise of the Robots: Technology and the Threat of a Jobless Future. New York, 2015. Print.

Wing Kosner, Anthony. "What Really Scares Tech Leaders About Artificial Intelligence?" Forbes. Forbes Magazine, 20 Apr. 2015. Web. 26 May 2015.

Moore, Trent. "Check out This New Ex Machina Trailer to Go along with the Expanded Release." Blastr. Blastr, 22 Apr. 2015. Web. 26 May 2015.

Angelica, Amara D. "How Watson Works: A Conversation with Eric Brown, IBM Research Manager." KurzweilAI. The Kurzweil Accelerating Intelligence Newsletter, 31 Jan. 2011. Web. 26 May 2015.


© Christine Calo 2016. Please do not reproduce without the expressed consent of Christine Calo.