Why We Should Stop Being Afraid Of Computer Intelligence

When the IBM computer Deep Blue defeated world chess champion Garry Kasparov in 1997, it seemed to many that we had crossed a threshold. By beating us at our (arguably) most complex intellectual task, a machine had at last defeated man.

Kasparov’s defeat prompted anguish from those fearful of the colonising power of the machine world. Newspapers framed the battle as a contest pitting humans, with all of our cleverness and weaknesses, against impersonal machines robotically pursuing their objective.

For others Deep Blue’s victory was an inspiration, a harbinger of humankind’s transition to a technological utopia. They foresee an imminent “technological singularity” in which computers pass a critical point to attain a super intelligence beyond human capabilities. Then technology will set its own course according to its own intentions.

Futurist Ray Kurzweil is the guru of an informal cult of those who believe the singularity is near and will take us to a kind of paradise, an afterlife in which the barrier separating humans from machines dissolves and we transcend the limits of our physical bodies and unenhanced brains.

In truth, though, a computer did not beat the Russian champion. With help from several grandmasters, a couple of programmers defeated Kasparov. They worked out how to program a computer with such exactitude it could perform enough calculations of the right sort to win. That’s all.

If Deep Blue had defeated not Garry Kasparov but, say, a computer called Deep Red, we would rightly conclude that Deep Blue’s programmers were cleverer than Deep Red’s. So why did Kasparov’s loss cause so many to rend their shirts and dread the day computers would take over the world, reversing the ascendancy humans have always had over machines?

Deidealising technology

Looked at sociologically, I think the answer is that we have turned the computer into a fetish – that is, an inanimate object worshipped for its apparent magical powers. We have been persuaded computers are vastly more powerful than we are and are capable of breaking free of our control. Neither of these is true.

Take the first. When we get up in the morning we have breakfast, shower, dress and leave our bricks-and-mortar homes for work. If we drive instead of taking the bus, our cars have computers which make them operate more effectively and give them more functions; but 30 years ago our cars got us to work just as well.

We might stop at the ATM on the way to withdraw money, which is convenient. But is it so different from having a teller pass notes over the counter?

At work we send some emails. It’s a quick way to communicate, but the world worked quite well when we communicated by post, fax and telephone. We play games on our laptops, watch movies and shop, but it’s not as though we lacked entertainment opportunities 30 years ago.

So what is really different in our lives as a result of computers? When we stop and think about all of the basic things we do each day the answer is “not much”. Computers have had nothing like the impact on daily life of the industrial revolution and urbanisation, yet we tell ourselves we live in a digital age, one defined by computers.

We have fallen into a world of hype created by those whose lives are bound up in building, operating and selling computers, and boosted by breathless reporters, futurologists on the make and an IT industry on which we rely too heavily.

Undoubtedly this is due in part to an infatuation with our own technological prowess, our limitless ability to gaze in awe at our own inventiveness and reach. Yet at the same time we are frightened of technological overreach, of ceding to machines things that are essential to our humanity, and are afraid that the machines themselves will take on a life of their own.

Why the angst?

It’s an anxiety forged in the Industrial Revolution when processes of production, previously performed by humans and animal power, were mechanised. It has inspired numberless authors, from Mary Shelley to H.G. Wells, and filmmakers, from James Cameron to Stanley Kubrick. The computer age has vindicated and turbo-charged these fears, which helps explain the panic over the Y2K bug, which was fed by newspaper editors who understood and stoked our primitive fear of alien powers.

Why do we constitute computers as an independent and alien power threatening to deprive us of our autonomy? After all, a computer can no more break free from its human programmers than an abacus can take over the bazaar. To imagine otherwise is a projection, like Dorothy’s mental picture of the Wizard of Oz before she drew back the curtain to find a little old man pulling the strings.

When we think about it, it’s hard not to conclude that in this secular world we have made computers occupy some of the space vacated by God. Psychologists have for decades measured a personality trait known as “locus of control”: the extent to which we feel our lives are controlled by ourselves rather than by outside forces.

Deep Blue is a piece of equipment and could defeat Kasparov only because chess is a game which lends itself to precise algorithms. Even so, it is a machine that works through brute computing power rather than the creative intelligence of the human mind.

A computer can never be intelligent, or autonomous. Besides, playing chess is not what makes us human.
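The “precise algorithms” point can be made concrete. At its core, a chess engine’s decision rule is an exhaustive game-tree search. The sketch below is illustrative only, run over a hand-built toy tree of scores rather than real chess positions, but it shows the kind of mechanical minimax calculation involved:

```python
# A minimal sketch of the brute-force search a chess engine relies on:
# plain minimax over a game tree. The toy "game" here is a hand-built
# tree of terminal scores; a real engine would generate legal moves
# and evaluate millions of positions instead.

def minimax(node, maximising):
    """Return the best achievable score from this node."""
    if isinstance(node, int):      # leaf: a terminal evaluation
        return node
    scores = [minimax(child, not maximising) for child in node]
    return max(scores) if maximising else min(scores)

# A tiny two-ply game tree: the machine picks the branch whose
# worst-case reply is still best for it.
tree = [[3, 5], [2, 9], [0, 1]]
print(minimax(tree, True))   # → 3  (best guaranteed outcome)
```

Everything Deep Blue added on top, opening books, hand-tuned evaluation functions, specialised hardware, is refinement of this same exhaustive enumeration: exact rules, mechanically applied, which is the point being made above.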

Clive Hamilton is Vice Chancellor’s Chair, Centre For Applied Philosophy & Public Ethics (CAPPE) at Charles Sturt University. He does not work for, consult to, own shares in or receive funding from any company or organisation that would benefit from this article, and has no relevant affiliations.
This article was originally published at The Conversation. Read the original article.


  • So the very learned Clive Hamilton, a regular harbinger of doom on all things climate, brings up the Y2K bug to highlight pointless scaremongering? And computers taking the place of God in secular societies? From the author who wrote Requiem for a Species, an ode to Gaia, who, it could be argued, has taken the place of God in secular societies?

    I think he’s blown an irony valve on this one.

  • I get this all the time at work, writing software:
    Them: “The software *should* be able to make these decisions. I mean, humans are doing it right now…”
    Me: “Ok, well, what logic does the human go through to decide? What makes them decide one way, and not another?”
    Them: “Isn’t it obvious?”
    … insert long discussion on trying to define the logic…
    Me: “So there’s no definitive answer, it’s just a gut call”
    Them: “But based on their experience and knowledge, though.”
    Me: “Which you can’t specify but you’d like a calculator to do.”
    Them: “Uh, yeah.”

    For some reason, calling a computer program a “calculator” seems to put people back down into the realms of possibility. And at its essence, that’s all it is: a logic calculator with pre-defined algorithms. Until we actually get to true AI, and it can write the algorithms itself, it will still be bound by the ability of the programmer(s).

    • Well, there are software products out there that do data mining and business intelligence that can make great decisions, sometimes better than humans because they can process more data to make the decision.
      In the case you described, the customer isn’t asking for something impossible, just a multi-million-dollar system that would take years to set up and require vast amounts of accurate data fed in to make decisions from. They just aren’t clear on where the boundary lies between you coding up a simple algorithm and needing a “Deep Blue”-style solution.

        No matter how much accurate data you give the system, it will make mistakes if it has no way of making sense of it. Computers tend to suck at knowing what’s going on around them (context), and programmers cannot imagine all the scenarios a computer would have to deal with. This issue is best seen in the field of computer vision, where you can feed in HD video and yet the computer has no actual comprehension of what it’s looking at, just fluctuating colours.

        There are cases where automated decision-making is acceptable, like elevator doors, but these cases have to be simple. Actually, now that I think about it, would it be possible to trap everyone in a building just by putting large boxes in the elevator doorways at ground floor, making the elevator think people are getting in continuously and preventing it from doing anything?

        Even if we had learning computers that could actually make sense of the world around them and how it operates, how would they react in the nuclear situations below, where there is incomplete data and a decision has to be made quickly? I don’t know if it’s possible for computer intelligence to ever have ‘instinct’ or gut feelings, and those human qualities have saved us quite a few times.

        Paul Bracken, Yale University Professor of Political Science and author of The Command and Control of Nuclear Forces, describes the integration of the command and control of nuclear weapons as the pitfalls of the systematic evolution of the American and Soviet warning systems. He explains, “The result is a tightly coupled system in which a perturbation in one part can be amplified throughout the entire system.” Bracken explores the pervasive theme of nuclear wars sparked by technical accidents referencing events from every decade since the 1950s.

        In the 1950s, a flock of Canadian geese activated the Distant Early Warning Line radar system. The birds were mistakenly interpreted as a Soviet bomber attack. In the 1960s, meteor showers and lunar radar reflections triggered the new Ballistic Missile Early Warning System radar, indicating to the North American Aerospace Defense Command (NORAD) that the Soviet Union had initiated a missile attack. In 1979, an operator’s mistake resulted in the transmission of an erroneous message that the U.S. was under nuclear attack. When the information was sent to NORAD fighter bases, ten fighters were immediately scrambled from three different bases in the U.S. and Canada. The following year, a malfunctioning chip in a minicomputer caused a similar situation. However, this time a hundred B-52 bombers were prepared for takeoff along with the President’s emergency aircraft.

        In January 1995, a four-stage Norwegian-U.S. joint research rocket was launched with the intention of gathering information about the Northern Lights; however, an error put U.S. and Russian diplomacy to the test once again. In the middle of the night, Russian President Boris Yeltsin was awoken, told that the U.S. had launched a nuclear missile towards Russia, and that he had mere minutes to decide whether to launch Russia’s own nukes against the U.S. Luckily, President Yeltsin had the wherewithal – some say gall – to question his military commanders’ recommendations of retaliation, and forestall a nuclear war.

  • I could be wrong… I certainly don’t know everyone in the world’s opinion on this… But it’s a joke. I don’t think anyone, except perhaps some paranoid sci-fi writers, is actually afraid, or even thinks about it as more than an amusing anecdote…

  • NO NO and NO!
    Read WIRED FOR WAR by Peter Singer

    What is the point of bringing up ‘Deep Blue’ and saying “it’s ok, it wasn’t actually thinking, so it can’t take over!”
    It’s a sign of things to come when AI does reach the point where it can think on its own and break free of its programming; that day may be only 10-20 years away, and Mr Hamilton will have to eat his words.
    Perhaps he is unaware of the strides being made in robotics. It should be a concern, and the questions need to be asked now, while we can still responsibly guide the direction these technologies take.

    But whether it thinks or not isn’t even the issue, it’s how much control we give to computers, and it’s a problem right now even when we don’t have strong AI capable of making decisions on par with a human.

    We are already delegating perhaps too many decisions to computers, and people are already becoming too trusting of and reliant on them, with people following their GPSes into rivers coming to mind. The more we trust and rely on the algorithms our computers are executing, the bigger the consequences will be when they fail, because now all our infrastructure depends on them. And they will fail, because there’s just so much that can go wrong; you really need to be a programmer to see why.
    It’ll be a bit like how the Nazis were just ‘following orders’ in the operating system called government: humans with brains, and yet they acted like robots. Again, the ability to think is not the issue; it is the level of control. People in a dictatorship do not have control.

    Our behaviours are becoming less spontaneous and more routine. Just look at how everyone says ‘Happy Birthday’ on Facebook because they read an alert that was programmed to show up at a specific time. It doesn’t even have to be the actual birthday; people will still say Happy Birthday, because there is no meaning behind it.

    With standardisation and conformity we lose nuance and creativity/individuality. Statistics at this scale are only possible with modern computing, and while they will identify general trends, they miss all the exceptions, the minor details. This becomes a problem when they are used for policy.

    Most of our time is spent staring at a screen of some kind, so I’d argue we are already enslaved, and we haven’t even got true computer intelligence, androids and cyborgs walking around yet. We don’t have a robot Hitler to rule over us and personify everything that’s wrong with computing yet, but give it time and it might happen, so there is every reason to be concerned about computer intelligence.
    It is easy to see problems in retrospect, when it’s too late to do anything about them; it’s much harder to see them before they happen.

    So when we have cars that drive themselves (and they’re almost ready now), will they have the sense to drive your pregnant wife to the hospital as quickly as possible, or will they follow the algorithm that obeys the traffic laws? The problem isn’t whether the car can think or not, or whether it can find the quickest route to the hospital; it’s the control.

    Even if the car was ‘self aware’ and had ‘learnt’ all there is to know about human pregnancy, it can know that pregnancy is an urgent human thing and requires speedy decisions, but it can never truly understand the importance of pregnancy. I’m skeptical a computer or robot can ever pick up on the nuances of human behaviour or emotion, because there are limits on what it can understand no matter how smart it becomes. It’s a bit like some academics who are very knowledgeable in their fields of study and can do amazing research, but don’t know how to interact with people: a bit like knowing the words but not knowing the tune.

    Yes, a new religion is forming: blind faith in god/s is being replaced by blind faith in science/technology and government (which is actually a big machine/computer made of humans), along with new myths to orient and control society, and it will be thrown out in a few hundred or thousand years when our descendants realise how clueless we are right now.

    Blind faith in technology is not going to save us, only having our eyes open to see potential problems and make sure they don’t become real problems will do it.


  • So much information can be gathered about us (phone tracking, purchases, email content including word choice, our associations and likes, and web browsing) that computers are going to be able to predict both our movements and actions. DNA profiling could lead to personality profiling: for example, a genetic likelihood of criminality or ill health, followed by monitoring via our digital information. Then think of huge numbers of people, with huge numbers of variables, being used to verify and fine-tune those patterns and predictions.

    This means computers having an insight into us (maybe better than we understand ourselves) that results in our having less freedom and variety, starting with, for example, something basic like only seeing ads for things the computer thinks I can afford or desire.

  • I think someone is feeling threatened by the learning machines of man.
    The best part is, his article is about a computer from 1997.
    At one point our ancestors could only perform simple tasks, look at us now. And we didn’t have Steve Jobs, Bill Gates or Larry Page writing our neuron pathways…

  • There is no such thing as computer intelligence. Intelligence is not something you can ‘program’; it is an organic process consisting of both logical and illogical reasoning, association, identification, emotions, memory and many other currently inexplicable activities. It isn’t even confined to the brain; it functions throughout the body. So to imagine that we have the ability to make a machine do all these things is ridiculous. We don’t, not yet anyway, because we still don’t know enough about intelligence.

    • With neural networks that mimic the brain’s structure, thinking intelligence is possible, instead of a complicated program that pretends to be smart based on stimulus. I would argue it would arrive at slightly different conclusions than us because the configuration is different to a human brain/body. It will mimic human thinking, but it will be different, it will probably seem like someone with a mental illness.

      I agree that thinking is not confined to the brain (or the body!); people don’t realise that their gut is practically a second brain, with 100 million brain cells (equivalent to a cat’s brain). There will be limitations in just mimicking the brain until we understand where consciousness actually comes from: you can simulate the brain’s chemical processes and neurons (the hardware) in a computer program, but that won’t suddenly make it a conscious being; consciousness doesn’t automatically appear just because you have sensory feedback. It’s a mystery in biology, psychology and philosophy.

      The separation between the cold mind and the warm heart is elusive. Only the mystics tried to balance the mind and the heart, and there are no more mystics.
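The neural-network idea raised in the comments above can be illustrated with a deliberately tiny sketch: a single artificial “neuron” (a classic perceptron) trained on the logical AND function. This is an illustrative toy using only plain Python, and it falls many orders of magnitude short of a brain, which is rather the commenter’s point about the limits of mimicry:

```python
# Toy illustration of the neural-network idea discussed above:
# a single artificial "neuron" (a perceptron) learning logical AND.
# Brain-inspired networks stack millions of such units; this sketch
# just shows the mechanism of weighted inputs plus error-driven learning.

def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]   # input weights
    b = 0.0          # bias
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0]*x1 + w[1]*x2 + b > 0 else 0
            err = target - out            # error drives the update
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b    += lr * err
    return w, b

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)
for (x1, x2), target in AND:
    assert (1 if w[0]*x1 + w[1]*x2 + b > 0 else 0) == target
```

The mechanism is just weighted sums plus an error-driven update rule; what separates this from modern networks is scale, architecture and training data rather than any qualitatively different ingredient.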

  • Hamilton’s statement in closing, “A computer can never be intelligent, or autonomous”, is a careless, open-ended prediction that betrays his fundamentally neo-Luddite position and robs his weak, uninformed preceding arguments of any pretension of contributing to the important and necessary dialogue about where our exponentially accelerating technological capability is taking our civilization and our definition of ourselves as human.

    Hamilton embarrasses himself by opening with the laughable notion that chess is “our most complex intellectual task”. He compounds this by denigrating as the “guru of an informal cult” an eminent, highly recognized and decorated intellectual who has made numerous contributions to the community, is an adviser to the national defence force of the US, and is Director of Engineering at Google, among many other accolades.

    Hamilton argues that Deep Blue did not defeat Garry Kasparov at chess, but that it was the computer engineers and programmers who did. The point is not to attribute credit; the point is to recognize functional capability. The Deep Blue exercise demonstrated a certain machine capability that we should heed as a signpost, extrapolate to all the other mental and emotional processes we exhibit, and give our utmost contemplation to the consequences of.

    Arguing that there is essentially no difference between the world of 30 years ago and today is perhaps wishful thinking on Hamilton’s part. He points to some examples, such as personal transport, cash withdrawal and mail delivery, but he ignores others, for example mobile phones and the cloud. Technology has radically transformed the world of 30 years ago. Moreover, many of our mundane, age-old social processes, such as transport and communications, are managed by technology that enables us to enjoy these services as we did 30 years ago, but at a volume and with a reliability that would not be possible today without technology behind the scenes.

    “Why the angst?” Anybody who looks to the future with open eyes can see there are serious issues to be contemplated. Burying our heads in the sand or holding on to feeble notions of our inalienable human superiority is for the fearful and the foolish.

  • Between now and the robots taking over – remember this:

    All technological advancements are developed by corporations to serve their agendas, not ours.

    If you’re worried about what the robots will do, just wait and see what a greedy corp with a god-like intelligence available to do its bidding will do. Remember, robots have physical limitations, but the largest portion of our lives depends on systems and infrastructure that depend on computers. This is where we’re at our weakest, and where AI is at its strongest.

    A corp may unleash a sentient intelligence on the world to serve its purposes with complete impunity, thanks to digital stealth (track covering) and misinformation. The complexity of this influence may be so broad, subtle and untraceable that even the corp may never be fully aware of the AI’s controlling influence. The possibilities are astounding when you understand the capacity for technology to subtly and marginally affect real-world circumstances across a broad base, which then trickles up statistically into market value. Given the sheer volume of data available, the reality of predictive market and personal data analysis, and the potential capacity of a sentient technological being, we have to concern ourselves with the intent of those who produce and then release such an entity into the world.

  • At least one of this dude’s premises is outdated, and wrong.

    There are several chess programs that can beat any IGM alive, even Magnus Carlsen, who is higher rated than Kasparov was. Some of these chess programs are rated some 400 points higher than any human. The computers have to give pawn odds or greater just so that human international grandmasters have any sort of chance.

    The IM who created Rybka wouldn’t stand a chance against the world’s top players, and the world’s top players don’t stand a chance against his computer program.

    The current strongest chess program is rated over 3300, some 450 points above Magnus Carlsen. So even the strongest human player ever to live wouldn’t stand a chance. So clearly, it’s not as though the computer programs are limited to the same level of ability as humans. They exceed our abilities, greatly.

    Again, it’s not that a group of international grandmaster peers got together and their combined chess-playing skills, transferred into a computer, are able to outdo the world champion. No: the chess programs’ abilities exceed those of every human who ever lived, and by a wide margin.

    “Deep Blue is a piece of equipment and could defeat Kasparov only because chess is a game which lends itself to precise algorithms. ”

    I guess the author never heard of Watson.
