Why Algorithms Always End Up Beating Humans


The world is certainly not short of pundits claiming to have a grasp on where the economy is heading or what the future holds for Ukraine. But history reminds us how poor humans are at making predictions in complex situations. Could a fully automated algorithm beat the predictions of these pundits? Not yet. But history also has a way of vindicating the power of algorithms over human judgement.


The most famous early example of this was the 1997 defeat of world chess champion Garry Kasparov by IBM’s “Deep Blue”. The decisive sixth game lasted only 19 moves, with the deciding factor being human error. By reversing his opening move order, Kasparov allowed Deep Blue to make a tactical sacrifice, resulting in the first time a computer defeated a human champion under classical time controls.

For many chess enthusiasts, the loss was a Copernican blow to the human ego. But for others, computers represented a novel tool to aid their mastery of the game, delivering high-level analysis in real time. Previously, grandmasters prepared for important games by looking over transcripts of older games and studying thick volumes of openings and endgames. But with computers, professional chess players could quickly brute-force billions of tactical combinations and devote more time focusing on deeper strategic questions.

A similar dynamic emerged in the world of insurance five years earlier. Before Hurricane Andrew devastated much of Florida in 1992, insurance underwriters typically estimated the size of future losses only by examining historical losses. After Andrew, it became clear that past data is sometimes a poor indicator of the future. Computational methods offered an alternative — rather than exclusively looking at past damage, catastrophes could be simulated in a digital world to predict losses.

The catastrophe models work in a similar manner to the chess algorithms. Instead of predicting millions of move combinations, the models predict millions of possible hurricane trajectories. This “Monte Carlo” approach can be used to examine other catastrophes as well — earthquakes, pandemics, floods, and even terrorist attacks. By looking at the hypothetical distribution of future catastrophes, insurance companies can set aside enough capital to confidently meet future claims.
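A toy version of this approach can be sketched in a few lines. The code below is a hypothetical frequency–severity Monte Carlo model, not any insurer's actual method: it draws a random number of hurricanes per simulated year and a random loss for each, then reads a high quantile off the simulated distribution to estimate how much capital would cover a rare bad year. All parameter values (`mean_events`, `mean_severity`, the 99.5% quantile) are illustrative assumptions.

```python
import random

def simulate_annual_loss(rng, mean_events=0.6, mean_severity=5.0):
    """Simulate one year's hurricane losses (in $bn) with a toy
    frequency-severity model: Poisson-style event arrivals via
    exponential gaps, exponential loss per event.
    Parameters are illustrative, not calibrated to real data."""
    losses = 0.0
    t = rng.expovariate(mean_events)  # time until the first event
    while t < 1.0:                    # count events falling within one year
        losses += rng.expovariate(1.0 / mean_severity)  # mean loss: $5bn
        t += rng.expovariate(mean_events)
    return losses

def required_capital(n_trials=100_000, quantile=0.995, seed=42):
    """Monte Carlo estimate of the capital needed to cover the
    1-in-200-year (99.5th percentile) annual loss."""
    rng = random.Random(seed)  # fixed seed keeps the estimate reproducible
    annual = sorted(simulate_annual_loss(rng) for _ in range(n_trials))
    return annual[int(quantile * n_trials)]

print(f"Capital to cover a 1-in-200-year loss: ${required_capital():.1f}bn")
```

Swapping the loss-drawing function lets the same skeleton examine other perils (earthquake, flood) without changing the quantile machinery, which is why the Monte Carlo framing generalises so readily.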

Pooling resources

The strength of these models comes not from their ability to replace underwriters, but rather their ability to complement human decision-making. After getting an initial estimate, human experts typically tweak the model to account for their unique experience, in the same way a regular might correct for the dents and creases in the local pub’s billiards table (as Nate Silver suggests). The merits of humans and algorithms working together were not lost on Kasparov either. After the loss to Deep Blue, Kasparov himself helped popularise Centaur Chess, a variant where cyborg teams of one human and one computer compete against each other.

As Kasparov described at the Oxford Martin School, pairing a human’s strategic insight with a computer’s tactical prowess enables these teams to beat even the strongest chess supercomputers — a feat no human has been able to accomplish unaided since 2007. But it is uncertain whether this trend will hold. As technology develops exponentially, we are approaching a point where a human player might end up being more of a liability than an asset to a cyborg team.

The implications for insurance are even more uncertain. In areas with an abundance of data, such as auto or life insurance, there is a real possibility that algorithms may put humans in the back seat. But when it comes to estimating rare events, we’re still a long way off from entirely replacing groups of human experts.

Figuring out how to maximise the benefits of computer models and integrate them into human systems will be a key question for the years to come. Only one pattern seems certain: algorithms are getting exceptionally good at forecasting the future, whether the domain is chess, weather, or even geo-political conundrums. Come 2025, we might hear about the next crisis before it occurs.

Andrew Snyder-Beattie is from the University of Oxford. He does not work for, consult to, own shares in or receive funding from any company or organisation that would benefit from this article, and has no relevant affiliations.

This article was originally published on The Conversation. Read the original article.


  • Still like to see somebody make an algorithm/AI that can play StarCraft better than a pro-level player without cheating.

  • Algorithms only beat humans in rigid and well-defined processes.
    Computers are good at processing data, but they cannot make any sense of what the data means; they have no context.
    If you look at computer vision and other areas of AI, you will see the gigantic problems faced and why I don’t put much faith in empty claims like this, especially from an academic who has no programming background.

    A computer beating someone in chess is trivial, the game’s rules are simple, logical and clearly defined, not remotely the same as looking at economic or weather data or even identifying things in a photograph.

    You can only predict the weather reliably when you can identify all the dynamics of the system.
    You could use computers as a tool to try and get a better understanding of those dynamics through massive data crunching, but it does not mean you actually understand how those dynamics operate or interact together, so you’ll end up with a bunch of assumptions (some may be accurate, some overly simplified, others erroneous) about the system that lead to less than useful forecasts or conclusions.

    It sure says something about computer generated forecasts when a Farmer’s Almanac from 200 years ago is more accurate than climate models of today, or Google’s recent failure to identify flu trends from their data which is a vastly simpler problem.
    It is no surprise that the IPCC global warming models have been 96% WRONG, if you don’t understand the system, having a supercomputer behind you doesn’t change anything.

    For this reason I am wary of computer generated forecasts and would never accept computers as decision makers, it would be catastrophic and likely have real consequences.

    In the 1950s, the Distant Early Warning Line radar system mistook geese for Soviet bomber attacks. As warning systems became more sophisticated, variants of the episode inevitably followed. In 1960, meteor showers and lunar radar reflections were mistaken for ICBM launches; luckily, the computers were not in charge of launching ICBMs back.
