Algorithm Trustworthiness Is The 21st Century’s Big Challenge

Last week, Google CEO Sundar Pichai spoke at a hearing of the House Judiciary Committee in the United States. Ostensibly, one of the main reasons Pichai was asked to attend was to answer questions about security issues with the various platforms Google operates. As you'd expect, the hearing turned into a comedy writer's dream, with enough fodder to fill late-night monologues for countless presenters. But buried in the dross was a very important question about the trustworthiness of algorithms.

During one of the more laughable exchanges in Pichai's hearing, the Republican representative from Iowa, Steve King, asked Pichai to hand over the names of the more than 1,000 employees who work on the search engine's algorithm so they could be examined for "a built-in bias".

Because nothing screams fairness like a good, old-fashioned witch hunt.

Back in high school, I studied a play called "The Crucible". Written by Arthur Miller, it was an account of the Salem witch trials of the 1690s. But, in reality, it was a statement against McCarthyism, under which anyone remotely suspected of having Communist links was ostracised and persecuted. It now seems, in King's mind, that the creators of algorithms need to be investigated for their bias.

According to King: "There is a very strong conviction on this side of the aisle that the algorithms are written with a bias against conservatives. What we don't know are who are these thousand people and we don't know what their social media looks like".

In order for machine learning algorithms to be useful, they need to be fair and transparent – that’s something I wrote about a couple of months ago. But the problem is that Google’s algorithm is a commercial asset. And Google’s dominance in search means they are a virtual monopoly.

[Referenced: "Machine Learning Can't Serve Society Without Transparency" – https://www.lifehacker.com.au/2018/10/machine-learning-fairness/]

Politicians, for the most part, don't like losing control of their own messaging. In the United States, that's reflected in the feeling among Republicans that when someone searches for information about repealing the Affordable Care Act, most of the results are negative. But that's the same phenomenon, although they can't see it, as a search for "idiot" linking strongly to President Trump.

While the political issues here are interesting, they aren't the biggest problem we face. As we rely more and more on algorithms to assist us, we become increasingly vulnerable to errors in that software. We can all have a good laugh at some creative Google bombing, but the problems are far more significant when we rely on algorithms to "fairly" distribute funds. Or rank students for university placements. Or dispense healthcare. Or determine prison sentences.

All of those are use cases where machine learning algorithms are already being applied. And, in many cases, the software is proprietary, with the people affected having no visibility into how decisions are made and little recourse if they disagree. While it's true that all algorithms are built on a set of assumptions, there's a more dangerous assumption we make ourselves: that the algorithm can be trusted.

As technology becomes more embedded in our lives, we need a new type of technologist. We need people who can test the ethics of a system to ensure it meets the standards of fairness and transparency society expects, and who can somehow balance that with the commercial needs of the people creating those algorithms.
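To make that a little more concrete, here's a minimal sketch of the kind of check such an auditor might run: comparing a model's rate of positive decisions across demographic groups, a simple demographic-parity test. The data, group labels and the 80 per cent threshold are hypothetical and a real audit would look at far more than one metric, but it shows that basic fairness testing only needs the system's decisions, not access to the proprietary model itself.

```python
# A minimal, hypothetical fairness check: compare a model's positive-decision
# rates across groups (demographic parity / the "80 per cent rule").
# The decisions and group labels below are made-up illustrative data.

from collections import defaultdict

def selection_rates(decisions, groups):
    """Return the share of positive decisions for each group."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for decision, group in zip(decisions, groups):
        totals[group] += 1
        if decision:
            positives[group] += 1
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest selection rate to the highest (1.0 means parity)."""
    return min(rates.values()) / max(rates.values())

# Hypothetical audit data: True means, say, a loan was approved,
# with a group label recorded for each applicant.
decisions = [True, False, True, True, False, True, False, False, True, False]
groups    = ["A",  "A",   "A",  "B",  "B",   "B",  "B",   "A",   "A",  "B"]

rates = selection_rates(decisions, groups)
ratio = disparate_impact_ratio(rates)

print("Selection rates by group:", rates)
print("Disparate impact ratio:", round(ratio, 2))
if ratio < 0.8:  # the commonly cited "80 per cent rule" threshold
    print("Potential adverse impact - this system warrants closer review.")
```

Nothing about that sketch is specific to Google's algorithm; the point is simply that fairness can be interrogated from the outside, decision by decision, which is exactly the kind of scrutiny an ethics-focused technologist could bring to bear.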

As Uncle Ben said, with great power comes great responsibility. But I wonder whether, if Uncle Ben had had a few more breaths, he might also have said something about accountability.

