What Neural Networks, Artificial Intelligence And Machine Learning Actually Do In Your Apps

When an app claims to be powered by “artificial intelligence” it feels like you’re in the future. What does that really mean, though? We’re taking a look at what buzzwords like AI, machine learning and neural networks really mean and whether they actually help improve your apps.

Just recently, Google and Microsoft both added neural network learning to their translation apps. Google said it’s using machine learning to suggest playlists. Todoist says it’s using AI to suggest when you should finish a task. Any.do claims its AI-powered bot can do some tasks for you. All that’s just from last week. Some of it is marketing fluff to make new features sound impressive, but sometimes the changes are legitimately useful. “Artificial intelligence”, “machine learning” and “neural networks” all describe ways for computers to do more advanced tasks and learn from their environment. While you may hear them used interchangeably by app developers, they can be very different in practice.

Neural Networks Analyse Complex Data By Simulating the Human Brain

Artificial neural networks (ANNs, or simply “neural networks” for short) are a specific type of learning model that loosely emulates the way neurons and synapses work in your brain. Traditional computing uses a series of explicit logic statements to perform a task. Neural networks, on the other hand, use a network of nodes (which act like neurons) and edges (which act like synapses) to process data: inputs are fed in at one end and a set of outputs is generated at the other.
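
To make that concrete, here’s a minimal sketch of the structure in Python. Everything in it is invented for illustration (the three inputs, the hand-picked edge weights, the tiny size); it’s not how any of the apps mentioned here are actually built, just the general shape of data flowing along edges between nodes.

```python
# A toy network invented for illustration: 3 inputs -> 2 hidden nodes -> 1 output.
# In a real system there would be millions of nodes and the weights on each
# edge would be learned, not hand-picked like these.
import math

def sigmoid(x):
    """Squash a node's weighted sum into the 0..1 range, a bit like a neuron 'firing'."""
    return 1 / (1 + math.exp(-x))

hidden_weights = [
    [0.2, -0.4, 0.7],   # edges from the 3 inputs into hidden node 0
    [-0.5, 0.9, 0.1],   # edges from the 3 inputs into hidden node 1
]
output_weights = [0.6, -0.3]  # edges from the 2 hidden nodes to the output node

def forward(inputs):
    """Run an input through the network, layer by layer, and return the output."""
    hidden = [sigmoid(sum(w * x for w, x in zip(edge_weights, inputs)))
              for edge_weights in hidden_weights]
    return sigmoid(sum(w * h for w, h in zip(output_weights, hidden)))

print(round(forward([1.0, 0.5, -0.2]), 3))  # one number comes out the other end
```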

That output is then compared to known data. For example, say you want to train a computer to recognise pictures of dogs. You’d run millions of labelled pictures through the network to see which images it decided looked like dogs. A human would then confirm which of those images actually are dogs, and the system favours the pathways through the neural network that led to the correct answers. Over millions of iterations, the network gradually improves the accuracy of its results.
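
Here’s an equally stripped-down sketch of that feedback loop, again purely illustrative: a single artificial neuron is shown some labelled examples (the “image features” and labels are made up), its guesses are compared with the correct answers, and the weights on its incoming edges are nudged towards whatever would have produced the right answer.

```python
# Purely illustrative: one neuron learns to label made-up "image features"
# (say, fur score, ear shape, background clutter) as dog (1) or not dog (0).
import math
import random

examples = [                # (invented features, known correct label)
    ([0.9, 0.8, 0.4], 1),   # dog
    ([0.8, 0.7, 0.5], 1),   # dog
    ([0.1, 0.3, 0.9], 0),   # not a dog
    ([0.2, 0.1, 0.8], 0),   # not a dog
]

weights = [random.uniform(-1, 1) for _ in range(3)]
bias = 0.0
learning_rate = 0.5

def predict(features):
    """Weighted sum of the inputs, squashed into a 0..1 'how dog-like is this?' score."""
    total = sum(w * x for w, x in zip(weights, features)) + bias
    return 1 / (1 + math.exp(-total))

# Training: compare each guess with the known answer and nudge the weights
# (the "synapses") towards whatever would have produced the correct answer.
for _ in range(1000):
    for features, label in examples:
        error = label - predict(features)
        for i, x in enumerate(features):
            weights[i] += learning_rate * error * x
        bias += learning_rate * error

print(round(predict([0.85, 0.75, 0.45]), 2))  # close to 1: probably a dog
print(round(predict([0.15, 0.20, 0.85]), 2))  # close to 0: probably not
```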

To see how this works in action, you can try out Google’s Quick, Draw! experiment. In this case, Google is training a network to recognise doodles. It compares the doodle you draw to examples drawn by other people; the network is told what those doodles are and is then trained to recognise future doodles based on what the past ones look like. Even if your drawing skills suck (like mine do), the network is pretty good at recognising basic shapes like submarines, house plants and ducks.

Neural networks aren’t the right solution for everything, but they excel at dealing with complex data. Google and Microsoft using neural networks to power their translation apps is legitimately exciting because translating languages is hard. We’ve all seen broken translations, but neural network learning lets the system learn from correct translations and get better over time. We’ve seen a similar thing happen with voice transcription: after Google introduced neural network learning to Google Voice, transcription errors dropped by 49 per cent. You may not notice it right away and it won’t be perfect, but this type of learning genuinely makes complex data analysis better, which can lead to more natural-feeling features in your apps.

Machine Learning Teaches Computers to Improve With Practice

Machine learning is a broad term that encompasses anything where you teach a machine to improve at a task on its own. More specifically, it refers to any system where a machine’s performance at completing a task gets better solely through more experience performing that task. Neural networks are an example of machine learning, but they are not the only way a machine can learn.

For example, one alternative method of machine learning is called reinforcement learning. In this method, a computer performs a task and is then graded on the result. A video from Android Authority uses a game of chess as an example: a computer plays a complete game of chess and either wins or loses. If it wins, it assigns a winning value to the series of moves it used during that game. After playing millions of games, the system can determine which moves are most likely to lead to a win based on the results of those games.
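
As a rough sketch of that idea (not a real chess engine: the openings and their hidden win rates below are invented, and the program only ever sees win or lose at the end of each game), the loop mostly replays whichever move has won most often so far, occasionally tries something else, and updates its scores from the results.

```python
# A toy version of "learn from nothing but the final result".
import random

openings = ["queen's gambit", "king's pawn", "weird rook shuffle"]
true_win_rate = {               # hidden from the learner; invented numbers
    "queen's gambit": 0.55,
    "king's pawn": 0.50,
    "weird rook shuffle": 0.30,
}

wins = {move: 0 for move in openings}
plays = {move: 0 for move in openings}

def estimated_value(move):
    """Fraction of games won so far with this move (0.5 if never tried)."""
    return wins[move] / plays[move] if plays[move] else 0.5

for game in range(100_000):
    # Mostly pick the move that has worked best, but sometimes explore others.
    if random.random() < 0.1:
        move = random.choice(openings)
    else:
        move = max(openings, key=estimated_value)

    # Play out the "game": here just a coin flip weighted by the hidden rate.
    won = random.random() < true_win_rate[move]

    # The only feedback is the final result; credit the move accordingly.
    plays[move] += 1
    wins[move] += int(won)

for move in openings:
    print(move, round(estimated_value(move), 3), "over", plays[move], "games")
```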

While neural networks are good at things like pattern recognition in images, other types of machine learning may be more useful for different tasks, like determining what kind of music you like. To wit, Google says its music app will find you the music you want when you want it. It does this by selecting playlists for you based on your past behaviour. If you ignore its suggestions, that would (presumably) be labelled as a failure. However, if you choose one of the suggestions, that’s labelled as a success, which reinforces the process that led to the suggestion.
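
Here’s a deliberately over-simplified sketch of that kind of feedback loop. It isn’t Google’s actual algorithm, just the general shape of “reinforce what the user accepts, back off what they ignore”, with made-up genres and scores.

```python
# Each genre gets a score; picking a suggested playlist reinforces its genre,
# ignoring the suggestion weakens it slightly. All numbers are invented.
genres = {"jazz": 1.0, "metal": 1.0, "synthwave": 1.0}

def suggest():
    """Suggest the genre with the highest score so far."""
    return max(genres, key=genres.get)

def record_feedback(genre, accepted):
    """Treat a tap as a success and a shrug as a mild failure."""
    genres[genre] += 0.5 if accepted else -0.1

# Hypothetical listening sessions: the user keeps picking synthwave playlists.
for choice in ["synthwave", "synthwave", "jazz", "synthwave"]:
    suggestion = suggest()
    record_feedback(suggestion, accepted=(suggestion == choice))
    record_feedback(choice, accepted=True)  # what they actually played counts too

print(suggest())  # after a few sessions, synthwave wins out
```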

In cases like this, you might not get the full benefit of machine learning if you don’t use the feature a lot. The first time you open Google’s music app, your recommendations will probably be pretty scattershot. The more you use it, the better the suggestions get. In theory, anyway. Machine learning isn’t a silver bullet, so you could still get junk recommendations. However, you’ll definitely get junk recommendations if you only open the music app once every six months. Without regular use to help it learn, machine learning suggestions aren’t much better than regular “smart” suggestions. As a buzzword, “machine learning” is vaguer than neural networks, but it still implies that the software you’re using will use your feedback to improve its performance.

Artificial Intelligence Just Means Anything That’s ‘Smart’

Just like neural networks are a form of machine learning, machine learning is a form of artificial intelligence. However, the category of what else counts as “artificial intelligence” is so poorly defined that it’s almost meaningless. While it conjures the mental image of futuristic sci-fi, in reality, we’ve already reached milestones that were previously considered the realm of future AI. For example, optical character recognition was once considered too complex for a machine, but now an app on your phone can scan documents and turn them into text. Describing such a now-basic task as AI would make it sound more impressive than it is.

The reason basic phone tasks can be considered AI is that there are actually two very different categories of artificial intelligence. Weak or narrow AI describes any system designed for a narrow task or set of tasks. For example, Google Assistant and Siri, while powerful, are designed to do a very narrow set of things: take a specific series of voice commands and return answers or launch apps. Research into artificial intelligence powers those features, but it’s still considered “weak”.

In contrast, strong AI — otherwise known as artificial general intelligence or “full AI” — is a system that can perform any task that a human can. It also doesn’t exist. If you were hoping that your to-do list app would be powered by a cute robot voiced by Alan Tudyk, that’s a long way off. Since virtually any AI you’d actually use is considered weak AI, the phrase “artificial intelligence” in an app description really just means “it’s a smart app”. You might get some cool suggestions, but don’t expect it to rival the intelligence of a human.

While the semantics may be muddy, the practical research in AI fields is so useful that you’ve probably already incorporated it into your daily life. Every time your phone automatically remembers where you parked, recognises faces in your photos, offers search suggestions or groups all your holiday pictures together, you’re benefiting either directly or indirectly from AI research. To a certain extent, “artificial intelligence” really just means apps getting smarter, which is what you’d expect anyway. However, machine learning and neural networks are uniquely suited to improving certain kinds of tasks. If an app just says it’s using “AI”, that’s less meaningful than saying it uses a specific type of machine learning.

It’s also worth pointing out that not all neural networks and machine learning systems are created equal. Saying that an app uses machine learning to do something better is a bit like saying a camera is better because it’s “digital”. Yes, digital cameras can do some things that film cameras can’t, but that doesn’t mean that every digital photograph is better than every film photograph. It’s all in how you use it. Some companies will be able to develop powerful neural networks that do really complicated things that make your life better. Others will slap a machine learning label on a feature that already offered “smart” suggestions and you’ll ignore it just the same.

From a behind-the-scenes standpoint, machine learning and neural networks are very exciting. However, if you’re reading an app description that uses these phrases, you can just read it as “This feature is slightly smarter, probably” and continue doing what you’ve always done: Judging apps by how useful they are to you.

