Android: Microsoft is currently testing a fun new feature that allows you to import data into Excel by taking a picture of it — preferably a data table and not, say, your cat. That won’t do anything.
Tagged With machine learning
I love a game of "buzzword bingo" as much as the next person, but I'm getting a little sick of the way companies are using the terms "AI" and "machine learning" to describe, well, basically everything. In many cases, the terms are being completely misappropriated. And it turns out I'm not the only one who's noticed. A new study has revealed that a whopping 40% of European companies calling themselves "AI startups" are talking rubbish.
Depending on your perspective, this is either pretty cool or quite scary. Research from Stanford and Google found that a machine learning agent tasked with constructing maps from aerial images was "hiding" information in "a nearly imperceptible, high-frequency signal". The software was meant to recreate the aerial images from the maps it created, but aroused the researchers' suspicion when it performed the task better than expected.
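The agent invented its own encoding inside a neural network, which is far subtler than anything hand-written. But the general trick, stashing data in image detail too small for the eye to notice, is a classic one called steganography. Here's a rough least-significant-bit sketch (the pixel values and message are made up for illustration):

```python
# A rough least-significant-bit (LSB) sketch of hiding data in an image.
# The Stanford/Google agent learned its own encoding; this hand-written
# scheme just illustrates how data fits into imperceptible pixel detail.

def hide(pixels, message):
    """Store each bit of `message` in the lowest bit of a pixel value."""
    bits = [(byte >> i) & 1 for byte in message for i in range(8)]
    out = list(pixels)
    for i, bit in enumerate(bits):
        # Changing the low bit alters a pixel by at most 1 out of 255.
        out[i] = (out[i] & ~1) | bit
    return out

def reveal(pixels, length):
    """Read `length` bytes back out of the low bits."""
    data = bytearray()
    for b in range(length):
        byte = 0
        for i in range(8):
            byte |= (pixels[b * 8 + i] & 1) << i
        data.append(byte)
    return bytes(data)

image = [200, 201, 199, 120] * 20  # fake greyscale pixel values
stego = hide(image, b"map")
print(reveal(stego, 3))            # b'map'
```

No pixel changes by more than one brightness level, which is exactly why this sort of thing is nearly impossible to spot by eye.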
Over the course of this year, I've written about a thousand stories. Among them have been some important topics, such as the ongoing troubles of Facebook, which started with the Cambridge Analytica scandal in April and continued this week with revelations that personal messages were leaked to some of Facebook's partners.
We had the Spectre and Meltdown CPU issues early in the year and, just in the last couple of weeks, the introduction of the Telecommunications and Other Legislation Amendment (Assistance and Access) Bill 2018. After all that's happened, there's one significant theme that links them all together: trust.
Last week, Google CEO Sundar Pichai spoke at a hearing of the House Judiciary Committee in the United States. Ostensibly, one of the main reasons Pichai was asked to attend was in order to answer questions about security issues with various platforms operated by Google. But as you'd expect, the hearing turned into a comedy writer's dream with enough fodder to fill late night monologues for countless presenters. But buried in the dross was a very important question about the trustworthiness of algorithms.
Over the last few weeks, something has been bothering me. One of the recurring themes I'm hearing about, either directly or indirectly, has been the intersection between technology and trust. While the issues around government access to encrypted communications have received plenty of airplay, the expanding use of machine learning, broad access to vast swathes of data and the increased use of social media have made trust the most valuable commodity in tech.
Machine learning holds great promise for helping us manage vast swathes of data and complementing humans as we try to solve more complex problems in our world. Everything from finding the best route between home and the airport through to finding cures for diseases can be helped by machine learning. But when machine learning is used to make decisions that directly affect people, we need to be able to ask how the models work. Emily Pries, from Lyft, looked at the question of machine learning fairness at the recent Twilio Signal conference.
As more tasks become automated through the use of machine learning and AI driven systems, there's been a worry that many people would lose their jobs. On the flipside, there's been optimism that automation will take people away from dreary and repetitive tasks and direct their skills to more complex or rewarding work. But a recent study by the United Nations' International Labour Organization (ILO) says the reality, at least for now, is very different.
With job-search sites now using software to help filter candidates, it's important to tailor your CV to ensure it has the highest chance of passing that first round of screening. Recruitment agency Hays recently polled over 6,000 people and learned that over a quarter have already tweaked their CV, with another 54% planning to update theirs over the coming year. Just one in five have no plans to bring their CV into the 21st century. So what do you need to do to make your CV algorithm-friendly?
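To see why keywords matter, here's a toy illustration (not any real recruitment system) of how automated screening might score a CV: simply counting how many of a job ad's keywords show up in the text.

```python
# A toy model of automated CV screening: score a CV by the fraction of
# the job ad's keywords it contains. Real systems are more elaborate,
# but keyword matching is the commonly cited first hurdle.
import re

def keyword_score(cv_text, keywords):
    """Return (fraction of keywords found, the matched keywords)."""
    words = set(re.findall(r"[a-z+#]+", cv_text.lower()))
    hits = [kw for kw in keywords if kw.lower() in words]
    return len(hits) / len(keywords), hits

cv = "Experienced analyst skilled in Python, SQL and machine learning."
score, hits = keyword_score(cv, ["python", "sql", "excel"])
print(round(score, 2), hits)  # 0.67 ['python', 'sql']
```

If the real filters work anything like this, spelling out the exact skills the ad asks for, rather than relying on synonyms, is what gets you past the first gate.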
It feels as though Google has held the market on “point your camera at it to learn more” technology for some time now. First came its Translate app, which let you target signs in foreign languages with your smartphone’s camera and receive translations on the fly. Now Lens expands this technology to give you plenty of information about the objects in photos you’ve taken (or are about to take).
"It's very easy to get intimidated," says Hamayal Choudhry, the robotics engineer who co-created the smartARM, a robotic hand prosthetic that uses a camera to analyse and manipulate objects. "You have this idea for a project, then think, I don't know a thing about this." Here's how Choudhry and his partner Samin Khan, who programmed the smartARM's machine learning algorithm, used code libraries, college assignments, and sponsored hackathons to find and execute a meaningful project at age 20.
You probably use machine-learning systems every day without even knowing it. The technology gives us spam filters, our Facebook News Feeds, digital assistants, search engines, Netflix picks, Amazon recommendations, fraud detection systems, chatbots and more. And it's only going to become more pervasive. For forward-looking parents, it's time to get your kids on it.
Machine learning and artificial intelligence are near the top of the list of items dominating discussions about digital transformation. Chris Bedi is the CIO at ServiceNow, and during a briefing at the company's Knowledge 18 event he said CEOs now value speed over cost. As businesses change, he says there's a huge sense of urgency as companies want to ensure they're not left behind.
Lots of discussions about complex topics start with the premise that there are two types of people. That's where Symantec's Chief Technology Officer Hugh Thompson began his discussion of the challenges facing the security industry. He opened his entertaining security keynote at this year's CeBIT event in Sydney by telling the story of a bird that flew into a commercial aircraft as the plane was being loaded by ground staff. The bird was trapped in the passenger cabin, its presence only becoming known once the trans-Atlantic flight was in the air. The reactions to the story are indicative, he said, of differing attitudes to security risks.
The modern workplace is undergoing a substantial transition. Systems to foster collaboration, automation and machine learning are creating a workplace that is almost unrecognisable from the 1990s. Careers are built by moving between companies and, increasingly, we are expected to be the masters of our own training and development. Where is this leading and what will the workplace look like in another 20 years?
At the opening of Google's I/O event, the company showed off its new AI tool. In the demonstration, someone told the Google Assistant they wanted to book an appointment. Google found the hairdresser and then phoned them, holding a natural language conversation with a person to make the appointment and add it to a calendar. The party on the other end of the phone didn't know they were talking to a computer (so we're told). This opens up an interesting future.
If you've ever wanted to have a deeper conversation with the printed page - or scan a library of literature for answers to your many questions - Google's Talk to Books tool is a fun little way to do just that. It isn't a Google search for books, but it does offer more conversational answers for your questions than a traditional search.
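Talk to Books reportedly matches your question to book sentences by meaning, using neural sentence embeddings. This toy version swaps the embeddings for plain word overlap, just to show the basic "find the most similar sentence" idea behind that kind of tool:

```python
# A toy semantic search: pick the stored sentence most similar to the
# question. Real systems compare neural embeddings; this sketch uses
# simple word overlap (Jaccard similarity) to show the shape of the idea.
import re

def words(s):
    return set(re.findall(r"[a-z]+", s.lower()))

def similarity(a, b):
    """Jaccard overlap: shared words divided by total distinct words."""
    wa, wb = words(a), words(b)
    return len(wa & wb) / len(wa | wb)

sentences = [
    "The whale is the largest animal ever to have lived.",
    "Paris has been the capital of France for centuries.",
    "Honey never spoils if it is stored properly.",
]

question = "What is the largest animal?"
best = max(sentences, key=lambda s: similarity(question, s))
print(best)  # the whale sentence shares the most words with the question
```

Word overlap only finds answers that reuse your exact words; the appeal of the embedding approach is that it can match "biggest creature" to "largest animal" too.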
Machine learning (AKA AI) seems bizarre and complicated. It's the tech behind image and speech recognition, recommendation systems, and all kinds of tasks that computers used to be really bad at but are now really good at. It involves teaching a computer to teach itself. And you can learn to do it in well under a year, according to data scientist Bargava. You'll need to put in a solid 10-20 hours a week, but you will learn a lot along the way.
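If you want a first taste of what "teaching a computer to teach itself" looks like, here's about the smallest possible example: fitting y = w*x to a handful of invented data points by gradient descent, with no ML library at all.

```python
# Roughly the "hello world" of machine learning: the computer starts with
# a wrong guess for w in y = w * x and repeatedly nudges it to shrink the
# average error on the examples. The data points below are invented.

data = [(1, 2.1), (2, 3.9), (3, 6.2), (4, 8.0)]  # roughly y = 2x

w = 0.0                        # initial (bad) guess
for step in range(200):
    # Gradient of the mean squared error with respect to w.
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= 0.01 * grad           # take a small step downhill

print(round(w, 2))             # close to 2.0, learned from the data
```

Nobody told the program that the answer was about 2; it worked that out from the examples, which is the whole idea in miniature.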