Huawei: GPUs Won't Dominate Machine Learning In The Future


Today, a lot of high-profile deep machine learning projects in the cloud are powered by GPUs, specifically NVIDIA GPUs. Even Facebook uses them for its machine learning work behind the scenes. GPUs can supply the massive amounts of computing power required to train the deep neural networks behind these projects. But Huawei deputy chairman and rotating CEO Eric Xu believes the future of machine learning lies in dedicated processors. Read on to find out more.

In the past few years, NVIDIA has made a big push to dethrone CPUs in the deep machine learning space. With its Tesla line of GPU accelerators, it has secured support from the likes of Facebook, which uses NVIDIA GPUs in Big Sur, its purpose-built deep neural network system. IBM has added support for Tesla GPU accelerators to its Watson cognitive computing platform as well.

According to a blog post by NVIDIA:

"With GPU acceleration, neural net training is 10-20 times faster than with CPUs. That means training is reduced from days or even weeks to just hours. This lets researchers and data scientists build larger, more sophisticated neural nets, which leads to incredibly intelligent next-gen application."

Huawei has been making a lot of investments in the area of machine learning in the cloud. This is all part of the company's efforts to become a major player in cloud computing. Speaking through a translator at a press conference at the Huawei Connect 2016 event in Shanghai, China, Xu acknowledged that GPUs are a good option for deep machine learning now, but said the company is looking at alternatives:

"GPU is pretty good processor right now for machine and deep learning. But I believe in future, there will be dedicated processors for machine and deep learning. Those processors may not necessarily be GPUs; Huawei is doing research and exploring a way in those specific areas."

He noted that machine learning, deep learning and artificial intelligence (AI) are first and foremost a technology and a capability. Huawei is interested in consolidating all those relevant technologies and capabilities to help bring value to customers. It has already started doing so with some of its customers although Xu did not elaborate on the details.

He revealed that Huawei is also using machine learning internally to drive efficiency and lower cost of operations. This is something that IT giant Microsoft is already doing.

"At the very least, one thing we are trying out right now is to use machine learning translation to convert Chinese into English," Xu said. It's something that is particularly valuable for Huawei's ambitions to be a truly global organisations given that most of its executives are native Chinese speakers.

He patted his translator's shoulder.

"We're going to put you out of a job."

Spandas Lui travelled to Shanghai, China as a guest of Huawei


Comments

    He is of course 100% right, to the point that it's common sense. GPUs are great for manipulating tensors, but dedicated chips can always do better with things like non-volatile RAM on the processing die to speed access to the training data, improved interconnects to scale up the number of cores, and so on.

    Deep learning is a massively parallel problem. It's the next big thing - and it will eclipse a lot of current methods of work.

