Nvidia’s GTC is a GPU conference that takes place in the heart of Silicon Valley. This year’s keynote unveiled plenty of exciting new technology, particularly in the areas of virtual reality, deep learning and self-driving cars. Here are all the major announcements that professional developers and tech-loving consumers need to know about.
This year’s GTC keynote was pretty light on gaming tech — there were no new graphics card announcements and little in the way of 3D graphics entertainment. Instead, the emphasis was firmly on super-computing and new applications for research and development. With that said, some of the demos were designed to fascinate no matter what your profession. Here are a few of the highlights from the keynote address:
VR is everywhere
Nvidia is pouring a tonne of resources into virtual reality, which it sees as a potential disruptor for multiple industries including video games, design and travel. During the keynote, the company unveiled a new addition to its SDK dubbed Nvidia VRWorks. It includes a suite of APIs, sample code and libraries for VR developers. It can also be used to integrate the Unreal (Epic), MaxPlay and Unity game engines with the Oculus Rift and/or HTC Vive. Naturally, the platform is equally suited to non-gaming applications and can also be used to form the backbone of new VR headsets.
As proof of concept, Nvidia demonstrated two new VR demos: Everest VR and Mars 2030, which was controlled by Steve Wozniak via live video stream. During the demo, Steve complained that the virtual environment was making him dizzy, which Nvidia CEO Jen-Hsun Huang quipped was “not a helpful comment”.
During the demonstration, we watched the Woz trundle about in a three-dimensional Mars landscape dotted with spacecraft and other astronauts. To the outside observer, it just looked like a passable first-person video game. This is the problem with VR demonstrations, as Jen-Hsun freely admitted: “We can show it on screen, but it won’t come close to the grandeur you will experience.”
It’s a different story from inside the headset, as anyone who has tried VR will tell you. You can read about my own experience with the Capcom-produced horror VR demo The Kitchen here.
Everest VR, meanwhile, is a painstaking reconstruction of the world’s tallest mountain, built from 108 billion pixels of photographic data. The result is an astonishingly photo-realistic virtual environment complete with weather effects that behave like the real thing. We’ll be experiencing this demo first-hand on the HTC Vive later today, so stay tuned for a hands-on.
The company also unveiled Iray VR — a virtual-reality version of Nvidia’s enterprise-level 3D modelling platform. Iray VR will allow developers to create pre-rendered light probes in regions of interest, rasterize depth for optimal headset eye position and reconstruct images for new viewpoints quickly and efficiently from within the platform.
From June, there will also be a consumer version available dubbed Iray VR Lite. This will work on Android devices using a range of available headsets, including Google Cardboard.
Much like with smartphones, we’re getting to the point where VR input devices are becoming lightweight and user-friendly enough to entice the majority of consumers. According to Nvidia, the commercial possibilities are set to explode in the months ahead.
Nvidia doubles down on deep learning
It’s been a very big year for deep learning. Over the last 12 months, we’ve witnessed a multitude of milestones including Microsoft and Google’s “super human” image recognition, Berkeley’s self-learning Brett robot, Baidu’s dual-language Deep Speech 2 speech recognition network and Google’s AlphaGo triumphing against the Go world champion.
If Nvidia is to be believed, humanity is now on the cusp of big changes, with artificial intelligence set to transform every industry and dominate all computer applications. As Jen-Hsun explained during his keynote, the chief advantage of deep learning is that it’s incredibly easy to apply — “super human results without super human training”.
As part of its commitment to deep learning, Nvidia is launching DGX-1 — the world’s first deep learning supercomputer, which the company described as “120 servers in a box”. Specifically engineered for deep learning, the DGX-1 boasts eight 16GB Tesla P100 GPU accelerators, a 7TB SSD, a pair of Xeon processors and an NVLink Hybrid Cube Mesh interconnect. Nvidia claims it represents a 12x speed-up over the previous generation in a single year.
The DGX-1 is priced at $129,000 and will be available from June. Stanford University, Berkeley, NYU and the University of Oxford will be among the first institutions to get DGX-1s. Nvidia will also be partnering with Massachusetts General Hospital to bring the power of DGX-1 to medical research, specifically in the areas of radiology, pathology and genomics.
Can AI “robots” create art?
During GTC 2016, Nvidia showed off some results from Facebook’s AI Research project (FAIR), which Jen-Hsun jokingly described as a neural network with “artistic skills”.
Using unsupervised representation learning, the platform is capable of generating its own landscapes based on images it has been fed previously. Ask for a picture of a beach and it will spit one out worthy of framing. Furthermore, it’s possible to request certain artistic styles, such as pastoral or Romantic-era paintings, and the AI will dutifully tailor the landscape to meet your needs. You can then tweak the results by removing aspects (like people) and adding others (like sunsets).
FAIR can also compute a complete “turn” vector from just four averaged samples of faces looking left and right. By interpolating between the two, it fills in the blanks to show the same face from every intermediate angle.
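The face-turn trick above boils down to interpolation in a generative network’s latent space. Here is a minimal numpy sketch of that idea — the `generate` function is a hypothetical stand-in (a fixed random projection) for a trained generative network, and all names and dimensions are illustrative, not FAIR’s actual code:

```python
import numpy as np

rng = np.random.default_rng(0)
LATENT_DIM = 64

# Hypothetical stand-in for a trained generator network:
# maps a latent vector z to a fake 32x32 "image".
W = rng.standard_normal((LATENT_DIM, 32 * 32))

def generate(z):
    """Decode a latent vector into a 32x32 array."""
    return np.tanh(z @ W).reshape(32, 32)

# Average the latent codes of a handful of samples (four in the
# demo) for faces looking left and faces looking right.
z_left = rng.standard_normal((4, LATENT_DIM)).mean(axis=0)
z_right = rng.standard_normal((4, LATENT_DIM)).mean(axis=0)

# Linearly interpolating between the two averages "fills in the
# blanks": intermediate latent codes decode to intermediate poses.
frames = [generate((1 - t) * z_left + t * z_right)
          for t in np.linspace(0.0, 1.0, num=9)]

print(len(frames), frames[0].shape)
```

The key design point is that the blending happens on the latent vectors, not on the pixels — which is why the in-between frames look like plausible new views rather than cross-faded ghosts.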
Self-driving cars to hit the Formula E tarmac this season
At GTC, Nvidia unveiled a new Formula E racing event dubbed Roborace. As its name implies, this is an all-new racing class that will feature fully autonomous cars powered by Drive PX 2 supercomputers. If the chipmaker is to be believed, the first race is expected to kick off this season.
The Formula E Roborace Championship will see 10 teams compete with 20 driverless electric cars powered by Drive PX 2, Nvidia’s latest self-driving platform, which packs in 12 CPU cores and four Pascal GPUs for eight teraflops of computing power. This allows vehicles to adapt to different driving conditions — including asphalt, rain and dirt — in real time using artificial intelligence.
The concept vehicle pictured above was designed by Daniel Simon of Tron: Legacy light-cycle fame. The cockpit-free build has allowed Nvidia’s engineers to house the Drive PX 2 computer without compromising the size or weight of the vehicle. (It’s tipped to weigh under 1000kg, which is roughly in line with other electric Formula E cars.)
But the real stars of these races will not be the cars, but the software algorithms developed by each race team to maximise steering and braking efficiency. The winner will not be decided by engine or chassis tweaks — all 20 cars will be identical — but by the AI powering the steering wheel.
Nvidia’s Roborace partnership is obviously designed to bring maximum attention to Drive PX 2, but it doesn’t really need it — it’s already one of the most accomplished self-driving platforms on the market. On the KITTI car detection benchmark suite, Nvidia’s platform has an accuracy rating of 83.76 per cent in hard conditions, 89.81 per cent in medium conditions and 90.92 per cent in easy conditions — the best accuracy scores on the market. It can detect up to 1.8 million points of interest per second, which are assessed in the cloud via DGX-1.
Lifehacker travelled to GTC 2016 in San Jose, California as a guest of Nvidia.