John Underkoffler is the CEO of Oblong Technologies, a telecollaboration company specialising in non-traditional user interfaces (UIs). He was also the brains behind the “spatial operating environment” in the science fiction film Minority Report. Since then, Underkoffler has turned his movie magic into a commercial reality for conference room collaboration. In the very near future, he believes everyone will be touching the air like Tom Cruise to get stuff done in the office. Here is his ethos.
In 2002, Minority Report wowed moviegoers the world over with its tantalising visions of near-future technology. While some of its concepts were fanciful to say the least (Puke Sticks? Really?), the famous PreCrime holographic interface boasted a wonderfully plausible design.
Twelve years on, the image of Tom Cruise twirling 3D projections around like a technogeek maestro continues to inspire and galvanise real-world developers. Products that owe the fictional UI an acknowledged debt include Obscura Digital’s VisionAire, Intel’s RealSense 3D, MIT’s Sixth Sense and Microsoft’s Kinect. And then there’s Oblong Technologies, a company co-founded by the movie’s chief science advisor.
As you’d expect, Oblong’s technological output has a distinct Minority Report vibe; its software development kits have been used to create everything from digital whiteboards that can be used by multiple people at once, to a video editing suite that allows visual elements to be added and removed by hand.
At this year’s Unified Communications expo, Underkoffler claimed that Minority Report-style tech could become the biggest evolutionary step in computing since two-dimensional GUIs. In Underkoffler’s words, the latter was a “thunderclap” in professional computation that allowed non-programmers, and even those who couldn’t read, to use computers via pixels, icons and pictures.
He predicts that the next “thunderclap” is on the horizon. This time, instead of interactive images, it will be driven by an increased need for scalable, collaboration-friendly data:
Where do we go from here? Whether you buy into my assertions or not, these next two points are non-negotiable. One: we have to enable collaboration; we have to provide a set of interlocking digital systems that are as rich as the kind of collaboration that people are capable of when they don’t have computers in the way.
The second non-negotiable point is scale. Think about the way in which workflows and professional processes have a natural physical scale: for example, how big is the logistics problem if you’re FedEx? Does it fit on a 3-by-5 card, which has roughly the dimensions of a smartphone? Does it fit on an 8.5-by-11 sheet of paper, which is roughly your laptop? No. The logistics problem for a major shipping company does not fit on that.
We need to build a new type of computation that treats a process at room scale and allows people to work digitally at that scale. We feel this will be achieved through the UI.
This is where the concept of a “spatial operating environment” comes into play — gesture-controlled UIs that can be scaled to fill an entire room. The ability to spread collaborative projects out to any required size is pure Minority Report. It also makes a lot of sense, as anyone who has ever struggled with a sprawling Excel spreadsheet can attest.
“You have to ask yourself: why are we still talking about these scenes and designs from Minority Report? People think it’s the cool gestures, but I think that’s only the conscious part,” Underkoffler said.
“What these scenes show is a large-scale space with a more capable user interface that allows people to solve problems while working together collaboratively. There’s no magical artificial intelligence involved: it’s just people spreading work out over an enormous pixel space and getting things done with a new UI.”
Of course, something Minority Report doesn’t show us is the steep learning curve and technical glitches that plague new-concept UIs: instead, everything just works seamlessly. This obviously isn’t the case in the real world — indeed, most of the applications powered by Kinect are a nightmare to navigate. Even moving a cursor across a screen can be difficult.
When we quizzed Underkoffler about this inconvenient truth, he admitted that the real-world examples of spatial operating environments haven’t always been ready for prime time:
“Obviously if people find [a UI] confounding or their expectations aren’t met, we’ve failed entirely. Ultimately, I think we have to say that some of the systems on the market that play in this space aren’t good enough; the raw technology isn’t quite ready. In the case of the Kinect, the depth and spatial resolution window isn’t very high.
“With that said, you can’t be afraid to ask people to learn a little bit – everyone claims the iPad is so intuitive and they instantly know what to do with it, but that’s not true. We like it, so we forget that we had to learn. The trick is to make it as easy as possible.
“The important thing is to build the interface in concert with the input device. It’s the same concept as building a computer mouse; the mouse by itself doesn’t predict the whole UI, which needs to be built as well. You also need to be really obsessive about the UI, with user-testing at the very top of your to-do list.”