Over the weekend, six teams in Sydney and Melbourne competed in a hackathon to build a Google Glass application for the Red Cross. This is what they learned — and what you should know if you’re considering developing for Glass.
Creative technology agency DT hosted the 24-hour event in its Melbourne and Sydney offices. I dropped in for the tail-end of the event to see what the teams had developed over that time — concept presentations and basic apps for the most part — and any broader lessons they had learned from the experience.
Google Glass remains one of those technologies that lots of people have heard about but which doesn't always have a clear use case. Part of the reason DT decided to theme its hackathon around building an app for a specific organisation (in this case the Australian Red Cross, which is celebrating its 100th anniversary this year) was to force developers to focus on where Glass might be uniquely useful and serve a distinct purpose, rather than where it merely seemed like a nifty piece of technology doing nothing that couldn't be done just as easily on a mobile phone.
“A lot of the discussion has been for the consumer market, and there hasn’t been as much discussion about how it might work in industry or the charity sector,” DT general manager David Pountney told Lifehacker.
These were the six concepts the teams (four in Melbourne and two in Sydney, connected for judging via videoconference) came up with, and how they made use of Glass’ ability to offer hands-free access to information and tracking:
Providing counselling to disaster victims: A key service the Red Cross provides is counselling for people who have been victims of natural disasters, often on site. Glass can be used to deliver a suggested script for this discussion, emphasising elements such as repeating someone’s name and offering reassurance. One huge advantage here? Having the script presented on Glass rather than on a separate screen (or piece of paper) allows the counsellor to maintain eye contact.
Reunite: Helping displaced people find each other after disasters. Reconnecting families separated after fires or floods is another key Red Cross role. Using Glass to handle the tracking process could be faster and more efficient than filling out paper forms, especially if facial recognition is incorporated.
Fight Or Flight: Helping people decide when to leave an emergency zone. During bushfires or floods, a crucial decision is whether it’s safer to stay where you are or escape the onslaught. Wearing Glass means someone can be sent accurate, up-to-date information, whether as they leave the scene or if they decide to stay.
Trace: Locating individuals. In a similar vein to Reunite, Trace allows people to register once they have left their original location so family or friends can identify their whereabouts. The device can photograph those individuals and store recordings of their voices.
MAPP: Map out disaster zones. Using Glass’ built-in mapping features, areas affected by natural disasters can be easily mapped and photographed. In this context, hands-free operation can be particularly useful — trying to take photographs while in deep water or when driving can be a considerable challenge.
In a 24-hour hackathon, none of these apps could be fully developed, but all seem worth exploring further. From those projects, three obvious Google Glass development lessons emerged for me:
Hands-free apps are essential: The best use cases for Glass all occur in situations where hands-free operation is essential. If it isn’t, then it’s harder to make the case that you couldn’t deliver much of the same functionality with a conventional smartphone app.
Google Glass isn’t inherently anti-social: Glass is often presented as inherently pervy (“stop photographing me!”) or more generally anti-social (“now you’re always looking at a screen”). However, in contexts where maintaining eye contact is valuable but you also need to access data, Glass seems a particularly appealing solution.
Processing is likely to happen in the cloud: Many of the apps suggested during the hackathon rely on features such as facial recognition which are unlikely to be handled directly by Glass, given its limited storage and processing capabilities. These features would have to be implemented via calls to external services.
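As a rough sketch of that split, the device side of such an app might do little more than package up a captured frame and ship it to a remote service, with all the heavy lifting (face matching, database lookups) happening server-side. Everything here is hypothetical — the endpoint URL and the payload shape are assumptions for illustration, not part of any real Red Cross or Glass API:

```python
import base64
import json

# Hypothetical cloud recognition service (not a real endpoint).
RECOGNITION_ENDPOINT = "https://example.org/api/recognise"


def build_recognition_request(image_bytes: bytes, location: str) -> str:
    """Package a captured camera frame for an external recognition service.

    The wearable only encodes and transmits the image; all processing
    (face matching, record lookup) would happen on the server.
    """
    payload = {
        "image": base64.b64encode(image_bytes).decode("ascii"),
        "location": location,
    }
    return json.dumps(payload)


# On the device, the app would POST this payload and simply display
# whatever name or record the service sends back, e.g.:
#
#   req = urllib.request.Request(
#       RECOGNITION_ENDPOINT,
#       data=build_recognition_request(frame, loc).encode(),
#       headers={"Content-Type": "application/json"},
#   )
#   match = json.load(urllib.request.urlopen(req))
```

The design point is simply that the on-device code stays thin: encode, send, display. That keeps the app within Glass' storage and processing limits while the expensive work runs elsewhere.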