Generating User Interface Code From Images Using Machine Learning

As machine learning matures, we’re finding more and more applications for the technology, beyond the niche and novelty. It’s even starting to encroach on the jobs of designers and programmers, if Tony Beltramelli’s research project “pix2code” is anything to go by.

As the demo video shows, Beltramelli feeds a trained neural network an image of a user interface, and the program outputs code for that UI (targeting iOS, Android or the web). According to Beltramelli, the current iteration is able to produce results with 77 per cent accuracy.
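To give a sense of the pipeline: according to the research paper, pix2code doesn't emit platform code directly. Instead, the network generates tokens in a simple domain-specific language (DSL), and a separate compiler translates that DSL into HTML, iOS or Android markup. The sketch below illustrates the compilation step only; the DSL tokens and the HTML mapping are invented for illustration, not Beltramelli's actual vocabulary.

```python
# Toy sketch of the DSL-to-markup step a pix2code-style system relies on.
# The token names and HTML templates here are hypothetical.

HTML_MAP = {
    "header": "<header>{}</header>",
    "row": '<div class="row">{}</div>',
    "btn": "<button>OK</button>",
    "text": "<p>Lorem ipsum</p>",
}

def compile_dsl(tokens):
    """Compile a flat list of DSL tokens into an HTML string.

    '{' and '}' open and close the children of the preceding
    container token (header or row).
    """
    def parse(pos):
        html = []
        while pos < len(tokens):
            tok = tokens[pos]
            if tok == "}":
                return "".join(html), pos + 1
            if pos + 1 < len(tokens) and tokens[pos + 1] == "{":
                # Container token: recursively compile its children.
                inner, pos = parse(pos + 2)
                html.append(HTML_MAP[tok].format(inner))
            else:
                # Leaf token: emit its markup directly.
                html.append(HTML_MAP[tok])
                pos += 1
        return "".join(html), pos

    body, _ = parse(0)
    return body

# A token sequence a trained decoder might emit for a simple page:
tokens = ["header", "{", "btn", "}", "row", "{", "text", "text", "}"]
print(compile_dsl(tokens))
```

Because the network only has to learn a small DSL vocabulary rather than raw HTML, the generation problem becomes much more tractable; swapping in a different compiler is what lets one model target three platforms.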

Of course, the idea behind pix2code is to simply show what’s possible — it’s in no way ready for prime time. The video also doesn’t reveal the resulting code, which I’m going to guess is a complete mess, if the output of any other WYSIWYG web editor is anything to go by.

That said, I don’t see this replacing people wholesale. Instead, a lighter version could be used to quickly create templates or site frameworks, minimising the time spent writing boilerplate or structural code. It could also open doors for artists who want to get into web development but are still refining their HTML and CSS skills.

If you’re after a more detailed explanation of how it works, there’s a dedicated page with the research paper and a (currently empty) GitHub repo available.

pix2code [YouTube]
