The technique of shooting video in front of a green screen (or another single colour; blue is sometimes used) so that the background can later be swapped for anything else is widely used today. Pretty much anyone with the right software and a little bit of skill can add any background to an image or movie shot in front of a green screen.
But Google has taken things a step further. The company says it's bringing "precise, real-time, on-device mobile video segmentation to the YouTube app".
In a recent blog post, Google said it has developed a way to replace and modify backgrounds without needing any specialised gear.
The system, unsurprisingly, uses machine learning: a neural network trained on tens of thousands of carefully annotated images. With that training, the network learned to separate foreground objects from the background so the foreground can be isolated, which allows the background to be replaced.
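To make the idea concrete, here is a minimal sketch of the final compositing step. It assumes the segmentation network has already produced a per-pixel foreground mask with values in [0, 1]; the function name and toy data are illustrative, not Google's actual code.

```python
import numpy as np

def replace_background(frame, mask, new_background):
    """Blend a new background behind the segmented foreground.

    frame, new_background: H x W x 3 float arrays.
    mask: H x W array in [0, 1], where 1.0 marks foreground pixels.
    """
    alpha = mask[..., None]  # broadcast the mask across colour channels
    return alpha * frame + (1.0 - alpha) * new_background

# Toy example: a 2x2 "frame" whose left column is marked as foreground.
frame = np.full((2, 2, 3), 200.0)       # uniform grey frame
background = np.zeros((2, 2, 3))        # black replacement background
mask = np.array([[1.0, 0.0],
                 [1.0, 0.0]])
out = replace_background(frame, mask, background)
# The left column keeps the frame's pixels; the right column takes the
# new background. Soft mask values between 0 and 1 would blend the two.
```

The same blend runs per frame of video; the hard part, which Google's work addresses, is producing a good mask fast enough on a phone.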
Google AR on Pixel is insane pic.twitter.com/VmzwBNxllc
— Harry Tucker (@harrytuckerr) December 13, 2017
"The end result of these modifications is that our network runs remarkably fast on mobile devices, achieving 100+ FPS on iPhone 7 and 40+ FPS on Pixel 2 with high accuracy (realizing 94.8% IOU on our validation dataset), delivering a variety of smooth running and responsive effects in YouTube stories," Google explains on its blog.
Google's aim is to start with a limited beta that will let users test the technology with a small set of background effects, with the goal of eventually rolling it out more broadly.