
This is how Google makes Pixel 3 portrait mode better



  • Google has blogged about its latest improvements in AI and photography, specifically regarding portrait mode on the Pixel 3.
  • The post discusses how Google has improved the way its neural network estimates depth.
  • The result is an improved bokeh effect in portrait mode.

Google has detailed one of the Pixel 3's main photographic achievements on its AI blog. In a post published yesterday, Google discussed how portrait mode has improved between the Pixel 2 and the Pixel 3.

Portrait mode is a popular smartphone photography mode that blurs the background of a scene while keeping the foreground subject in focus (sometimes called the bokeh effect). The Pixel 3 and the Google Camera app take advantage of advances in neural networks, machine learning, and GPU hardware to make this effect better.
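
To make the idea concrete, here is a minimal sketch of how a depth-dependent blur of this kind could be applied once a per-pixel depth map is available. The function name, focus_depth parameter, and tolerance are illustrative assumptions, not Google's implementation:

```python
import cv2
import numpy as np

def synthetic_bokeh(image, depth, focus_depth, tol=0.1):
    """Blend a blurred copy of the image back in wherever the
    estimated depth differs from the in-focus subject's depth."""
    blurred = cv2.GaussianBlur(image, (31, 31), 0)
    # Weight is 0 near the focal plane and ramps up to 1 far from it.
    w = np.clip(np.abs(depth - focus_depth) / tol - 1.0, 0.0, 1.0)
    w = w[..., np.newaxis]  # broadcast the weight over color channels
    return (image * (1.0 - w) + blurred * w).astype(image.dtype)
```

The better the depth map, the cleaner the boundary between sharp subject and blurred background, which is exactly what Google's post is about improving.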

In portrait mode on the Pixel 2, the camera captures two versions of the scene from slightly different viewpoints. Between these images, the in-focus foreground subject, a person in most portrait shots, appears to shift less than the background (an effect known as parallax). This difference is used as the basis for estimating the depth of each part of the image, and thus which areas to blur.
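
As a rough illustration of parallax-based depth, the sketch below estimates per-pixel disparity between two such views with classic block matching. It uses OpenCV's StereoBM rather than Google's own pipeline, and the file names and parameters are placeholders:

```python
import cv2

# Two views of the same scene from slightly shifted viewpoints,
# loaded as 8-bit grayscale (a requirement of StereoBM).
left = cv2.imread("view_left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("view_right.png", cv2.IMREAD_GRAYSCALE)

# Block matching: numDisparities must be a multiple of 16,
# and blockSize must be odd.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)

# StereoBM returns fixed-point disparities scaled by 16.
disparity = stereo.compute(left, right).astype("float32") / 16.0
# Larger disparity means a point shifted more between the two views,
# which indicates it is closer to the camera.
```

In the phone's case the two viewpoints sit on a single tiny sensor, so the measurable shift, and therefore the depth signal, is far weaker than in this two-camera example.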

Image: an animated example of parallax in Google's portrait mode. Source: Google AI Blog

This gave solid results on the Pixel 2, but it wasn't perfect. The two versions of the scene provide only a little information about depth, so problems can occur. Most commonly, the Pixel 2 (and many other phones like it) would fail to accurately separate the foreground from the background.

With the Google Pixel 3 camera, Google includes additional depth cues to inform this blur effect for greater accuracy. As well as parallax, Google uses sharpness as an indicator of depth (distant objects are less sharp than closer ones) along with the identification of real-world objects. For example, the camera can recognize a person's face in a scene and gauge how near or far it is based on its number of pixels relative to the objects around it. Smart.
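
The sharpness cue, at least, is easy to approximate. Below is a minimal sketch of one common proxy, the local variance of the Laplacian; this illustrates the general idea rather than Google's actual measure, and the window size is arbitrary:

```python
import cv2

def sharpness_map(gray, ksize=31):
    # The Laplacian responds strongly at in-focus edges and
    # weakly in defocused (typically more distant) regions.
    lap = cv2.Laplacian(gray.astype("float32"), cv2.CV_32F)
    # Local variance of the Laplacian over a ksize x ksize window.
    mean = cv2.boxFilter(lap, cv2.CV_32F, (ksize, ksize))
    mean_sq = cv2.boxFilter(lap * lap, cv2.CV_32F, (ksize, ksize))
    return mean_sq - mean * mean

gray = cv2.imread("portrait.jpg", cv2.IMREAD_GRAYSCALE)
cue = sharpness_map(gray)  # higher values suggest nearer, in-focus regions
```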

Google then trains its neural network with these new variables to provide a better understanding, or more precisely a better estimate, of depth in the image.
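
In spirit, that setup might look something like the toy model below: a small convolutional network that consumes the RGB image stacked with the extra cue channels and regresses a per-pixel depth map. The architecture, input size, and loss here are illustrative stand-ins, not Google's network:

```python
import tensorflow as tf
from tensorflow.keras import layers

# Input: 3 RGB channels plus 2 cue channels (parallax disparity, sharpness).
inputs = tf.keras.Input(shape=(256, 256, 5))
x = layers.Conv2D(32, 3, strides=2, padding="same", activation="relu")(inputs)
x = layers.Conv2D(64, 3, strides=2, padding="same", activation="relu")(x)
x = layers.Conv2D(64, 3, padding="same", activation="relu")(x)
x = layers.Conv2DTranspose(32, 3, strides=2, padding="same", activation="relu")(x)
x = layers.Conv2DTranspose(16, 3, strides=2, padding="same", activation="relu")(x)
depth = layers.Conv2D(1, 3, padding="same")(x)  # per-pixel depth estimate

model = tf.keras.Model(inputs, depth)
model.compile(optimizer="adam", loss="mae")  # L1 error against reference depth
# model.fit(cue_stacks, reference_depths, ...) once training data is prepared
```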

Image: a bokeh shot of a skull taken in portrait mode on the Google Pixel 3. The Pixel's portrait mode isn't only for human subjects.

What does all this mean?

The result is better portrait mode shots when using the Pixel 3 compared to previous Pixel cameras (and many other Android phones), thanks to more accurate background blur. And, yes, that means fewer strands of hair lost to the blurry background.

There are interesting implications in all of this for chipsets, too. A lot of power is needed to process the data required to create these photos after they're captured (they're based on full-resolution, multi-megapixel PDAF images); the Pixel 3 handles this quite well thanks to a combination of TensorFlow Lite and its GPU.
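
For a sense of what that pairing looks like in code, here is a minimal sketch of running a converted TFLite model with a GPU delegate, falling back to the CPU when one isn't available. The model file and delegate library names are placeholders, and on Android the delegate is normally loaded through the Java/Kotlin API instead:

```python
import numpy as np
import tensorflow as tf

MODEL_PATH = "depth_estimator.tflite"  # hypothetical converted model

try:
    # The GPU delegate offloads the heavy convolutions to the GPU.
    gpu = tf.lite.experimental.load_delegate("libtensorflowlite_gpu_delegate.so")
    interpreter = tf.lite.Interpreter(model_path=MODEL_PATH,
                                      experimental_delegates=[gpu])
except (ValueError, OSError):
    interpreter = tf.lite.Interpreter(model_path=MODEL_PATH)  # CPU fallback

interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Feed a dummy frame with the shape and dtype the model expects.
frame = np.zeros(inp["shape"], dtype=inp["dtype"])
interpreter.set_tensor(inp["index"], frame)
interpreter.invoke()
depth = interpreter.get_tensor(out["index"])
```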

In the future, though, better processing efficiency and dedicated neural chips will widen the possibilities, not only for how quickly these shots are delivered, but for what enhancements developers even choose to integrate.

To find out more about the Pixel 3's camera, hit the link, and give us your thoughts on it in the comments.

