At the Google I/O conference on Wednesday, Google unveiled a new feature bringing computer vision capabilities to various products, starting with Google Assistant and Google Photos later this year.

Called Google Lens, the feature helps you “understand what you’re looking at” and take actions based on that information, CEO Sundar Pichai explained. For instance, you could point your phone at a flower and learn what kind of flower it is. You could point your phone at a restaurant and get contextual information such as its hours of operation. In another example, Pichai said, you could take a picture of a router and, rather than typing in the Wi-Fi password, “we can automatically do the hard work for you.”

As Google strives to integrate artificial intelligence into all of its products, Google Lens illustrates how far computer vision in particular has come. In fact, the image-recognition error rate of computer vision algorithms is now lower than the human error rate.

Pichai said the field is “clearly at an inflection point with vision.”

“The fact computers can understand images and videos has profound implications for our core mission” of organizing the world’s information, he added.