Human beings can instantly recognize the objects in a room: how many there are, what color they are. If anyone is speaking, we understand that, too.
Google's next generation of digital devices will soon process and understand what they're seeing, just as humans do.
On Thursday, San Mateo, Calif.-based chip designer Movidius announced that it is collaborating with Google to speed up the adoption of this type of deep machine learning in mobile devices.
As part of the arrangement, Google will license Movidius' compact, cheap, and low-power chips and, in turn, help Movidius advance its complex neural network technology. Movidius is building its reputation on visual intelligence.
The two companies previously collaborated on Google's Project Tango, which helps phones and tablets understand where they are and how they move through space. At CES earlier this month, Lenovo announced that it would launch the first consumer Project Tango phone this summer.
“Instead of us adapting to computers and understanding their language, computers are becoming more and more intelligent in the sense that they adapt to us,” says Blaise Agüera y Arcas, head of Google’s machine intelligence group in Seattle, in a video outlining the collaboration.
Consumers are already getting a glimpse of what’s possible when devices start to interpret scenes like we do. The Google Photos app, for example, can recognize people and objects in images and automatically tag scenes.
As part of this latest collaboration, Google will license Movidius' MA2450 vision processor. While consumers may start to see next-generation Android devices that take advantage of such machine learning advances relatively soon, specifics on timing remain sketchy.
Email: ebaig@usatoday.com; Follow USA TODAY Personal Technology Columnist @edbaig on Twitter
end quote from:
http://www.msn.com/en-us/news/technology/your-next-android-phone-might-see-like-humans/ar-BBoNPbN