Artificial intelligence front and center at Google’s I/O conference

Compared to 2016, when new consumer devices took the headlines, Google’s I/O 2017 conference in Mountain View, CA was dominated by advances in artificial intelligence (AI) across the company’s spectrum of hardware and software offerings.

Google CEO Sundar Pichai kicked off Wednesday’s developer conference by stating that, as human beings interact with computers in more natural ways, the mantra for Google is no longer “mobile first” but “AI first.”

Google CEO Sundar Pichai opening the I/O 2017 conference on May 17, 2017.

Pichai also said that, moving forward, a priority for the company will be developing technology that intelligently anticipates user needs.

To this end, one of Google’s main areas of development has been what it calls “complete syntactical recognition.” Simply put, this is AI that not only identifies and translates language, but also interchanges sound and imagery to expand the conversation between human and machine.

Here are a few AI highlights from the first day of the conference.

TENSOR PROCESSING UNIT V2

Google announced the second generation of its Tensor Processing Unit (TPU), a cloud-based hardware and software system at the foundation of the company’s AI and machine learning endeavors. The system’s self-learning technology is being used for everything from high-power scientific computations to more accurate facial and object recognition through Google’s search engine.

In March 2016, Google’s AlphaGo program, built on the Tensor system, defeated master Go player Lee Se-dol 4-1 in a five-game match. The program’s ability not only to learn the rules and patterns of Go, but also to adapt well enough to defeat a world-class master, was seen by some as a seminal moment for AI computing.

The software framework used with Google’s TPUs is called TensorFlow, which has been made available to developers as an open-source resource. Google also plans to make the TPU system available as an online computational resource, similar to services such as Amazon Web Services (AWS), which offer web servers and other online resources to customers.
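
To give a sense of what developers actually work with, here is an illustrative sketch, not code shown at the conference, written against the TensorFlow 1.x API that was current at the time. It builds a small computation graph that fits a toy linear model by gradient descent; the graph abstraction is what lets the same program be targeted at different hardware back ends.

    import numpy as np
    import tensorflow as tf  # TensorFlow 1.x style API, current as of I/O 2017

    # Toy training data for the linear relationship y = 2x + 1.
    x_train = np.array([[0.0], [1.0], [2.0], [3.0]], dtype=np.float32)
    y_train = 2.0 * x_train + 1.0

    # Graph definition: placeholders for data, variables for model parameters.
    x = tf.placeholder(tf.float32, shape=[None, 1])
    y_true = tf.placeholder(tf.float32, shape=[None, 1])
    W = tf.Variable(tf.zeros([1, 1]))
    b = tf.Variable(tf.zeros([1]))
    y_pred = tf.matmul(x, W) + b

    # Mean squared error loss and a plain gradient-descent training step.
    loss = tf.reduce_mean(tf.square(y_pred - y_true))
    train_op = tf.train.GradientDescentOptimizer(0.1).minimize(loss)

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        for _ in range(200):
            sess.run(train_op, feed_dict={x: x_train, y_true: y_train})
        print(sess.run([W, b]))  # approaches [[2.0]] and [1.0]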

A server rack containing multiple Tensor Processing Units, which are now used to both train AI systems and help them perform real-time tasks. (PHOTO: Google)

LENS

Due to be released later in the year, Lens appears to be less a specific product than a technology spread across the Google universe. Built on artificial intelligence, Lens breaks down and learns the visual attributes of imagery in order to make useful suggestions to the user. In that sense, imagery and text become contextually interchangeable inputs for the machine to learn from.

ALLO

A first, simple example of Lens is Google’s new smart messaging app, Allo. Combining new language and image recognition, Allo claims to automatically suggest an appropriate response to any instant message, whether text or picture. At Wednesday’s event, the Allo demo recognized a picture of a Labrador puppy within a message and suggested three separate responses: “Cute!”, “Lovely Lab!”, and a heart-shaped smiley-face emoticon.
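
Conceptually, a smart-reply feature pairs a recognizer for the incoming message with a pool of candidate responses. The sketch below is purely illustrative and is not Google’s implementation; classify_image() is a hypothetical stand-in for a trained image-recognition model, hard-coded here to mirror the puppy demo.

    # Illustrative sketch only, not Google's implementation.
    CANNED_REPLIES = {
        "dog": ["Cute!", "Lovely Lab!", "\u2764"],  # responses from the Allo demo
    }

    def classify_image(image_bytes):
        # A real system would run an image-recognition model here and return
        # a label; this stub always answers "dog" to mirror the demo.
        return "dog"

    def suggest_replies(message):
        # Messages are plain dicts with optional "text" and "image" fields.
        if message.get("image"):
            label = classify_image(message["image"])
            return CANNED_REPLIES.get(label, ["Nice!"])
        # For text-only messages a real system would use a language model;
        # this sketch falls back to a generic acknowledgement.
        return ["Sounds good!"]

    print(suggest_replies({"image": b"...bytes of a puppy photo..."}))
    # -> ['Cute!', 'Lovely Lab!', '\u2764']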

Allo also demonstrated the ability to work with Google Assistant to recognize fragments of text and suggest next steps. For instance, when the topic of sushi arose in one text conversation, Allo asked whether the user wanted to look up restaurants, or videos of sushi being made.

Allo can also, manually or automatically, take dates from instant messages and add them to the Google Calendar app.

PHOTOS

Google’s Anil Sabharwal showed an example of Lens technology using Google Photos. In his demo, Lens was immediately able to identify photos of his children based on facial attributes. Though facial recognition is nothing new or exclusive to Google, Photos went a step further and automatically grouped his kids’ photos into a batch to be shared with his wife’s photo library.

In another application, Sabharwal demonstrated what happens when a user snaps a picture of a concert hall marquee. Without the user adding any text, the program not only found information about the venue, but also suggested YouTube videos of the artist listed on the marquee and provided options to buy tickets to the same show, all based on the visual information in the picture.

VIRTUAL & AUGMENTED REALITY

Beyond an expanded list of devices for experiencing and broadcasting 360-degree video, the biggest game changer at Wednesday’s conference may have been the introduction of Google’s Visual Positioning Service (VPS), which also draws on Lens image learning.

Basically, using any future Google-powered phone or camera device, VPS recognizes architecture and common shapes and extracts “keypoints” to build a virtual re-creation of any object or space, indoors or out. Tools like these could be used to create an interactive map of a store (a Lowe’s hardware store was used in Wednesday’s demo) or a museum, tagged with details about items for sale or on display, or even with 3D computer projections in augmented reality.
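
Google has not published how VPS works internally, but the general idea of extracting visual “keypoints” from an image can be sketched with OpenCV’s open-source ORB feature detector as a rough analogue. The filename below is hypothetical.

    # Illustrative analogue only; not Google's VPS.
    import cv2

    image = cv2.imread("store_aisle.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical photo

    # Detect up to 500 distinctive visual landmarks (corners, shelf edges,
    # signage, and so on) and compute a descriptor for each one.
    orb = cv2.ORB_create(nfeatures=500)
    keypoints, descriptors = orb.detectAndCompute(image, None)

    # Matching these descriptors against a previously mapped space is, roughly,
    # how a device can estimate its position indoors where GPS is unreliable.
    print(len(keypoints), "keypoints detected")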

Clay Bavor, Google’s VP for VR and AR development, gave a demonstration of how the technology is used in a classroom environment.

While VR and AR are not exclusive domains of Google, the focus on combining them with automatic machine learning and continuous user input may help advance the company’s original mission statement: “Organize the world’s information and make it universally accessible and useful.”

As machine learning and AI begin to mature, the next question may be: how much longer will programmers be relevant?

Google also announced Wednesday that future generations of its AI supercomputers will, in part, be designed with the help of machine learning running on its current Tensor Processing Units.