From May 18-20, Richard and I attended the Google IO 2016 conference. We visited different tracks and each have our own experiences to share. Here are mine. Read on for topics like VR, Progressive Web Apps, and Artificial Intelligence. For a quick impression, have a look at the photo album.

Google CEO Pichai during the Google IO keynote at the Amphitheatre

A.I. first

In the keynote, Google CEO Sundar Pichai made it very clear that Google's way forward will be largely focused on understanding its users better, all to create a more personalized experience. Pichai called it 'creating a personal Google'. To make this happen, Google has been investing for over 10 years in algorithms that can understand speech. Their Natural Language Processing tools are the best on the market right now, largely because context can be derived from users' conversations and search queries with Google. Google is building a new Assistant with Machine Learning capabilities, so that it gets better at serving the user over time. The software that makes this possible, TensorFlow, is (partly) open source.

At Google they think that machines should focus on things that are hard for humans and easy for machines. The key challenge in machine learning, however, is unsupervised learning: currently, engineers decide for a machine which neural networks to use for a certain task, and Google wants machines to be able to make those decisions themselves. Google will roll out the new Assistant this fall as the successor of Google Now. The Assistant will make its entrance in various products, including a new chat app called 'Allo' and a new product for the home: Google Home.

"We are shifting from Mobile First to A.I. first" - Jeff Dean (Google Brain leader)

Allo

The Assistant promises to be very helpful in chats, whether with your own contacts or with the Assistant itself. You can even consult the Assistant during a chat with a friend, e.g. to quickly find a restaurant to eat at together. The Assistant makes suggestions, and with your answers it learns even more about you, not only when you ask it for advice. The Assistant can even suggest answers to questions from the person you're chatting with. During a chat (whether spoken or written) you can clearly see the magnificence of the Natural Language Processing abilities. E.g. when you perform a search about Real Madrid and then ask 'When is the next game?', you'll get the date of the next game of, indeed, Real Madrid. If you then say 'get me two tickets', the Assistant will suggest two seats and ask if you want to buy the tickets. Et cetera. Google assures us that everything is private and encrypted, but I'm wondering where the ads come in, and whether the restaurants it suggests, or the game ticket web site it orders from, have bought privileges.

Next to Allo, a new video call app called Duo will also be launched. The cool thing about this app is that it shows you live video from the caller before you pick up the call. But yes, yet another app. You can pre-register for both apps in the Play Store; you'll get a notification when they are available for download.

Introduction of Google Home at Google IO 2016

Home

All these smart algorithms will also be available (coming autumn) in your own home, in a sort of Chromecast 2.0. The Assistant works with whole families and recognizes the different voices that say 'OK, Google'. It can work together with different devices in your home, so it can play movies, show appointments and search results, play music, and so on, on different devices. It also comes with quite a decent speaker itself (I hope that's true), so the more Home devices you buy, the better your experience will be in different rooms. If it works as well as promised (and preferably in Dutch), I'll definitely buy at least one!

IoT

And so the bridge is made to IoT devices. With the Home device being your family's HAL 9000, I can see it controlling every device inside your house in the near future. Google Everywhere. During the Speechless Cabaret session there was even a joke about Google being part of Alphabet being part of Universe. Apart from Google Home, Google has a few teams working on IoT devices: one is ATAP, the other Physical Web.

Physical Web

The Physical Web project is all about beacons and Web Bluetooth. Beacons are small devices that broadcast a URL. The Web Bluetooth browser API can scan for these broadcasts and connect to the devices; a phone, for example, can use a browser plus a Bluetooth connection to receive the messages in the form of push notifications. The beacon hardware itself has had quite the update in battery life: the first device lasted 6 months, last year's version 2 years, and the current version can last a decade! The next version in development is said to last forever, thanks to solar power. Typical applications, aside from the obvious (advertising), could be pill bottles, bus stops (the London bus system already uses them!) or clothing that broadcasts a person's web site. I was lucky enough to receive such a T-shirt, which is now broadcasting my LinkedIn profile to other luggage in the plane. More about beacons at g.co/beacons.
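To give an idea of the browser side, here's a minimal sketch of finding and connecting to a nearby beacon with the Web Bluetooth API. The Eddystone service UUID (0xFEAA) is the one assumption baked in; the call has to come from a user gesture on an HTTPS page, and browser support was still experimental at the time of writing.

```javascript
// Sketch: discover a beacon advertising the Eddystone service.
// Must be triggered by a user gesture (e.g. a button click).
function connectToBeacon() {
  navigator.bluetooth.requestDevice({
    filters: [{ services: [0xfeaa] }] // Eddystone service UUID
  })
  .then(device => {
    console.log('Found beacon:', device.name);
    return device.gatt.connect(); // open a GATT connection
  })
  .then(server => console.log('Connected:', server.connected))
  .catch(err => console.error('No beacon found:', err));
}
```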

ATAP (Advanced Technology and Projects)

The ATAP team develops a collection of technologies. There's Jacquard, which makes fabrics interactive. A live demo of a Levi's connected jacket got a little too much 'wooo' from the audience for my taste, but fanboys will be fanboys!

Next there's Project Ara, the modular phone project Google acquired some years back. It looks like they've made a lot of progress towards a real product. Finally. An SDK is coming out at the end of the year, but I'm wondering how much freedom developers will get. I have my doubts, because Google showed partnerships with all kinds of companies they will be building modules with. On the plus side: there will be personalized materials, colors and of course modules available, and modules can be installed without having to deal with drivers or setup wizards. Even detaching a module is as simple as saying "OK Google, eject the camera." Which is pretty awesome!

Soli is the last project they mentioned, and this one looked too cool to be true. Good thing there was a live demo. Soli is radar gesture sensing. The radar hardware is as big as a penny, and it can run on a processor similar to the one in a smartwatch. That is quite a big step, since previously a desktop-sized computer was needed. The radar can process motions at up to 60 fps. Okay, cool. So what does it do? From as far away as 5 meters (about 16 feet) you can make gestures to control devices. Watch this clip to get the idea, or this photo gives a pretty good impression:

Demo of Soli radar technology at Google IO 2016, a Google ATAP project

Also interesting is the push to build a universal gesture library with all interested developers. With the alpha SDK, pretty neat interactions have already been developed, from navigating menus to controlling volume. The beta version of the SDK is coming sometime next year.

Google IO Buzzword: Progressive Web Apps

Nearly every tech talk mentioned Progressive Web Apps. I went to two more detailed talks: one by Jake Archibald, who explained all the concepts and showed step by step how to apply them with JavaScript, and another by Addy Osmani, who dove deeper into how to set things up for various frameworks, one of them being AngularJS. Too bad he explained all the concepts again, but hey, it was interesting enough! And there were a lot of code samples.

First the goal: reduce the time to first paint. (Then reduce the time to first meaningful paint, and lastly the time to first meaningful interaction.) This means the site should be displayed within a second. Not all content has to be there yet, but users should at least immediately see their viewport filled with meaningful content.

There's a 5-step plan for achieving this (sketches for steps 2 through 4 follow below):

  1. Make the web app responsive (duh!)
  2. Create a web app manifest
  3. Add a service worker app shell, e.g. with the npm packages sw-precache and sw-toolbox
  4. Apply content caching
  5. Universal rendering (pre-rendering server-side)
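As an illustration of step 2, a minimal web app manifest could look like the sketch below. The names, colors and icon path are placeholders, not from any of the talks.

```json
{
  "name": "My Progressive Web App",
  "short_name": "MyPWA",
  "start_url": "/",
  "display": "standalone",
  "background_color": "#ffffff",
  "theme_color": "#2196f3",
  "icons": [
    { "src": "/icons/icon-192.png", "sizes": "192x192", "type": "image/png" }
  ]
}
```

You then point every page at it with `<link rel="manifest" href="/manifest.json">`.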

See this guide for how to put this into practice. (You can find more interesting guides over there.)
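And to give an idea of steps 3 and 4, here's a hand-rolled sketch of a service worker that precaches an app shell and serves it cache-first. The file names are placeholder assumptions; in practice sw-precache generates a more complete version of this for you.

```javascript
// sw.js: minimal app-shell service worker (sketch).
const CACHE = 'app-shell-v1';
const SHELL = ['/', '/index.html', '/app.css', '/app.js']; // placeholder files

self.addEventListener('install', event => {
  // Step 3: precache the app shell while the worker installs.
  event.waitUntil(
    caches.open(CACHE).then(cache => cache.addAll(SHELL))
  );
});

self.addEventListener('fetch', event => {
  // Step 4: answer from the cache first, fall back to the network.
  event.respondWith(
    caches.match(event.request).then(hit => hit || fetch(event.request))
  );
});
```

The page registers it with `navigator.serviceWorker.register('/sw.js')`.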

If you apply this concept to an Angular 1 app, you can cut the loading time roughly in half. Still, the quickest solution Addy showed took about 4 seconds on 3G. With Angular 2, however, much more speed s(h)aving is possible: the same app could be loaded in about 400 ms. That would be quicker than a native app. I can't wait to play around with this!

There's an upcoming Google event in Amsterdam, on the 20th and 21st of June, where there will be a great number of talks about Progressive Web Apps.

These measures all contribute to mobile-friendly sites. To make them even more 'Google mobile'-friendly, meaning ranking better in the search index, there are some additional steps you can take. Read on.

AMP - Accelerated Mobile Pages

AMP is a new technique developers can apply to make their web pages load faster and appear nicer and higher in the search results. The goal, according to Richard Gingras, is to "make the web great again." The search results Google presents are becoming richer and richer. E.g. when you search for a particular news item, you'll probably be shown a carousel of cards as the first search result, with rich previews of articles inside. By applying the AMP principles to your own web app, it can end up in there too. If you want to know how your app is doing according to Google's standards, you can consult the Google Search Console.
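To give a feel for what applying AMP involves, here's a stripped-down page skeleton. The URLs are placeholders, and the mandatory amp-boilerplate CSS is left out for brevity; see the AMP project documentation for the full required markup.

```html
<!doctype html>
<html ⚡ lang="en">
<head>
  <meta charset="utf-8">
  <meta name="viewport" content="width=device-width,minimum-scale=1">
  <!-- every AMP page loads the AMP runtime from Google's CDN -->
  <script async src="https://cdn.ampproject.org/v0.js"></script>
  <!-- points back to the regular (non-AMP) version of the article -->
  <link rel="canonical" href="https://example.com/article.html">
  <!-- mandatory amp-boilerplate <style> omitted here for brevity -->
</head>
<body>
  <h1>Article headline</h1>
  <!-- images use the amp-img component instead of a plain <img> tag -->
  <amp-img src="photo.jpg" width="600" height="400" layout="responsive"></amp-img>
</body>
</html>
```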

"It's time to figure out how to tell the best stories." - Richard Gringras (Senior Director of News and Social Products at Google)

V.R.

Last but definitely not least, a hot topic at this year's Google IO was Virtual Reality. I had been wondering whether Google was on the big VR train, because all I knew from Google was the Cardboard and VR-enabled YouTube videos. (What I didn't know: they rebuilt the whole of YouTube for this.) Somewhere along the line I had heard a bit about Project Tango, but I hadn't read up on it much; I was still more excited about the HoloLens. After this Google IO that changed a bit. But not too much.

Tango is really good at augmented reality. Using a tablet with the Tango technology, you can easily measure a space and plot its dimensions. The software can map its surroundings in 3D, so when VR comes into play, you won't bump into your furniture. But so far it isn't coupled with VR all that much. I found out that some major companies like Autodesk are already using the Tango device in day-to-day architectural work with clients, and they have a consumer app available. In that talk, which also featured people from the game industry and a guy from Lowe's, they described the extra possibilities Tango brought for their clients. Lowe's, for example, also has a consumer app for showcasing (and buying) furniture in your own home.

In autumn, Tango-enabled smartphones will roll out, running on Android N, which has a lot of VR support built into the system. The VR environment Google is building is called Daydream. I hope to see Tango coupled with Daydream by then, and game studios releasing VR games you can play in (and with) your own room.


If you have any questions or would like to know more about a certain subject, feel free to post them in the comments! To see some updates I posted during Google IO, visit my Twitter @vanderwise.
