Grails Goodness: Use Random Server Port In Integration Tests

Because Grails 3 is based on Spring Boot, we can use a lot of Spring Boot functionality in our Grails applications. For example, we can start Grails 3 with a random available port number, which is useful in integration testing scenarios. To use a random port we must set the application property server.port to the value 0. If we want to use the random port number in our code, we can access it via the @Value annotation with the expression ${local.server.port}.
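
A minimal sketch of how this could look in an integration test (the spec name is hypothetical, and this assumes server.port is set to 0 for the test environment):

import grails.test.mixin.integration.Integration
import org.springframework.beans.factory.annotation.Value
import spock.lang.Specification

@Integration
class RandomPortSpec extends Specification {

    // Spring Boot exposes the port the embedded server actually
    // started on via the local.server.port property.
    @Value('${local.server.port}')
    Integer serverPort

    void "application runs on a random available port"() {
        expect:
        serverPort > 0
    }
}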

Continue reading

Gradle Goodness: Get Property Value With findProperty

Gradle 2.13 added a new method to get a property value: findProperty. This method returns the property value if the property exists, or null if the property cannot be found. Gradle also has the property method to return a property value, but this method throws an exception if the property is missing. With the new findProperty method and the Groovy elvis operator (?:) we can try to get a property value and return a default value if it is not found.

In the following example we have a task that tries to print the value of the properties sampleOld and sampleNew. We use the findProperty method for sampleNew and the property method for sampleOld:
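
A sketch of such a build file; the task name showProperties and the default value are made up for this example:

task showProperties {
    doLast {
        // findProperty returns null when the property is missing,
        // so the elvis operator can supply a default value.
        println "sampleNew: ${project.findProperty('sampleNew') ?: 'default value'}"
        // property throws an exception when the property is missing.
        println "sampleOld: ${project.property('sampleOld')}"
    }
}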

First we run the task without setting the project properties sampleOld and sampleNew (the output shown assumes the sketch above):
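
$ gradle showProperties
sampleNew: default value

After printing the default for sampleNew, the build fails on property('sampleOld') with an exception about the missing property.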

Next we use the -P command line option to set values for the properties (the values here are just examples):
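
$ gradle -PsampleOld=oldValue -PsampleNew=newValue showProperties
sampleNew: newValue
sampleOld: oldValue

Now both methods resolve the property values, so the elvis operator's default is no longer used.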

Written with Gradle 2.13.

Original blog post

Grails Goodness: Change Version For Dependency Defined By BOM

Since Grails 3 we use Gradle as the build system. This means we also use Gradle to define the dependencies we need. The default Gradle build file that is created when we create a new Grails application contains the Gradle dependency management plugin, applied via the Grails Gradle plugin. With the dependency management plugin we can import a Maven Bill Of Materials (BOM) file, and that is exactly what Grails does: it imports a BOM with Grails dependencies. A lot of the versions of these dependencies can be overridden via Gradle project properties.
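
For example, a hedged sketch in build.gradle; the property name gorm.version and the version shown are assumptions, and the name must match a version property the Grails BOM actually defines:

// Override a dependency version defined by the imported Grails BOM.
// Both the property name and the version are assumptions for this sketch.
ext['gorm.version'] = '5.0.1.RELEASE'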

Continue reading

Experiences at Google IO 2016

From May 18 to 20, Richard and I attended the Google IO 2016 conference. We each visited different tracks and have some different experiences we’d like to share. Here are mine. Read on for topics like VR, Progressive Web Apps, and Artificial Intelligence. For a quick impression, have a look at the photo album.

Google CEO Pichai during the Google IO keynote at the Amphitheatre

A.I. first

It was already made very clear in the keynote by Google CEO Sundar Pichai that Google’s way forward will be largely focused on understanding its users better, all to create a better personalized experience. Pichai called it ‘creating a personal Google’. To make this happen, Google has been investing for over 10 years in creating algorithms that can understand speech. Their Natural Language Processing tools are the best on the market right now, largely because context can be derived from users’ conversations and search queries with Google.

Google is building a new Assistant with Machine Learning capabilities, so it improves at serving the user over time. The software that makes this possible is (partly) open source: it’s called TensorFlow. At Google they think that machines should focus on things that are hard for humans and easy for machines. The key challenge with machine learning, however, is unsupervised learning. Currently engineers decide for a machine which neural networks to use for a certain task; Google wants machines to be able to make those decisions themselves.

Google will roll out the new Assistant this fall as the successor of Google Now. The Assistant will make its entrance in various products, including a new chat app called ‘Allo’ and a new product for the home: Google Home.

“We are shifting from Mobile First to A.I. first” – Jeff Dean (Google Brain leader)

Allo

The Assistant promises to be very helpful in chats, whether with your own contacts or with the Assistant itself. You can even consult the Assistant during a chat with a friend, e.g. to quickly search for a restaurant to eat at together. The Assistant can make suggestions, and with your answers it learns even more about you, not only when you ask it for advice. The Assistant can even suggest answers to questions from the person you’re chatting with. During a chat (whether spoken or written) you can clearly see the magnificence of the Natural Language Processing abilities. E.g. when you perform a search about Real Madrid and then ask ‘When is the next game?’, you’ll get the date of Real Madrid’s next game. If you then query ‘get me two tickets’, the Assistant will suggest two seats and ask if you want to buy the tickets. Etc. etc. Google ensures that everything is private and encrypted, but I’m wondering where the ads come in, and whether the restaurants it suggests, or the game ticket web site it orders from, have bought privileges.

Next to Allo, a new video calling app called Duo will also be launched. A cool thing about this app is that it shows you live video of the caller before you pick up the call. But, yet again, another app. You can pre-register for these two apps in the Play Store; you’ll get a notification when the apps are available for download.

Introduction of Google Home at Google IO 2016

Home

All these smart algorithms will also be available (coming this autumn) in your own home, in a sort of Chromecast 2.0. The Assistant can work with whole families and recognizes the different voices that say ‘OK, Google’. It can work together with different devices in your home, so it can play back movies, show appointments and search results, play music, and so on, on different devices. It also comes with quite a decent speaker itself (I hope that’s true), so the more Home devices you buy, the better your experience will be in different rooms. If it works as well as foretold (and preferably in Dutch), I’ll definitely buy at least one!

IoT

And so the bridge is made to IoT devices. With the Home device being your family’s HAL 9000, I can see it controlling every device inside your house in the near future. Google Everywhere. During the Speechless Cabaret session there was even a joke about Google being part of Alphabet being part of Universe. Apart from Google Home, Google has a few teams working on IoT devices. One is ATAP, the other Physical Web.

Physical Web

The Physical Web project is all about Beacons and Web Bluetooth. Beacons are small devices that can broadcast a URL. The HTML5 Web Bluetooth API can scan for these broadcasts and connect to these devices. A phone, for example, can use a browser plus a Bluetooth connection to receive the messages in the form of push notifications. The Beacon device itself has had quite the update in battery life: the first device could last 6 months, last year’s version 2 years, and the current version can last a decade! The next version that will be developed is said to last forever by using solar power. Typical applications could be (aside from the obvious: advertising) pill bottles, bus stops (the London bus system already makes use of it!) or clothing that broadcasts a person’s web site. I was lucky enough to receive such a T-shirt, which is now broadcasting my LinkedIn profile to other luggage in the plane. More about Beacons at g.co/beacons.

ATAP (Advanced Technologies And Products)

The ATAP team develops a collection of technologies. There’s Jacquard, which makes fabrics interactive. A live demo of a Levi’s connected jacket got a little too much ‘wooo’ from the audience for my taste, but fanboys will be fanboys!

Next there’s Project Ara, the modular phone project Google acquired some years back. It looks like they have made a lot of progress towards a real product. Finally. An SDK will come out at the end of the year, but I’m wondering how much freedom developers will have. I’m in doubt, because Google showed partnerships with all kinds of companies they will be building modules with. On the plus side: there will be personal materials, colors and of course modules available. And the modules can be installed without having to deal with drivers or setup wizards. Even removing a module is as simple as saying “OK Google, eject the camera.” Which is pretty awesome!

Soli is the last project mentioned, and this one looked too cool to be true. Good thing there was a live demo. Soli is radar gesture sensing. The radar hardware is as big as a penny and can use a processor similar to the one in a smartwatch. This is quite a big step, since previously a desktop-sized computer was needed. The radar can process motions at up to 60 fps. Okay, cool, so what does it do? From as far away as 5 meters (15 feet) you can make gestures to control devices. Watch this clip to get the idea. Or this photo gives a pretty good impression:

Demo of Soli radar technology at Google IO 2016, a Google ATAP project

Also interesting is the push to build a universal gesture library with all interested developers. So far pretty neat interactions have been developed with the alpha SDK, from interacting with menus to controlling volume. The beta version of the SDK is coming somewhere next year.

Google IO Buzzword: Progressive Web Apps

Nearly every tech talk mentioned Progressive Web Apps. I went to two more detailed talks. One was from Jake Archibald, who explained all the concepts and showed step by step how to apply them with JavaScript. The other was from Addy Osmani, who dove deeper into how to set things up for various frameworks, one of them being AngularJS. Too bad he went about explaining all the concepts again, but hey, it was interesting enough! And there were a lot of code samples.

First the goal: reduce time to first paint. (Then reduce time to first meaningful paint. And lastly, reduce time to first meaningful interaction.) This means the site should be displayed within a second. Not all content has to be there, but users should at least immediately see their viewport filled with meaningful content.

There’s a 5 step plan on how to achieve this:

See this guide on how to put this into practice. (You can find more interesting guides over there.)

If you apply this concept to an Angular 1 app, you can cut the loading time by about half. Still, the quickest solution Addy showed was about 4 seconds on 3G. With Angular 2, however, much more speed s(h)aving is possible: the same app could be loaded in 400 ms. That would be quicker than a native app. I can’t wait to play around with this!

There’s an upcoming Google Event in Amsterdam on the 20th and 21st of June, where there will be a great number of talks about Progressive Web Apps.

These measures all contribute to mobile-friendly sites. To make them even more ‘Google mobile’ friendly, meaning ranked better in the search index, there are some additional steps you can take. Read on.

AMP – Accelerated Mobile Pages

AMP is a new feature developers can apply to make their web apps appear nicer and higher in the search results, and to make the app load faster. The goal, according to Search team leader Richard Gingras, is to “make the web great again”. The search results Google presents are becoming richer and richer. E.g. when you search for a particular news item, you’ll probably be shown a carousel of cards as the first search result, containing rich previews of articles. By applying the AMP principles to your own web app, it can end up in there too. If you want to know how your app is doing according to Google’s standards, you can consult the Google Search Console.

“It’s time to figure out how to tell the best stories.” – Richard Gingras (Senior Director of News and Social Products at Google)

V.R.

Last but definitely not least, a hot topic at this year’s Google IO was Virtual Reality. I was wondering whether Google was on the big VR train, because all I knew from Google was the Cardboard and VR-enabled YouTube videos. (What I didn’t know: they rebuilt the whole of YouTube for this.) Somewhere along the line I heard a bit about Project Tango, but I didn’t read up on it too much; I was still more excited about the HoloLens. After this Google IO that changed a bit, but not too much. Tango is really good at augmented reality. Using a tablet with the Tango technology, you can easily measure up a space and plot the dimensions. The software can map its surroundings in 3D, so when VR comes into play, you won’t bump into your furniture.
So far, though, it is not coupled with VR that much. I found out that some major companies like Autodesk are already using the Tango device in day-to-day architectural work with clients, and they have a consumer app available. In that talk (which also featured people from the game industry and a guy from Lowe’s) they described the extra possibilities Tango brought for their clients. Lowe’s, for example, also has a consumer app for showcasing (and buying) furniture in your own home.
This autumn, Tango-enabled smartphones will roll out. They will be running Android N, which has a lot of support for VR built into the system. The VR environment Google is building is called Daydream. I hope to see Tango coupled with Daydream by then, and game studios releasing VR games you can play in (and with) your own room.


If you have any questions or would like to know more about a certain subject, feel free to post them in the comments!
To see some updates I posted during Google IO, visit my Twitter @vanderwise.

Mission to Mars follow up

Last week I presented my talk ‘MISSION TO MARS: EXPLORING NEW WORLDS WITH AWS IOT’ at IoT Tech Day 2016 and it was great fun! In the presentation I showed how to build a small robot and control it over MQTT messaging via Amazon’s IoT platform. The room was packed and the demo went well too.

Mission to Mars presentation

I promised to share some info about it on my blog so here we are. I’ve composed a shopping list and a collection of useful links:
Mission to Mars – Shopping list
Mission to Mars – Useful links

The original presentation is available here:
Mission_to_Mars-Jeroen_Resoort-IoT_Tech_Day.pdf

So what’s next? I should publish my Pi robot and Mission Control Center web client code on GitHub. Maybe I’ll extend the Python code for controlling the mBot over a serial connection and make a proper library for it. Will keep you updated…

Grasping AngularJS 1.5 directive bindings by learning from Angular 2

In AngularJS 1.5 we can use attribute binding to allow easy use of input-only, output-only and two-way attributes for a directive or component.

Instead of manually parsing, watching and modifying attribute values through code, we can simply specify an attribute binding by adding a property to the object hash of a directive’s bindToController (or scope) option, or a component’s bindings option.

In this blog post we will learn how attribute bindings differ between AngularJS 1.5 and Angular 2, and what we can learn from Angular 2 to make our HTML and JavaScript in AngularJS 1.5 more descriptive.
Continue reading

Gradle Goodness: Source Sets As IntelliJ IDEA Modules

IntelliJ IDEA 2016.1 introduced better support for Gradle source sets. Each source set in our project becomes a module in the IntelliJ IDEA project, and each module has its own dependencies, including dependencies between source sets. For example, if we simply apply the java plugin in our project we already get two source sets: main and test. For compiling the test sources there is a dependency on the main source set. IntelliJ IDEA now knows how to handle this.
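
As a sketch, with a hypothetical extra source set, IDEA 2016.1 would import main, test and integrationTest each as a separate module:

apply plugin: 'java'

sourceSets {
    integrationTest {
        // This source set compiles and runs against the classes
        // of the main source set.
        compileClasspath += sourceSets.main.output
        runtimeClasspath += sourceSets.main.output
    }
}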

Continue reading

Gradle Goodness: Add Spring Facet To IntelliJ IDEA Module

To create IntelliJ IDEA project files with Gradle we first need to apply the idea plugin. We can then further customise the created files. In this blog post we will add a Spring facet to the generated module file. By adding the Spring facet IntelliJ IDEA can automatically search for Spring configuration files. We can then use the Spring view to see which beans are configured in our project.
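
A sketch of what that customisation could look like, using the iml hook of the idea plugin; the exact facet XML shown is an assumption, and IDEA’s Spring facet configuration may need more detail:

apply plugin: 'idea'

idea {
    module {
        iml {
            withXml { provider ->
                def module = provider.asNode()
                // Find or create the FacetManager component and add
                // a Spring facet to the generated module file.
                def facetManager = module.component.find { it.'@name' == 'FacetManager' } ?:
                        module.appendNode('component', [name: 'FacetManager'])
                facetManager.appendNode('facet', [type: 'Spring', name: 'Spring'])
                        .appendNode('configuration')
            }
        }
    }
}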

Continue reading

Gradle Goodness: Set VCS For IntelliJ IDEA In Build File

When we use the IDEA plugin in Gradle we can generate IntelliJ IDEA project files. We can customise the generated files in different ways. One of them is using a simple DSL to configure certain parts of the project file. With the DSL it is easy to set the version control system (VCS) used in our project.

In the next example build file we customise the generated IDEA project file and set Git as the version control system. The vcs property is still incubating, but we can already use it to get a proper configuration.
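
A minimal sketch of such a build file:

apply plugin: 'idea'

idea {
    project {
        // Incubating property to set the version control system
        // for the generated IntelliJ IDEA project.
        vcs = 'Git'
    }
}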

Written with Gradle 2.12 and IntelliJ IDEA 15.

Original blog post

Gradle Goodness: Configure IntelliJ IDEA To Use Gradle As Testrunner

When we run tests in IntelliJ IDEA the code is compiled by IntelliJ IDEA and the JUnit test runner is used. We get a nice graphical overview of the tests that are executed and their results. If we use Gradle as the build tool for our project, we can tell IntelliJ IDEA to always use Gradle for running tests.

Continue reading