Posts by Hubert Klein Ikkink

Spocklight: Undo Changes in Java System Properties

Posted on by  
Hubert Klein Ikkink

If we add a new Java system property or change the value of an existing one inside our specification, the change is kept as long as the JVM is running. We can make sure that changes to Java system properties are undone after a feature method has run. Spock offers the RestoreSystemProperties extension, which saves the current Java system properties before a method runs and restores those values after the method has finished. We use the extension with the @RestoreSystemProperties annotation, which can be applied at the specification level or per feature method.

In the following example we see that changes to the Java system properties in the first method are undone again in the second method:
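A minimal sketch of what such a specification could look like (the property name and values are made up for this illustration):

import spock.lang.Specification
import spock.util.environment.RestoreSystemProperties

@RestoreSystemProperties
class SystemPropertiesSpec extends Specification {

    def "change a Java system property inside a feature method"() {
        when: 'we set a (hypothetical) system property'
        System.setProperty('app.language', 'Groovy')

        then:
        System.getProperty('app.language') == 'Groovy'
    }

    def "the change is undone before the next feature method runs"() {
        expect: 'the property set in the previous method is gone again'
        !System.getProperty('app.language')
    }
}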

Continue reading →

Spocklight: Auto Cleanup Resources

Posted on by  
Hubert Klein Ikkink

Spock has a lot of nice extensions we can use in our specifications. The AutoCleanup extension makes sure the close() method of an object is called each time a feature method is finished. We could also invoke the close() method from the cleanup method in our specification, but the @AutoCleanup annotation is easier and immediately shows our intention. If the object we apply the annotation to doesn't have a close() method to invoke, we can specify the method name as the value for the annotation. Finally, we can set the attribute quiet to true if we don't want to see any exceptions that are raised when the close() method (or the custom method we specified) is invoked.

In the following example code we have a specification that is testing the WatchService implementation. The implementation also implements the Closeable interface, which means we can use the close() method to clean up the object properly. We also have a custom class WorkDir with a delete() method that needs to be invoked.
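A minimal sketch of such a specification could look like this (the WorkDir class is a made-up helper, just enough to show the custom clean-up method):

import spock.lang.AutoCleanup
import spock.lang.Specification

import java.nio.file.FileSystems
import java.nio.file.WatchService

class AutoCleanupSpec extends Specification {

    // close() is invoked automatically after each feature method.
    @AutoCleanup
    WatchService watchService = FileSystems.default.newWatchService()

    // WorkDir has no close() method, so we tell Spock to call delete() instead.
    // With quiet = true any exception thrown by delete() is silently ignored.
    @AutoCleanup(value = 'delete', quiet = true)
    WorkDir workDir = new WorkDir('build/testwork')

    def "watch service and work dir are available in a feature method"() {
        expect:
        watchService
        workDir.dir.exists()
    }

    // Hypothetical helper class to make the sketch self-contained.
    static class WorkDir {
        final File dir

        WorkDir(String path) {
            dir = new File(path)
            dir.mkdirs()
        }

        void delete() {
            dir.deleteDir()
        }
    }
}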

Continue reading →

Spocklight: Optimize Run Order Test Methods

Posted on by  
Hubert Klein Ikkink

Spock is able to change the execution order of test methods in a specification. We can tell Spock to re-run failing methods before successful methods. And if we have multiple failing or successful methods, the fastest methods run first, followed by the slower ones. This way, when we re-run the specification, we immediately see the failing methods and can stop the execution to fix the errors. We must set the property optimizeRunOrder in the runner configuration of the Spock configuration file. A Spock configuration file with the name SpockConfig.groovy can be placed on the classpath of our test execution or in the USER_HOME/.spock directory. We can also use the Java system property spock.configuration and assign it the filename of our Spock configuration file.
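A minimal sketch of such a Spock configuration file could look like this:

// SpockConfig.groovy
runner {
    // Re-run failing feature methods before successful ones, fastest first.
    optimizeRunOrder true
}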

In the following example we have a specification with different methods that can be successful or fail and have different durations when executed:

Continue reading →

Gradle Goodness: Quickly Open Test Report in IntelliJ IDEA

Posted on by  
Hubert Klein Ikkink

When we execute a Gradle test task in IntelliJ IDEA, the IDE test runner is used. This means we get a nice overview of all tests that have run: a successful test gets a green light, an error is shown in red and a failure in orange. Gradle also generates a nice HTML report when we run the test task; it is placed in the directory build/reports/tests/. IntelliJ IDEA has a button that opens this report when we click it. The following screenshot shows the button with the tooltip that opens a Gradle test report:

Continue reading →

Spocklight: Include or Exclude Specifications Based On Class or Interface

Posted on by  
Hubert Klein Ikkink

In a previous post we saw how we can use the Spock configuration file to include or exclude specifications based on annotations. Instead of using annotations we can also use classes to include or exclude specifications. For example, we could have a base specification class DatabaseSpecification; other specifications dealing with databases extend this class. To include all these specifications we use DatabaseSpecification as the value for the include property of the test runner configuration.

Because Java (and Groovy) doesn't support real multiple inheritance, this can be a problem if our specifications already extend a base class that cannot be used as a filter in the include and exclude runner configuration. Luckily we can also use an interface as the value for the inclusion or exclusion. So we can simply create a marker interface and implement it in the specifications we want to include in or exclude from the test execution, as shown in the sketch below.
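A minimal sketch of such a configuration file could look like this (the class, interface and package names are made up for this illustration):

// SpockConfig.groovy
import com.mrhaki.spock.DatabaseSpecification  // hypothetical base specification class
import com.mrhaki.spock.RemoteSpec             // hypothetical marker interface

runner {
    // Run every specification that extends DatabaseSpecification...
    include DatabaseSpecification
    // ...but skip every specification that implements the RemoteSpec marker interface.
    exclude RemoteSpec
}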

Continue reading →

Spocklight: Including or Excluding Specifications Based On Annotations

Posted on by  
Hubert Klein Ikkink

One of the lesser-known and sparsely documented features of Spock is the external Spock configuration file. In this file we can, for example, specify which specifications to include in or exclude from a test run. We can specify a class name (for example a base specification class, like DatabaseSpec) or an annotation. In this post we see how to use annotations to run some specifications and skip others.

The external Spock configuration file is actually a Groovy script file. We must specify a runner method with a closure argument in which we basically configure the test runner. To include specification classes or methods with a certain annotation applied to them, we configure the include property of the test runner. To exclude a class or method we use the exclude property. Because the configuration file is a Groovy script, we can use everything Groovy has to offer, like conditional statements, println statements and more.
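A minimal sketch of such a configuration file could look like this (the Remote annotation is a made-up marker annotation that we apply to specification classes or feature methods):

// SpockConfig.groovy
import com.mrhaki.spock.Remote  // hypothetical marker annotation

runner {
    // Only run specifications or feature methods annotated with @Remote.
    include Remote

    // Or, to skip them instead:
    // exclude Remote
}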

Continue reading →

Show Logback Configuration Status with Groovy Configuration

Posted on by  
Hubert Klein Ikkink

Logback is an SLF4J API implementation for logging messages in Java and Groovy. We can configure Logback with a Groovy configuration file. The file is a Groovy script and allows for a nice and clean way (no XML) to configure Logback. If we want to see how Logback is configured, we must add a StatusListener implementation to our configuration. The StatusListener implementation prints out the configuration when our application starts. Logback provides status listeners for outputting the information to the standard output or standard error streams (OnConsoleStatusListener and OnErrorConsoleStatusListener). If we want to disable any status messages we use the NopStatusListener class.

In the following example configuration file we define the status listener with the statusListener method. We use the OnConsoleStatusListener in our example:
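A minimal sketch of such a logback.groovy file could look like this (the appender name and pattern are just for illustration):

import ch.qos.logback.classic.encoder.PatternLayoutEncoder
import ch.qos.logback.core.ConsoleAppender
import ch.qos.logback.core.status.OnConsoleStatusListener

// Print Logback's own configuration status messages to standard out at startup.
statusListener(OnConsoleStatusListener)

appender("STDOUT", ConsoleAppender) {
    encoder(PatternLayoutEncoder) {
        pattern = "%-5level %logger{36} - %msg%n"
    }
}

root(INFO, ["STDOUT"])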

Continue reading →

Redirect Logging Output to Standard Error with Logback

Posted on by  
Hubert Klein Ikkink

When we use Logback as the SLF4J API implementation in our code, we can have our logging output sent to the console. By default the standard output stream is used to display the logging output. We can alter the configuration and have the console logging output sent to standard error instead. This can be useful when we use a framework that uses standard output for communication and we still want to see the logging from our application on the console. We set the target property of the ConsoleAppender to the value System.err instead of the default System.out. The following sample Logback configuration in Groovy sends the logging output for the console to standard error:

appender("SystemErr", ConsoleAppender) {
    // Enable coloured output.
    withJansi = true

    encoder(PatternLayoutEncoder) {
        pattern = "%blue(%-5level) %green(%logger{35}) - %msg %n"
    }

    // Redirect output to the System.err.
    target = 'System.err'

}

root(DEBUG, ["SystemErr"])

Continue reading →

Gradle Goodness: Using Continuous Build Feature

Posted on by  
Hubert Klein Ikkink

Gradle introduced the continuous build feature in version 2.5. The feature is still incubating, but we can already use it in our daily development. With continuous build, Gradle does not shut down after a task is finished, but keeps running, watches for file changes and re-runs tasks automatically. This fits perfectly with a scenario where we want to re-run the test task while we write our code. With the continuous build feature we start Gradle once with the test task, and Gradle will automatically recompile source files and run the tests when a source file changes.

To use the continuous build feature we must use the command line option --continuous or the shorter version -t. With this option Gradle will start up in continuous mode. To stop Gradle we must use the Ctrl+D key combination.

Continue reading →

NextBuild 2015 Conference Report

Posted on by  
Hubert Klein Ikkink

Saturday May 30th was the first NextBuild developers' conference in Eindhoven at the High Tech Campus. The conference is free for attendees and offers a variety of subjects presented by fellow developers. This meant all talks were very practical and covered subjects encountered in real projects, which for me adds real value to attending a talk. Although it was on a weekend day, about 150 attendees were present. The location was very nice and allowed for an informal atmosphere with a lot of opportunities to catch up.

The day started with a keynote talk by Alex Sinner of Amazon. He looked into microservices and explained the features of AWS, especially the container support and the new AWS Lambda service. With AWS Lambda we can deploy functions that are executed on the Amazon infrastructure, and we only pay when such a function is actually executed. After the keynote the conference was split into five parallel tracks, so sometimes it was difficult to choose a talk. I went to the talk by my JDriven colleague Rob Brinkman about a Westy tracking platform he built with Vert.x, Groovy, AngularJS, Redis, Docker and Gradle. For those who don't know: a Westy is a Volkswagen van used for camping trips. Rob has built a platform where a (cheap) tracker unit sends the location of the Westy to a Vert.x module. This is all combined with other trip details and information in a web application. Everything works with push events, and the information is updated in real time in the web application. The talk was very interesting and really shows the power and elegance of Vert.x. The architecture also provides something like a blueprint for Internet of Things (IoT) applications.

Continue reading →

Groovy Goodness: Share Data in Concurrent Environment with Dataflow Variables

Posted on by  
Hubert Klein Ikkink

Working with data in a concurrent environment can be complex. Groovy includes GPars (so we don't have to download any dependencies) to provide some models to work easily with data in a concurrent environment. In this blog post we are going to look at an example where we use dataflow variables to exchange data between concurrent tasks. In a dataflow algorithm we define certain functions or tasks that have an input and an output. A task is started when its input is available. So instead of defining an imperative sequence of tasks that need to be executed, we define a series of tasks that will start executing when their input is available. And the nice thing is that each of these tasks is independent and can run in parallel if needed.
The data that is shared between tasks is stored in dataflow variables. The value of a dataflow variable can only be set once, but it can be read multiple times. When a task wants to read the value, but it is not yet available, the task will wait for the value in a non-blocking way.

In the following example Groovy script we use the Dataflows class. This class provides an easy way to set multiple dataflow variables and get their values. In the script we want to get the temperature in a city in both Celsius and Fahrenheit, and we use remote web services to get the data:
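A minimal, self-contained sketch of this idea (without the remote web service calls, so the Celsius value is simply hard-coded; GPars ships with the Groovy distribution):

import groovyx.gpars.dataflow.Dataflows
import static groovyx.gpars.dataflow.Dataflow.task

final df = new Dataflows()

// Calculates the Fahrenheit value as soon as the Celsius value is available.
task {
    df.fahrenheit = df.celsius * 9 / 5 + 32
}

// Provides the input; in the real example this value comes from a remote web service.
task {
    df.celsius = 21
}

// Reading the dataflow variables here blocks until the tasks have set them.
println "Temperature: ${df.celsius} Celsius is ${df.fahrenheit} Fahrenheit"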

Continue reading →

Grails Goodness: Custom Data Binding with @DataBinding Annotation

Posted on by  
Hubert Klein Ikkink

Grails has a data binding mechanism that converts request parameters to properties of an object of different types. We can customize the default data binding in different ways; one of them is using the @DataBinding annotation. We use a closure as the argument for the annotation, in which we must return the converted value. We get two arguments: the first is the object the data binding is applied to and the second is the source with all original values, of type SimpleMapDataBindingSource. The source could for example be a map-like structure or the parameters of a request object.

In the next example code we have a Product class with a ProductId class. We write a custom data binding to convert the String value with the pattern {code}-{identifier} to a ProductId object:

Continue reading →
