I am frequently asked by colleagues for advice on how to be a good Scrum Master. I will discuss some of the tips I share in a couple of blog posts.
First of all, I'd like to state that I believe it's best to have a Scrum Master who is able to get their hands dirty in the activities of the team (i.e. coding, analyzing, designing, testing, etc.). It enables them to engage and coach at more levels than just the overall process.
In my opinion one of the most important things a Scrum Master has to do is make things transparent for the whole team.
Now this seems like very simple advice, and it is. However, when you are in the middle of a sprint and all kinds of (potential) impediments make successfully reaching the sprint goal harder and harder, the danger of losing transparency is always lurking.
Here are three practical tips: Continue reading
NOTE: Just released version 0.1.1 of ngImprovedTesting containing a fix for reported issue #1 as well as support for testing directives with mocked dependencies.
Being able to easily test your application is one of the most powerful features that AngularJS offers. All the services, controllers, filters, and even directives you develop can be fully (unit) tested.
However the learning curve for writing (proper) unit tests tends to be quite steep.
This is mainly because AngularJS doesn't really offer any high-level APIs to ease unit testing. Instead you are forced to use the same (low-level) services that AngularJS uses internally. That means you have to gain in-depth knowledge about the internals of $controller, when to $digest, and how to use $provide in order to mock these services. Especially mocking out a dependency of a controller, filter, or another service is too cumbersome.
This blog will show how you would normally create mocks in AngularJS, why it's troublesome, and finally introduces the new ngImprovedTesting library that makes mock testing much easier. Continue reading
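As a sketch of the "classic" mocking approach the post refers to, replacing a service dependency of a controller with $provide could look like the following. All names here (myApp, MyController, userService) are hypothetical, and the snippet assumes angular-mocks with Jasmine:

```javascript
describe('MyController', function () {
    var $scope;

    // Load the (hypothetical) module and override one of its services.
    beforeEach(module('myApp', function ($provide) {
        // Replace the real userService with a spy object before injection.
        $provide.value('userService', {
            getUser: jasmine.createSpy('getUser').and.returnValue({name: 'John'})
        });
    }));

    beforeEach(inject(function ($rootScope, $controller) {
        $scope = $rootScope.$new();
        // Instantiate the controller with the low-level $controller service.
        $controller('MyController', {$scope: $scope});
    }));

    it('puts the user from the mocked service on the scope', function () {
        expect($scope.user.name).toBe('John');
    });
});
```

This illustrates why the excerpt calls the approach cumbersome: every mocked dependency requires manual $provide plumbing inside the module configuration block.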
Sometimes we want our RESTful API to support different levels of detail in the output for the same resource. For example a default output with the basic fields and a more detailed output with all fields for a resource. The client of our API can then choose whether the default or detailed output is needed. One of the ways to implement this in Grails is to use named configurations with converters.
Grails converters, like XML, support named configurations. First we need to register a named configuration with the converter. Then we can invoke the use method of the converter with the name of the configuration and a closure with statements to generate output. The code in the closure is executed in the context of the named configuration.
The default renderers in Grails, for example DefaultJsonRenderer, have a property namedConfiguration. If the property is set, the renderer will render the output in the context of the configured named configuration. Let's configure the appropriate renderers and register named configurations so the default renderers can use them.
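As a minimal sketch of the two steps, assuming a hypothetical Book domain class: we register a named configuration for the JSON converter and then point a DefaultJsonRenderer bean at it:

```groovy
// grails-app/conf/BootStrap.groovy (sketch)
import grails.converters.JSON

class BootStrap {
    def init = { servletContext ->
        // Register a named configuration 'details' that renders all fields
        // of the (hypothetical) Book domain class.
        JSON.createNamedConfig('details') { config ->
            config.registerObjectMarshaller(Book) { Book book ->
                [id: book.id, title: book.title, author: book.author, isbn: book.isbn]
            }
        }
    }
}

// grails-app/conf/spring/resources.groovy (sketch)
import grails.rest.render.json.DefaultJsonRenderer

beans = {
    // Renderer for Book that uses the 'details' named configuration.
    bookDetailsRenderer(DefaultJsonRenderer, Book) {
        namedConfiguration = 'details'
    }
}
```

With this in place a render of a Book as JSON through this renderer runs inside JSON.use('details'), so the registered marshaller is applied.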
In Grails we can apply the @Resource AST (Abstract Syntax Tree) annotation to a domain class. Grails will then generate a complete new controller, which by default extends grails.rest.RestfulController. We can also use our own controller class that will be extended by the @Resource transformation. For example we might want to disable the delete action, but still want to use the @Resource transformation. We simply write a new RestfulController implementation and use the superClass attribute of the annotation to assign our custom controller as the value.
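A minimal sketch of this could look as follows; the controller and domain class names are hypothetical:

```groovy
import grails.rest.Resource
import grails.rest.RestfulController

// Custom base controller that disables the delete action.
class NonDeletableRestfulController<T> extends RestfulController<T> {
    NonDeletableRestfulController(Class<T> resource) {
        super(resource)
    }

    NonDeletableRestfulController(Class<T> resource, boolean readOnly) {
        super(resource, readOnly)
    }

    @Override
    def delete() {
        // Respond with 405 Method Not Allowed instead of deleting.
        render status: 405
    }
}

// The generated controller for Book now extends our custom class.
@Resource(uri = '/books', superClass = NonDeletableRestfulController)
class Book {
    String title
}
```

All other actions (save, show, update) are still inherited from RestfulController; only delete behaves differently.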
We can write a RESTful application with Grails and define our API in different ways. One of them is to subclass RestfulController. RestfulController already contains a lot of useful methods to work with resources. For example all CRUD methods (save/show/update/delete) are available and are mapped to the correct HTTP verbs using a URL mapping with the resources attribute.
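A short sketch of this setup, using a hypothetical Book domain class and BookController:

```groovy
import grails.rest.RestfulController

// Subclass RestfulController to inherit the CRUD actions for Book.
class BookController extends RestfulController<Book> {
    BookController() {
        super(Book)
    }
}

// grails-app/conf/UrlMappings.groovy (sketch)
class UrlMappings {
    static mappings = {
        // The resources attribute maps GET/POST/PUT/DELETE requests
        // on /books to the matching CRUD actions of BookController.
        "/books"(resources: 'book')
    }
}
```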
Especially when we use Grails to create a RESTful API we want to enable the request header Accept, so Grails can do content negotiation based on the value in the header. For example we could use the request header Accept: application/json to get a JSON response. Grails will look at the boolean configuration property grails.mime.use.accept.header to see if the Accept header must be parsed. The default value is true, so the Accept header is used. But there is another property which determines if the Accept header is used: grails.mime.disable.accept.header.userAgents. The value must contain a list or a regular expression pattern of user agents for which the Accept header is ignored. The default value is ~/(Gecko(?i)|WebKit(?i)|Presto(?i)|Trident(?i))/. So for any request from these user agents, mostly our web browsers, the Accept header is ignored.
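A sketch of the two configuration properties, here overriding the user agent exclusion so browsers also get content negotiation:

```groovy
// grails-app/conf/Config.groovy (sketch)

// Parse the Accept request header for content negotiation (default: true).
grails.mime.use.accept.header = true

// Ignore the Accept header for no user agents at all, instead of the
// default Gecko/WebKit/Presto/Trident browser pattern.
grails.mime.disable.accept.header.userAgents = []
```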
We can use data pipes to write data driven tests in Spock. A data pipe (<<) is fed by a data provider. We can use Collection objects as data providers, but also String objects and any class that implements the Iterable interface. We can write our own data provider class if we implement the Iterable interface.
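A small sketch of a custom data provider, using a hypothetical EvenNumbers class that implements Iterable:

```groovy
import spock.lang.Specification

// Hypothetical data provider: iterates over the even numbers up to max.
class EvenNumbers implements Iterable<Integer> {
    final int max

    EvenNumbers(int max) { this.max = max }

    Iterator<Integer> iterator() {
        (0..max).step(2).iterator()
    }
}

class DataPipeSpec extends Specification {

    def "value is even"() {
        expect:
        value % 2 == 0

        where:
        // The data pipe is fed by our Iterable implementation.
        value << new EvenNumbers(10)
    }
}
```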
We can write data driven tests with Spock. We can specify for example a data table or data pipes in a where: block. If we use a data pipe we can specify a data provider that will return the values that are used on each iteration. If our data provider returns multiple results for each row we can assign them immediately to multiple variables. We must use the syntax [var1, var2, var3] << providerImpl to assign values to the data variables var1, var2 and var3. We know from Groovy the multiple assignment syntax with parentheses ((var1, var2, var3)), but with Spock we use square brackets.
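A minimal sketch of this syntax, with a list of rows as the data provider:

```groovy
import spock.lang.Specification

class MultipleDataVariablesSpec extends Specification {

    def "sum of two numbers"() {
        expect:
        a + b == sum

        where:
        // Each row of the provider fills a, b and sum at once;
        // note the square brackets instead of Groovy's parentheses.
        [a, b, sum] << [[1, 2, 3], [4, 5, 9], [10, 20, 30]]
    }
}
```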
We can use the @Ignore annotation in our Spock specifications to not run the annotated specifications or features. With the @IgnoreIf annotation we can specify a condition that needs to evaluate to true to not run the feature or specification. The argument of the annotation is a closure. Inside the closure we can access three extra variables: properties (Java system properties), env (environment variables) and javaVersion (the Java version).
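A short sketch using two of these closure variables; the CI environment variable name is just an example:

```groovy
import spock.lang.IgnoreIf
import spock.lang.Specification

class IgnoreConditionsSpec extends Specification {

    // Skip this feature when running on Windows,
    // using the system properties available in the closure.
    @IgnoreIf({ properties['os.name'].toLowerCase().contains('windows') })
    def "feature only runs on non-Windows systems"() {
        expect:
        true
    }

    // Skip this feature when the (hypothetical) CI environment variable is set.
    @IgnoreIf({ env.CI })
    def "feature does not run on a CI server"() {
        expect:
        true
    }
}
```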
Spock’s unroll feature is very powerful. The data provider variables can be used in the method description of our specification features with placeholders. For each iteration the placeholders are replaced with the correct values. This way we get a nice output where we can immediately see the values that were used to run the code. Placeholders are denoted by a hash sign (#) followed by the variable name. We can even invoke no-argument methods on the variable values or access properties. For example if we have a String value we could get the upper case value with #variableName.toUpperCase(). If we want to use more complex expressions we must introduce a new data variable in the where block. The value of the variable is determined for each test invocation and we can use the result as a value in the method description.
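A sketch showing both a placeholder with a no-argument method call and an extra data variable for a more complex expression:

```groovy
import spock.lang.Specification
import spock.lang.Unroll

class UnrollSpec extends Specification {

    // The placeholder invokes a no-argument method on the data variable.
    @Unroll
    def "upper case of #name is #name.toUpperCase()"() {
        expect:
        name.toUpperCase() == expected

        where:
        name     | expected
        'spock'  | 'SPOCK'
        'grails' | 'GRAILS'
    }

    // An extra data variable holds an expression too complex
    // for a placeholder; it is evaluated for each iteration.
    @Unroll
    def "length of #name is #nameLength"() {
        expect:
        name.size() == nameLength

        where:
        name << ['spock', 'grails']
        nameLength = name.size()
    }
}
```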