Archive: 2019

Awesome Asciidoctor: Include Asciidoc Markup With Listing or Literal Blocks Inside Listing or Literal Block

Posted by Hubert Klein Ikkink

If we want to include Asciidoc markup as a source language and show the markup without transforming it, we can use a listing or literal block. For example, suppose we are using Asciidoc markup to write a document about Asciidoctor and want to include some Asciidoc markup examples. If the included markup itself contains a listing or literal block and is enclosed in a listing or literal block, the transformation goes wrong: the beginning of the included listing or literal block is seen as the ending of the enclosing listing or literal block. Let’s see what goes wrong with an example where we have the following Asciidoc markup:
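A minimal sketch of the kind of markup that triggers the problem (an illustrative example, not the original post's markup); the inner `----` that opens the nested listing block is interpreted as the close of the outer block:

```asciidoc
----
The next block shows a listing block in Asciidoc:

----
println 'Hello Asciidoctor'
----
----
```

One common fix is to give the outer block a longer delimiter line (for example five hyphens instead of four), since Asciidoctor pairs opening and closing delimiters by length.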

When we transform this to HTML we get the following output:

Continue reading →

Gradle Goodness: Stop Build After One Failing Test

Posted by Hubert Klein Ikkink

Normally when we run tests in our Gradle build, all our tests are executed and at the end we can see which tests are failing. But what if we want to let the build fail at the first failing test? Especially for a large test suite this can save a lot of time, because we don’t have to run all (failing) tests, we immediately get informed that at least one test is failing.

We can do this by passing the command-line option --fail-fast when we run the test task in Gradle. With this option Gradle will stop the build and report a failure at the first failing test. Instead of passing the command-line option --fail-fast we can set the property failFast of the test task to true. Using the property failFast allows us to still fail the build on the first failing test, even if we for example run a build task that depends on the test task. The command-line option --fail-fast only works if we run the test task directly, not if it is part of the task graph for our build when we run another task.
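A minimal sketch of the property-based approach, assuming a standard Java project's build.gradle:

```groovy
// build.gradle
test {
    // Stop the test task at the first failing test, even when `test`
    // runs as part of a larger task graph such as `gradle build`.
    failFast = true
}
```

For a direct invocation the command-line equivalent is `gradle test --fail-fast`.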

Continue reading →

Java Joy: Combining Predicates

Posted by Hubert Klein Ikkink

In Java we can use a Predicate to test if something is true or false. This is especially useful when we use the filter method of the Java Stream API. We can use lambda expressions to define our Predicate or implement the Predicate interface. If we want to combine different Predicate objects we can use the or, and, and negate methods of the Predicate interface. These are default methods of the interface and will return a new Predicate.

Let’s start with an example where we have a list of String values. We want to filter all values that start with Gr or with M. In our first implementation we use a lambda expression as Predicate and implement both tests in this expression:
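A small sketch of combining two predicates with the default or method (the sample values are illustrative):

```java
import java.util.List;
import java.util.function.Predicate;
import java.util.stream.Collectors;

public class PredicateExample {

    // or(...) is a default method on Predicate that returns a new
    // Predicate which passes when either operand passes.
    static List<String> startingWithGrOrM(List<String> values) {
        Predicate<String> startsWithGr = s -> s.startsWith("Gr");
        Predicate<String> startsWithM = s -> s.startsWith("M");
        return values.stream()
                .filter(startsWithGr.or(startsWithM))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> names = List.of("Groovy", "Gradle", "Micronaut", "Java", "Maven");
        System.out.println(startingWithGrOrM(names)); // [Groovy, Gradle, Micronaut, Maven]
    }
}
```

The and and negate methods compose the same way, each returning a new Predicate without mutating the originals.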

Continue reading →

Quickly Find Unicode For Character On macOS

Posted by Hubert Klein Ikkink

Sometimes when we are developing we might need to look up the unicode value for a character. If we are using macOS we can use the Character Viewer to look up the unicode value. We can open the Character Viewer using the key combination ⌃+⌘+Space (Ctrl+Cmd+Space) or open the Edit menu in our application and select Emoji & Symbols. We can type the character we want the unicode value for in the Search box or look it up in the lists. When we select the character we can see the Unicode for that character on the right:

Continue reading →

Spocklight: Use Stub or Mock For Spring Component Using @SpringBean

Posted by Hubert Klein Ikkink

When we write tests or specifications using Spock for our Spring Boot application, we might want to replace some Spring components with a stub or mock version. With the stub or mock version we can write expected outcomes and behaviour in our specifications. Since Spock 1.2 and the Spock Spring extension we can use the @SpringBean annotation to replace a Spring component with a stub or mock version. (This is quite similar to the @MockBean annotation for Mockito mocks that is supported by Spring Boot.) We only have to declare a variable in our specification of the type of the Spring component we want to replace. We directly use the Stub() or Mock() methods to create the stub or mock version when we define the variable. From now on we can describe expected output values or behaviour just like with any Spock stub or mock implementation.
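A minimal sketch of the pattern; MessageProvider and MessageService are hypothetical components, not from the original post:

```groovy
import org.spockframework.spring.SpringBean
import org.springframework.beans.factory.annotation.Autowired
import org.springframework.boot.test.context.SpringBootTest
import spock.lang.Specification

@SpringBootTest
class MessageServiceSpec extends Specification {

    // @SpringBean replaces the real MessageProvider bean in the Spring
    // context with this Spock stub for the duration of the specification.
    @SpringBean
    MessageProvider messageProvider = Stub()

    @Autowired
    MessageService messageService

    def "greeting uses the stubbed provider"() {
        given:
        messageProvider.message() >> 'Hello from stub'

        expect:
        messageService.greeting() == 'Hello from stub'
    }
}
```

Note that the variable must have an initializer (Stub() or Mock()); @SpringBean does not work on an uninitialized field.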

Continue reading →

SonarCloud GitHub Pull Request Analysis from Jenkins for Java/Maven projects

Posted by Tim te Beek

SonarCloud is a code quality tool that can identify bugs and vulnerabilities in your code. This post will explore how to integrate SonarCloud, GitHub, Jenkins and Maven to report any new code quality issues on pull requests.

SonarCloud is the cloud based variant of SonarQube, freeing you from running and maintaining a server instance. Older (<7) SonarQube versions had a preview analysis mode to report any new issues in a branch on the associated pull request. In newer versions of SonarQube this functionality has moved to the paid version, or the SonarCloud offering.
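As a sketch, a pull request analysis on SonarCloud boils down to a Maven invocation like the one below; the project key, organization, and pull request values are placeholders, and in a Jenkins pipeline they would typically come from environment variables provided by the GitHub integration:

```shell
# Placeholder values; the sonar.pullrequest.* properties tell SonarCloud
# which pull request the analysis results belong to.
mvn sonar:sonar \
  -Dsonar.host.url=https://sonarcloud.io \
  -Dsonar.organization=my-org \
  -Dsonar.projectKey=my-org_my-project \
  -Dsonar.pullrequest.key=42 \
  -Dsonar.pullrequest.branch=feature/my-branch \
  -Dsonar.pullrequest.base=master
```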

Continue reading →

Automation and Measurement as first class citizens in your sprint backlog

Posted by Jasper Bogers

When you start work on a product, your velocity may be low and not reflect the investment you need to make to have proper continuous delivery. Here’s an idea to make it visible.

When you build a soda factory, producing your first can of soda effectively costs as much as the entire factory. Of course you plan to produce a whole lot more, and distribute the cost over your planned production.

This is an analogy that’s worth considering when starting on a new product with your Scrum team. During the first few sprints of work on a product, a team is often busy setting up the delivery pipeline, test framework, local development environment, etc. All this work undeniably has value, but usually isn’t expressed as "product features".

For example: You have 20 similar functional user stories that would be an equal effort to implement. The first 2 sprints your functional burndown is low. This is because during sprint planning, whichever user story gets picked up first has the questionable honour of having subtasks such as "Arrange access to Browserstack", "Set up Jenkins", "Set up AWS account", "Set up OpsGenie for alerting" and "Set up Blazemeter for load test", to name a few.

Consider what the Scrum Guide says about a deliverable increment:

Incremental deliveries of "Done" product ensure a potentially useful version of working product is always available.

a "Done", useable, and potentially releasable product Increment is created

The Increment is the sum of all the Product Backlog items completed during a Sprint and the value of the increments of all previous Sprints. At the end of a Sprint, the new Increment must be "Done," which means it must be in useable condition and meet the Scrum Team’s definition of "Done". An increment is a body of inspectable, done work that supports empiricism at the end of the Sprint. The increment is a step toward a vision or goal. The increment must be in useable condition regardless of whether the Product Owner decides to release it.

Development Teams deliver an Increment of product functionality every Sprint. This Increment is useable, so a Product Owner may choose to immediately release it.

This is problematic because it means your first few sprints tell you little about your ability to deliver value given the manpower and knowledge at your disposal. Also, it may mean your first few sprints fail to deliver any functional increment that could go live. Because what you’ve decided constitutes value is different from what you’re investing in, it may feel like you’re forced to do necessary work without seeing measurable results. You have little to demo during your sprint reviews. Product owners get nervous the longer this takes. You’re destined to be off to a poor start.

See the following sprint backlog and resulting velocity chart. When you hide all the automation and measurement boilerplate work as subtasks underneath whichever user stories you pick up first, your burndown charts give the impression you achieved very little.

"Fat" user stories with automation and measurement as boilerplating subtasks hidden behind user story velocity

This doesn’t seem fair.

Some resort to starting out with a "Sprint 0" of undefined length and without a sprint goal, to just get all the ramping up out of the way, as though it’s a necessary evil. Don’t do this. Focus on delivering value from the start.

Continue reading →

Publish your backend API typings as an NPM package

Posted by Christophe Hesters

In this post I suggest a way to publish your backend API typings as an NPM package. Frontend projects can then depend on these typings to gain compile-time type safety and code completion when using TypeScript.

It’s quite common in a microservice style architecture to provide a type-safe client library that other services can use to communicate with your service. This can be a package with a Retrofit client published to Nexus by the maintainer of the service. Some projects might also generate that code from an OpenAPI spec or a gRPC proto file.

However, when we expose some of these APIs to the frontend, we lose the types. In this post I suggest a way to publish your backend API types as an NPM package. The frontend can then depend on these typings. Using TypeScript you now have compile-time type safety and code completion. To see a sneak peek, scroll to the end :).
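A sketch of what the published typings might look like; the DTO shape and the package name @myorg/backend-api-types are illustrative, not from the original post:

```typescript
// Typings that mirror a backend UserDto, published as e.g.
// @myorg/backend-api-types (hypothetical names).
export interface UserDto {
  id: number;
  name: string;
  email: string;
}

// A frontend consumer depending on the package then gets compile-time
// checks and editor completion on the DTO fields.
export function displayName(user: UserDto): string {
  return `${user.name} <${user.email}>`;
}
```

A typo in a field name, or a backend field rename picked up via a new package version, now fails the frontend build instead of surfacing at runtime.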

Continue reading →
