Java, Docker, Jenkins, automated tests



Mobile app synchronization using event sourcing

A couple of years ago I followed a course on the Getting Things Done methodology, after having struggled with prioritizing and even remembering stuff that needed doing.

As a result, I now have daily task lists of important stuff, which has given me a lot of peace of mind. But the methodology is about more than just day-to-day tasks: a significant portion of the course was about long-term perspective, and the things you want to do and achieve.

One of the things I wanted to do was learn how to make Android applications, and figuring I’d kill two birds with one stone, I created my own task list app (which also has a web interface) called Coppermind (a reference to the Feruchemists of the Mistborn series, who use “copperminds” to store knowledge).

This app has been working admirably for quite some time now, though last year I did a complete rebuild to switch to a more flexible method of defining repeating tasks.

The problem

The old application synchronized with the backend by performing a simple query: give me all currently relevant data. This included all active tasks, all projects, all rules, etcetera. This query was repeated every few minutes. While the complete set of data transferred in this manner was small, it added up over a span of 24 hours, easily eclipsing most other apps in data usage (the only exception, I believe, was YouTube).

Instead of just asking my server to “give me everything”, I created a mechanism that answers the question “what has changed since my last synchronization?”.

Event sourcing

Event sourcing was a technique used at my previous employer to rebuild the data model of an application based on the state of another application. One instance of the source application was queried, and then used to launch (I believe) 5 instances of the target application. Any future change to the data model only needed to communicate what had changed to the target application (a problem I believe they eventually solved by hooking up both applications to the same event bus). The technique is described in more detail in Martin Fowler’s article on the subject.

I expanded my application to keep a log of all changes in much the same manner, and added a REST service that allows the Android app to retrieve all changes since its last query, which it uses to update its local data model.
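
To make the playback side concrete, here is a minimal sketch of a client folding such a change log into its local model. All names and the event schema here are my own illustration, not the app's actual code:

```java
import java.util.*;

// Hypothetical event record; the real app's schema will differ.
class Event {
    final long id;
    final String type;   // e.g. "Create task", "Complete task", "Delete task"
    final String taskId;

    Event(long id, String type, String taskId) {
        this.id = id;
        this.type = type;
        this.taskId = taskId;
    }
}

class TaskStore {
    private final Map<String, Boolean> tasks = new HashMap<>(); // taskId -> completed?
    private long lastSeenId = 0;

    // Apply every event newer than the last one we processed, in order.
    void apply(List<Event> events) {
        for (Event e : events) {
            if (e.id <= lastSeenId) continue; // already processed: skip
            switch (e.type) {
                case "Create task":   tasks.put(e.taskId, false); break;
                case "Complete task": tasks.put(e.taskId, true);  break;
                case "Delete task":   tasks.remove(e.taskId);     break;
            }
            lastSeenId = e.id;
        }
    }

    long lastSeenId() { return lastSeenId; }

    int activeCount() {
        int n = 0;
        for (boolean done : tasks.values()) if (!done) n++;
        return n;
    }
}
```

The client only needs to send `lastSeenId` to the REST service, and re-applying an already-seen batch is a harmless no-op.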

Does this work?

This technique works quite well when it comes to reducing bandwidth usage, both for my server and my mobile phone. Only tasks that are not yet known on my phone get transferred, drastically cutting down the data usage.

Except there is a bug

Last Tuesday I found that my phone would no longer sync with the backend. I first thought this was a simple deployment issue, but the problem persisted, apparently caused by a discrepancy between the tasks known by the phone and those known by the server.

The solution in such a case is simple: wipe the phone data and rebuild the database from scratch using event playback. But this took considerably longer than I expected, given that only about 40 tasks are currently active.

As of right now, the database contains 730 tasks (689 of which are marked as completed). The number of events logged, however, is 229112, roughly 300 events for each task. These events are divided as follows:

       type        | count 
 Complete task     |   689
 Create project    |    15
 Create repetition | 75667
 Create scope      |     5
 Create scope rule |     4
 Create task       |   743
 Delete project    |     0
 Delete repetition | 75121
 Delete scope      |     0
 Delete scope rule |     0
 Delete task       |    13
 Rename project    |     0
 Rename scope      |     0
 Update task       | 76855

I have not yet determined the cause of this, but considering the fact that most of the events logged are updates (a repetition is updated by deleting the old one and creating a new one), I suspect the app may be registering its updates more than once (perhaps a concurrency issue?).

For now, I can probably mitigate the duplicate data with a deduplication background job. Once I have solved this mystery I will create a follow-up post.

Using Keycloak with Wicket

I have a number of Wicket-based web applications for various hobby purposes. These applications each serve a specific task, such as keeping track of my financial situation, or informing me when my favorite YouTube channels have updates I’m interested in. A common feature among these applications is that they have some form of identity management, usually in the form of usernames and passwords.

Having created a number of these login systems over the years, I find the process to be quite tedious. I was considering creating my own centralized identity management system to combine the login logic for all these applications when I heard Adam Bien mention Keycloak in one of his podcast episodes.

Intrigued, I did some research, and decided to use Keycloak as the identity management solution for my next project, which we’ll refer to as keycloak-client in this post.

The application is bundled as a pair of WAR files, one for the Wicket-based frontend, and one for the Spring and Resteasy-based backend.


Keycloak runs as a Docker container on my home server, and on this installation I created a realm named “Personal” with a client named keycloak-client which connects using OpenID Connect. Once this client was configured, I downloaded the keycloak.json file from the Installation tab in the client view, which looked something like this:

{
  "realm": "Personal",
  "auth-server-url": "https://host/path/to/keycloak",
  "ssl-required": "external",
  "resource": "keycloak-client",
  "public-client": true,
  "verify-token-audience": true,
  "use-resource-role-mappings": true,
  "confidential-port": 0
}

This file was placed in the WEB-INF folder of both the backend and the frontend. I then added the Keycloak servlet filter adapter dependency to the pom.xml of both projects.
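
The dependency in question is the servlet filter adapter; it looks something along these lines (the version here is an assumption; match it to your Keycloak server version):

```xml
<dependency>
    <groupId>org.keycloak</groupId>
    <artifactId>keycloak-servlet-filter-adapter</artifactId>
    <version>4.5.0.Final</version>
</dependency>
```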


I then used the following configuration for the REST backend:
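
Roughly, in web.xml the Keycloak filter is mapped to every URL, with a skipPattern init parameter that exempts the /health endpoint from authentication (a sketch; adjust the pattern to your own paths):

```xml
<filter>
    <filter-name>Keycloak Filter</filter-name>
    <filter-class>org.keycloak.adapters.servlet.KeycloakOIDCFilter</filter-class>
    <init-param>
        <param-name>keycloak.config.skipPattern</param-name>
        <param-value>/health</param-value>
    </init-param>
</filter>
<filter-mapping>
    <filter-name>Keycloak Filter</filter-name>
    <url-pattern>/*</url-pattern>
</filter-mapping>
```

By default the filter picks up the keycloak.json file from the WEB-INF folder, which is where we placed it.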


And the following for the Wicket frontend (which has no /health service):
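
This is the same filter mapping minus the skipPattern, so every URL requires authentication (again a sketch):

```xml
<filter>
    <filter-name>Keycloak Filter</filter-name>
    <filter-class>org.keycloak.adapters.servlet.KeycloakOIDCFilter</filter-class>
</filter>
<filter-mapping>
    <filter-name>Keycloak Filter</filter-name>
    <url-pattern>/*</url-pattern>
</filter-mapping>
```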


With this configuration, all URLs (save for the /health endpoint on the backend) require a valid Keycloak session, and will automatically redirect you to Keycloak if you’re not logged in.

Accessing Keycloak’s session data from Wicket

Once you’re logged in, you’re probably going to want to know important things such as the user’s ID, and possibly what roles the user has. This information is stored as part of the servlet request, in an attribute of type KeycloakSecurityContext. Accessing it from Wicket is simply a matter of getting the current request from the RequestCycle:

ServletWebRequest request = (ServletWebRequest) RequestCycle.get().getRequest();
HttpServletRequest containerRequest = request.getContainerRequest();
KeycloakSecurityContext securityContext = (KeycloakSecurityContext) containerRequest.getAttribute(KeycloakSecurityContext.class.getName());

From the KeycloakSecurityContext you can get the AccessToken, which contains the User ID in the Subject property:

AccessToken token = securityContext.getToken();
String subject = token.getSubject();

In addition to this data, the KeycloakSecurityContext contains a wealth of other information, which I won’t cover in detail. The most important information from a Wicket point of view is probably the roles a user has, which can be accessed as follows:

// This will probably differ for your implementation. Check field "resource" in keycloak.json
final String CLIENT_ID = "keycloak-client";

AccessToken token = securityContext.getToken();
Map<String, Access> access = token.getResourceAccess();
Set<String> roles = new HashSet<>();
if (access.containsKey(CLIENT_ID)) { 
	Set<String> userRoles = access.get(CLIENT_ID).getRoles();
	if (userRoles != null) {
		roles.addAll(userRoles);
	}
}

You can then use these roles to define access to various pages, perhaps using a homebrew solution that uses an IComponentInstantiationListener, or by adapting an existing solution such as wicket-auth-roles (though it might be easier to only partially use the auth-roles functionality since your application will not have its own login page).

Accessing Keycloak’s session data from JAX-RS

The same logic can be used from a JAX-RS resource, by extracting the KeycloakSecurityContext from the HttpServletRequest:

@Path("/whoami")
public class ExampleResource {
    @Context
    private HttpServletRequest servletRequest;

    @GET
    public Response whoAmI() {
        KeycloakSecurityContext context = (KeycloakSecurityContext) servletRequest.getAttribute(KeycloakSecurityContext.class.getName());
        if (context == null) {
            return Response.status(Response.Status.FORBIDDEN).build();
        }
        return Response.ok(context.getToken().getSubject()).build();
    }
}


Accessing the backend from the frontend

My frontend communicates with my backend through REST, using ResteasyClient to reuse the JAX-RS interfaces the backend also uses. Since the backend is also secured with Keycloak, you need to pass the access token as a header with every request you make to the backend. This can be done using a ClientRequestFilter:

public class KeycloakTokenFilter implements ClientRequestFilter {

	@Override
	public void filter(ClientRequestContext requestContext) throws IOException {
		ServletWebRequest request = (ServletWebRequest) RequestCycle.get().getRequest();
		HttpServletRequest containerRequest = request.getContainerRequest();
		KeycloakSecurityContext securityContext = (KeycloakSecurityContext) containerRequest.getAttribute(KeycloakSecurityContext.class.getName());
		requestContext.getHeaders().add("Authorization", "Bearer " + securityContext.getTokenString());
	}
}

This filter is then added to the ResteasyClient with the register method:

        ResteasyClientBuilder builder = new ResteasyClientBuilder();
        builder.register(new KeycloakTokenFilter());
        ResteasyClient client =;

        ResteasyWebTarget target ="https://host/path/to/backend");

In conclusion

The amount of configuration needed to set up Keycloak authentication is minimal, which makes it a very useful solution for securing your Wicket applications.

Tales from the interview

Last October I switched jobs. After almost two years of working at a supplier of public transport software solutions (I will refer to this as job.getPrevious() from now on), I returned to the company I had worked for the eight years prior to that. This was not an obvious choice for me; travel time was a significant factor in my decision to leave that company, and going back doubled my commute. But I enjoy this job a lot more than I did my previous one, and many of the reasons I had for leaving are no longer valid.

But as I said, it wasn’t the most obvious choice. I applied at seven other companies prior to getting back in touch with my old employer. This experience reminded me that I don’t really like doing job interviews, and that while I was dissatisfied with my old job, it really wasn’t all that bad compared to some of the places I visited.

Ever since switching, I’ve wanted to write down my job interview experiences in the style of Tales from the Interview from The Daily WTF.

Jenkins, Docker, and sharing data between stages

I’ve recently started moving a number of projects from their classic Jenkins Maven Project setup to Jenkins Pipeline setups. I really like the concept of pipelines, but one obstacle I encountered was the need to specify tools for your builds, which depends on the names chosen by the Jenkins administrator, and places a burden on that administrator to keep the tools up-to-date.

Fortunately for me, the servers in question also had Docker installed and configured for use by Jenkins, so I could simply specify I wanted to run the build inside a Docker container:

pipeline {
    agent {
        docker {
            image 'maven:3.5-jdk-8'
            label 'docker'
        }
    }
}
There are two caveats to doing this, however:

  • The only thing shared between the Jenkins instance (which can be a build slave) and the Docker container is the workspace directory. This means the local Maven repository for the build starts out empty, and all dependencies need to be downloaded as part of the build. Depending on your use case this might not be ideal, but there are ways around it.
  • Each stage you specify gets a fresh instance of the Docker container, so anything you modify inside it that is not part of a mounted volume gets reset between stages.
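
For the first caveat, one workaround is to mount the host’s local Maven repository into the container through the docker agent’s args. This is a sketch: it assumes Maven runs as root inside the container and that the Jenkins user owns $HOME/.m2.

```groovy
pipeline {
    agent {
        docker {
            image 'maven:3.5-jdk-8'
            label 'docker'
            // Reuse the host's local Maven repository, so dependencies
            // are not re-downloaded on every build.
            args '-v $HOME/.m2:/root/.m2'
        }
    }
    stages {
        stage('Build') {
            steps {
                sh 'mvn -B verify'
            }
        }
    }
}
```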

This second caveat is something I ran into in a number of builds.

Wicket ListViews and Ajax listeners

One of my favorite web frameworks is Apache Wicket, a component-based server-side web framework written in Java. I’ve been working with it almost daily since 2008 (though a bit less since switching jobs), and it powers pretty much all of my personal web projects.

In one of those projects, Beholder, I ran into an interesting problem with regard to ListViews and Ajax listeners, which I’ll explain in this post.

When Optionals aren't enough

I’m a big fan of Optionals, primarily when it comes to reduction of branches or the elimination of null-checks.

But a situation that often arises when I write code is that an operation can yield one of two results:

  1. A value of a type I specify
  2. A message explaining why the value could not be determined

For traditional Java code, such a scenario is often expressed in methods as follows:

public Type determineType() throws TypeCouldNotBeDeterminedException;

While this works, you’re now forced to wrap every call to this method in a try/catch block, which hurts the readability of your code. What I would prefer is an approach more in line with functional programming.
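
A common functional answer is an Either-like result type that carries either the value or the failure message, never both. A minimal sketch (this is my own illustration, not a standard library class):

```java
import java.util.function.Function;

// A result that holds either a value or an error message, never both.
class Result<T> {
    private final T value;
    private final String error;

    private Result(T value, String error) {
        this.value = value;
        this.error = error;
    }

    static <T> Result<T> ok(T value)      { return new Result<>(value, null); }
    static <T> Result<T> fail(String msg) { return new Result<>(null, msg); }

    // Transform the value if present; propagate the error otherwise.
    <R> Result<R> map(Function<T, R> f) {
        return error == null ? Result.ok(f.apply(value)) : Result.fail(error);
    }

    T orElse(T fallback)  { return error == null ? value : fallback; }
    String errorMessage() { return error; }
}
```

With such a type, `determineType()` could return `Result<Type>` instead of throwing, and callers can chain transformations without a try/catch block.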

Determine test effectiveness with Mutation Testing

Has one of the following ever happened to you?

  • You looked at your project’s bug database, and found that half the reported bugs were regressions?
    • Despite the fact that your project has 80% test coverage?
    • And every bug solved in the past three years had a unit test specifically to reproduce said bug?
  • Or perhaps you were refactoring some crucial logic, and broke it
    • And you ran the unit tests to help debug it
    • And found they were all green

At this point, once the panic has passed, you should probably start an investigation:

  • What other features that we thought were properly tested are actually broken?
  • Which of our tests are effective?
  • Which tests do we need to fix?

Manually analyzing your tests to answer these questions will take a considerable amount of time, and depending on the size of your codebase is somewhere between extremely tedious and impossible.

Fortunately, there are tools to help you with this, and a technique known as Mutation Testing.
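
For Java projects, one such tool is PIT (pitest). A minimal Maven setup adds the plugin to your pom.xml; the version here is an assumption, so check for the latest release:

```xml
<plugin>
    <groupId>org.pitest</groupId>
    <artifactId>pitest-maven</artifactId>
    <version>1.4.2</version>
</plugin>
```

Running `mvn org.pitest:pitest-maven:mutationCoverage` then mutates your code, runs your tests against each mutant, and reports which mutants survived, i.e. which changes your tests failed to detect.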

Branch reduction with Optionals

One thing I tend to put a lot of effort into is maintaining good test coverage for my projects. It’s not always perfect, or even universally measured, but focusing on coverage means you’re spending effort on getting automated tests into your code.

It is important to note, though, that test coverage is a quantitative metric. It tells you how much of your code is tested, but not if your tests are good at what they do (you’ll need to look at mutation coverage for that, but that’s beyond the scope of this article).

To get good code coverage (or more specifically: branch coverage), your tests need to cover all possible situations. Increasing your branch coverage requires one of two actions:

  • Increase the number of tests
  • Decrease the number of branches

Using Java 8, there is an interesting technique for the second option, that in my opinion increases readability.
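
As an illustration of the kind of rewrite I mean (my own example, not from any particular project): a null-checking method with two explicit branches collapses into a single Optional chain.

```java
import java.util.Optional;

class Customer {
    private final boolean premium;
    Customer(boolean premium) { this.premium = premium; }
    boolean isPremium() { return premium; }
}

class Discounts {
    // Traditional version: two branches, each of which needs a test.
    static int discountBranching(Customer customer) {
        if (customer != null && customer.isPremium()) {
            return 10;
        }
        return 0;
    }

    // Optional version: a single expression with no explicit branches.
    static int discountOptional(Customer customer) {
        return Optional.ofNullable(customer)
                .filter(Customer::isPremium)
                .map(c -> 10)
                .orElse(0);
    }
}
```

Both methods behave identically; the Optional version simply moves the branching into well-tested library code, leaving your own code with fewer branches to cover.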
