Java, Docker, Jenkins, automated tests



JDK17 upgrades

A little over a week ago, version 17 of Java was released. Most of my applications were still running Java 11, so the list of new features available to me was pretty impressive, including:

  • Switch expressions (since JDK 14)
  • Text blocks (since JDK 15)
  • Records (since JDK 16)
  • Pattern matching instanceof (since JDK 16)
  • Sealed classes
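To give a taste of what a few of these features look like together, here is a small sketch (the class and example are my own invention, not from any of my projects):

```java
// Records (JDK 16): concise immutable data carriers
record Point(int x, int y) {}

public class Jdk17Demo {
    static String describe(Object obj) {
        // Pattern matching for instanceof (JDK 16): no explicit cast needed
        if (obj instanceof Point p) {
            return "point at " + p.x() + "," + p.y();
        }
        return "something else";
    }

    static String quadrant(Point p) {
        // Switch expressions (JDK 14): produce a value, no fall-through
        return switch (Integer.signum(p.x())) {
            case 1 -> p.y() > 0 ? "I" : "IV";
            case -1 -> p.y() > 0 ? "II" : "III";
            default -> "on an axis";
        };
    }

    public static void main(String[] args) {
        // Text blocks (JDK 15): multi-line strings without escape noise
        String json = """
                {"x": 3, "y": -4}
                """;
        System.out.println(json);
        System.out.println(describe(new Point(3, -4))); // point at 3,-4
        System.out.println(quadrant(new Point(3, -4))); // IV
    }
}
```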

Normally I’m quite conservative when it comes to upgrading to new versions. I’ve stuck with 11 despite there being newer versions, mainly due to the short support cycles of the releases between 11 and 17. But 17, like 11, is a long-term support version. So, feeling lucky, I went ahead and upgraded all my personal projects.

For most of these projects, this was a very smooth process. For some of them, however, there were obstacles. I will highlight them in this post.

AWS Lambda for Java developers

Serverless computing is something I’ve known about for quite some time, but never really gotten around to doing anything with. Finally deciding to see what the fuss was about, I set out to build something that could be run on AWS Lambda. This was surprisingly easy, and in this article I will illustrate how to create a simple function that converts a long representing seconds since the Unix epoch to an ISO-8601 formatted date.
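Leaving aside the handler wiring (AWS's RequestHandler interface requires the Lambda runtime library), the core conversion is plain java.time. A minimal sketch, with class and method names of my own choosing:

```java
import java.time.Instant;
import java.time.format.DateTimeFormatter;

public class EpochConverter {
    // Convert seconds since the Unix epoch to an ISO-8601 timestamp in UTC
    public static String toIso8601(long epochSeconds) {
        return DateTimeFormatter.ISO_INSTANT
                .format(Instant.ofEpochSecond(epochSeconds));
    }

    public static void main(String[] args) {
        System.out.println(toIso8601(0L)); // 1970-01-01T00:00:00Z
    }
}
```

In the actual Lambda, a handler method would receive the long as its input and return this string as its output.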

Fun with Java concurrency classes

One of the topics on the OCP Java SE 11 Programmer exam is concurrency, and several concurrency-related classes from the java.util.concurrent package are featured on the exam.

Having worked with Java since 2002, I’ve encountered my share of multithreaded code, and used a variety of solutions for getting threads to behave nicely, such as the various ways you can use Java’s synchronized keyword as well as the Semaphore class.

Thanks to the OCP exam, however, I’ve also become acquainted with a bunch of classes I hadn’t used before, such as ReentrantLock and CyclicBarrier.

In this post I’m going to give a few examples of each.
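As a flavor of both classes, here is a small sketch of my own invention: three threads each increment a counter under a ReentrantLock, then wait at a CyclicBarrier until everyone is done:

```java
import java.util.concurrent.CyclicBarrier;
import java.util.concurrent.locks.ReentrantLock;

public class ConcurrencyDemo {
    private static final ReentrantLock lock = new ReentrantLock();
    static int counter = 0;

    static void increment() {
        lock.lock();          // explicit alternative to synchronized
        try {
            counter++;
        } finally {
            lock.unlock();    // always release in a finally block
        }
    }

    public static void main(String[] args) throws Exception {
        int parties = 3;
        // The barrier action runs once, after all parties have arrived
        CyclicBarrier barrier = new CyclicBarrier(parties,
                () -> System.out.println("all threads arrived, counter = " + counter));

        Thread[] threads = new Thread[parties];
        for (int i = 0; i < parties; i++) {
            threads[i] = new Thread(() -> {
                increment();
                try {
                    barrier.await();  // block until every thread has incremented
                } catch (Exception e) {
                    throw new RuntimeException(e);
                }
            });
            threads[i].start();
        }
        for (Thread t : threads) {
            t.join();
        }
    }
}
```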

OCP Java SE 11 Mnemonics

Two weeks ago I passed the Oracle 1Z0-819 exam, which means I can now call myself an Oracle Certified Professional: Java SE 11 Developer. I’ve already seen a massive increase in the number of recruiters contacting me, so if you’re considering taking this exam, expect some spam afterwards.

OCP Java SE 11 badge

This exam was interesting in a number of ways. First of all, I never intended to take this exam, as originally there were two exams for this certification:

  • 1Z0-815 (comparable to the old Java OCA exam)
  • 1Z0-816 (comparable to the old Java OCP exam)

In fact, I had already passed the first exam back in February, and had only started studying for the second one when Oracle contacted me to say the old exams were being invalidated, and that I had a week left to take it.

Not feeling too confident about compressing what was supposed to be six weeks of study into four days, I cancelled my 1Z0-816 exam and opted to take the 1Z0-819 exam on the same day I had originally planned to take the old one. The problem: this exam covers the topics of both old exams, so I still had to study twice as much material in six weeks. And I succeeded.

One of the key ingredients of this success was the use of mnemonics: short words or phrases that helped me memorize key concepts. I had around 25 of these, though at the exam I think I only used two or three of them.

In this post I will share some of the more memorable ones.

Arquillian for Java EE testing

At my day job, our application is built on top of Java EE. It is a fairly old monolithic application (the project started in 2006), though we dedicate considerable time and resources to keeping most of our tech stack current.

One area we focus a lot of our effort on is automated testing. We use Selenium-type tests for a lot of UI-based testing, and have a pretty large suite of unit tests. While these tests cover a vast amount of logic, our developers were still looking for a way of writing tests for logic more complex than a simple unit test, but not quite as involved as a UI-based test. All attempts so far either involved abstracting away a lot of details through interfaces, or involved heavy use of mocking frameworks such as Mockito (which had the side-effect of making the tests brittle).

What we needed was a way to run tests against parts of a running application, such as a REST endpoint or part of the service layer. The solution we chose was Arquillian.

Background sync in Android with WorkManager

A while ago I posted about a problem I was experiencing with one of my applications. I was using event sourcing to create a more lightweight synchronization mechanism between a backend and my Android app, the goal being that instead of reading the entire state on each update, the app would only request whatever had changed since its last synchronization. While good in theory, there was a flaw in my implementation of the synchronization logic (the part which reports the app’s changes back to the server), causing each change to be registered hundreds of times.

The effect was that the time needed to sync the app grew longer as time went by, and if I ever wiped the app’s local data, it would spend an hour catching up on all the history (which, considering the app is a task list, is a bit long).
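The intended mechanism itself is simple. As a minimal sketch (the class and method names below are my own illustration, not the real app's API): the server keeps an ordered event log, and the client only requests the events it hasn't seen yet, instead of the full state:

```java
import java.util.ArrayList;
import java.util.List;

public class EventLog {
    private final List<String> events = new ArrayList<>();

    public void append(String event) {
        events.add(event);
    }

    // The client sends the number of events it already has;
    // only the tail of the log is returned, never the full state
    public List<String> changesSince(int lastSeen) {
        return new ArrayList<>(events.subList(lastSeen, events.size()));
    }

    public int size() {
        return events.size();
    }
}
```

After a successful sync, the client stores `size()` as its new offset; the bug described above amounted to the same change being appended to such a log hundreds of times.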

Turn Java upside down with Vavr

Last June I gave a presentation about Vavr at our company’s employee developer conference (site in Dutch), where I gave a small demo of the things you can do with it. The presentation barely scratched the surface of Vavr, but it did start a bit of a chain reaction within the company: they wanted me to publish a low-tech summary of how the library makes my life better (with some help from an external bureau to dumb it down), I was approached to give a workshop about the library (which took place last Tuesday), and I received two more offers to give a presentation, which I had to decline due to scheduling conflicts.

The article in question is now live, but it isn’t really all that exciting for a technical audience, so I decided to tell a bit more about Vavr on my own blog.

Mobile app synchronization using event sourcing

A couple of years ago I followed a course on the Getting Things Done methodology, after having struggled with prioritizing and even remembering stuff that needed doing.

As a result, I now have daily task lists of important stuff, which has given me a lot of peace of mind. But the methodology is about more than just day-to-day tasks: a significant portion of the course was about long-term perspective and the things you want to do and achieve.

One of the things I wanted to do was learn how to make Android applications, and figuring I’d kill two birds with one stone, I created my own task list app (which also has a web interface) called Coppermind (a reference to the Feruchemists of the Mistborn series, who use “copperminds” to store knowledge).

This app has been working admirably for quite some time now, though last year I did a complete rebuild to switch to a more flexible method of defining repeating tasks.

Using Keycloak with Wicket

I have a number of Wicket-based web applications for various hobby purposes. These applications each serve a specific task, such as keeping track of my financial situation, or to inform me when my favorite YouTube channels have updates I’m interested in. A common feature among these applications is that they have some form of identity management, usually in the form of usernames and passwords.

Having created a number of these login systems over the years, I find the process to be quite tedious. I considered creating my own centralized identity management system to combine the login logic for all these applications when I heard Adam Bien mention Keycloak in one of his episodes.

Intrigued, I did some research, and decided to use Keycloak as identity management solution for my next project, which we’ll refer to as keycloak-client in this post.

The application is bundled as a pair of WAR files, one for the Wicket-based frontend, and one for the Spring and Resteasy-based backend.

Tales from the interview

Last October I switched jobs. After almost two years of working at a supplier of public transport software solutions (which I will refer to as job.getPrevious() from now on), I returned to the company I had worked at for the eight years prior to that. This was not an obvious choice for me; travel time was a significant factor in my decision to leave that company, and going back doubled my commute. But I enjoy this job a lot more than I did my previous one, and many of the reasons I had for leaving are no longer valid.

But as I said, it wasn’t the most obvious choice. I applied at seven other companies before getting back in touch with my old employer. The experience reminded me that I don’t really like doing job interviews, and that while I was dissatisfied with my old job, it really wasn’t all that bad compared to some of the places I visited.

Ever since switching, I’ve wanted to write down my job interview experiences in the style of Tales from the Interview from The Daily WTF.

Jenkins, Docker, and sharing data between stages

I’ve recently started moving a number of projects from their classic Jenkins Maven Project setup to Jenkins Pipeline setups. I really like the concept of pipelines, but one obstacle I encountered was the need to specify tools for your builds, which depends on how the Jenkins administrator has named them, and places a burden on the administrator to keep the tools up-to-date.

Fortunately for me, the servers in question also had Docker installed and configured for use by Jenkins, so I could simply specify I wanted to run the build inside a Docker container:

pipeline {
    agent {
        docker {
            image 'maven:3.5-jdk-8'
            label 'docker'
        }
    }
    // stages follow here
}
There are two caveats to doing this, however:

  • The only thing shared between the Jenkins instance (which can be a build slave) and the Docker container is the workspace directory. This means that the local Maven repository for the build is empty, and all dependencies will need to be downloaded as part of the build. Depending on your use case, this might not be ideal, but there are ways around this.
  • Each stage you specify gets a fresh instance of the Docker container, so anything you modify inside it that is not part of a mounted volume gets reset between stages.

This second caveat is something I ran into in a number of builds.
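A common workaround for both caveats (stage names and paths below are illustrative, and assume the declarative pipeline syntax) is to mount the host's local Maven repository into the container, and to use Jenkins' stash/unstash steps to pass build output between stages:

```groovy
pipeline {
    agent {
        docker {
            image 'maven:3.5-jdk-8'
            label 'docker'
            // Caveat 1: mount the host's Maven repository so dependencies
            // are not downloaded from scratch on every build
            args '-v $HOME/.m2:/root/.m2'
        }
    }
    stages {
        stage('Build') {
            steps {
                sh 'mvn -B package'
                // Caveat 2: preserve build output explicitly, since the
                // next stage may run in a fresh container
                stash name: 'build-output', includes: 'target/**'
            }
        }
        stage('Verify') {
            steps {
                unstash 'build-output'
                sh 'ls target'
            }
        }
    }
}
```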

Wicket ListViews and Ajax listeners

One of my favorite web frameworks is Apache Wicket, a component-based server-side web framework written in Java. I’ve been working with it almost daily since 2008 (though a bit less since switching jobs), and it powers pretty much all of my personal web projects.

In one of those projects, Beholder, I ran into an interesting problem with regard to ListViews and Ajax listeners, which I’ll explain in this post.

When Optionals aren’t enough

I’m a big fan of Optionals, particularly when it comes to reducing branches or eliminating null-checks.

But a situation that often arises when I write code is that an operation can yield one of two results:

  1. A value of a type I specify
  2. A message explaining why the value could not be determined

For traditional Java code, such a scenario is often expressed in methods as follows:

public Type determineType() throws TypeCouldNotBeDeterminedException;

While this works, you’re now forced to wrap the call to this method in a try/catch block, affecting the readability of your code. What I would prefer is an approach more in line with functional programming.
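This is where a type like Vavr's Either comes in: a value that holds either a result or the reason the result is absent. A minimal home-grown sketch of the idea (the Result class below is my own illustration, not Vavr's actual API):

```java
import java.util.function.Function;

// Holds either a value or a message explaining why there is none
public class Result<T> {
    private final T value;        // null when this is an error
    private final String error;   // null when this is a success

    private Result(T value, String error) {
        this.value = value;
        this.error = error;
    }

    public static <T> Result<T> ok(T value) {
        return new Result<>(value, null);
    }

    public static <T> Result<T> error(String message) {
        return new Result<>(null, message);
    }

    // Transform the value if present; errors pass through untouched,
    // so no try/catch is needed at the call site
    public <U> Result<U> map(Function<T, U> f) {
        if (error != null) {
            return Result.error(error);
        }
        return Result.ok(f.apply(value));
    }

    public T orElse(T fallback) {
        return error == null ? value : fallback;
    }

    public String errorMessage() {
        return error;
    }
}
```

A method like determineType() would then return Result<Type> instead of throwing, and callers can chain map calls without interrupting the flow of the code.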

Determine test effectiveness with Mutation Testing

Has one of the following ever happened to you?

  • You looked at your project’s bug database, and found that half the reported bugs were regressions?
    • Despite the fact that your project has 80% test coverage?
    • And every bug solved in the past three years had a unit test specifically to reproduce it?
  • Or perhaps you were refactoring some crucial logic, and broke it?
    • And you ran the unit tests to help debug it?
    • And found they were all green?

At this point, once the panic has passed, you should probably start an investigation:

  • What other features that we thought were properly tested are actually broken?
  • Which of our tests are effective?
  • Which tests do we need to fix?

Manually analyzing your tests to answer these questions will take a considerable amount of time, and depending on the size of your codebase is somewhere between extremely tedious and impossible.

Fortunately, there are tools to help you with this, and a technique known as Mutation Testing.
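As a toy illustration of what a mutation testing tool such as PIT does (the class, predicates, and tests here are my own invention), consider a condition and two tests against it:

```java
import java.util.function.IntPredicate;

// A mutation testing tool makes small changes ("mutants") to your code
// and checks whether your tests notice the difference
public class MutationDemo {
    // Original condition
    static final IntPredicate ORIGINAL = age -> age >= 18;
    // A typical mutant: the boundary changed from >= to >
    static final IntPredicate MUTANT = age -> age > 18;

    // A weak test: passes against both versions, so the mutant
    // survives, revealing that the boundary value is never tested
    static boolean weakTest(IntPredicate isAdult) {
        return isAdult.test(30) && !isAdult.test(10);
    }

    // A stronger test: checks the boundary itself and kills the mutant
    static boolean strongTest(IntPredicate isAdult) {
        return isAdult.test(18);
    }
}
```

A surviving mutant is exactly the situation from the scenarios above: code that is covered by tests, yet can be broken without any test turning red.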

Branch reduction with Optionals

One thing I tend to put a lot of effort into is maintaining good test coverage for my projects. It’s not always perfect, or even universally measured, but focusing on coverage means you’re spending effort on getting automated tests into your code.

It is important to note, though, that test coverage is a quantitative metric. It tells you how much of your code is tested, but not if your tests are good at what they do (you’ll need to look at mutation coverage for that, but that’s beyond the scope of this article).

To get good code coverage (or more specifically: branch coverage), your tests need to cover all possible situations. Increasing your branch coverage requires one of two actions:

  • Increase the number of tests
  • Decrease the number of branches

Using Java 8, there is an interesting technique for the second option that, in my opinion, also increases readability.
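As a preview of the technique (the domain classes are my own invention): a chain of Optional.map calls replaces nested null-checks, leaving a single expression with no explicit branches:

```java
import java.util.Optional;

public class BranchReduction {
    static class Address {
        private final String city;
        Address(String city) { this.city = city; }
        String getCity() { return city; }
    }

    static class User {
        private final Address address;
        User(Address address) { this.address = address; }
        Address getAddress() { return address; }
    }

    // Traditional version: two extra branches your tests must cover
    static String cityOfBranchy(User user) {
        if (user != null && user.getAddress() != null) {
            return user.getAddress().getCity();
        }
        return "unknown";
    }

    // Optional version: one expression, no explicit branches;
    // an empty Optional simply falls through to orElse
    static String cityOf(User user) {
        return Optional.ofNullable(user)
                .map(User::getAddress)
                .map(Address::getCity)
                .orElse("unknown");
    }
}
```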
