JUnit 5 parameterized tests

My favourite feature of JUnit 5 is its improved support for parameterized tests. Previously, each parameterized test had to be written in its own class: you defined one method for the test and another method for its inputs and outputs. JUnit 5 provides a much simpler way to define parameterized tests. You can now annotate individual test methods as parameterized, with parameters supplied via annotations.

For example, the following test verifies that the method under test throws an InvalidRequestException for each of the specified ISO dates.

@ParameterizedTest
@ValueSource(strings = {"1752-12-31T21:45:00", "10000-01-01T21:45:00"})
public void invalidDate(String testDate) {
    testRequest.setDate(testDate);
    assertThrows(InvalidRequestException.class,
        () -> testService.testMethod(ORDER_ID, testRequest));
}

With the enum value source and its exclude mode, it’s now very easy to test status validation and make sure all statuses except a few are rejected. For example, the code below checks that an order can be cancelled only in the PREPROCESS status.

@ParameterizedTest
@EnumSource(value = OrderStatus.class, mode = Mode.EXCLUDE, 
            names = {"PREPROCESS"})
public void testOrderStatusCancellation(OrderStatus status) throws Exception {
    testOrder.setStatus(status);
    assertThrows(InvalidRequestException.class,
        () -> testService.cancel(testOrder));
}

If your parameterized test inputs and outputs are more complicated and cannot easily be supplied inside an annotation, you can use the @MethodSource annotation. This allows you to define your test inputs and outputs with another method, similar to how things were done before JUnit 5.

import static org.junit.jupiter.api.Assertions.assertEquals;
import static com.google.common.collect.Sets.newHashSet; // assuming Guava's Sets helper

import java.util.Set;
import java.util.stream.Stream;

import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.Arguments;
import org.junit.jupiter.params.provider.MethodSource;

@ParameterizedTest
@MethodSource("inputOutputProvider")
public void testCodeParsing(String input, Set<Integer> expected) {
    assertEquals(expected, codeReader.getRejectionCodes(input));
}

private static Stream<Arguments> inputOutputProvider() {
    return Stream.of(
        Arguments.arguments("0010,0015,0041", newHashSet(10,15,41)),
        Arguments.arguments("C22", newHashSet(22)),
        Arguments.arguments("10", newHashSet(10)));
}

JUnit 5 tags, the Maven Failsafe plugin and system integration tests

Automated system integration tests are useful for testing user acceptance criteria as a whole. In addition, we use the tests to verify environment setup, such as file, database and external web service access, and we run them on production systems to verify deployments. However, some of the automated tests in our integration test suites are not safe to run on production. This post explains how I use the JUnit 5 @Tag annotation and the Failsafe plugin to separate system integration tests into those that run in all environments and those that must not run on production.

JUnit 5 allows test classes and methods to be tagged via the @Tag annotation. The tags can then be used to filter test discovery and execution. I used the tag NonProductionOnly for test classes that should not be run in a production environment.

@Tag("NonProductionOnly")
public class PlaceOrderTestsIT {
   ....
}

JUnit tests that are safe to run on all environments are not annotated.

The Failsafe plugin is designed to run integration tests, while the Surefire plugin is designed to run unit tests; Failsafe also decouples failing the build on test failures from actually running the integration tests. For integration tests to be handled by the Failsafe plugin, end the class name with IT. To filter execution by JUnit 5 @Tag annotations, the plugin can be configured in the project’s pom.xml.

<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-failsafe-plugin</artifactId>
      <version>2.22.2</version>
      <executions>
        <execution>
          <goals>
            <goal>integration-test</goal>
            <goal>verify</goal>
          </goals>
        </execution>
      </executions>
      <configuration>
        <excludedGroups>${failsafe.excludedGroups}</excludedGroups>
      </configuration>
    </plugin>
  </plugins> 
</build>

This configures the plugin to exclude the groups named in the pom property ${failsafe.excludedGroups}. We set up the property using Maven profiles. The tag NothingToExclude does not correspond to any @Tag in the test suite; it is needed in the pom because the property cannot be left empty.

<profiles>
  <profile>
    <id>uat</id>
    <properties>
      <failsafe.excludedGroups>NothingToExclude</failsafe.excludedGroups>
    </properties>
  </profile>
  <profile>
    <id>production</id>
    <properties>
      <failsafe.excludedGroups>NonProductionOnly</failsafe.excludedGroups>
    </properties>
  </profile>
</profiles>

The profiles are passed in from the Jenkins configuration for the different environments.
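
For example, a Jenkins job for the production environment can activate the matching profile on the Maven command line (the exact goals depend on your build):

mvn clean verify -P production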

Memory exhaustion, long garbage collection time and the Hibernate query plan cache

Recently, we released a new Spring Boot web service. It uses Hibernate for database queries and persistence (Spring Boot 2.1.10 and Hibernate 5.3.13). Within a week of release, Dynatrace alerted us to memory exhaustion and long garbage collection times on all the deployments. The app was started with the -XX:+HeapDumpOnOutOfMemoryError command line option, so heap dumps were generated automatically. While the heap dumps were analysed for the root cause of the memory problem, the apps were restarted with double the memory limits to keep things running.
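
For reference, the relevant JVM options look like this (the jar name and dump path here are illustrative):

java -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/log/app -jar app.jar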

I loaded the heap dump into the Eclipse Memory Analyzer and ran its Leak Suspects report. The tool identified one problem suspect, which used 321.5MB out of 360MB of heap memory:

One instance of “org.hibernate.internal.SessionFactoryImpl” loaded by “org.springframework.boot.loader.LaunchedURLClassLoader @ 0xe0035958” occupies 337,113,320 (89.29%) bytes. The memory is accumulated in one instance of “org.hibernate.internal.util.collections.BoundedConcurrentHashMap$Segment[]” loaded by “org.springframework.boot.loader.LaunchedURLClassLoader @ 0xe0035958”.

When I looked at the ‘Accumulated Objects in the Dominator Tree’, I noticed a large number of BoundedConcurrentHashMap entries. Clicking on an individual hash map entry and using the ‘List objects with incoming references’ option, I could see that the segments of the hash map were query plan cache objects.

To look at what was stored inside the query plan cache, I used the OQL feature in the Memory Analyzer with the following query.

SELECT l.query.toString()
FROM INSTANCEOF org.hibernate.engine.query.spi.QueryPlanCache$HQLQueryPlanKey l

By inspecting the strings returned by the query, I spotted the problem: the Hibernate query plan cache held a large number of otherwise identical queries, each with a different numeric primary key in the where clause, like this:

generatedAlias0.numberDetails as generatedAlias59 left join fetch
generatedAlias59.ddiRange6 as generatedAlias60 where generatedAlias0.id=500685

Reading the Hibernate documentation, I learned that by default, Criteria queries use bind parameters only for literals that are not numeric. My application used only the numeric primary key for access, so Hibernate was caching a new plan for every individual row in the database. As a result, the application was performing worse than if it had no query plan cache at all!
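
For illustration, the problem arises from criteria queries of roughly this shape (the Order entity and orderId variable here are hypothetical):

import javax.persistence.EntityManager;
import javax.persistence.criteria.CriteriaBuilder;
import javax.persistence.criteria.CriteriaQuery;
import javax.persistence.criteria.Root;

Order findOrder(EntityManager entityManager, long orderId) {
    CriteriaBuilder cb = entityManager.getCriteriaBuilder();
    CriteriaQuery<Order> query = cb.createQuery(Order.class);
    Root<Order> root = query.from(Order.class);
    // With the default literal handling mode, the numeric id below is
    // rendered inline in the generated query, so every distinct id
    // produces a new entry in the query plan cache.
    query.where(cb.equal(root.get("id"), orderId));
    return entityManager.createQuery(query).getSingleResult();
}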

To fix this, set the property hibernate.criteria.literal_handling_mode to BIND. This instructs Hibernate to use bind variables for all literal values. In Spring Boot, the property is spring.jpa.properties.hibernate.criteria.literal_handling_mode.
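
In a Spring Boot application.properties, that is a single line:

spring.jpa.properties.hibernate.criteria.literal_handling_mode=BIND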

To confirm that the query plan cache problem was fixed, I used jconsole to take another heap dump.
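
(The same can be done from the command line with the JDK's jmap tool, e.g. jmap -dump:live,format=b,file=heap.hprof <pid>.)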

Running the OQL query again, I could see that the query plans had changed to using bind parameters in the where clause.

generatedAlias0.numberDetails as generatedAlias59 left join fetch
generatedAlias59.ddiRange6 as generatedAlias60 where generatedAlias0.id=:param0

Logging outbound HTTP requests from a JAX-RS client

In order to track down a bug, I needed to log the HTTP requests sent from one of our web services to a third party web service. (We hosted the third party service, but the software was not developed in house.)

Our web service was written with RESTEasy, a framework I was not especially familiar with. (I prefer the Spring stack, and always create new web services using Spring Boot.) The code calling the third party web service looked like this:

import javax.ws.rs.client.Invocation.Builder;
builder.buildPost(Entity.form(form)).invoke();
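
For context, the builder in that snippet comes from the standard JAX-RS 2 client bootstrap, roughly like this (the target URL is illustrative):

import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.client.Invocation.Builder;

Client client = ClientBuilder.newClient();
Builder builder = client.target("https://thirdparty.example.com/api/orders").request();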

Surprisingly, there wasn’t an obvious way to log the request body being sent. According to various Stack Overflow Q&As, the way to log outbound JAX-RS client requests is to create an implementation of ClientRequestFilter and register it as a Provider in the container.

import java.io.IOException;
import javax.ws.rs.client.ClientRequestContext;
import javax.ws.rs.client.ClientRequestFilter;
import javax.ws.rs.ext.Provider;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

@Provider
public class MyClientRequestLoggingFilter implements ClientRequestFilter {
    private static final Logger LOG = LoggerFactory.getLogger(MyClientRequestLoggingFilter.class);

    @Override
    public void filter(ClientRequestContext requestContext) throws IOException {
        LOG.info(requestContext.getEntity().toString());
    }
}

You then configure your web.xml to scan for providers:

<context-param>
  <param-name>resteasy.scan.providers</param-name>
  <param-value>true</param-value>
</context-param>

There are quite a few warnings that, because ClientRequestContext.getEntity() returns an Object, the default toString() may not produce what you expect; the object has to be marshalled to log the actual request body.
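
One conceivable way around this is a JAX-RS WriterInterceptor that captures the bytes as the entity is marshalled. Here is a minimal sketch of that idea, not something I verified in our setup:

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import javax.ws.rs.WebApplicationException;
import javax.ws.rs.ext.Provider;
import javax.ws.rs.ext.WriterInterceptor;
import javax.ws.rs.ext.WriterInterceptorContext;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

@Provider
public class RequestBodyLoggingInterceptor implements WriterInterceptor {
    private static final Logger LOG = LoggerFactory.getLogger(RequestBodyLoggingInterceptor.class);

    @Override
    public void aroundWriteTo(WriterInterceptorContext context) throws IOException, WebApplicationException {
        OutputStream original = context.getOutputStream();
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        // Redirect the marshalled entity into our buffer.
        context.setOutputStream(buffer);
        context.proceed();
        LOG.info(buffer.toString("UTF-8"));
        // Forward the captured bytes to the real stream.
        buffer.writeTo(original);
        context.setOutputStream(original);
    }
}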

After banging my head against a wall for an afternoon, I decided to take a completely different approach to the problem and googled how to enable request logging in Apache httpd instead. This turned out to be a much more straightforward way to achieve what I needed. The mod_dumpio module can be used to dump all input to and output from the server into a log file. You need mod_dumpio present in the Apache httpd installation. (On Windows, check that mod_dumpio.so is in c:\apache-install-dir\modules.) Stop the service, then edit the httpd.conf file to include the following lines:

LoadModule dumpio_module modules/mod_dumpio.so

ErrorLog "logs/error.log"
LogLevel debug
DumpIOInput On
DumpIOOutput On
LogLevel dumpio:trace7

The ErrorLog and LogLevel lines were already present in my httpd.conf. I changed the LogLevel to debug, and added the LoadModule line plus the three dumpio lines to turn the module on. After a server restart, all HTTP requests and responses were successfully logged to the file logs/error.log.

Lesson learnt here: if an approach turns out to be more complicated than expected, it’s worth taking a step back and rethinking.

Using two datasources in a Spring Boot application

Using a single datasource in a Spring Boot application is very straightforward. Using multiple datasources in one application is anything but! It took me quite a bit of googling and fiddling to find a solution that worked.

To use two datasources, you need to designate one as primary; the other then becomes the secondary. You mark a datasource as primary using the primary attribute. Below is an example using XML-based configuration:

<bean id="greenDataSource" primary="true" class="org.apache.commons.dbcp2.BasicDataSource">
    <property name="driverClassName" value="com.microsoft.sqlserver.jdbc.SQLServerDriver"/>
    <property name="url" value="${db.green.url}"/>
    <property name="username" value="${db.green.username}"/>
    <property name="password" value="${db.green.password}"/>
</bean>

Then define the secondary datasource like this:

<bean id="purpleDataSource" class="org.apache.commons.dbcp2.BasicDataSource">
    <property name="driverClassName" value="com.microsoft.sqlserver.jdbc.SQLServerDriver"/>
    <property name="url" value="${db.purple.url}"/>
    <property name="username" value="${db.purple.username}"/>
    <property name="password" value="${db.purple.password}"/>
</bean>
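
The same can be expressed in Java configuration; here is a minimal sketch using the same DBCP2 datasource (the property placeholders mirror the XML above):

import javax.sql.DataSource;
import org.apache.commons.dbcp2.BasicDataSource;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Primary;

@Configuration
public class DataSourceConfig {

    // The primary datasource: injected wherever no qualifier is given.
    @Bean
    @Primary
    public DataSource greenDataSource(@Value("${db.green.url}") String url,
                                      @Value("${db.green.username}") String username,
                                      @Value("${db.green.password}") String password) {
        return buildDataSource(url, username, password);
    }

    @Bean
    public DataSource purpleDataSource(@Value("${db.purple.url}") String url,
                                       @Value("${db.purple.username}") String username,
                                       @Value("${db.purple.password}") String password) {
        return buildDataSource(url, username, password);
    }

    private BasicDataSource buildDataSource(String url, String username, String password) {
        BasicDataSource ds = new BasicDataSource();
        ds.setDriverClassName("com.microsoft.sqlserver.jdbc.SQLServerDriver");
        ds.setUrl(url);
        ds.setUsername(username);
        ds.setPassword(password);
        return ds;
    }
}

The @Primary annotation plays the same role here as the primary="true" attribute in the XML.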

You can then wire them into your Java classes using @Autowired. The primary datasource is injected by default; add @Qualifier to select the secondary one:

@Repository
public class AwesomeDaoImpl implements AwesomeDao {
    private JdbcTemplate greenJdbcTemplate;
    private JdbcTemplate purpleJdbcTemplate;

    @Autowired
    public void setGreenDataSource(DataSource greenDataSource) {
        this.greenJdbcTemplate = new JdbcTemplate(greenDataSource);
    }

    @Autowired
    public void setPurpleDataSource(@Qualifier("purpleDataSource") DataSource purpleDataSource) {
        this.purpleJdbcTemplate = new JdbcTemplate(purpleDataSource);
    }
}

I haven’t figured out how to plumb in more than two datasources without using JNDI. If JNDI is available, your Spring Boot application can access all the JNDI datasources using the @Resource annotation.

@Repository
public class ColourDaoImpl implements ColourErrorDao {
    private JdbcTemplate jdbcTemplate;

    @Resource(mappedName = "java:jboss/datasources/Green")
    public void setGreenDataSource(DataSource greenDataSource) {
        this.jdbcTemplate = new JdbcTemplate(greenDataSource);
    }
}

Java 8 Date-Time API and good old java.util.Date

Am I the only one who prefers Joda Time over the new Java 8 java.time package? I find the official Oracle documentation poor, and the API not as intuitive.

No matter which high level datetime library an application uses, be it java.util.Calendar, Joda Time or java.time, developers still often have to work with the old-fashioned java.util.Date. This is because java.sql.Date is a subclass of java.util.Date, and therefore most, if not all, data access layer code expects or returns java.util.Date.

Converting a datetime such as 2016-11-21 09:00 to and from java.util.Date is very simple in Joda Time.

// from Joda to Date
DateTime dt = new DateTime();
Date jdkDate = dt.toDate();

// from Date to Joda
dt = new DateTime(jdkDate);

Java 8 java.time has two separate representations of time: human time and machine time. Classes such as LocalDateTime and LocalDate represent human time; the Instant class represents machine time. Conversions between the datetime classes and java.util.Date must go via an Instant.

// from LocalDateTime to Date
LocalDateTime dt = LocalDateTime.of(2016, 11, 21, 9, 0);
Instant i = dt.atZone(ZoneOffset.UTC).toInstant();
Date d = Date.from(i);

// from Date to LocalDateTime
i = d.toInstant();
dt = LocalDateTime.ofInstant(i, ZoneOffset.UTC);

You can also compare the two libraries' documentation on interoperability with the legacy java.util classes. The Joda Time page is much shorter and easier to read.

Mule flow variables to JSON payload for REST requests

I was working on a Mule flow that submits a static JSON request to a REST endpoint. (The variable part was the id in the URL.) My first attempt was to set the JSON request directly using <set-payload>:

<set-variable variableName="orderId" value="#[payload.id]" doc:name="set orderId"/>
<set-payload value="{'note' : 'Order auto-approved by X', 'sendEmail' : true}" doc:name="Set Payload"/>
<http:request config-ref="WS_CONFIG" path="/order/#[flowVars.orderId]/approve" method="POST" doc:name="REST approve request">
  <http:request-builder>
    <http:header headerName="Content-Type" value="application/json"/>
  </http:request-builder>
</http:request>

However, Mule refused to submit this request, complaining that ‘Message payload is of type: String’. Most pages I found through googling suggested the DataWeave transformer, which can transform data to and from a large range of formats, including flow variables into JSON; unfortunately, it is only available in the enterprise edition. After a frustrating hour of more googling and testing various transformers, I found another way to achieve this easily, using an expression transformer:

<set-variable variableName="orderId" value="#[payload.id]" doc:name="set orderId"/>
<expression-transformer expression="#[['note' : 'Order auto-approved by X', 'sendEmail' : true]]" doc:name="set payload"/>
<json:object-to-json-transformer doc:name="Object to JSON"/>
<http:request config-ref="WS_CONFIG" path="/order/#[flowVars.orderId]/approve" method="POST" doc:name="REST approve request">
  <http:request-builder>
    <http:header headerName="Content-Type" value="application/json"/>
  </http:request-builder>
</http:request>

The flow I worked on didn’t need the order id in the JSON request, but you can reference flow variables in the payload like this:

<set-variable variableName="orderId" value="#[payload.id]" doc:name="set orderId"/>
<expression-transformer expression="#[['note' : 'Order auto-approved by X', 'id':flowVars.orderId, 'sendEmail' : true]]" doc:name="set payload"/>