Using two datasources in a Spring Boot application

Using a single datasource in a Spring Boot application is very straightforward. Using multiple datasources, however, is anything but! It took me quite a bit of googling and fiddling to find a solution that worked.

To use two datasources, you need to set one up as the primary; the other then becomes the secondary. You mark a datasource as the primary using the primary attribute. Below is an example using XML-based configuration:

<bean id="greenDataSource" primary="true" class="org.apache.commons.dbcp2.BasicDataSource">
    <property name="driverClassName" value=""/>
    <property name="url" value="${}"/>
    <property name="username" value="${}"/>
    <property name="password" value="${}"/>

Then define the secondary datasource like this:

<bean id="purpleDataSource" class="org.apache.commons.dbcp2.BasicDataSource">
    <property name="driverClassName" value=""/>
    <property name="url" value="${db.purple.url}"/>
    <property name="username" value="${db.purple.username}"/>
    <property name="password" value="${db.purple.password}"/>

You can then wire them into your Java classes using the @Autowired and @Qualifier annotations; the primary datasource is injected by default, while the secondary has to be requested explicitly by name:

public class AwesomeDaoImpl implements AwesomeDao {
    private JdbcTemplate greenJdbcTemplate;
    private JdbcTemplate purpleJdbcTemplate;

    @Autowired
    public void setGreenDataSource(DataSource greenDataSource) {
        this.greenJdbcTemplate = new JdbcTemplate(greenDataSource);
    }

    @Autowired
    public void setPurpleDataSource(@Qualifier("purpleDataSource") DataSource purpleDataSource) {
        this.purpleJdbcTemplate = new JdbcTemplate(purpleDataSource);
    }
}
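
If you prefer Java-based configuration over XML, the same effect can be achieved by marking one @Bean definition with @Primary. Here is a minimal sketch; the connection details are hypothetical:

@Configuration
public class DataSourceConfig {
    @Bean
    @Primary
    public DataSource greenDataSource() {
        BasicDataSource ds = new BasicDataSource();   // org.apache.commons.dbcp2
        ds.setUrl("jdbc:h2:mem:green");               // hypothetical connection details
        ds.setUsername("sa");
        ds.setPassword("");
        return ds;
    }

    @Bean
    public DataSource purpleDataSource() {
        BasicDataSource ds = new BasicDataSource();
        ds.setUrl("jdbc:h2:mem:purple");              // hypothetical connection details
        ds.setUsername("sa");
        ds.setPassword("");
        return ds;
    }
}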

I haven’t figured out how to plumb in more than two datasources without using JNDI. If JNDI is available, your Spring Boot application can access all the JNDI datasources using the @Resource annotation:

public class ColourDaoImpl implements ColourErrorDao {
    private JdbcTemplate jdbcTemplate;

    @Resource(mappedName = "java:jboss/datasources/Green")
    public void setGreenDataSource(DataSource greenDataSource) {
        this.jdbcTemplate = new JdbcTemplate(greenDataSource);
    }
}

Java 8 Date-Time API and good old java.util.Date

Am I the only one who prefers Joda Time over the new Java 8 java.time package? I find the official Oracle documentation poor, and the API not as intuitive.

No matter which high-level datetime library an application uses, be it java.util.Calendar, Joda Time or java.time, developers still often have to work with the old-fashioned java.util.Date. This is because java.sql.Date is a subclass of java.util.Date, and therefore most, if not all, data access layer code expects or returns java.util.Date.
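
Because java.sql.Date and java.sql.Timestamp both extend java.util.Date, a value read through JDBC can be assigned straight to a java.util.Date; a quick sketch, with a hypothetical column name:

// java.sql.Timestamp extends java.util.Date, so the assignment needs no conversion
java.util.Date created = resultSet.getTimestamp("created_at");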

Converting a datetime such as 2016-11-21 09:00 to java.util.Date is very simple in Joda Time:

// from Joda to Date
DateTime dt = new DateTime();
Date jdkDate = dt.toDate();

// from Date to Joda
dt = new DateTime(jdkDate);

Java 8's java.time has two separate ways to represent time: human time and machine time. Classes such as LocalDateTime and LocalDate represent human time; the Instant class represents machine time. Conversions between the human-time classes and java.util.Date must go via an Instant.

// from LocalDateTime to Date
LocalDateTime dt = LocalDateTime.of(2016, 11, 21, 9, 0);
Instant i = dt.toInstant(ZoneOffset.UTC);
Date d = Date.from(i);

// from Date to LocalDateTime
i = d.toInstant();
dt = LocalDateTime.ofInstant(i, ZoneOffset.UTC);
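
If you want the JVM's default time zone instead of UTC, use ZoneId.systemDefault() rather than ZoneOffset.UTC; a minimal sketch:

// from Date to LocalDateTime in the system default time zone
Date jdkDate = new Date();
LocalDateTime local = LocalDateTime.ofInstant(jdkDate.toInstant(), ZoneId.systemDefault());

// from LocalDateTime back to Date, via ZonedDateTime
Date back = Date.from(local.atZone(ZoneId.systemDefault()).toInstant());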

You can also compare the two libraries' documentation on interoperability with java.util.Date. The Joda Time page is much shorter and easier to read.

Mule flow variables to JSON payload for REST requests

I was working on a mule flow that submits a static JSON request to a REST endpoint (the variable part was the id in the URL). My first attempt was to set the JSON request directly using <set-payload>:

<set-variable variableName="orderId" value="#[]" doc:name="set orderId"/>
<set-payload value="{'note' : 'Order auto-approved by X', 'sendEmail' : true}" doc:name="Set Payload"/>
<http:request config-ref="WS_CONFIG" path="/order/#[flowVars.orderId]/approve" method="POST" doc:name="REST approve request">
    <http:header headerName="Content-Type" value="application/json"/>
</http:request>

However, mule refused to submit this request, complaining that 'Message payload is of type: String'. Most pages I found from googling suggested using the DataWeave Transformer, which can transform data to and from a large range of formats, including flow variables into JSON. But the DataWeave Transformer is only available in the enterprise edition. After a frustrating hour of more googling and testing various transformers, I found another way to achieve this easily, using an expression transformer:

<set-variable variableName="orderId" value="#[]" doc:name="set orderId"/>
<expression-transformer expression="#[['note' : 'Order auto-approved by X', 'sendEmail' : true]]" doc:name="set payload"/>
<json:object-to-json-transformer doc:name="Object to JSON"/>
<http:request config-ref="WS_CONFIG" path="/order/#[flowVars.orderId]/approve" method="POST" doc:name="REST approve request">
    <http:header headerName="Content-Type" value="application/json"/>
</http:request>

The flow I worked on didn't need the order id in the JSON request, but you can reference flow variables in the payload like this:

<set-variable variableName="orderId" value="#[]" doc:name="set orderId"/>
<expression-transformer expression="#[['note' : 'Order auto-approved by X', 'id':flowVars.orderId, 'sendEmail' : true]]" doc:name="set payload"/>

Application context XML configuration in a Spring Boot web service

A colleague told me recently that he didn't use Spring for his latest REST project because he couldn't get the beans defined in an XML configuration file loaded. He was familiar with Spring but had never bootstrapped a brand new project. I didn't realise this could be a problem, because I have used Spring MVC for a very long time. But he was right: it is not obvious. For example, in the Spring Boot tutorial Building a RESTful Web Service, everything is @Autowired. In a real application, you might need to define some beans in an XML configuration file, for example the database connection information for the persistence layer.

Using the example from my previous post on Spring Boot, you can use the @ImportResource annotation to load XML configuration files:

@ImportResource("classpath:spring-config.xml")
public class Application extends SpringBootServletInitializer {
  public static void main(String[] args) { SpringApplication.run(Application.class, args); }
}

Spring will auto-scan classes annotated with @Service, @Repository, @Controller and @Component. Because Spring AOP is proxy-based, your DAO classes should implement interfaces. For example:

public interface OrderDao {
  Order getOrder(int id) throws OrderNotFoundException;
}

@Repository
public class OrderDaoImpl implements OrderDao {
  private JdbcTemplate jdbcTemplate;
  @Autowired
  public void setMyDataSource(DataSource myDataSource) {
    this.jdbcTemplate = new JdbcTemplate(myDataSource);
  }
}

For some reason, Spring's own JdbcDaoSupport class is not autowiring-friendly. If you choose to extend JdbcDaoSupport, you will need to set the datasource manually in the XML configuration. I prefer to have a JdbcTemplate member and @Autowired the setter instead.
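
For comparison, here is a sketch of the JdbcDaoSupport variant. The dataSource property has to be wired explicitly on the DAO bean in XML, e.g. <property name="dataSource" ref="myDataSource"/>; the query and OrderRowMapper below are hypothetical:

public class OrderDaoImpl extends JdbcDaoSupport implements OrderDao {
  public Order getOrder(int id) throws OrderNotFoundException {
    // getJdbcTemplate() is inherited from JdbcDaoSupport;
    // OrderRowMapper is a hypothetical RowMapper implementation
    return getJdbcTemplate().queryForObject(
        "select * from orders where id = ?", new OrderRowMapper(), id);
  }
}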

The datasource is defined in the XML file spring-config.xml, which lives in src/main/resources in a maven project. (Please use a connection pool in a real application; I'm using BasicDataSource here for simplicity's sake.)

<bean id="myDataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
  <property name="driverClassName" value=""/>
  <property name="url" value="${db.url}"/>
  <property name="username" value="${db.username}"/>
  <property name="password" value="${db.password}"/>

The properties are defined in application.properties, also in src/main/resources.
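
For illustration, the file could contain entries like these (the values are hypothetical):

db.url=jdbc:postgresql://localhost:5432/mydb
db.username=myuser
db.password=secret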


Note: I’m using Spring Boot 1.3.5

Hot deploy JSPs in Wildfly 8.2.0

Wildfly has a development mode for JSPs. In this mode, the wildfly server checks for changes to JSP files in deployed applications, so JSPs can be edited and tested without recompiling and redeploying the entire war file. Very handy.

This is configured in ${WILDFLY_ROOT}/standalone/configuration/standalone.xml. To enable it, set the development attribute to true in the <jsp-config> element.

<subsystem xmlns="urn:jboss:domain:undertow:1.2">
    <servlet-container name="default">
        <jsp-config development="true" tag-pooling="false"/>
    </servlet-container>
</subsystem>

If the original war file is deployed with maven, the exploded application can be found in ${WILDFLY_ROOT}/standalone/tmp/vfs/temp/tempxxxxxxx/content-xxxxxxx, where xxxxxxx is a series of random numbers. The JSPs should be in the WEB-INF directory under the application root. Replace a JSP with a newer version and the changes are reflected immediately when the webpage is reloaded.

Debug remote Mule server applications in Eclipse

Normally, I debug mule applications using the Debug as Mule Application option in Anypoint Studio. However, I recently started on an existing mule project that does not run within Anypoint Studio.

I have set up my development environment with eclipse as the IDE, deploying into a mule standalone server and debugging via the Java Debug Wire Protocol (JDWP). JDWP is the protocol used for communication between a debugger and the Java VM it debugs. In other words, it lets you set breakpoints, step through code, and evaluate expressions in Java applications running inside the target server container, outside eclipse.

First, the mule server must be set up to run in debug mode. Open wrapper.conf and uncomment (or add) the following entries, where <n> stands for the next unused index:<n>=-Xdebug<n>=-Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=5005

When the mule server is next started, there should be an entry in wrapper.log showing the server is running in debug mode:

Listening for transport dt_socket at address: 5005

To debug in eclipse, go to Run -> Debug Configurations and create a new debug configuration for Remote Java Application. Set the port to 5005, hit the Debug button, and eclipse will attempt to connect to the mule server. Sometimes eclipse does not switch to the Debug perspective automatically; to do so manually, go to Window -> Perspective -> Open Perspective -> Debug.

Initially, I made the mistake of deploying my mule applications as zipped files. When I tried to connect to the mule server from eclipse, I got the following errors:

  • Hot code replace failed – Scheme change not implemented
  • Hot code replace failed – delete method not implemented

To say these messages are cryptic is an understatement! They basically mean the IDE cannot hot-swap code in the target VM, which is needed when debugging a remote Java application. You must use the same compiler for both the IDE and the application files running on the remote Java server, and this includes not letting the remote server 'explode' an application during deployment. The simplest way to set this up is to use the maven install step to copy the application files to the mule server, and to run maven from within eclipse.

To undeploy the previous installation, delete the anchor file in the mule server's apps directory. Never delete a deployed application directory manually; always let mule do the deletion. Manual deletion interferes with mule's hot deployment layer and can lead to jar locking problems.

PS. I’m running mule standalone server 3.4 and eclipse mars 4.5.

Installing Anypoint Studio Plug-in 5.3.0 in Eclipse Mars

Installing anypoint studio should be a straightforward process, according to the official documentation on the mulesoft website: you add mulesoft to the list of available software sites, select anypoint studio, and click finish. Eclipse's install software wizard should, in theory, pull in the required dependencies.

However, it didn't work. I got the following exception:

No repository found containing: osgi.bundle,org.eclipse.m2e.archetype.common,

I had a Java EE installation of Eclipse Mars (4.5), which already included maven support via m2e-wtp. Anypoint studio wasn't happy with this version of m2e, and was also unable to pull in the required version of m2e itself. I had to add m2e's update site manually and install m2e. After that, anypoint studio installed without a problem.

Unit test Audit Logging Requirements with Mockito

During development of server-side software, I routinely encountered requirements for audit logging, ranging from logging routine events such as system startup and shutdown, to recording information when errors occur. For example, a component I worked on was responsible for generating binary data structures and forwarding them to a distribution component. The data generator had to limit the size of the structures to protect end devices from data packets too large to be processed within their resource constraints. The requirements stipulated that data attributes should be placed into the structure in ascending order, and that an error message shall be logged in the audit log for any attributes not included in the structure. I added verification of audit logging to the unit tests that cover the handling of attribute omission. Since unit tests already checked the error handling, it was not much more effort to also check the logging for the same conditions.

A naïve approach to unit testing logging is to trigger the error within a unit test, write the log to a file, read the log file back from disk, and then search the file for the expected error message. However, there is a simpler and more elegant way to assert logging behaviour using mock objects. By adding a mock Appender to the target Logger, all log requests to the target Logger are also forwarded to the mock Appender. Logging events can then be verified directly against the mock Appender.

I use JUnit and Mockito for my unit testing. The following code snippet sets up a mock Appender for use in the unit test class.

@Mock private Appender appender;
private static final Logger AUDIT = Logger.getLogger("");
@Before
public void setUp() {
    MockitoAnnotations.initMocks(this);
    AUDIT.addAppender(appender);
}

@After
public void tearDown() {
    AUDIT.removeAppender(appender);
}

To assert that a log message is written during a unit test, I use Mockito’s verify on the mock Appender’s doAppend method:

verify(appender).doAppend((LoggingEvent) anyObject());

Obviously, this only checks that some log message was written during the unit test. I normally run my unit tests with logging at trace level to make sure logging calls at the lowest level are exercised. But if a debug message is written during the unit test, the above verification will pass even when no error messages were written.

Mockito's ArgumentCaptor can be used to check for a specific message at a specific logging level. The following code captures all the LoggingEvents that occur in a unit test, then iterates over the captured events, checking their logging level and message content. If one of the log messages matches the expected level and keyword, the test will pass the assert statement after the loop. Searching for an exact match in the message can lead to brittle tests that break whenever the log message is updated, so I prefer searching for keywords like 'error', 'exceeds' or 'dropped' instead.

ArgumentCaptor<LoggingEvent> argumentCaptor = ArgumentCaptor.forClass(LoggingEvent.class);
verify(appender, atLeastOnce()).doAppend(argumentCaptor.capture());
boolean matched = false;
List<LoggingEvent> loggingEvents = argumentCaptor.getAllValues();
for (LoggingEvent le : loggingEvents) {
  if (le.getLevel().equals(Level.ERROR) &&
      le.getMessage().toString().contains(keyword)) {
    matched = true;
  }
}
assertTrue("Cannot find error message [" + keyword + "] in audit log", matched);

Mock objects provide an easy way to test logging in unit tests, exploiting the fact that a mock object remembers all its interactions. Logging can be verified by looking at interactions with the mock Appender's doAppend method, instead of reading and parsing the log file on disk. By including logging in unit tests, I can verify that audit logging requirements are fulfilled, and guard against future code changes inadvertently breaking compliance. Problems with audit logging caused by refactoring can be caught automatically during development, removing the chance that the failure reaches testers or even clients.

This post was originally written and published in August 2013 for a newsletter.

Verify nulls with Mockito argument matchers

When using verify in Mockito, the function arguments must be either all exact values or all argument matchers; you cannot mix the two within the same verify. In other words, the following two statements are correct:

verify(mockFtpClient).rename("a.txt", "b.txt");
verify(mockFtpClient).rename(eq("a.txt"), eq("b.txt"));

But this is not:

verify(mockFtpClient).rename(eq("a.txt"), "b.txt");

Today, I needed to match an anyString() and a null in the same verify statement. Mockito's documentation didn't have an example of an argument matcher for a null string. It turns out there is a hamcrest-style isNull matcher in Mockito:

verify(mockFtpClient).rename(anyString(), org.mockito.Matchers.isNull(String.class));

(The above example makes no sense semantically because you’d never want to pass a null string to the FtpClient’s rename function).