Using two datasources in a Spring Boot application

Using a single datasource in a Spring Boot application is very straightforward. However, using multiple datasources in an application is anything but! It took me quite a bit of googling and fiddling to find a solution that worked.

To use two datasources, you need to set one up as the primary. The second datasource then becomes the secondary. You mark a datasource as the primary with the primary attribute. Below is an example using XML-based configuration:

<bean id="greenDataSource" primary="true" class="org.apache.commons.dbcp2.BasicDataSource">
    <property name="driverClassName" value="com.microsoft.sqlserver.jdbc.SQLServerDriver"/>
    <property name="url" value="${db.green.url}"/>
    <property name="username" value="${db.green.username}"/>
    <property name="password" value="${db.green.password}"/>
</bean>

Then define the secondary datasource like this:

<bean id="purpleDataSource" class="org.apache.commons.dbcp2.BasicDataSource">
    <property name="driverClassName" value="com.microsoft.sqlserver.jdbc.SQLServerDriver"/>
    <property name="url" value="${db.purple.url}"/>
    <property name="username" value="${db.purple.username}"/>
    <property name="password" value="${db.purple.password}"/>
</bean>

You can then wire them into your Java classes using the @Autowired and @Primary annotations:

@Repository
public class AwesomeDaoImpl implements AwesomeDao {
    private JdbcTemplate greenJdbcTemplate;
    private JdbcTemplate purpleJdbcTemplate;

    @Autowired
    @Primary
    public void setGreenDataSource(DataSource greenDataSource) {
        this.greenJdbcTemplate = new JdbcTemplate(greenDataSource);
    }

    @Autowired
    public void setPurpleDataSource(DataSource purpleDataSource) {
        this.purpleJdbcTemplate = new JdbcTemplate(purpleDataSource);
    }
}
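
With both templates wired in, a DAO method can query each database independently. Here is a minimal sketch of a method added to AwesomeDaoImpl (the Colours table and getAllColours method are hypothetical, and java.util.List/ArrayList imports are assumed):

public List<String> getAllColours() {
    // query the primary (green) database first, then append the results from the secondary (purple) one
    List<String> colours = new ArrayList<>(
            greenJdbcTemplate.queryForList("select name from Colours", String.class));
    colours.addAll(
            purpleJdbcTemplate.queryForList("select name from Colours", String.class));
    return colours;
}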

I haven’t figured out how to plumb in more than two datasources without using JNDI. If JNDI is available, your Spring Boot application can access all the JNDI datasources using the @Resource annotation:

@Repository
public class ColourDaoImpl implements ColourErrorDao {
    private JdbcTemplate jdbcTemplate;

    @Resource(mappedName = "java:jboss/datasources/Green")
    public void setGreenDataSource(DataSource greenDataSource) {
        this.jdbcTemplate = new JdbcTemplate(greenDataSource);
    }
}

Dropping unknown default constraints in SQL Server

For releases, we had to provide SQL scripts to install database changes. These had to be run outside of SQL Server Management Studio, on multiple environments. Occasionally, I had to drop unnamed constraints in a script. (Most developers wrote their install scripts with named constraints to avoid this difficulty.) Every time I encountered the problem, I googled and followed this excellent blog post by Rob Farley. It gives the SQL commands to query for the name of an unnamed constraint, given the table and column name, but stops short of giving you the SQL to actually drop the constraint.

So here is my take:

declare @df_name varchar(max)
select @df_name = d.name from sys.tables t
    join sys.default_constraints d
        on d.parent_object_id = t.object_id
    join sys.columns c
        on c.object_id = t.object_id
        and c.column_id = d.parent_column_id
    where t.name = 'some_db_table'
    and c.name = 'some_column_in_table'

if @df_name is not null
begin
    declare @sql varchar(max) = 'alter table some_db_table drop constraint ' + @df_name
    exec (@sql)
end

Java 8 Date-Time API and good old java.util.Date

Am I the only one who prefers Joda Time over the new Java 8 java.time package? I find the official Oracle documentation poor, and the API not as intuitive.

No matter which high-level datetime library is used in an application, be it java.util.Calendar, Joda Time or java.time, developers still often have to work with the old-fashioned java.util.Date. This is because java.sql.Date is a subclass of java.util.Date, and therefore most, if not all, data access layer code expects or returns java.util.Date.
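
For example, a typical JdbcTemplate query maps a datetime column to java.sql.Timestamp, which is itself a java.util.Date. A minimal sketch, assuming a hypothetical Orders table with a created_date column and a wired-in jdbcTemplate:

// java.sql.Timestamp extends java.util.Date, so callers only ever see a java.util.Date
public Date getOrderCreatedDate(int orderId) {
    return jdbcTemplate.queryForObject(
            "select created_date from Orders where id = ?",
            java.sql.Timestamp.class, orderId);
}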

Converting a datetime such as 2016-11-21 09:00 to and from java.util.Date is very simple in Joda Time:

// from Joda to Date
DateTime dt = new DateTime(2016, 11, 21, 9, 0);
Date jdkDate = dt.toDate();

// from Date to Joda
dt = new DateTime(jdkDate);

Java 8 java.time has two separate ways to represent time – human time vs machine time. Classes such as LocalDateTime and LocalDate represent human time. The Instant class represents machine time. Conversions between these classes and java.util.Date must be done via an Instant.

// from LocalDateTime to Date
LocalDateTime dt = LocalDateTime.of(2016, 11, 21, 9, 0);
Instant i = dt.toInstant(ZoneOffset.UTC);
Date d = Date.from(i);

// from Date to LocalDateTime
i = d.toInstant();
dt = LocalDateTime.ofInstant(i, ZoneOffset.UTC);

You can also compare the documentation of the two libraries on interoperability with java.util.Date. The Joda Time one is much shorter and easier to read.

Mule flow variables to JSON payload for REST requests

I was working on a Mule flow that submits a static JSON request to a REST endpoint. (The only variable part was the id in the URL.) My first attempt was to set the JSON request directly using <set-payload>.

<set-variable variableName="orderId" value="#[payload.id]" doc:name="set orderId"/>
<set-payload value="{'note' : 'Order auto-approved by X', 'sendEmail' : true}" doc:name="Set Payload"/>
<http:request config-ref="WS_CONFIG" path="/order/#[flowVars.orderId]/approve" method="POST" doc:name="REST approve request">
  <http:request-builder>
    <http:header headerName="Content-Type" value="application/json"/>
  </http:request-builder>
</http:request>

However, Mule refused to submit this request, complaining about ‘Message payload is of type: String’. Most pages I found from googling suggested using the DataWeave Transformer, which can transform data to and from a large range of formats, including flow variables into JSON. But the DataWeave Transformer was only available in the enterprise edition. After a frustrating hour of more googling and testing various transformers, I found another way to achieve this easily by using an expression transformer:

<set-variable variableName="orderId" value="#[payload.id]" doc:name="set orderId"/>
<expression-transformer expression="#[['note' : 'Order auto-approved by X', 'sendEmail' : true]]" doc:name="set payload"/>
<json:object-to-json-transformer doc:name="Object to JSON"/>
<http:request config-ref="WS_CONFIG" path="/order/#[flowVars.orderId]/approve" method="POST" doc:name="REST approve request">
  <http:request-builder>
    <http:header headerName="Content-Type" value="application/json"/>
  </http:request-builder>
</http:request>

The flow I worked on didn’t need the order id in the JSON request, but you can reference flow variables in the payload like this:

<set-variable variableName="orderId" value="#[payload.id]" doc:name="set orderId"/>
<expression-transformer expression="#[['note' : 'Order auto-approved by X', 'id':flowVars.orderId, 'sendEmail' : true]]" doc:name="set payload"/>

Application context XML configuration in a Spring Boot web service

A colleague told me recently that he didn’t use Spring for his latest REST project because he couldn’t get the beans defined in an XML configuration file loaded. He was familiar with Spring but had never bootstrapped a brand new project. I didn’t realise this could be a problem because I have used Spring MVC for a very long time. He was right: it was not obvious. For example, in the Spring Boot tutorial Building a RESTful Web Service, everything is @Autowired. In a real application, you might need to define some beans in an XML configuration file, such as database connection information for the persistence layer.

Using the example from my previous post on Spring Boot, you can use the @ImportResource annotation to load XML configuration files:

@SpringBootApplication
@ComponentScan
@ImportResource("classpath:spring-config.xml")
public class Application extends SpringBootServletInitializer {
  public static void main(String[] args) {
    SpringApplication.run(Application.class, args);
  }
}

Spring will auto-scan classes annotated with @Service, @Repository, @Controller and @Component. Because Spring AOP is proxy-based, your DAO classes should implement interfaces. For example:

public interface OrderDao {
  Order getOrder(int id) throws OrderNotFoundException;
}
@Repository
public class OrderDaoImpl implements OrderDao {
  private JdbcTemplate jdbcTemplate;
  @Autowired 
  public void setMyDataSource(DataSource myDataSource) {
    this.jdbcTemplate = new JdbcTemplate(myDataSource);
  }
}

For some reason, Spring’s own JdbcDaoSupport class is not autowire-enabled. If you choose to extend JdbcDaoSupport, you will need to set its datasource manually in the XML configuration. I prefer to have a JdbcTemplate as a member and @Autowired the setter instead.

The datasource is defined in the XML file spring-config.xml. In a Maven project, the file is located in src/main/resources. (Please use a properly configured connection pool in a real application; I’m using a bare-bones BasicDataSource here for simplicity’s sake.)

<bean id="myDataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
  <property name="driverClassName" value="com.microsoft.sqlserver.jdbc.SQLServerDriver"/>
  <property name="url" value="${db.url}"/>
  <property name="username" value="${db.username}"/>
  <property name="password" value="${db.password}"/>
</bean>

The properties are defined in application.properties, also in src/main/resources.

db.url=jdbc:sqlserver://localhost:1433;DatabaseName=Orders
db.username=Luser
db.password=Pwd

Note: I’m using Spring Boot 1.3.5

Building a Spring Boot RESTful Web Service for Wildfly 8.2

The Spring Boot project promises an easy and fuss-free way to build and configure Spring applications. I finally got a chance to try it out today. I needed to build a simple RESTful web service to deploy on Wildfly 8.2.

I followed the tutorial at https://spring.io/guides/gs/rest-service/. The tutorial was written for an embedded web server. I needed to make a few tweaks to get my app running on Wildfly.

By adding

<packaging>war</packaging>

to pom.xml, I was able to generate a war file using mvn package. However, when I deployed the war file in Wildfly, I got the following exception:

 java.lang.RuntimeException: 
    java.lang.ClassCastException:
      org.apache.tomcat.websocket.server.WsServerContainer cannot be cast to
      io.undertow.websockets.jsr.ServerWebSocketContainer

Spring Boot packaged Tomcat jars into the war file, and they conflicted with Wildfly. I then added the following exclusion to pom.xml:

<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-web</artifactId>
  <exclusions>
    <exclusion>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-tomcat</artifactId>
    </exclusion>
  </exclusions>
</dependency>

After this change, the web app deployed and started, but could not receive any requests. All GET requests returned 403 Forbidden and POST requests returned 405 Method Not Allowed. There was nothing in the log files indicating what was wrong. After a bit of head banging, I found out the problem was that Wildfly couldn’t forward requests to the web app! I needed to:

  1. include the servlet-api jar, and
    <dependency>
      <groupId>javax.servlet</groupId>
      <artifactId>servlet-api</artifactId>
      <version>2.5</version>
    </dependency>
    
  2. annotate the main class with @ComponentScan and make it a subclass of SpringBootServletInitializer
    @SpringBootApplication
    @ComponentScan
    public class Application extends SpringBootServletInitializer {
    
        public static void main(String[] args) {
            SpringApplication.run(Application.class, args);
        }
    }
    

That’s it! I was impressed by how little configuration I needed to get my app up and running.

Note: I was using Spring Boot version 1.3.3

Stop accidental git commits of local dev changes to config files

During development, I often make changes to a few configuration files for local testing. Most of the time, I add each file individually to the staging area so these local config changes aren’t committed. Yesterday, I made a mistake and committed the local config. I wasn’t sure how it happened, but I must have accidentally clicked ‘commit all tracked files’. The test server was then built with my local config. Oops.

To stop this from happening again, I did some googling and found this handy git command:

git update-index --assume-unchanged <file>

This tells git to temporarily ignore changes to the specified file, all without touching .gitignore, which is a tracked file in the project.
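
When you do want to commit changes to that file again, the flag can be reversed with the standard counterpart of the command above:

git update-index --no-assume-unchanged <file>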

SQL Server stored procedures continue execution on error

This week I came across a bug in a stored procedure caused by a baffling default behaviour in SQL Server: by default, execution of a stored procedure continues on to the next statement after an error is raised. The stored procedure in question calls another stored procedure, which had been modified in the previous release to take an extra parameter.

DECLARE @tmp TABLE (value VARCHAR(100))

INSERT INTO @tmp
    EXEC sp_anotherStoredProcedure
        @orderId = @orderId,
        @petType = 'cat'

IF NOT EXISTS(
    SELECT *
    FROM @tmp l
    INNER JOIN Awesome_Event_Buffer r ON l.value = r.Description
    WHERE r.customer = @customer
)
    print 'do some important things'

Because of the new parameter, the nested stored procedure call now always fails with an error. However, the calling stored procedure is not halted, but continues with the next statement. The if not exists check now always returns true, and the important things are done regardless.

To abort a stored procedure whenever a runtime error occurs, set XACT_ABORT to ON, like so:

ALTER PROCEDURE [dbo].[sp_storedProcedure]
    @parameter1 INT,
    @parameter2 INT
AS
    set xact_abort on
    ...

Alternatively, you can check for errors explicitly using the error number @@error:

if (@@error <> 0)
    return (@@error)

SQL union does not guarantee the order of the result set

I introduced a bug in a SQL script I wrote recently because I was unaware that a SQL union does not guarantee the order of the returned result set.

The SQL stored procedure finds the primary and secondary network nodes that match a set of criteria. It looks something like:

select bd.name, n.nodeId, r.realmName
from BuildDefaults bd
join NetworkNodes n on bd.nodeId = n.nodeId
join NetworkRealms r on n.realmId = r.realmId
where bd.typeId = @type and bd.locationId = @location
union
select bd.name, n.nodeId, r.realmName
from BuildDefaults bd
join NetworkNodes n on bd.nodeId = n.nodeId
join NetworkRealms r on n.realmId = r.realmId
where bd.typeId = @type and bd.locationId = @secLocation

I assumed the row returned by the first select would always be the first row of the final result set, and the row from the second select would be the second row. In retrospect, it should have been obvious that this assumption was wrong. SQL never guarantees order unless the order by keyword is used!

To guarantee the order in a SQL union, an extra order column has to be used:

select bd.name, n.nodeId, r.realmName, 1 as nodeOrder
from BuildDefaults bd
join NetworkNodes n on bd.nodeId = n.nodeId
join NetworkRealms r on n.realmId = r.realmId
where bd.typeId = @type and bd.locationId = @location
union
select bd.name, n.nodeId, r.realmName, 2 as nodeOrder
from BuildDefaults bd
join NetworkNodes n on bd.nodeId = n.nodeId
join NetworkRealms r on n.realmId = r.realmId
where bd.typeId = @type and bd.locationId = @secLocation
order by nodeOrder