Java 8 Date-Time API and good old java.util.Date

Am I the only one who prefers Joda Time over the new Java 8 java.time package? I find the official Oracle documentation poor, and the API is not as intuitive.

No matter which high-level datetime library is used in an application, be it java.util.Calendar, Joda Time or java.time, developers still often have to work with the old-fashioned java.util.Date. This is because java.sql.Date is a subclass of java.util.Date, and therefore most, if not all, data access layer code expects or returns java.util.Date.

Converting a datetime such as 2016-11-21 09:00 to and from java.util.Date is very simple in Joda Time.

// from Joda to Date
DateTime dt = new DateTime(2016, 11, 21, 9, 0);
Date jdkDate = dt.toDate();

// from Date to Joda
dt = new DateTime(jdkDate);

Java 8 java.time has two separate ways to represent time – human time vs machine time. Classes such as LocalDateTime and LocalDate represent human time. The Instant class represents machine time. Conversions between these date-time classes and java.util.Date must be done via an Instant.

// from LocalDateTime to Date
LocalDateTime dt = LocalDateTime.of(2016, 11, 21, 9, 0);
Instant i = dt.atZone(ZoneOffset.UTC).toInstant();
Date d = Date.from(i);

// from Date to LocalDateTime
i = d.toInstant();
dt = LocalDateTime.ofInstant(i, ZoneOffset.UTC);
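
The choice of zone matters: java.util.Date has no time zone of its own, so converting with a different zone shifts the wall-clock value. A minimal sketch of the same round trip using the JVM's default zone instead of UTC:

// from LocalDateTime to Date via the default zone
ZoneId zone = ZoneId.systemDefault();
Date d2 = Date.from(LocalDateTime.of(2016, 11, 21, 9, 0).atZone(zone).toInstant());

// and back again
LocalDateTime dt2 = LocalDateTime.ofInstant(d2.toInstant(), zone);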

You can also compare the documentation of the two libraries on interoperability with java.util.Date. The Joda Time one is much shorter and easier to read.

Mule flow variables to JSON payload for REST requests

I was working on a Mule flow that submits a static JSON request to a REST endpoint. (The variable part was the id in the URL.) My first attempt was to set the JSON request directly using <set-payload>.

<set-variable variableName="orderId" value="#[payload.id]" doc:name="set orderId"/>
<set-payload value="{'note' : 'Order auto-approved by X', 'sendEmail' : true}" doc:name="Set Payload"/>
<http:request config-ref="WS_CONFIG" path="/order/#[flowVars.orderId]/approve" method="POST" doc:name="REST approve request">
  <http:request-builder>
    <http:header headerName="Content-Type" value="application/json"/>
  </http:request-builder>
</http:request>

However, Mule refused to submit this request, complaining about ‘Message payload is of type: String’. Most pages I found from googling suggested using the DataWeave transformer, which can transform data to and from a large range of formats, including flow variables into JSON. But the DataWeave transformer was only available in the enterprise edition. After a frustrating hour of more googling and testing various transformers, I found another way to achieve this easily, using an expression transformer:

<set-variable variableName="orderId" value="#[payload.id]" doc:name="set orderId"/>
<expression-transformer expression="#[['note' : 'Order auto-approved by X', 'sendEmail' : true]]" doc:name="set payload"/>
<json:object-to-json-transformer doc:name="Object to JSON"/>
<http:request config-ref="WS_CONFIG" path="/order/#[flowVars.orderId]/approve" method="POST" doc:name="REST approve request">
  <http:request-builder>
    <http:header headerName="Content-Type" value="application/json"/>
  </http:request-builder>
</http:request>

The flow I worked on didn’t need the order id in the JSON request. But you can reference flow variables in the payload like this:

<set-variable variableName="orderId" value="#[payload.id]" doc:name="set orderId"/>
<expression-transformer expression="#[['note' : 'Order auto-approved by X', 'id':flowVars.orderId, 'sendEmail' : true]]" doc:name="set payload"/>

Application context XML configuration in a Spring Boot web service

A colleague told me recently that he didn't use Spring for his latest REST project because he couldn't get the beans defined in an XML configuration file loaded. He was familiar with Spring but had never bootstrapped a brand new project. I didn't realise this could be a problem because I have used Spring MVC for a very long time. He was right. It was not obvious. For example, in the Spring Boot tutorial Building a RESTful Web Service, everything is @Autowired. In a real application, you might need to define some beans in an XML configuration file. For example, database connection information for the persistence layer.

Building on the example from my previous post on Spring Boot, you can use the @ImportResource annotation to load XML configuration files.

@SpringBootApplication
@ComponentScan
@ImportResource("classpath:spring-config.xml")
public class Application extends SpringBootServletInitializer {
  public static void main(String[] args) {
    SpringApplication.run(Application.class, args);
  }
}

Spring will auto scan classes annotated with @Service, @Repository, @Controller and @Component. Because Spring AOP is proxy-based, your DAO classes should implement interfaces. For example,

public interface OrderDao {
  Order getOrder(int id) throws OrderNotFoundException;
}

@Repository
public class OrderDaoImpl implements OrderDao {
  private JdbcTemplate jdbcTemplate;

  @Autowired
  public void setMyDataSource(DataSource myDataSource) {
    this.jdbcTemplate = new JdbcTemplate(myDataSource);
  }
}
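
For illustration, the getOrder implementation might look something like the sketch below (the orders table, its columns, and the Order and OrderNotFoundException constructors are assumptions, not from the original code):

@Override
public Order getOrder(int id) throws OrderNotFoundException {
  // assumes a simple orders table with id and customer columns
  List<Order> result = jdbcTemplate.query(
      "SELECT id, customer FROM orders WHERE id = ?",
      (rs, rowNum) -> new Order(rs.getInt("id"), rs.getString("customer")),
      id);
  if (result.isEmpty()) {
    throw new OrderNotFoundException(id);
  }
  return result.get(0);
}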

For some reason, Spring’s own JdbcDaoSupport class is not autowire-enabled. If you choose to extend JdbcDaoSupport, you will need to use XML configuration to set the datasource manually. I prefer to have JdbcTemplate as a member and @Autowired the setter instead.

The datasource is defined in the XML file spring-config.xml, located in src/main/resources in a Maven project. (Please use a connection pool in a real application. I’m using BasicDataSource here for simplicity’s sake.)

<bean id="myDataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
  <property name="driverClassName" value="com.microsoft.sqlserver.jdbc.SQLServerDriver"/>
  <property name="url" value="${db.url}"/>
  <property name="username" value="${db.username}"/>
  <property name="password" value="${db.password}"/>
</bean>

The properties are defined in application.properties, also in src/main/resources.

db.url=jdbc:sqlserver://localhost:1433;DatabaseName=Orders
db.username=Luser
db.password=Pwd
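
For comparison, the same bean could also be declared in Java config instead of XML. A rough sketch, reusing the same property keys via @Value (this class is not part of the original post):

@Configuration
public class DataSourceConfig {

  @Bean(destroyMethod = "close")
  public DataSource myDataSource(@Value("${db.url}") String url,
                                 @Value("${db.username}") String username,
                                 @Value("${db.password}") String password) {
    BasicDataSource dataSource = new BasicDataSource();
    dataSource.setDriverClassName("com.microsoft.sqlserver.jdbc.SQLServerDriver");
    dataSource.setUrl(url);
    dataSource.setUsername(username);
    dataSource.setPassword(password);
    return dataSource;
  }
}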

Note: I’m using Spring Boot 1.3.5

Building a Spring Boot RESTful Web Service for Wildfly 8.2

The Spring Boot project promises an easy and fuss-free way to build and configure Spring applications. I finally got a chance to try it out today. I needed to build a simple RESTful web service to deploy on Wildfly 8.2.

I followed the tutorial at https://spring.io/guides/gs/rest-service/. The tutorial was written for an embedded web server. I needed to make a few tweaks to get my app running on Wildfly.
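
For reference, the endpoint itself is just a plain Spring MVC controller, roughly like the sketch below (the Greeting POJO and the /greeting path are assumptions based on the guide, not my exact code):

@RestController
public class GreetingController {

    // Greeting is assumed to be a simple POJO holding the response message
    @RequestMapping("/greeting")
    public Greeting greeting(@RequestParam(value = "name", defaultValue = "World") String name) {
        return new Greeting("Hello, " + name + "!");
    }
}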

By adding

<packaging>war</packaging>

to pom.xml, I was able to generate a war file using mvn package. However, when I deployed the war file in Wildfly, I got the following exception:

 java.lang.RuntimeException: 
    java.lang.ClassCastException:
      org.apache.tomcat.websocket.server.WsServerContainer cannot be cast to
      io.undertow.websockets.jsr.ServerWebSocketContainer

Spring Boot packaged Tomcat jars into the war file and they conflicted with Wildfly's Undertow. I then added the following exclusion to pom.xml.

<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-web</artifactId>
  <exclusions>
    <exclusion>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-tomcat</artifactId>
    </exclusion>
  </exclusions>
</dependency>

After this change, the web app deployed and started, but could not receive any requests. All GET requests returned 403 Forbidden and POST requests returned 405 Method Not Allowed. There was nothing in the log files indicating what was wrong. After a bit of head banging, I found out the problem was that Wildfly couldn’t forward requests to the web app! I needed to

  1. include servlet-api jars, and
    <dependency>
      <groupId>javax.servlet</groupId>
      <artifactId>servlet-api</artifactId>
      <version>2.5</version>
    </dependency>
    
  2. annotate the main class with @ComponentScan and make it a subclass of SpringBootServletInitializer
    @SpringBootApplication
    @ComponentScan
    public class Application extends SpringBootServletInitializer {
    
        public static void main(String[] args) {
            SpringApplication.run(Application.class, args);
        }
    }
    

That’s it! I was impressed by how little configuration I needed to get my app up and running.

Note: I was using Spring Boot version 1.3.3

Stop accidental git commits of local dev changes to config files

During development, I often make changes to a few configuration files for local testing. Most of the time, I add each file individually to the staging area so these local config changes aren’t committed. Yesterday, I made a mistake and committed the local config. I wasn’t sure how it happened, but I must have accidentally clicked ‘commit all tracked files’. The test server was then built with my local config. Oops.

To stop this from happening again, I did some googling and found this handy git command:

git update-index --assume-unchanged <file>

This will temporarily ignore changes in the specified file, all without changing .gitignore, which is a tracked file in the project.
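
When you want git to pick up changes to the file again (for example, to commit a genuine config change), the flag can be reversed:

git update-index --no-assume-unchanged <file>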

SQL Server stored procedures continue execution on error

This week I came across a bug in a stored procedure caused by this baffling default behaviour in SQL Server: by default, the execution of a stored procedure continues on to the next statement after an error is raised. The stored procedure in question calls another stored procedure, which was modified in the previous release to take an extra parameter.

DECLARE @tmp TABLE (value VARCHAR(100))

INSERT INTO @tmp
    EXEC sp_anotherStoredProcedure
        @orderId = @orderId,
        @petType = 'cat'

IF NOT EXISTS(
    SELECT *
    FROM @tmp l
    INNER JOIN Awesome_Event_Buffer r ON l.value = r.Description
    WHERE r.customer = @customer
)
    print 'do some important things'

Because of the new parameter, the nested stored procedure call now always fails with an error. However, the calling stored procedure is not halted, but continues with the next statement. The IF NOT EXISTS check now always returns true, and the important things are done regardless.

To abort a stored procedure whenever a runtime error occurs, set XACT_ABORT to ON, like this:

ALTER PROCEDURE [dbo].[sp_storedProcedure]
    @parameter1 INT,
    @parameter2 INT
AS
    SET XACT_ABORT ON
    ...

Alternatively, you can check for errors explicitly using @@ERROR. Note that @@ERROR is reset by every statement, so it must be checked (or saved to a variable) immediately after the statement you care about:

if (@@error <> 0)
    return (@@error)

SQL union does not guarantee the order of the result set

I introduced a bug in a SQL script I wrote recently because I was unaware that a SQL union does not guarantee the order of the returned result set.

The SQL stored procedure finds the primary and secondary network nodes that match a set of criteria. It looks something like:

select bd.name, n.nodeId, r.realmName
from BuildDefaults bd
join NetworkNodes n on bd.nodeId = n.nodeId
join NetworkRealms r on n.realmId = r.realmId
where bd.typeId = @type and bd.locationId = @location
union
select bd.name, n.nodeId, r.realmName
from BuildDefaults bd
join NetworkNodes n on bd.nodeId = n.nodeId
join NetworkRealms r on n.realmId = r.realmId
where bd.typeId = @type and bd.locationId = @secLocation

I assumed the result returned by the first select would always be the first row of the final result set. (And the result from the second select would be the second row). In retrospect, it should have been obvious that this assumption was wrong. SQL never guarantees order unless the keyword order by is used!

To guarantee the order in a SQL union, an extra order column has to be used:

select bd.name, n.nodeId, r.realmName, 1 as nodeOrder
from BuildDefaults bd
join NetworkNodes n on bd.nodeId = n.nodeId
join NetworkRealms r on n.realmId = r.realmId
where bd.typeId = @type and bd.locationId = @location
union
select bd.name, n.nodeId, r.realmName, 2 as nodeOrder
from BuildDefaults bd
join NetworkNodes n on bd.nodeId = n.nodeId
join NetworkRealms r on n.realmId = r.realmId
where bd.typeId = @type and bd.locationId = @secLocation
order by nodeOrder

Running SQLCMD in SQL Server Management Studio 2012

I normally run sqlcmd commands on the command line. However, there are others in my team who run them inside SQL Server Management Studio (SSMS). Therefore I have to make sure my scripts run within SSMS as well.

By default, SSMS doesn’t understand sqlcmd commands.
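
For example, a script that uses sqlcmd directives such as :setvar and :r (the file name below is hypothetical) is rejected when the query editor is not in SQLCMD mode:

:setvar DatabaseName Orders
:r .\create-tables.sql
USE $(DatabaseName)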

(Screenshot: Not in SQLCMD mode)

To use the Database Engine Query Editor to write or edit SQLCMD scripts, you must enable the SQLCMD scripting mode. By default, SQLCMD mode is not enabled in the Query Editor. You can enable scripting mode by clicking the SQLCMD Mode icon in the toolbar or by selecting SQLCMD Mode from the Query menu.

In short, SQLCMD scripting mode must be enabled in SSMS before the script is executed.

(Screenshot: the SQLCMD Mode button)

However, the SQLCMD Mode icon is not on the toolbar in SSMS 2012 by default. To add the icon to the toolbar,

  1. Open the menu Tools -> Customize -> Commands, and choose the SQL Editor toolbar.
  2. Then under ‘Query Options’, click the ‘Add Command…’ button.
  3. Select ‘Query’ in Categories, and choose ‘SQLCMD Mode’ in the Commands list.

Now, with the query window in focus, you should be able to toggle SQLCMD mode before executing the script.

Hot deploy JSPs in Wildfly 8.2.0

Wildfly has a development mode for JSPs. In development mode, the Wildfly server checks for changes to JSP files in deployed applications. JSPs can therefore be edited and tested without recompiling and redeploying the entire war file. Very handy.

This is configured in ${WILDFLY_ROOT}/standalone/configuration/standalone.xml. To enable it, set the development attribute to true on the <jsp-config> element.

<subsystem xmlns="urn:jboss:domain:undertow:1.2">
  <servlet-container name="default">
    <jsp-config development="true" tag-pooling="false"/>
  </servlet-container>
</subsystem>

If the original war file is deployed with maven, the exploded application can be found in ${WILDFLY_ROOT}/standalone/tmp/vfs/temp/tempxxxxxxx/content-xxxxxxx, where xxxxxxx is a series of random numbers. The JSPs should be in the WEB-INF directory under the application root. Replace a JSP with a newer version and the changes are immediately reflected when the webpage is reloaded.