While loop in Mule ESB, or not

It’s a common use case to need an automated job that runs once a day, executing a list of queued requests received throughout the day. However, there is no direct support for while loops in Mule. Google led me to this article on DZone, which implements a while loop using subflows and recursion.

In this post, I put this implementation of a while loop in Mule to the test.

<flow name="test-loop-flow" processingStrategy="synchronous">
  <poll doc:name="Poll">
    <schedulers:cron-scheduler expression="0/30 * 6-23 * * ?"/>
    <logger message="Scheduled job started" level="DEBUG" doc:name="Logger"/>
  <set-variable variableName="counter" value="40000" doc:name="Set max number to process"/>
  <flow-ref name="while_loop" doc:name="call while loop"/>
<flow name="while_loop" processingStrategy="synchronous">
  <db:select config-ref="DATASOURCE_CONFIG" doc:name="Get Request">
     <db:parameterized-query><![CDATA[select top 1 id from RequestsToProcess]]></db:parameterized-query>
  <logger message="#[payload]" level="DEBUG"/>
  <choice doc:name="Choice">
    <when expression="#[flowVars.counter == 0 || payload == empty]">
      <logger message="The loop breaks" level="DEBUG"/> 
      <logger message="do something" level="DEBUG"/>
      <set-variable variableName="counter" value="#[flowVars.counter-1]" doc:name="count down"/>
      <db:delete config-ref="DATASOURCE_CONFIG" doc:name="Remove request">
        <db:parameterized-query><![CDATA[delete from RequestsToProcess where id = #[payload[0]['id']]]]
      <flow-ref name="while_loop"/>

The Mule snippet above sets up a job that starts every 30 seconds between the hours of 06:00 and 23:00. It sets a counter to 40,000, representing the maximum number of requests to process in each invocation. (The 40k figure is from an actual use case I’m working on; we are limited to 40k requests each day.) The processing strategy for both flows is set to synchronous, so that only one instance of each flow is invoked at any time.

The database table RequestsToProcess has been populated with more than 40k entries. The time taken to select and delete 40k entries from a database table is significantly longer than 30 seconds, the interval between invocations of the flow. Can you guess what happens?

06 08 2018 14:31:19 | DEBUG | org.mule.api.processor.LoggerMessageProcessor | [{id=106}]
06 08 2018 14:31:19 | DEBUG | org.mule.api.processor.LoggerMessageProcessor | do something
06 08 2018 14:31:20 | ERROR | org.quartz.core.JobRunShell | Job endpoint.polling.1912630717.test-loop-flow~
threw an unhandled Exception: 
java.lang.NoClassDefFoundError: Could not initialize class org.mule.config.ExceptionHelper
	at org.mule.api.MuleException.<init>(MuleException.java:56) ~[?:?]
	at org.mule.api.MessagingException.<init>(MessagingException.java:125) ~[?:?]
06 08 2018 14:31:29 | DEBUG | org.mule.api.processor.LoggerMessageProcessor | Scheduled job started
06 08 2018 14:31:30 | DEBUG | org.mule.api.processor.LoggerMessageProcessor | [{id=106}]

The log shows that after 30 seconds, the cron scheduler starts a new invocation, which causes an exception, halts the loop, and picks up request id 106 again. A new invocation can start before the existing long-running job has finished because the previous instance is looping inside the flow while_loop, while the cron scheduler invokes the calling flow test-loop-flow. In other words, if your job is still running when the next invocation starts, Mule throws a cryptic exception and halts the execution of the existing job.

What if I set the cron scheduler to run the job only once a day, eliminating the chance of this particular job still running when the next invocation starts? The cron expression is edited as below, to run once at 02:00 each day:

<schedulers:cron-scheduler expression="0 0 2 * * ?"/>

This leads to a StackOverflowError in Mule, because of the recursion:

Message               : null (java.lang.StackOverflowError). Message payload is of type: Integer
Code                  : MULE_ERROR--2
Exception stack is:
1. null (java.lang.StackOverflowError)
  java.util.ResourceBundle$CacheKey:-1 (null)
2. null (java.lang.StackOverflowError). Message payload is of type: Integer (org.mule.api.MessagingException)
Root Exception stack trace:
	at java.util.ResourceBundle$CacheKey.equals(Unknown Source)
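
The mechanics are the same as unbounded recursion in plain Java. A minimal sketch of the equivalent, hypothetical and only to show why the stack fills up:

static void whileLoop(int counter) {
    if (counter == 0) return;   // the "loop breaks" branch
    // ... process one request ...
    whileLoop(counter - 1);     // the flow-ref back into while_loop
}

Each nested call adds a stack frame, and no frame is released until the innermost call returns, so a loop of tens of thousands of iterations exhausts the stack long before the counter runs out.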

From this little experiment, I would not recommend using recursion to implement a while loop in Mule. Instead, I normally use a cron expression that runs the job at short intervals during a period of each day, processing a batch of requests in each invocation:

<flow name="request-to-export-flow" processingStrategy="synchronous">
  <poll doc:name="Poll">
    <schedulers:cron-scheduler expression="0/5 * 6-23 * * ?"/>
    <logger message="Polling request table" level="DEBUG" doc:name="Logger"/>
  <db:select config-ref="DATASOURCE_CONFIG" doc:name="Get BT999 Request">
    <db:template-query-ref name="spPopRequest"/>
    <db:in-param name="topcount" type="INTEGER" value="100"/>
  <foreach doc:name="For Each">

This is tested with Mule ESB 3.7 CE.

Logging outbound HTTP requests from JAX-RS client

In order to track down a bug, I needed to log HTTP requests sent from one of our web services to another third party web service. (We hosted the service, but the software was not developed in house).

Our web service was written in RESTEasy, a framework I was not especially familiar with. (I prefer the Spring stack, and always create new web services using Spring Boot.) The code to call the third party web service looked like this:

import javax.ws.rs.client.Invocation.Builder;
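
A typical invocation with the JAX-RS client API looks something like the sketch below; the target URL and request body here are illustrative placeholders, not the actual service:

import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.client.Entity;
import javax.ws.rs.client.Invocation.Builder;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

// Build a client, point it at the service, and POST a JSON entity
Client client = ClientBuilder.newClient();
Builder builder = client.target("https://thirdparty.example.com/api/orders")
        .request(MediaType.APPLICATION_JSON);
Response response = builder.post(Entity.json("{\"note\":\"example\"}"));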

Surprisingly, there wasn’t an obvious way to get at the request body being sent. From various Stack Overflow Q&As, the way to log JAX-RS outbound client requests is to create an implementation of ClientRequestFilter, and register it as a Provider in the container.

public class MyClientRequestLoggingFilter implements ClientRequestFilter {
  private static final Logger LOG = LoggerFactory.getLogger(MyClientRequestLoggingFilter.class);
  public void filter(ClientRequestContext requestContext) throws IOException {
    LOG.info("{} {} entity: {}", requestContext.getMethod(), requestContext.getUri(), requestContext.getEntity());
  }
}

You then configure your web.xml to scan for providers
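
With RESTEasy this can be a context parameter; a minimal sketch, assuming RESTEasy’s resteasy.scan.providers switch:

<context-param>
  <param-name>resteasy.scan.providers</param-name>
  <param-value>true</param-value>
</context-param>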


There are quite a few warnings in those answers that, because ClientRequestContext.getEntity() returns an Object, the default toString() may not produce the request body as it is actually sent. Marshalling the object is required to log the real request body.
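
A sketch of what that marshalling could look like, assuming Jackson is on the classpath (the class name and logging target are illustrative):

import java.io.IOException;
import javax.ws.rs.client.ClientRequestContext;
import javax.ws.rs.client.ClientRequestFilter;
import com.fasterxml.jackson.databind.ObjectMapper;

public class JsonBodyLoggingFilter implements ClientRequestFilter {
  private static final ObjectMapper MAPPER = new ObjectMapper();
  public void filter(ClientRequestContext ctx) throws IOException {
    if (ctx.hasEntity()) {
      // Marshal the entity to JSON instead of relying on toString()
      System.out.println(MAPPER.writeValueAsString(ctx.getEntity()));
    }
  }
}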

After banging my head against a wall for an afternoon, I decided to take a completely different approach to the problem. I googled how to enable request logging in Apache httpd instead. This turned out to be a much more straightforward way to achieve what I needed. The module mod_dumpio can be used to dump all input received and output sent by the server into a log file. You need mod_dumpio present in the Apache httpd installation. (On Windows, check that mod_dumpio.so is in c:\apache-install-dir\modules.) Stop the service, then edit the httpd.conf file to include the following lines:

LoadModule dumpio_module modules/mod_dumpio.so

ErrorLog "logs/error.log"
LogLevel debug
DumpIOInput On
DumpIOOutput On
LogLevel dumpio:trace7

The ErrorLog and LogLevel lines were already present in my httpd.conf. I changed the LogLevel to debug, and added the last three lines to turn on the dumpio module. After a server restart, all HTTP requests and responses were successfully logged to the file logs/error.log.

Lesson learnt here: if an approach turns out to be more complicated than expected, it’s worth taking a step back and rethinking.

Using two datasources in a Spring Boot application

Using a single datasource in a Spring Boot application is very straightforward. However, using multiple datasources in an application is anything but! It took me quite a bit of googling and fiddling to find a solution that worked.

To use two datasources, you need to set one up as the primary; the other then becomes the secondary. You mark a datasource as primary using the primary attribute. Below is an example using XML based configuration:

<bean id="greenDataSource" primary="true" class="org.apache.commons.dbcp2.BasicDataSource">
    <property name="driverClassName" value="com.microsoft.sqlserver.jdbc.SQLServerDriver"/>
    <property name="url" value="${db.green.url}"/>
    <property name="username" value="${db.green.username}"/>
    <property name="password" value="${db.green.password}"/>

Then define the secondary datasource like this:

<bean id="purpleDataSource" class="org.apache.commons.dbcp2.BasicDataSource">
    <property name="driverClassName" value="com.microsoft.sqlserver.jdbc.SQLServerDriver"/>
    <property name="url" value="${db.purple.url}"/>
    <property name="username" value="${db.purple.username}"/>
    <property name="password" value="${db.purple.password}"/>

You can then wire them into your Java classes using the @Autowired annotation. Because greenDataSource is marked as primary, a plain @Autowired setter receives the green datasource; injecting the secondary needs a @Qualifier:

public class AwesomeDaoImpl implements AwesomeDao {
    private JdbcTemplate greenJdbcTemplate;
    private JdbcTemplate purpleJdbcTemplate;

    @Autowired
    public void setGreenDataSource(DataSource greenDataSource) {
        this.greenJdbcTemplate = new JdbcTemplate(greenDataSource);
    }

    @Autowired
    public void setPurpleDataSource(@Qualifier("purpleDataSource") DataSource purpleDataSource) {
        this.purpleJdbcTemplate = new JdbcTemplate(purpleDataSource);
    }
}

I haven’t figured out how to plumb in more than two datasources without using JNDI. If JNDI is available, your Spring Boot application can access all the JNDI datasources using the @Resource annotation.

public class ColourDaoImpl implements ColourErrorDao {
    private JdbcTemplate jdbcTemplate;

    @Resource(mappedName = "java:jboss/datasources/Green")
    public void setGreenDataSource(DataSource greenDataSource) {
        this.jdbcTemplate = new JdbcTemplate(greenDataSource);
    }
}

Dropping unknown default constraints in SQL Server

For releases, we had to provide SQL scripts to install database changes. These had to be run outside of SQL Server Management Studio, on multiple environments. Occasionally, I had to drop unnamed constraints in a script. (Most developers wrote their install scripts with named constraints to avoid this difficulty.) Every time I encountered the problem, I googled and followed this excellent blog post by Rob Farley. It gave the SQL to query for the name of an unnamed constraint, given the table and column name, but stopped short of the SQL to actually drop the constraint.

So here is my take:

declare @df_name varchar(max)
select @df_name = d.name from sys.tables t
    join sys.default_constraints d
        on d.parent_object_id = t.object_id
    join sys.columns c
        on c.object_id = t.object_id
        and c.column_id = d.parent_column_id
    where t.name = 'some_db_table'
    and c.name = 'some_column_in_table'

if @df_name is not null
begin
	-- drop constraint does not accept a variable, so build dynamic SQL
	declare @sql varchar(max) = 'alter table some_db_table drop constraint ' + @df_name
	exec (@sql)
end

Java 8 Date-Time API and good old java.util.Date

Am I the only one who prefers Joda Time over the new Java 8 java.time package? I find the official Oracle documentation poor, and the API not as intuitive.

No matter which high level datetime library is used in an application, be it java.util.Calendar, Joda Time or java.time, developers still often have to work with the old-fashioned java.util.Date. This is because java.sql.Date is a subclass of java.util.Date, and therefore most, if not all, data access layer code expects or returns java.util.Date.
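
For example, a tiny illustration of that subclass relationship (java.sql.Timestamp subclasses java.util.Date in the same way):

// java.sql.Date IS-A java.util.Date, so values coming back from JDBC
// can be handled as plain java.util.Date without any conversion
java.sql.Date sqlDate = java.sql.Date.valueOf("2016-11-21");
java.util.Date utilDate = sqlDate; // plain assignment, no cast needed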

Converting a datetime such as 2016-11-21 09:00 to java.util.Date is very simple in Joda Time.

// from Joda to Date
DateTime dt = new DateTime(2016, 11, 21, 9, 0);
Date jdkDate = dt.toDate();

// from Date to Joda
dt = new DateTime(jdkDate);

Java 8 java.time has two separate ways to represent time: human time vs machine time. Classes such as LocalDateTime and LocalDate represent human time; the Instant class represents machine time. Conversions between a datetime and java.util.Date must be done via an Instant.

// from LocalDateTime to Date
LocalDateTime dt = LocalDateTime.of(2016, 11, 21, 9, 0);
Instant i = dt.atZone(ZoneOffset.UTC).toInstant();
Date d = Date.from(i);

// from Date to LocalDateTime
i = d.toInstant();
dt = LocalDateTime.ofInstant(i, ZoneOffset.UTC);

You can also compare the documentation of the two libraries on interoperability with java.util.Date. The Joda Time one is much shorter and easier to read.

Mule flow variables to JSON payload for REST requests

I was working on a Mule flow that submits a static JSON request to a REST endpoint. (The variable part was the id in the URL.) My first attempt was to set the JSON request directly using <set-payload>:

<set-variable variableName="orderId" value="#[payload.id]" doc:name="set orderId"/>
<set-payload value="{'note' : 'Order auto-approved by X', 'sendEmail' : true}" doc:name="Set Payload"/>
<http:request config-ref="WS_CONFIG" path="/order/#[flowVars.orderId]/approve" method="POST" doc:name="REST approve request">
    <http:header headerName="Content-Type" value="application/json"/>

However, Mule refused to submit this request, complaining that ‘Message payload is of type: String’. Most pages I found from googling suggested using the DataWeave transformer, which can transform data to and from a large range of formats, including flow variables into JSON. But the DataWeave transformer is only available in the enterprise edition. After a frustrating hour of more googling and testing various transformers, I found another way to achieve this easily, using an expression transformer:

<set-variable variableName="orderId" value="#[payload.id]" doc:name="set orderId"/>
<expression-transformer expression="#[['note' : 'Order auto-approved by X', 'sendEmail' : true]]" doc:name="set payload"/>
<json:object-to-json-transformer doc:name="Object to JSON"/>
<http:request config-ref="WS_CONFIG" path="/order/#[flowVars.orderId]/approve" method="POST" doc:name="REST approve request">
    <http:header headerName="Content-Type" value="application/json"/>

The flow I worked on didn’t need the order id in the JSON request, but you can reference flow variables in the payload like this:

<set-variable variableName="orderId" value="#[payload.id]" doc:name="set orderId"/>
<expression-transformer expression="#[['note' : 'Order auto-approved by X', 'id':flowVars.orderId, 'sendEmail' : true]]" doc:name="set payload"/>
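
The object-to-json transformer then serializes the map, so the request body comes out along these lines (the id value being whatever flowVars.orderId holds):

{"note" : "Order auto-approved by X", "id" : "12345", "sendEmail" : true}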

Application context XML configuration in a Spring Boot web service

A colleague told me recently that he didn’t use Spring for his latest REST project because he couldn’t get the beans defined in an XML configuration file loaded. He was familiar with Spring but had never bootstrapped a brand new project. I didn’t realise this could be a problem, because I have used Spring MVC for a very long time. He was right; it was not obvious. For example, in the Spring Boot tutorial Building a RESTful Web Service, everything is @Autowired. In a real application, you might need to define some beans in an XML configuration file, for example, database connection information for the persistence layer.

Using the example from my previous post on Spring Boot, you can use the @ImportResource annotation to load XML configuration files:

@ImportResource("classpath:spring-config.xml")
public class Application extends SpringBootServletInitializer {
  public static void main(String[] args) {
    SpringApplication.run(Application.class, args);
  }
}

Spring will auto-scan classes annotated with @Service, @Repository, @Controller and @Component. Because Spring AOP is proxy-based, your DAO classes should implement interfaces. For example:

public interface OrderDao {
  Order getOrder(int id) throws OrderNotFoundException;
}

@Repository
public class OrderDaoImpl implements OrderDao {
  private JdbcTemplate jdbcTemplate;
  @Autowired
  public void setMyDataSource(DataSource myDataSource) {
    this.jdbcTemplate = new JdbcTemplate(myDataSource);
  }
}

For some reason, Spring’s own JdbcDaoSupport class is not autowiring enabled. If you choose to extend JdbcDaoSupport, you will need to use XML configuration to set the datasource manually. I prefer to have a JdbcTemplate as a member and @Autowired the setter instead.

The datasource is defined in the XML file spring-config.xml. The file is located in src/main/resources in a Maven project. (Please use a connection pool in a real application; I’m using BasicDataSource here for simplicity’s sake.)

<bean id="myDataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
  <property name="driverClassName" value="com.microsoft.sqlserver.jdbc.SQLServerDriver"/>
  <property name="url" value="${db.url}"/>
  <property name="username" value="${db.username}"/>
  <property name="password" value="${db.password}"/>

The properties are defined in application.properties, also in src/main/resources.
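
A minimal sketch of those entries; the values are placeholders:

db.url=jdbc:sqlserver://localhost:1433;databaseName=mydb
db.username=dbuser
db.password=secret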


Note: I’m using Spring Boot 1.3.5

Building a Spring Boot RESTful Web Service for Wildfly 8.2

The Spring Boot project promises an easy and fuss-free way to build and configure Spring applications. I finally got a chance to try it out today. I needed to build a simple RESTful web service to deploy on Wildfly 8.2.

I followed the tutorial at https://spring.io/guides/gs/rest-service/. The tutorial was written for an embedded web server. I needed to make a few tweaks to get my app running on Wildfly.

By adding
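
<packaging>war</packaging>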


to pom.xml (the standard way to make mvn package produce a war), I was able to generate a war file. However, when I deployed the war file in Wildfly, I got the following exception:

      org.apache.tomcat.websocket.server.WsServerContainer cannot be cast to

Spring Boot had packaged the Tomcat jars into the war file, and they conflicted with Wildfly. I then added exclusions to pom.xml.
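
An exclusion along these lines keeps the embedded Tomcat jars out of the war (a sketch; the exact artifacts may differ):

<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-web</artifactId>
  <exclusions>
    <exclusion>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-tomcat</artifactId>
    </exclusion>
  </exclusions>
</dependency>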


After this change, the web app deployed and started, but could not receive any requests: all GET requests returned 403 Forbidden, and POST requests returned 405 Method Not Allowed. There was nothing in the log files indicating what was wrong. After a bit of head banging, I found out the problem was that Wildfly couldn’t forward requests to the web app! I needed to:

  1. include the servlet-api jars (see the dependency sketch after this list), and
  2. annotate the main class with @ComponentScan and make it a subclass of SpringBootServletInitializer
    @ComponentScan
    public class Application extends SpringBootServletInitializer {
        public static void main(String[] args) {
            SpringApplication.run(Application.class, args);
        }
    }
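
For step 1, the dependency would be something like this; the version and scope are my assumptions:

<dependency>
  <groupId>javax.servlet</groupId>
  <artifactId>javax.servlet-api</artifactId>
  <version>3.1.0</version>
  <scope>provided</scope>
</dependency>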

That’s it! I was impressed by how little configuration I needed to get my app up and running.

Note: I was using Spring Boot version 1.3.3

Stop accidental git commits of local dev changes to config files

During development, I often make changes to a few configuration files for local testing. Most of the time, I add each file individually into the staging area, so these local config changes aren’t committed. Yesterday, I made a mistake and committed the local config. I wasn’t sure how it happened, but I must have accidentally clicked ‘commit all tracked files’. The test server was then built with my local config. Oops.

To stop this from happening again, I did some googling and found this handy git command:

git update-index --assume-unchanged <file>

This temporarily ignores changes in the specified file, all without touching .gitignore, which is a tracked file in the project.
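
To pick up changes in the file again later, there is the corresponding switch:

git update-index --no-assume-unchanged <file>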