The Strange Default Behaviour of Git Push

It felt strange that only after a year of using git, I encountered its default pushing logic for the first time. (I’m blaming it on gerrit, where pushes are always done to the review staging area using refs/for/master, instead of directly to origin/master.)

My work has recently moved from svn to git. I worked on my features by creating a local branch that tracked the remote master

git checkout -b feature origin/master

However when I tried to push using

git push

I got a warning along the lines of “push.default is unset”. Git helpfully suggested that I look at ‘git help config’. From the built-in help pages and some googling, I found that the simple push mode was introduced in 1.7.11. This is the default behaviour, and it only pushes if the upstream branch’s name is the same as the local one. Because I always create a local branch using a feature name, git can’t push it to the remote server under the default behaviour.

To allow a different local branch name, I need to set the push.default config variable to upstream, which simply pushes the current branch to its upstream branch.

git config --global push.default upstream
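
Worth noting (standard git behaviour, not specific to gerrit or this setup): push.default is only consulted when git push is run without arguments. An explicit refspec sidesteps it entirely, for example pushing the local feature branch to the remote master:

git push origin feature:master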

The mysterious “the statement did not return a result set” SQL Server Exception

Last week I worked on a SQL Server stored procedure that looked something like this

create procedure sp_Awesome_Proc
(
  @numbers XML,
  @customerId int
)
AS
BEGIN
declare @output table(......)
insert into @output(....) select .... from ...
... some more updates and joins ...
select * from @output
END

This stored procedure was called from Java using Spring’s JdbcTemplate

query("exec sp_Awesome_Proc @numbers=?, @customerId=?", new Object[] {a, b}, new RowMapper {....}); 

When I called the stored procedure via the above Java code, I kept getting the exception SQLServerException: The statement did not return a result set. However, if I used the same parameters and called the stored procedure within SQL Server Management Studio, it returned a table.

It turned out that if the stored procedure performs any inserts or updates prior to the final select, this baffling “statement did not return a result set” exception is thrown. In my case, a series of queries was executed to populate a table variable, which was returned at the end of the stored procedure.

A simple solution to this problem is to add SET NOCOUNT ON at the start of the stored procedure.

create procedure sp_Awesome_Proc (
  @numbers XML,
  @customerId int
)
AS
BEGIN
SET NOCOUNT ON 
...
END
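
The why behind it: every insert or update inside the procedure sends an “N rows affected” count back to the client, and the JDBC driver surfaces those counts as results ahead of the actual result set, which is what trips up query(). If you can’t change the procedure, a raw JDBC sketch (variable names are mine) can step over the update counts:

// assumes java.sql.Connection/PreparedStatement/ResultSet imports,
// a javax.sql.DataSource named dataSource, and the two parameter values
try (Connection con = dataSource.getConnection();
     PreparedStatement ps = con.prepareStatement(
             "exec sp_Awesome_Proc @numbers=?, @customerId=?")) {
    ps.setString(1, numbersXml); // setString for the XML parameter, for simplicity
    ps.setInt(2, customerId);

    boolean isResultSet = ps.execute();
    // without SET NOCOUNT ON, update counts arrive before the result set;
    // getUpdateCount() == -1 means there are no more results of any kind
    while (!isResultSet && ps.getUpdateCount() != -1) {
        isResultSet = ps.getMoreResults();
    }
    if (isResultSet) {
        try (ResultSet rs = ps.getResultSet()) {
            while (rs.next()) {
                // map each row
            }
        }
    }
}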

In fact, all the stored procedures within our code base had this statement. I had been copying and pasting it into all the stored procedures I created, without knowing its significance. Only now have I learned the why behind it.

Verify nulls with Mockito argument matchers

When using verify in Mockito, the function arguments must all be either exact values or argument matchers; you cannot mix the two within the same verify. In other words, the following two statements are correct,

verify(mockFtpClient).rename("a.txt", "b.txt");
verify(mockFtpClient).rename(eq("a.txt"), eq("b.txt"));

But this is not

verify(mockFtpClient).rename(eq("a.txt"), "b.txt");

Today, I needed to match an anyString() and a null in the same verify statement. Mockito’s documentation didn’t have an example of how to use an argument matcher for a null string. It turns out there is a hamcrest-style isNull matcher in Mockito:

verify(mockFtpClient).rename(anyString(), org.mockito.Matchers.isNull(String.class));

(The above example makes no sense semantically because you’d never want to pass a null string to the FtpClient’s rename function).
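
For tidier code, the same matchers can be statically imported (both live in Mockito 1.x’s org.mockito.Matchers):

import static org.mockito.Matchers.anyString;
import static org.mockito.Matchers.isNull;
...
verify(mockFtpClient).rename(anyString(), isNull(String.class));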

Logging exceptions in Mule

The simplest way to log an exception thrown within a mule flow is to use the mule expression language with the Logger component

<logger level="ERROR" doc:name="Logger" message="#[exception.causeException]"/>
 

However, this only logs the message from the root cause. Sometimes I need to log the full stack trace in debug. (Or at error level, if it’s only developers reading the application log.) For this, mule provides a class called ExceptionUtils. For example,

<logger level="ERROR" doc:name="Logger" message="#[org.mule.util.ExceptionUtils.getFullStackTrace(exception)]"/>

Classes implementing the Callable interface can also access any exception thrown, via the exception payload. I had to do this in an application I was working on, because I needed to save any error messages in the database request buffer for audit purposes.

@Component("UpdateDB")
public class UpdateDB implements Callable {
	@Override
	public Object onCall(MuleEventContext muleContext) throws Exception {
		ExceptionPayload exceptionPayload = muleContext.getMessage().getExceptionPayload();
		String errorMessage = exceptionPayload.getRootException().getMessage();
		return null;
	}
}

Accessing mule flow variables from Java components

In one of the earlier steps of a mule flow, I extracted some data from the payload and stored it in flow variables. This was done to save information like the database primary key, which I would later need to update the status buffer, before transforming the payload into the required response message.

<flow name="someflow">
  <inbound-endpoint ref="jdbcEndpoint" doc:name="Generic" />
  ..
  <set-variable variableName="requestId" value="#[message.payload.requestId]"/>
  ..
</flow>

Getting to these flow variables from within a Java component turned out to be a lot harder than I had anticipated.

My first attempt was to use the @Mule annotation. I annotated my Java method as follows

    public void process(@Payload AmazingFilePayload payload,  
                        @Mule("flowVars['requestId']") Integer requestId) {
        // do stuff
    }

The MEL was valid because I could access the flow variable within the mule flow with

<logger level="DEBUG" message="#[flowVars['requestId']]"/>

However, the above Java gave a StringIndexOutOfBoundsException with the message “String index out of range: -1”. Looking through the documentation, I couldn’t see how to access flow variables at all with Java annotations.

In the end, I resorted to implementing the Callable interface. It seemed an unsatisfactory workaround to me, because

  1. the Java component was no longer a POJO
  2. I needed a different class for each update method, instead of writing a single class with many related methods

Flow variables live in the invocation scope, so they can be read from the MuleEventContext like this:

import org.mule.api.MuleEventContext;
import org.mule.api.lifecycle.Callable;
import org.mule.api.transport.PropertyScope;

public class UpdateBuffer implements Callable {
    @Override
    public Object onCall(MuleEventContext eventContext) throws Exception {
        Integer requestId = (Integer) eventContext.getMessage().getProperty("requestId", PropertyScope.INVOCATION);
        Integer requestId2 = (Integer) eventContext.getMessage().getInvocationProperty("requestId");  // alternative
        return null;
    }
}
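
For completeness, the component is then wired into the flow like any other Java component; a sketch assuming UpdateBuffer is registered as a Spring bean named updateBuffer:

<flow name="someflow">
  ...
  <component doc:name="Java">
    <spring-object bean="updateBuffer"/>
  </component>
</flow>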

Spring dependency injection for Struts Actions

Being used to Spring MVC, I was surprised to discover that Struts did not use the Action bean I had created in the Spring config file when handling web requests. Basically, I needed a DAO wired into an existing Struts Action. I created a bean with the appropriate setter for the Struts Action in the Spring config file. However, I got a nasty NullPointerException because the setter was never called.

This should have been obvious if I had given it any thought. The Struts Actions in the web app are managed by Struts, not Spring. To get Spring to perform dependency injection on Struts Actions, you need to use the DelegatingActionProxy.

In the Struts config file

<action path="/store/order" type="org.springframework.web.struts.DelegatingActionProxy" name="orderForm" validate="false">
	<forward name="success" path="/jsp/view.jsp" />
</action>

In the Spring config file

<bean name="/store/order" class="com.whileloop.web.action.OrderAction">
    <property name="basketDao" ref="basketDao"/>
</bean>

Log4j rolling file appenders in Windows

I’ve been using the DailyRollingFileAppender in log4j for years without any problems. It came as a surprise when my trusted appender failed to rollover in a new web service. A bit of googling made me realise it is a widespread problem. The only reason I hadn’t encountered it before was that I had exclusively developed for Linux. And now my new work is a Windows shop.

Essentially, the log4j DailyRollingFileAppender renames the day’s log file at the end of the day. This runs into file contention problems on Windows, and the renaming regularly fails. A very simple solution is to create the log file with the date prefix already in place, avoiding the rename entirely. This is the approach taken by Geoff Mottram in the DatedFileAppender he released to the public domain back in 2005. (This is the appender I found configured for some of the web services deployed on the company’s mule server.)

The log4j crew also recognised this problem and, according to its bug tracker, fixed it for 1.3. But since the 1.3 series has been abandoned, the patch is now available as part of Log4j Extras.

Using the new log4j rolling file appender

To include log4j extras using maven

	<dependency>
		<groupId>log4j</groupId>
		<artifactId>apache-log4j-extras</artifactId>
		<version>1.2.17</version>
	</dependency>    

A sample log4j.xml

	<appender name="FILE" class="org.apache.log4j.rolling.RollingFileAppender">
		<rollingPolicy class="org.apache.log4j.rolling.TimeBasedRollingPolicy">
	      		<param name="FileNamePattern" value="D:/logs/app-%d.log.gz"/>
	   	</rollingPolicy>
		<layout class="org.apache.log4j.PatternLayout">
			<param name="ConversionPattern" value="%d{ABSOLUTE} %-5p [%c{1}] %m%n" />
		</layout>
	</appender>

How often the log file rolls is specified by the date format in the FileNamePattern, which uses the same format as Java’s SimpleDateFormat. By default (%d in app-%d.log), a new log file is created daily. To create a new log file every minute, use something like app-%d{yyyy-MM-dd-HH-mm}.log. The .gz suffix in app-%d.log.gz means old log files are gzipped automatically.
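
As with any log4j configuration, the appender still needs to be attached to a logger; a minimal root logger for the sample above:

	<root>
		<priority value="INFO" />
		<appender-ref ref="FILE" />
	</root>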

More hamcrest collections goodness

I have been using Hamcrest more in my unit tests. JUnit 4.11 includes only a portion of the matchers available in Hamcrest 1.3 (specifically, the ones packaged in hamcrest-core). To get the other useful matchers from Hamcrest, add the following to the maven pom.xml

		<dependency>
			<groupId>org.hamcrest</groupId>
			<artifactId>hamcrest-library</artifactId>
			<version>1.3</version>
			<scope>test</scope>
		</dependency>

I found the collection matchers very handy. For example, to test the size of a list:

import static org.hamcrest.collection.IsCollectionWithSize.hasSize;
import static org.junit.Assert.assertThat;
...
List<String> list = new ArrayList<String>();
assertThat(list, hasSize(0));
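
hamcrest-library has matchers for the elements themselves too, such as contains and containsInAnyOrder. For example (my own snippet, still Hamcrest 1.3):

import static org.hamcrest.collection.IsIterableContainingInAnyOrder.containsInAnyOrder;
...
List<String> fruits = Arrays.asList("oranges", "apples");
assertThat(fruits, containsInAnyOrder("apples", "oranges"));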

JUnit 4.11 and its new Matchers

I had never used the Hamcrest matchers with JUnit before. Not until last week. I noticed in the release notes that JUnit 4.11 includes Hamcrest 1.3, with its Matchers and improved assertThat syntax. Reading the examples in the release notes, I was intrigued.

To use the new Matchers and assertThat, you need to include the following imports

import static org.junit.Assert.*;
import static org.hamcrest.CoreMatchers.*;

Number Objects

The first improvement I noticed was comparisons with Java number objects.

Long l = new Long(10);
assertEquals(10L, l);    // does not compile
assertThat(l, is(10L));  // works

With the old assertEquals, the compiler would complain: “The method assertEquals(Object, Object) is ambiguous for the type X”. You need to change both parameters to either long values or Long objects for the assert to work, for example

assertEquals(10L, l.longValue());

On the other hand, assertThat and the is() matcher work just fine.

Collections

I saw a few very handy-looking matchers for Collections in the CoreMatchers javadoc, for example hasItem, hasItems and everyItem. I had the opportunity to use hasItems in my unit tests last week, to check whether a List contains items from a given list of values. It was as simple to use as this

assertThat(list, hasItems("apples", "oranges"));
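
everyItem is just as straightforward; for instance, to assert that every element satisfies a matcher (my example, using startsWith from the CoreMatchers import above):

List<String> fruits = Arrays.asList("apples", "apricots");
assertThat(fruits, everyItem(startsWith("a")));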

I’m a fan of this new way of matching things in JUnit.

Linked Servers: Creating a local development database in MS SQL Part 3

One of the tables I wanted to copy to my local SQL Server has nearly 200 million rows. It would take far too long to copy if I generated scripts for the data using the SQL Server scripting tool. Besides, I didn’t need all 200 million rows for development anyway.

The Linked Servers feature in SQL Server Management Studio makes it simple to copy a selection of data from one table to another. Once two database instances are linked, you can use SQL select and insert commands to copy data.

Create a linked server

Open the local database in SQL server management studio. Select Server Objects -> Linked Servers. Then right click and select New Linked Server. The following dialogue will appear on screen.

[Screenshot: the New Linked Server dialogue]

Add an appropriate name in the Linked Server text box (for example, lotsofdata-server). Under server type, select the SQL Server radio button. Choose Security on the left navigation pane. Select the radio button Be made using this security context. Enter the correct username and password for the remote server.

To copy the first 10000 rows from the remote server lotsofdata-server into a table that doesn’t exist in the local database

select TOP 10000 * into dbo.[awesomeTable]  from [lotsofdata-server].[awesomeDatabase].dbo.[awesomeTable]

If the table already exists in the local database,

insert into dbo.[awesomeTable] select TOP 10000 * from [lotsofdata-server].[awesomeDatabase].dbo.[awesomeTable]

Copying an identity column

If the table you want to copy contains an identity column, then you need to turn on identity insert

SET IDENTITY_INSERT awesomeTable ON

and explicitly specify all the columns you are inserting into the table, like

insert into dbo.[awesomeTable] (col1, col2) select TOP 10000 col1, col2 from [lotsofdata-server].[awesomeDatabase].dbo.[awesomeTable]

Otherwise, SQL Server will complain along the lines of “cannot insert explicit value for identity column in table awesomeTable”.
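
Putting the identity column steps together (id, col1 and col2 are placeholder column names, with id as the identity column), and remembering that IDENTITY_INSERT can only be ON for one table at a time per session:

SET IDENTITY_INSERT dbo.[awesomeTable] ON

insert into dbo.[awesomeTable] (id, col1, col2)
select TOP 10000 id, col1, col2
from [lotsofdata-server].[awesomeDatabase].dbo.[awesomeTable]

SET IDENTITY_INSERT dbo.[awesomeTable] OFF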