Spring dependency injection for Struts Actions

Coming from Spring MVC, I was surprised to discover that Struts did not use the Action bean I had defined in the Spring config file when handling web requests. I needed a DAO wired into an existing Struts Action, so I defined a bean for the Action in the Spring config file with the appropriate setter. However, I got a nasty NullPointerException because the setter was never called.

This should have been obvious if I had given it a thought. The Struts Actions in the web app are managed by Struts, not Spring. To get Spring to perform dependency injection on Struts Actions, you need to use the DelegatingActionProxy.

In the Struts config file

<action path="/store/order" type="org.springframework.web.struts.DelegatingActionProxy" name="orderForm" validate="false">
	<forward name="success" path="/jsp/view.jsp" />
</action>

In the Spring config file

<bean name="/store/order" class="com.whileloop.web.action.OrderAction">
    <property name="basketDao" ref="basketDao"/>
</bean>
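
For completeness, the Action itself just needs the matching setter. A stripped-down OrderAction looks something like this (the execute body and the BasketDao lookup are simplified for illustration):

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.apache.struts.action.Action;
import org.apache.struts.action.ActionForm;
import org.apache.struts.action.ActionForward;
import org.apache.struts.action.ActionMapping;

public class OrderAction extends Action {

    // injected by Spring via the basketDao property in the bean definition above
    private BasketDao basketDao;

    public void setBasketDao(BasketDao basketDao) {
        this.basketDao = basketDao;
    }

    @Override
    public ActionForward execute(ActionMapping mapping, ActionForm form,
            HttpServletRequest request, HttpServletResponse response) throws Exception {
        // basketDao is no longer null, because Spring (not Struts) created this Action
        request.setAttribute("basket", basketDao.findBasket(request.getParameter("orderId")));
        return mapping.findForward("success");
    }
}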

Log4j rolling file appenders in Windows

I’ve been using the DailyRollingFileAppender in log4j for years without any problems. It came as a surprise when my trusted appender failed to roll over in a new web service. A bit of googling made me realise it is a widespread problem. The only reason I hadn’t encountered it before was that I had developed exclusively for Linux, and my new workplace is a Windows shop.

Essentially, the log4j DailyRollingFileAppender renames the day’s log file at the end of the day. This runs into file contention problems on Windows, and the renaming regularly fails. A very simple solution is to create the log file with the date prefix already in place, and thus avoid renaming it entirely. This is the approach taken by Geoff Mottram in the DatedFileAppender he released to the public domain back in 2005. (This is the appender I found configured for some of the web services deployed on the company’s Mule server.)

The log4j crew also recognised this problem and, according to its bug tracker, fixed it for 1.3. But since the 1.3 series has been abandoned, the fix is now available as part of Log4j Extras.

Using the new log4j rolling file appender

To include log4j extras using maven

	<dependency>
		<groupId>log4j</groupId>
		<artifactId>apache-log4j-extras</artifactId>
		<version>1.2.17</version>
	</dependency>    

A sample log4j.xml

	<appender name="FILE" class="org.apache.log4j.rolling.RollingFileAppender">
		<rollingPolicy class="org.apache.log4j.rolling.TimeBasedRollingPolicy">
	      		<param name="FileNamePattern" value="D:/logs/app-%d.log.gz"/>
	   	</rollingPolicy>
		<layout class="org.apache.log4j.PatternLayout">
			<param name="ConversionPattern" value="%d{ABSOLUTE} %-5p [%c{1}] %m%n" />
		</layout>
	</appender>

How often the log file rolls is specified by the date format in the FileNamePattern, which uses the same format as Java’s SimpleDateFormat. By default (%d in app-%d.log), a new log file is created daily. To create a new log file every minute, use something like app-%d{yyyy-MM-dd-HH-mm}.log. The .gz suffix in app-%d.log.gz means old log files will be gzipped automatically.
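
Application code doesn’t change at all. Once the FILE appender is attached to the root logger, the usual log4j calls roll over automatically. A minimal sketch for illustration (OrderService is just a made-up class):

import org.apache.log4j.Logger;

public class OrderService {

    // writes through whatever appenders are configured in log4j.xml,
    // including the rolling FILE appender above
    private static final Logger LOG = Logger.getLogger(OrderService.class);

    public void placeOrder(String orderId) {
        LOG.info("Placing order " + orderId);
    }
}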

More hamcrest collections goodness

I have been using Hamcrest more in my unit tests. JUnit 4.11 includes only a portion of the matchers available in Hamcrest 1.3 (specifically, the ones packaged in hamcrest-core). To include the other useful matchers from Hamcrest, add the following to the Maven pom.xml

		<dependency>
			<groupId>org.hamcrest</groupId>
			<artifactId>hamcrest-library</artifactId>
			<version>1.3</version>
			<scope>test</scope>
		</dependency>

I found the collection matchers very handy. For example, to test the size of a list:

import static org.hamcrest.collection.IsCollectionWithSize.hasSize;
import static org.junit.Assert.assertThat;

import java.util.ArrayList;
import java.util.List;
...
List<String> list = new ArrayList<String>();
assertThat(list, hasSize(0));
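
The collection matchers go well beyond hasSize. A couple more that I find useful, with a made-up fruit list (all of these are also reachable through the org.hamcrest.Matchers class):

import static org.hamcrest.Matchers.contains;
import static org.hamcrest.Matchers.containsInAnyOrder;

import java.util.Arrays;
...
List<String> fruit = Arrays.asList("apples", "oranges", "pears");
// the exact elements, in the exact order
assertThat(fruit, contains("apples", "oranges", "pears"));
// the same elements, in any order
assertThat(fruit, containsInAnyOrder("pears", "apples", "oranges"));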

JUnit 4.11 and its new Matchers

I had never used the Hamcrest matchers with JUnit before last week. I noticed in the release notes that JUnit 4.11 includes Hamcrest 1.3, with its Matchers and improved assertThat syntax. Reading the examples in the release notes, I was intrigued.

To use the new Matchers and assertThat, you need to include the following imports

import static org.junit.Assert.*;
import static org.hamcrest.CoreMatchers.*;

Number Objects

The first improvement I noticed was the comparison with Java number objects.

Long l = new Long(10);
assertEquals(10L, l);   // does not compile: the call is ambiguous
assertThat(l, is(10L)); // compiles and passes

With the old assertEquals, the compiler complains with The method assertEquals(Object, Object) is ambiguous for the type X. You need to change both parameters to either long values or Long objects for the assertion to compile, for example

assertEquals(10L, l.longValue());

On the other hand, assertThat and the is() matcher work just fine.

Collections

Looking at the CoreMatchers javadoc, I saw a few very handy-looking matchers for Collections, for example hasItem, hasItems and everyItem. I had the opportunity to use hasItems in my unit tests last week, to check whether a List contains items from a given list of values. It was as simple to use as this

assertThat(list, hasItems("apples", "oranges"));
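
everyItem works in the same way, applying a matcher to every element of the collection. A quick example using the CoreMatchers static import from above (the list is made up):

// passes because every element starts with "a"
assertThat(Arrays.asList("apples", "apricots"), everyItem(startsWith("a")));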

I’m a fan of this new way of matching things in JUnit.

Linked Servers : Creating a local development database in MS SQL Part 3

One of the tables I wanted to copy to my local SQL Server has nearly 200 million rows. It would take far too long to copy if I generated scripts for the data using the SQL Server scripting tool. Besides, I didn’t need all 200 million rows for development anyway.

The Linked Servers feature in SQL Server Management Studio makes it simple to copy a selection of data from one table to another. Once two database instances are linked, you can use SQL select and insert statements to copy data.

Create a linked server

Open the local database in SQL Server Management Studio. Select Server Objects -> Linked Servers, then right click and select New Linked Server. The New Linked Server dialogue will appear.

Add an appropriate name in the Linked server text box (for example, lotsofdata-server). Under Server type, select the SQL Server radio button. Choose Security in the left navigation pane, select the radio button Be made using this security context, and enter the correct username and password for the remote server.

To copy the first 10000 rows from the remote server lotsofdata-server into a table that doesn’t exist in the local database

select TOP 10000 * into dbo.[awesomeTable]  from [lotsofdata-server].[awesomeDatabase].dbo.[awesomeTable]

If the table already exists in the local database,

insert into dbo.[awesomeTable] select TOP 10000 * from [lotsofdata-server].[awesomeDatabase].dbo.[awesomeTable]

Copying an identity column

If the table you want to copy contains an identity column, you need to turn on identity insert

SET IDENTITY_INSERT awesomeTable ON

and explicitly specify all the columns you are inserting into the table, like

insert into dbo.[awesomeTable] (col1, col2) select TOP 10000 col1, col2 from [lotsofdata-server].[awesomeDatabase].dbo.[awesomeTable]

Otherwise, SQL Server will complain along the lines of cannot insert explicit value for identity column in table awesomeTable.

Getting new subversion branches after initial svn-git cloning

Earlier this week, I needed to work on a feature branch in the company’s Subversion repository. (The one I cloned with git svn a month ago.)

Imagine my surprise when I couldn’t see the feature branch with git branch -r, the command that lists the remote-tracking branches, which for a git-svn clone includes the Subversion branches and tags. The branch had been created after my initial clone, and was not pulled down by my subsequent git svn rebase runs.

It turned out that to get Subversion branches created after cloning, you need to do a fetch instead.

git svn fetch
git branch -r

Reading the git-svn man page more carefully, I see that rebase only fetches revisions from the SVN parent of the current HEAD, whereas fetch fetches unfetched revisions from the tracked Subversion remote.

Checkstyle configuration

I spent yesterday configuring Jenkins for my current projects. Unless your team formats its code to a high standard, Checkstyle throws up a lot of noise about things that no team member would undertake to fix, like excess whitespace, tabs, or Javadocs for getters and setters. (Personally I loathe tabs, but many editors insert them by default.)

Surely I couldn’t be the only one who wanted to run all Checkstyle checks by default, except the ones specifically excluded? I didn’t care that the default set of checks might differ between Checkstyle installations. I simply wanted to improve the signal-to-noise ratio by removing the noisiest offenders.

However, Checkstyle configuration doesn’t work on the principle of exclusion. If a configuration file is provided, all the checks to be run must be specified in it. I tried fiddling with the suppression filter, but could not get it to work this way either.

The only way to achieve this seemed to be adopting one of the checkstyle.xml templates found on the web. In the end, I used this one, which claimed to follow the Sun coding conventions.

Creating a test user : Creating a local development database in MS SQL Part 2

When I was given my work laptop, it already had MS SQL Server and the Management Studio installed, set up to use Windows authentication. Our populated test/development database, on the other hand, used SQL Server authentication (i.e. with a username and password). To match, I needed to create a login for a test user on my local MS SQL Server.

Create a test user login

First, in SQL Server Management Studio, open a new query window by right clicking on the server name and selecting New Query. (This creates a query window for the master database.) Create a login for the user ${db.username} with the password ${db.password}.

USE master;
IF NOT EXISTS (SELECT * FROM master.dbo.syslogins WHERE loginname = N'${db.username}')
CREATE LOGIN [${db.username}] WITH PASSWORD = '${db.password}';

Then to add the new user to the test database ${db.name}.

USE [${db.name}];
CREATE USER [${db.username}] FOR LOGIN [${db.username}] WITH DEFAULT_SCHEMA=[dbo];
ALTER ROLE [db_owner] ADD MEMBER [${db.username}];
ALTER ROLE [db_datareader] ADD MEMBER [${db.username}];
ALTER ROLE [db_datawriter] ADD MEMBER [${db.username}];

Mixed authentication

My SQL Server was originally set to allow only Windows authentication. I needed the instance to accept mixed authentication instead. (Mixed authentication allows both Windows and SQL Server style authentication.) In SQL Server Management Studio, right click on the server name, then choose Properties. On the Security page, under Server authentication, select SQL Server and Windows Authentication mode.

You need to restart the SQL Server instance to activate this change. You can restart the server by right clicking on the server name again and choosing Restart. However, you might want to enable the TCP/IP protocol before restarting.

Enable TCP/IP Protocol

Lastly, open the SQL Server Configuration Manager (via the Windows start menu). Under SQL Server Network Configuration -> Protocols for MSSQLSERVER, toggle the status for TCP/IP to Enabled. You can restart the server now by going to SQL Server Services (in the left navigation pane), right clicking on SQL Server (MSSQLSERVER) and choosing Restart.
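
With mixed authentication and TCP/IP enabled, the new login can also be used from code, for example over JDBC. Here is a rough smoke-test sketch, assuming the Microsoft JDBC driver for SQL Server is on the classpath; the database name, username and password are placeholders for the ${db.*} values above:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class LocalDbSmokeTest {

    public static void main(String[] args) throws Exception {
        // SQL Server authentication over TCP/IP, default port 1433
        String url = "jdbc:sqlserver://localhost:1433;databaseName=testdb";

        try (Connection conn = DriverManager.getConnection(url, "testuser", "testpassword");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT @@VERSION")) {
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
        }
    }
}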

Copying an existing database : Creating a local development database in MS SQL Part 1

One of the first tasks I set myself in my new role was to create my own local development database. At the time, everyone was developing against a populated database shared amongst developers and testers.

I couldn’t find the scripts to recreate the database schema from scratch. The Copy Database Wizard in MS SQL Server Management Studio (SMS) failed with authentication problems. Backup Database stored the dump on the remote server’s hard disk, which I didn’t have access to. I was ready to give up when a colleague from another team told me about the scripting functionality in SMS.

Database Objects Scripts

In SMS, right click on the database you want to replicate, then select Tasks -> Generate Scripts. A dialogue will pop up, where you can select the database objects you want to copy: specific tables, views, stored procedures and so on.

Then in the next dialogue, under the advanced options, there are two options I found especially useful.

  • Script DROP and CREATE
  • Types of data to script: options are schema only, data only and both schema and data

Once the script is generated, it can be run in a query window against the target database.

Mule Studio and Maven Profiles

The maven project I’m working on has profiles for different environments, such as testing, development and deployment.

<profiles>
	<profile>
		<id>test</id>
		<activation>
			<activeByDefault>true</activeByDefault>
		</activation>
		<properties>
			<db.host>testdb.mycompany.com</db.host>
			<db.name>projectx</db.name>
		</properties>
	</profile>

	<profile>
		<id>development</id>
		<properties>
			<db.host>127.0.0.1</db.host>
		</properties>
	</profile>
</profiles>

To activate multiple profiles at run time, use the command line option -P

mvn test -P test,development

Or, inside Eclipse with m2e, you can configure a list of active profiles under Run Configurations.

However, with Mule Studio, if you run the project as a Mule Application with Maven, there is no option to select Maven profiles.

The way to get around this is to edit the Maven profiles so they are activated by a property or a file. In my case, I updated my pom.xml to

<profiles>
	<profile>
		<id>Test</id>
		<activation>
			<activeByDefault>true</activeByDefault>
			<property>
				<name>env</name>
				<value>test</value>
			</property>
		</activation>
		<properties>
			<db.host>testdb.mycompany.com</db.host>
			<db.name>projectx</db.name>
		</properties>
	</profile>

	<profile>
		<id>Development</id>
		<activation>
			<file>
				<exists>.git</exists>
			</file>
		</activation>
		<properties>
			<db.host>127.0.0.1</db.host>
		</properties>
	</profile>
</profiles>

The Test profile is activated by setting the system property env to test. This is done in Mule Studio under Windows -> Preferences -> Mule Studio -> Maven Settings: in the “MAVEN_OPTS environment variable” text box, add -Denv=test. The Development profile is activated by the existence of a .git folder in the project root. Now when I run this as a Mule + Maven project in Eclipse, the properties from both of these profiles are available.

You might ask: wouldn’t it be easier to just add -P test,development to the MAVEN_OPTS text box? It definitely would be, but Mule Studio complained about -P being an unrecognised option.

PS. I’m using Mule Studio 3.5.