Solutions to exercises in Object-Oriented JavaScript chapters 3 and 4

Since there are no official solutions to the exercises in Stoyan Stefanov’s Object-Oriented JavaScript, I’m posting my own here. Hopefully they will be useful to someone. I can’t guarantee their correctness, so don’t complain if your teacher/boss gives you a big fat 0 for my answers.

Chapter 3 Functions

Chapter 4 Objects

Node.js and learning JavaScript

I’m slowly working my way through Object-Oriented JavaScript by Stefanov and Sharma. All the tutorials I’ve seen online suggest using the browser’s web console for typing and testing code. While this is fine for one or two lines of code, it is not an efficient way to work on larger programs.

This is where node.js comes in handy. According to its website, it is a platform built on Chrome’s JavaScript runtime for easily building fast, scalable network applications. For a JavaScript newbie, this means you can write your code in a text editor and run it outside a browser environment. All you have to do is:

  1. open the node.js command prompt (via the Windows menu)
  2. change to the directory where the JavaScript program is located
  3. type node filename.js

iOS 8 on an 8GB iPhone 5c: the mysterious case of missing memory

Ever since I upgraded my iPhone 5c from iOS 7 to iOS 8, it was running very low on memory. I kept getting the low memory warning after taking 3-4 pictures. I deleted all the apps on the phone (except Google Maps), removed all the photos, music and video, and I still couldn’t get more than 300-400MB free. The phone was basically unusable. I knew iOS 8 had a larger memory footprint than iOS 7, but I didn’t expect it to be unusable on an 8GB phone. Besides, the iPhone 6 had only just been released, so an iPhone 5c wasn’t that old!

I wanted to downgrade back to iOS 7, but I was a day too late: Apple stopped signing iOS 7 on 29 September. Jailbreaking wasn’t available for iOS 8 at the time either. I was stuck with a more or less useless smartphone.

Then my 16GB iPad mini also started running out of memory. It had 1GB of photos synced from my old Mac, so my first thought was to remove those. However, I no longer had the Mac, and I couldn’t remove the photos with my Windows iTunes installation. My only option was to reset the iPad mini.

With a fresh installation of iOS 8, I noticed I had 11GB free on the iPad. It occurred to me immediately that I should have a lot more free space on my phone than the 400MB I was struggling with. And indeed, after resetting the phone, I suddenly had 4GB.

I still don’t know what was eating up all the memory on my iDevices (the lost memory wasn’t listed under Settings -> Usage). I blame a very poor iOS 7 to iOS 8 upgrade process that left a lot of temporary files behind on the devices.

Unit Testing Audit Logging Requirements with Mockito

During development of server-side software, I routinely encountered audit logging requirements. These ranged from logging routine events, such as system startup and shutdown, to recording information when errors occur. For example, a component I worked on was responsible for generating binary data structures and forwarding them to a distribution component. The data generator had to limit the size of the structures to protect end devices from data packets too large to be processed within their resource constraints. The requirements stipulated that data attributes should be placed into the structure in ascending order, and that an error message shall be logged in the audit log for any attribute that was not included in the structure. I added verification of audit logging to the unit tests that exercise the handling of attribute omission. Since unit tests already checked the error handling, it was little extra effort to also check the logging for the same conditions.
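As a rough sketch of the kind of generator logic described above (a toy illustration only: the class and method names are mine, a simple attribute count stands in for the real size limit, and java.util.logging stands in for the actual audit logger):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.logging.Logger;

// Toy model of the data generator: include attributes in ascending order
// until the structure is full, and audit-log every attribute left out.
public class AttributePacker {

    private static final Logger AUDIT = Logger.getLogger("audit");

    public static List<Integer> pack(List<Integer> attributes, int maxAttributes) {
        List<Integer> sorted = new ArrayList<>(attributes);
        Collections.sort(sorted); // requirement: ascending order
        List<Integer> included = new ArrayList<>();
        for (Integer attribute : sorted) {
            if (included.size() < maxAttributes) {
                included.add(attribute);
            } else {
                // requirement: omissions must be recorded in the audit log
                AUDIT.severe("error: attribute " + attribute
                        + " dropped, structure size exceeds " + maxAttributes);
            }
        }
        return included;
    }
}
```

The unit tests discussed below can then assert both the returned structure and the error messages logged for the dropped attributes.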

A naïve approach to unit testing logging is to trigger the error within a unit test, write the log to a file, read the log file back from disk, and then search it for the expected error message. However, there is a simpler and more elegant way to assert logging behavior using mock objects. By adding a mock Appender to the target Logger, all log requests to the target Logger are also forwarded to the mock Appender. Logging events can then be verified directly with the mock Appender.

I use JUnit and Mockito for my unit testing. The following snippet sets up a mock Appender (log4j 1.x) for use in the unit test class.

import org.apache.log4j.Appender;
import org.apache.log4j.Logger;
import org.junit.After;
import org.junit.Before;
import org.junit.runner.RunWith;
import org.mockito.Mock;
import org.mockito.runners.MockitoJUnitRunner;

@RunWith(MockitoJUnitRunner.class) // initialises the @Mock fields
public class DataGeneratorTest {   // test class name is mine

  @Mock private Appender appender;
  private static final Logger AUDIT = Logger.getLogger("org.whileloop.org.data.generator");

  @Before
  public void setUp() {
    AUDIT.addAppender(appender);    // forward all audit log requests to the mock
  }

  @After
  public void tearDown() {
    AUDIT.removeAppender(appender); // avoid leaking the mock into other tests
  }
}

To assert that a log message is written during a unit test, I use Mockito’s verify on the mock Appender’s doAppend method:

verify(appender).doAppend((LoggingEvent) anyObject());

Obviously, this only checks that some log message was written when the unit test runs. I normally run my unit tests with logging at trace level, to make sure logging calls at the lowest level are exercised. As a consequence, if a debug message is written during the unit test, the above verification will pass even when no error message was written.

Mockito’s ArgumentCaptor can be used to check for a specific message at a specific logging level. The following code captures all the LoggingEvents that occur in a unit test, then iterates over the captured events, checking their logging level and message content. If one of the log messages matches the expected level and keyword, the test will pass the assert statement after the loop. Searching for an exact match on the whole message would lead to brittle tests, which break whenever the log message is reworded. Therefore I prefer searching for keywords like ‘error’, ‘exceeds’ or ‘dropped’ instead.

ArgumentCaptor<LoggingEvent> argumentCaptor = ArgumentCaptor.forClass(LoggingEvent.class);
verify(appender, atLeastOnce()).doAppend(argumentCaptor.capture());

boolean matched = false;
List<LoggingEvent> loggingEvents = argumentCaptor.getAllValues();
for (LoggingEvent le : loggingEvents) {
  if (le.getLevel().equals(Level.ERROR) &&
      le.getMessage().toString().contains(keyword)) {
    matched = true;
  }
}
assertTrue("Cannot find error message [" + keyword + "] in audit log", matched);

Mock objects provide an easy way to test logging in unit tests. To do this, I exploit the fact that a mock object remembers all its interactions: logging can be verified by looking at interactions with the mock Appender’s doAppend method, instead of reading and parsing the log file on disk. By including logging in unit tests, I can verify that audit logging requirements are fulfilled. It also guards against future changes to the code inadvertently breaking compliance with the logging requirements. Problems with audit logging caused by refactoring can be caught automatically during development, before they ever reach testers or clients.
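For readers who want to see the principle without Mockito or log4j, the same idea can be sketched with only the JDK: with java.util.logging, a hand-rolled recording Handler plays the role of the mock Appender (this is my own stand-in, not the approach from the post; all names are mine):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.logging.Handler;
import java.util.logging.Level;
import java.util.logging.LogRecord;

// A recording Handler: like the mock Appender, it remembers every
// log request made to the Logger it is attached to.
public class RecordingHandler extends Handler {

    private final List<LogRecord> records = new ArrayList<>();

    @Override public void publish(LogRecord record) { records.add(record); }
    @Override public void flush() { }
    @Override public void close() { }

    // True if any captured record is an error (SEVERE) containing the keyword
    public boolean hasError(String keyword) {
        for (LogRecord record : records) {
            if (record.getLevel().equals(Level.SEVERE)
                    && record.getMessage().contains(keyword)) {
                return true;
            }
        }
        return false;
    }
}
```

A test would attach it with addHandler in setUp and detach it with removeHandler in tearDown, mirroring the addAppender/removeAppender calls above.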

This post was originally written and published in August 2013 for a newsletter.

Caching Password for Git HTTP/HTTPS connections

I got sick of entering my username and password every time I performed a git operation. Luckily, git provides handy options for caching credentials. The safer option is probably to just cache the credentials in memory

git config --global credential.helper cache

This keeps the password in memory for 15 minutes by default. To permanently save the credentials on disk (in plain text), use

git config --global credential.helper store

PS. I chose the latter, insecure, lazy option.

The Strange Default Behaviour of Git Push

It felt strange that only after a year of using git, I ran into its default pushing logic. (I’m blaming it on gerrit, where pushes are always done to the review staging area using refs/for/master, instead of directly to origin/master.)

My workplace recently moved from svn to git. I worked on my features by creating a branch locally that tracked changes in the remote master

git checkout -b feature origin/master

However when I tried to push using

git push origin/master

I got a warning along the lines of push.default is unset, and git helpfully suggested I look at ‘git help config’. From the built-in help pages and googling, I found that the simple push mode was introduced in git 1.7.11. It is the default behaviour, and it will only push if the upstream branch’s name is the same as the local one. Because I always create a local branch named after the feature, git can’t push it to the remote server under the default behaviour.

To allow a different local branch name, I needed to set the push.default config variable to upstream, which simply pushes the current branch to its upstream branch.

git config --global push.default upstream

The mysterious “the statement did not return a result set” SQL Server Exception

Last week I worked on a SQL Server stored procedure that looked something like this

create procedure sp_Awesome_Proc
(
  @numbers XML,
  @customerId int
)
AS
BEGIN
declare @output table(......)
insert into @output(....) select .... from ...
... some more updates and joins ...
select * from @output
END

This stored procedure was called from Java using Spring’s JdbcTemplate

query("exec sp_Awesome_Proc @numbers=?, @customerId=?", new Object[] {a, b}, new RowMapper {....}); 

When I called the stored procedure via the above Java code, I kept getting the exception SQLServerException: The statement did not return a result set. However, if I used the same parameters and called the stored procedure within SQL Server Management Studio, it returned a table.

It turned out that if the stored procedure performs any inserts or updates prior to the final select, this baffling the statement did not return a result set exception is thrown. In my case, a series of queries was executed to populate a table variable, which is returned at the end of the stored procedure.

A simple solution to this problem is to add SET NOCOUNT ON at the start of the stored procedure. This stops SQL Server from sending the ‘n rows affected’ counts for the intermediate inserts and updates, which the JDBC driver would otherwise consume before reaching the actual result set.

create procedure sp_Awesome_Proc (
  @numbers XML,
  @customerId int
)
AS
BEGIN
SET NOCOUNT ON 
...
END

In fact, all the stored procedures within our code base had this statement. I had been copying and pasting it into all the stored procedures I created, without knowing its significance. Only now have I learned the why behind it.

Verify nulls with Mockito argument matchers

When using verify in Mockito, the function arguments must either all be exact values or all be argument matchers. You cannot mix and match the two within the same verify. In other words, the following two statements are correct,

verify(mockFtpClient).rename("a.txt", "b.txt");
verify(mockFtpClient).rename(eq("a.txt"), eq("b.txt"));

But this is not

verify(mockFtpClient).rename(eq("a.txt"), "b.txt");

Today, I needed to match an anyString() and a null in the same verify statement. Mockito’s documentation didn’t have an example of how to use an argument matcher for a null string. It turns out there is a hamcrest-style isNull matcher in Mockito:

verify(mockFtpClient).rename(anyString(), org.mockito.Matchers.isNull(String.class));

(The above example makes no sense semantically because you’d never want to pass a null string to the FtpClient’s rename function).

Logging exceptions in Mule

The simplest way to log exceptions thrown within a mule flow is to use the mule expression language with the Logger component

<logger level="ERROR" doc:name="Logger" message="#[exception.causeException]"/>
 

However, this only logs the message from the root cause. Sometimes I need to log the full stack trace at debug level (or at error level, if only developers read the application log). For this, mule provides a class called ExceptionUtils. For example,

<logger level="ERROR" doc:name="Logger" message="#[org.mule.util.ExceptionUtils.getFullStackTrace(exception)]"/>

Classes implementing the Callable interface can also access any exception thrown via the exception payload. I had to do this in an application I was working on, because I needed to save error messages in the database request buffer for audit purposes.

@Component("UpdateDB")
public class UpdateDB implements Callable {
    @Override
    public Object onCall(MuleEventContext muleContext) throws Exception {
        ExceptionPayload exceptionPayload = muleContext.getMessage().getExceptionPayload();
        String errorMessage = exceptionPayload.getRootException().getMessage();
        // ... save errorMessage to the request buffer for auditing ...
        return null;
    }
}

Accessing mule flow variables from Java components

In one of the earlier steps of a mule flow, I extracted some data from the payload and stored it in flow variables. This saved information like the database primary key, which I would need later to update the status buffer, before the payload was transformed into the required response message.

<flow name="someflow">
  <inbound-endpoint ref="jdbcEndpoint" doc:name="Generic" />
  ..
  <set-variable variableName="requestId" value="#[message.payload.requestId]"/>
  ..
</flow>

Getting to these flow variables from within a Java component turned out to be a lot harder than I had anticipated.

My first attempt was to use the @Mule annotation. I annotated my Java method as follows

    public void process(@Payload AmazingFilePayload payload,  
                        @Mule("flowVars['requestId']") Integer requestId) {
        // do stuff
    }

The MEL was valid because I could access the flow variable within the mule flow with

<logger level="DEBUG" message="#[flowVars['requestId']]"/>

However, the above Java gave a StringIndexOutOfBoundsException with the message String index out of range: -1. Looking through the documentation, I couldn’t see how to access flow variables with Java annotations at all.

In the end, I resorted to implementing the Callable interface. It seemed an unsatisfactory workaround to me, because

  1. the Java component was no longer a POJO
  2. I needed a different class for each update method, instead of writing a single class with many related methods

public class UpdateBuffer implements Callable {
    @Override
    public Object onCall(MuleEventContext muleContext) throws Exception {
        Integer requestId = (Integer) muleContext.getMessage()
                .getProperty("requestId", PropertyScope.INVOCATION);
        // alternative, shorter form:
        Integer requestId2 = (Integer) muleContext.getMessage().getInvocationProperty("requestId");
        return null;
    }
}