Service layer woes

July 18, 2007

I have posted an entry on the blog of my employer:

DSL’s: a higher abstraction to the rescue

June 29, 2007

One of the things that bothers me is that it is very easy to lose overview in large wired structures, and I think DSLs can limit the distance between specification and implementation. An example of something set up at a very low level of abstraction in Spring:

    <bean id="fileWritingProcess" class="..."/>

    <bean id="resequenceProcess" class="..."/>

    <bean id="fileWritingProcessor" class="...">
        <constructor-arg index="0" value="..."/>
        <constructor-arg index="1">
            <list>
                <ref bean="resequenceProcess"/>
                <ref bean="fileWritingProcess"/>
            </list>
        </constructor-arg>
    </bean>

    <bean id="fileWritingProcessorRepeater" class="...">
        <constructor-arg index="0">
            <bean class="org.codehaus.prometheus.processors.ProcessorRepeatable">
                <constructor-arg ref="fileWritingProcessor"/>
            </bean>
        </constructor-arg>
        <property name="exceptionHandler">
            <bean class="org.codehaus.prometheus.exceptionhandler.Log4JExceptionHandler"/>
        </property>
        <property name="shutdownAfterFalse" value="..."/>
    </bean>

And this is only 1/4 of the XML configuration.

Or now written in a DSL in Groovy (I’m still playing with the concepts and with Groovy, so the syntax may not be correct):

def piped = environment {
    ...
}


I think most readers can imagine what the last script is doing without knowing anything about the application. That is the power of a DSL.

Spring and visibility problems

March 1, 2007

I posted a blog entry on Spring and visibility problems on the company website. If you want to read the post, you can find it here:

Make deployment part of continuous integration

December 26, 2006

Last year I started working with CruiseControl as a continuous integration server. Continuous integration is very important for Agile projects, because it is a source of continuous feedback: without continuous feedback you can’t be agile. But I had been focusing too much on the unit testing aspect: I thought of CruiseControl as a mechanism to run all unit tests when it detects changes in the source repository. But a continuous integration server can do much more than just run your unit tests.

It can also build the war/ear and deploy it. This provides feedback on a lot of stuff that is hard to test otherwise:

  1. is the application able to deploy on the target application server? Often there are differences between the development and production environments: different operating systems, different versions of the application server, different virtual machines, different configurations, etc. Personally I don’t mind small differences much, but the closer you get to the production environment, the smaller the differences should be.
  2. I use Spring for most projects, so: are there errors in the application context? The best way to make sure the application context is valid is to deploy. This can fail for a lot of reasons: an error in the JNDI name of the datasource, for example.

If you make the deployment part of the continuous integration process, and make sure the build breaks when the deployment fails, you have a very good source of feedback.

At the moment I’m using Oracle OC4J as application server. I added a deploy target to my Ant script, and this target is added to the list of targets that CruiseControl executes when it detects a change. By the way, letting the deploy target drop the war/ear in a directory where it is eventually picked up is not enough: the build won’t break when the deployment fails.
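A sketch of such a deploy target (the property names and the exact admin command are illustrative and depend on the OC4J version and setup; the important part is failonerror, which makes a failed deployment fail the build):

```xml
<!-- illustrative deploy target; paths and properties are made up -->
<target name="deploy" depends="ear">
    <java jar="${oc4j.home}/j2ee/home/admin.jar" fork="true" failonerror="true">
        <arg value="ormi://${oc4j.host}:${oc4j.port}"/>
        <arg value="${oc4j.user}"/>
        <arg value="${oc4j.password}"/>
        <arg value="-deploy"/>
        <arg value="-file"/>
        <arg value="${dist.dir}/${ear.name}"/>
        <arg value="-deploymentName"/>
        <arg value="${app.name}"/>
    </java>
</target>
```

Because failonerror is set, a non-zero exit of the admin tool fails the Ant target, which in turn breaks the CruiseControl build.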

JMX and concurrency problems

November 9, 2006


In this entry I try to explain that introducing Java Management Extensions (JMX) requires some additional care for concurrency control. If JMX is added without extra care, the system could be subject to all kinds of concurrency related problems like data races and visibility problems.


A few months ago I was working on an existing batch application and sometimes strange problems occurred: a process was throwing strange runtime exceptions (like ArrayIndexOutOfBoundsException and NullPointerException). The confusing thing was that everything was (unit) tested very heavily and the tests didn’t report any exceptions.

After a deeper inspection we figured out that the process was running concurrently (so multiple instances of that process were running at the same time) and that the process was reading and writing a shared data structure. The consequence of the concurrent execution was that the data structure got corrupted, and that caused the exceptions. The strange thing was that this process should not have been able to run concurrently: it was executed by Quartz (and wired up in Spring), and by making the job stateful, concurrent execution should be impossible.

After a while I found the cause of the problem: the process was not only executed by Quartz, but also by a JMX client. The MBean (acting on behalf of the JMX client) bypassed the Quartz mechanism completely and just called the process method on the service. The fix was quite easy: instead of calling the method directly, the Quartz scheduling mechanism should be used to schedule the task for immediate execution.

The bigger problem

The bigger problem is that JMX is added to a system without realizing that ‘components’ are now used in a multithreaded environment. The consequence is that you run into all kinds of concurrency related problems like data races and visibility problems. Even the simplest case, setting a property on a component, is not without problems if the property isn’t volatile or accessed in a synchronized context (final is also a perfect but underappreciated way to prevent these problems).
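A minimal sketch of that simplest case (the class and property are made up): a setting that a JMX client writes and worker threads read needs to be volatile, or the workers may keep seeing a stale value:

```java
// Made-up component exposed through JMX. The volatile keyword guarantees
// that a value written by the JMX management thread becomes visible to
// the worker threads that read it.
class BatchSettings {

    // without 'volatile' a worker thread could keep seeing a stale value
    private volatile int batchSize = 100;

    // called from the JMX management thread
    public void setBatchSize(int batchSize) {
        this.batchSize = batchSize;
    }

    // called from the worker threads
    public int getBatchSize() {
        return batchSize;
    }

    public static void main(String[] args) {
        BatchSettings settings = new BatchSettings();
        settings.setBatchSize(500);
        System.out.println(settings.getBatchSize()); // prints 500
    }
}
```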


The bottom line is that adding JMX to a system is not something that should be taken lightly. The fact that certain environments (like Spring) make it very easy to JMX-ify beans doesn’t mean you should forget about your responsibilities.

Improving the Spring TaskExecutor

October 28, 2006


In this blog post I try to explain how the Spring TaskExecutor should be improved.


I’m interested in concurrency control and also in the Spring Framework. I have used the new concurrency library of Java 5 in quite a few Spring based projects, and it works like a dream. One of the interfaces I use quite often is the Executor from Java 5. It is great for separating the submission of tasks from the mechanism that executes them. The consequence is that you don’t have to fiddle around with creating threads yourself, and this makes your components a lot cleaner, more customizable and easier to test.
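For example, the submitting code below only knows the Executor interface; the injected implementation decides how and on which thread the task runs (the flag is just there to make the effect observable):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

class ExecutorDemo {

    static volatile boolean executed = false;

    static void runTask() throws InterruptedException {
        // the execution mechanism: here a pool with a single thread
        ExecutorService executor = Executors.newSingleThreadExecutor();

        // the submission: the caller never touches a Thread itself
        executor.execute(new Runnable() {
            public void run() {
                executed = true;
            }
        });

        // wait for the pool to finish the submitted work
        executor.shutdown();
        executor.awaitTermination(10, TimeUnit.SECONDS);
    }

    public static void main(String[] args) throws InterruptedException {
        runTask();
        System.out.println("executed: " + executed); // prints executed: true
    }
}
```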

It is interesting to see that Spring 2 also has support for this concept, in the form of the TaskExecutor. The advantage of using the TaskExecutor instead of the Java 5 specific Executor is that it doesn’t make your code dependent on a specific implementation, e.g.:

  1. JSR 166: part of Java 5.
  2. the backport of JSR 166: great if you still need to work with Java 1.4.
  3. CommonJ WorkManager: if you work in an environment where it is not allowed or acceptable to create unmanaged threads. There are plans to make it possible to use JSR 166 in just these kinds of environments.
  4. one of the Spring 2 implementations.

In most cases I prefer using the Executor (or one of its subinterfaces, like the ExecutorService) because I often need more control, like waiting for the completion of a task. Often I have the freedom to use the concurrency library that suits my needs, so why not use that library directly? But if you don’t have this freedom, the TaskExecutor is a good solution.
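The extra control looks like this with an ExecutorService: submit returns a Future, and get() blocks until the task has completed (the task itself is a made-up example):

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

class FutureDemo {

    static int compute() throws Exception {
        ExecutorService executor = Executors.newFixedThreadPool(2);
        try {
            // submit returns a Future: a handle on the running task,
            // something the plain TaskExecutor interface cannot offer
            Future<Integer> future = executor.submit(new Callable<Integer>() {
                public Integer call() {
                    return 21 * 2;
                }
            });
            // get() blocks until the task has completed
            return future.get();
        } finally {
            executor.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(compute()); // prints 42
    }
}
```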

What is the problem?

The problem is that there is no clearly defined way to detect that a task has not been accepted by a TaskExecutor implementation. There are various reasons why a task can’t be accepted:

  1. if a Spring executor implementation uses a workqueue, for example the ThreadPoolTaskExecutor, it can decide to reject a task when the workqueue is full. If there is no limit on the size of the workqueue (the SimpleThreadPoolTaskExecutor, or a ThreadPoolExecutor with an unbounded workqueue), it can lead to resource problems like running out of memory. This means that the system doesn’t degrade gracefully under heavy load.
  2. if a Spring executor is shutting down, is not started, or is momentarily pausing.

That is why the execute method of the Executor throws a java.util.concurrent.RejectedExecutionException to signal that a task has been rejected. The execute method of the TaskExecutor, however, doesn’t define an exception that is thrown when a task is rejected.
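With the Java 5 Executor this rejection is visible to the caller. A sketch that forces a rejection with a deliberately tiny pool (the sizes are only chosen to trigger the situation):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

class RejectionDemo {

    static boolean submitUntilRejected() {
        // one thread, a workqueue with room for exactly one task, and the
        // AbortPolicy: a task that doesn't fit raises RejectedExecutionException
        ThreadPoolExecutor executor = new ThreadPoolExecutor(
                1, 1, 0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<Runnable>(1),
                new ThreadPoolExecutor.AbortPolicy());

        final CountDownLatch release = new CountDownLatch(1);
        Runnable blockingTask = new Runnable() {
            public void run() {
                try {
                    release.await();
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        };

        boolean rejected = false;
        try {
            executor.execute(blockingTask); // occupies the single thread
            executor.execute(blockingTask); // fills the workqueue
            executor.execute(blockingTask); // no room left: rejected
        } catch (RejectedExecutionException e) {
            // here the client can degrade gracefully under load
            rejected = true;
        } finally {
            release.countDown();
            executor.shutdown();
        }
        return rejected;
    }

    public static void main(String[] args) {
        System.out.println("rejected: " + submitUntilRejected()); // prints rejected: true
    }
}
```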

You might wonder if it is a good thing to deal with this exception, instead of relying on some kind of general exception handler. I think it is good to create the possibility to deal with the exception immediately, because it is not a situation where all bets are off, like a programming or database exception. When a task can’t be executed, you give the client (another Java object) the option to catch the exception and deal with it. And in most cases the client is aware of threading, because it is dealing with an asynchronous call.


Try to keep the number of components that are aware of threading as small as possible. This makes the system a lot easier to deal with, because you don’t have to worry about threading all the time. In most cases, isolation (confinement) and immutability are your biggest friends in reducing concurrency control related complexity.
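Immutability in code: an object like the (made-up) one below has only final fields and no mutating methods, so it can be passed between threads without any synchronization:

```java
// A made-up example of an immutable value object: the class is final,
// all fields are final, and no method mutates state. Instances can be
// shared freely between threads without synchronization.
final class BatchResult {

    private final int processed;
    private final int failed;

    BatchResult(int processed, int failed) {
        this.processed = processed;
        this.failed = failed;
    }

    int getProcessed() { return processed; }

    int getFailed() { return failed; }

    // 'modification' returns a new instance instead of changing this one
    BatchResult withFailure() {
        return new BatchResult(processed, failed + 1);
    }

    public static void main(String[] args) {
        BatchResult result = new BatchResult(10, 0);
        BatchResult updated = result.withFailure();
        System.out.println(result.getFailed() + " " + updated.getFailed()); // prints 0 1
    }
}
```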


The solution is letting the execute method of the TaskExecutor throw a Spring specific unchecked exception, e.g. a RejectedTaskExecutionException. This solution is quite simple, and it is already done in other parts of the framework, like:

  1. DataAccessException for access to datasources.
  2. RemoteAccessException for remoting.

Unifying different exception hierarchies into a single one makes it more reliable to switch between implementations, because code does not depend on implementation specific exceptions. And because the exception is unchecked:

  • clients are not forced to deal with it if they don’t need to
  • it doesn’t break any existing implementations of the TaskExecutor interface

So I wonder why they have not added it. I filed a JIRA issue for it some time ago, but it hasn’t been fixed, even though Spring 2 has been released.

Asynchronous calls are not an implementation detail

September 4, 2006


In this post I want to make clear that the asynchronous behaviour of a method call is not just an implementation or configuration detail, but should be made explicit in code and documentation.

What is the problem

One of the things I have seen in projects that use multithreading of some sort, is that some method calls are made asynchronous transparently. This means that the caller has no way of knowing if a method call is executed on a different thread or on the thread of the caller.

Origin of the problem

Making a method call asynchronous is quite easy for methods that return void: you can create some sort of asynchronous decorator that executes the real call on a different thread. The caller returns without waiting for the completion of the task, while the thread executes the task at some point in the future. An example:

interface Watercooker {
    void activate();
}

class AsynchronousWatercooker implements Watercooker {

    private final Watercooker cooker;

    public AsynchronousWatercooker(Watercooker cooker) {
        this.cooker = cooker;
    }

    public void activate() {
        // the real call is executed on a new thread,
        // while the caller returns immediately
        Runnable task = new Runnable() {
            public void run() {
                cooker.activate();
            }
        };
        new Thread(task).start();
    }
}

Especially with a powerful framework like Spring it is quite easy to do:

  1. Spring allows unmanaged threads, unlike EJB. So your components are allowed to create threads (and you manage those resources yourself).
  2. Spring promotes using interfaces rather than classes. So creating an asynchronous decorator is easy. And because classes don’t create instances of dependencies themselves (but rely on Dependency Injection) it is easy to inject an instance of the asynchronous decorator.
  3. Spring has powerful AOP functionality: it is quite easy to create an aspect that makes a call asynchronous.

It doesn’t mean Spring is a bad framework: it is powerful but you still need to design good software.

Why is it a problem

There are a few reasons why transparently adding asynchronous behaviour can lead to problems:

  1. concurrency issues
  2. different interface requirements
  3. postconditions don’t hold

Why is it a problem: concurrency issues

If data is shared between threads, some sort of concurrency control is required. If the caller is not aware of this, it can lead to all kinds of nasty problems like race conditions and corrupted data structures.
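The classic example is a shared counter: a plain count++ is a read-modify-write, so concurrent increments can be lost. Guarding the counter, here with an AtomicInteger, makes the outcome deterministic (the class is made up for illustration):

```java
import java.util.concurrent.atomic.AtomicInteger;

class SharedCounter {

    // AtomicInteger makes the increment one atomic operation; with a plain
    // 'int count' the statement 'count++' could lose updates between threads
    private final AtomicInteger count = new AtomicInteger();

    void increment() {
        count.incrementAndGet();
    }

    int get() {
        return count.get();
    }

    static int incrementConcurrently(final SharedCounter counter,
                                     int threads, final int perThread)
            throws InterruptedException {
        Thread[] workers = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            workers[i] = new Thread(new Runnable() {
                public void run() {
                    for (int j = 0; j < perThread; j++) {
                        counter.increment();
                    }
                }
            });
            workers[i].start();
        }
        for (Thread worker : workers) {
            worker.join();
        }
        return counter.get();
    }

    public static void main(String[] args) throws InterruptedException {
        // 4 threads x 10000 increments: always 40000 with the atomic counter
        System.out.println(incrementConcurrently(new SharedCounter(), 4, 10000)); // prints 40000
    }
}
```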

Why is it a problem: different interface requirements

Interfaces of asynchronous calls are different from interfaces of synchronous calls. The most important difference is in the exceptions: with a synchronous method call you get an exception that is a consequence of the operation, like a CookerIsEmptyException. An asynchronous call typically doesn’t have this kind of exception, because the job (in most cases) isn’t executed directly and the caller returns after the task is posted. Asynchronous calls typically have exceptions like a TimeOutException (if the job isn’t accepted within a certain amount of time) or an AlreadyStartedException (if a different thread has already activated the cooker).

Why is it a problem: postconditions don’t hold

Another problem is that you don’t know if the postconditions of a call hold when it returns. In the case of the Watercooker it could be that you return from the activate method while the water is still cold: you don’t want cold coffee!


That is why the (a)synchronous behaviour of a call should be designed and documented explicitly. In some cases the asynchronous interface will look a lot like the synchronous one, but there are big differences. Don’t make the mistake of treating them the same, because they are not.
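One way to make the asynchronous behaviour explicit is the return type: if activate returns a Future, the caller can see from the signature that the call is asynchronous, and can wait for the postcondition when it needs to (a sketch, reusing the made-up Watercooker example):

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// the Future in the signature makes the asynchronous behaviour explicit
interface AsyncWatercooker {
    Future<Boolean> activate();
}

class ThreadedWatercooker implements AsyncWatercooker {

    private final ExecutorService executor = Executors.newSingleThreadExecutor();

    public Future<Boolean> activate() {
        return executor.submit(new Callable<Boolean>() {
            public Boolean call() {
                // pretend to heat the water; true means the water is hot
                return Boolean.TRUE;
            }
        });
    }

    public void shutdown() {
        executor.shutdown();
    }

    public static void main(String[] args) throws Exception {
        ThreadedWatercooker cooker = new ThreadedWatercooker();
        // get() blocks until the postcondition (hot water) actually holds
        System.out.println("hot: " + cooker.activate().get()); // prints hot: true
        cooker.shutdown();
    }
}
```

A caller that needs hot water calls get() on the returned Future; a caller that doesn’t can simply continue.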


I want to thank the guys on the concurrency mailing list for some useful feedback.