Liking Gradle

April 17, 2010

Last week I spent a lot of time getting Maven 2 to work the way I wanted. In most cases Maven is doable, but for Multiverse I have some very specific requirements: I need to run the same tests again using different configurations (javaagent vs static instrumentation), and I also need to fabricate a very special jar that integrates all dependent libraries (using jarjar) and does some pre-compilation.

The big problem with Maven is that defining complex builds using the plug-in system is not that great. I have spent a lot of time configuring the plugins because they were not working as expected or not working at all.

A long time ago I took a look at Gant, but Ant (even if you drive it from Groovy) is still Ant, meaning a lot of manual labor. So I wanted to give Gradle a try: the power of Groovy combined with a plugin system and the predefined build conventions known from Maven. After one day of playing with the scripts, I got most stuff up and running exactly like the current Maven system (even the artifacts being deployed to the local Maven repository!). And I guess that during this week I can make the switch final.

One of the things I really need to get started on is creating an elaborate set of benchmarks. With Maven that means fighting the framework, but with the complete freedom you have in Gradle, while still having all the facilities provided by plugins, I think it is going to be a joy to work on.

We’ll see what Maven 3 is going to bring, but I’m glad that there is a usable alternative available.


Multiverse and constructor optimizations

April 2, 2010

One of the other important optimizations I have just implemented, and that is going to be added to the Multiverse 0.5 release, is that transactions that only create, or create and read, objects are very cheap (reads already are very cheap). A construction needs only one volatile read (for the version) and one volatile write (for the content to be stored). This means that creating transactional objects is going to be very fast and won’t cause scalability problems.

I hope I’ll be able to inline the complete transaction for the constructor when it is very basic (just like the getter/setter optimization).
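To make this concrete, here is a minimal sketch with made-up names (this is not Multiverse’s actual generated code) of what such an inlined construction-only transaction boils down to: one volatile read of a global version clock, and one volatile write that publishes the content.

import java.util.concurrent.atomic.AtomicLong;

// Sketch only; names are made up and do not match Multiverse's internals.
public final class TxnRef<T> {

    // Global version clock shared by all transactional objects.
    private static final AtomicLong CLOCK = new AtomicLong();

    // Immutable content: the committed value plus the version it was
    // committed at.
    private static final class Tranlocal<V> {
        final long version;
        final V value;

        Tranlocal(long version, V value) {
            this.version = version;
            this.value = value;
        }
    }

    private volatile Tranlocal<T> committed;

    public TxnRef(T initial) {
        long version = CLOCK.get();                     // the one volatile read
        committed = new Tranlocal<T>(version, initial); // the one volatile write
    }
}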


Multiverse STM: Inferring optimal transaction settings

March 15, 2010

In Multiverse there are a lot of parameters that activate certain functionality on a transaction (see the sketch after this list). Parameters like:

  1. readonly
  2. interruptible
  3. automaticReadTracking (needed for blocking operations)
  4. maxRetryCount
  5. allowWriteSkewProblem
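
For illustration, here is what configuring these parameters on an atomic method could look like. The annotation below is made up for this sketch and is not necessarily the exact Multiverse API.

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Made-up annotation for illustration only.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface AtomicMethod {
    boolean readonly() default false;
    boolean interruptible() default false;
    boolean automaticReadTracking() default false;
    int maxRetryCount() default 1000;
    boolean allowWriteSkewProblem() default true;
}

class Queue {
    // A blocking operation: needs automaticReadTracking for the retry.
    @AtomicMethod(
            readonly = false,
            interruptible = true,
            automaticReadTracking = true,
            maxRetryCount = 1000,
            allowWriteSkewProblem = false)
    void put(Object item) {
        // ...
    }
}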

Each additional feature causes overhead in the STM because it takes longer to process a transaction, and it also leads to increased memory usage. In Multiverse 0.4 I already added some learning functionality to the STM runtime that makes speculative choices and adapts if a choice didn’t work out well. I use this to select a well-performing transaction implementation: based on how many reads/writes the transaction encountered the previous time, the system is able to select a better-fitting transaction next time.

There is no reason not to infer the optimal settings as well: just start with readonly and no automatic read tracking. If an update is done, the system knows that next time an update transaction is needed. The same goes for a retry: if a retry is done and automaticReadTracking is not enabled, the next transaction can run with this feature enabled. Perhaps similar techniques could be used to determine whether read tracking is useful when a transaction often runs into read conflicts.
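
As an illustration, here is a minimal sketch of this speculation loop, again with made-up names rather than the Multiverse API: start with the cheapest settings, and permanently upgrade them whenever the speculation fails.

import java.util.concurrent.Callable;

// Sketch of speculative settings inference (made-up names).
final class TxnSettings {
    volatile boolean readonly = true;
    volatile boolean automaticReadTracking = false;
}

// Thrown by the (imagined) transaction runtime when the speculated
// settings were too cheap for what the atomic block actually did.
final class SpeculativeFailure extends RuntimeException {
    final boolean updateAttempted;
    final boolean retryUsed;

    SpeculativeFailure(boolean updateAttempted, boolean retryUsed) {
        this.updateAttempted = updateAttempted;
        this.retryUsed = retryUsed;
    }
}

final class AtomicBlock {
    private final TxnSettings settings = new TxnSettings();

    <T> T execute(Callable<T> logic) throws Exception {
        while (true) {
            try {
                // Imagine 'logic' running inside a transaction configured
                // from 'settings'; a readonly transaction would throw
                // SpeculativeFailure as soon as a write is attempted.
                return logic.call();
            } catch (SpeculativeFailure f) {
                // Learn for all future transactions on this atomic block.
                if (f.updateAttempted) settings.readonly = false;
                if (f.retryUsed) settings.automaticReadTracking = true;
            }
        }
    }
}

The important property is that a speculative failure is paid at most a few times per atomic block; all later transactions on that block start with the learned settings.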

David Dice already wrote about this in his famous TL2 paper, and I certainly think that it is going to save developers a lot of time. And if someone wants to fiddle with the settings themselves, they can always override them (or perhaps even deactivate the learning system). This is certainly something I want to integrate in one of the next Multiverse releases.


Developing software is like working with iron

January 7, 2010

When I talk to my family about my job, it is always hard to explain what I’m doing. Yes.. I write software.. and no.. I can’t help you with your Windows problems. But some parts of software development can be compared to working with iron:

  1. When you start, you can create small parts and connect them to create bigger parts, just like with Lego or Meccano.
  2. When you have accomplished a lot, in some cases it feels good to do some fine tuning and put the dots on the i: getting out very fine sandpaper and polishing your work, just feeling proud.
  3. If there already is existing software and some parts aren’t behaving as you want them to behave, you get a crowbar and try to bend them into shape. In some cases a crowbar isn’t sufficient and you need something like a sledgehammer or a blowtorch.
  4. In some cases the quality is so bad that a sledgehammer will not help. The best thing to do is to melt everything down and start from scratch.


Greg Young at Devnology

September 17, 2009

Yesterday I attended a Distributed Domain Driven Design presentation given by Greg Young and organized by Devnology at the Xebia office. In the past I have been playing with DDD concepts, but a lot of questions remained unanswered, even after asking Eric Evans when he was at the Xebia office a few years ago.

The information provided by Greg made a lot of sense, and it was nice to see that there is a better way of writing domain-driven applications than the current, often awkward, approach. Creating different models for different needs and storing the event chain instead of storing the new state of an object really made sense, so it is certainly something I want to experiment with in the future. The well-known “Patterns of Enterprise Application Architecture” by Martin Fowler and “Domain-Driven Design: Tackling Complexity in the Heart of Software” by Eric Evans made the first steps in enterprise domain-driven design possible, but I think that Greg could lead us to the next step.
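
To give an idea of that event-chain concept, here is a minimal sketch in Java with made-up names (these are not Greg’s actual examples): the aggregate records the events that changed it, and its current state can be rebuilt by replaying them.

import java.util.ArrayList;
import java.util.List;

// Minimal event-sourcing sketch (made-up names): store the event chain
// instead of the new state, and rebuild state by replaying it.
final class MoneyDeposited {
    final long amountInCents;

    MoneyDeposited(long amountInCents) {
        this.amountInCents = amountInCents;
    }
}

final class Account {

    private final List<Object> uncommittedEvents = new ArrayList<Object>();
    private long balanceInCents;

    // Behavior produces an event; state only changes by applying it.
    void deposit(long amountInCents) {
        MoneyDeposited event = new MoneyDeposited(amountInCents);
        apply(event);
        uncommittedEvents.add(event); // this chain is what gets stored
    }

    private void apply(Object event) {
        if (event instanceof MoneyDeposited) {
            balanceInCents += ((MoneyDeposited) event).amountInCents;
        }
    }

    // Current state is rebuilt by replaying the stored event chain.
    static Account load(List<Object> history) {
        Account account = new Account();
        for (Object event : history) {
            account.apply(event);
        }
        return account;
    }
}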

Apart from the technical content (very, very important to me), it was also nice to see an enthusiastic and professional speaker. I would certainly place him in the same league as Fowler and Evans, and I think that we are going to hear more about Greg in the future (a book perhaps?).


Remapping a method with ASM

August 2, 2009

The last few weeks I have been working at home on Multiverse (summer vacation), a Java-based STM implementation. I use ASM-based instrumentation to transform POJOs (with some additional annotations) so that certain interfaces and method implementations are added.

With Multiverse 0.2, I did all the method generation by hand (manually written bytecode), and this is a very time-consuming and error-prone task. That is why I came up with a different idea for 0.3: make an abstract class that contains (most of) the implementation, and move the code from that class to another (essentially the class has become a mixin). By copying the methods/fields instead of making the mixin a superclass, it prevents imposing limitations on the class hierarchy. Luckily ASM has some functionality for this, called the RemappingMethodAdapter. The problem is that this functionality is made to be used with the visitor API of ASM and not the Tree API, and I’m using the latter.

So I wrote a function that iterates over the bytecode and transforms it. The problem is that this leads to more code to maintain and test. Rémi Forax of the ASM discussion group suggested that the RemappingMethodAdapter can be used with the Tree API because the MethodNode has an accept function.

So to make a long story short, this is the code I’m using to remap a method from one class to another. And I hope it helps other people struggling with the same problems:

import org.objectweb.asm.commons.Remapper;
import org.objectweb.asm.commons.RemappingMethodAdapter;
import org.objectweb.asm.tree.MethodNode;

public static MethodNode remap(MethodNode originalMethod, Remapper remapper) {
    String[] exceptions = getExceptions(originalMethod);

    // Create an empty MethodNode with the remapped descriptor,
    // signature and exceptions.
    MethodNode mappedMethod = new MethodNode(
            originalMethod.access,
            originalMethod.name,
            remapper.mapMethodDesc(originalMethod.desc),
            remapper.mapSignature(originalMethod.signature, false),
            remapper.mapTypes(exceptions));

    // Let the original method visit the remapping adapter, which
    // writes the remapped instructions into mappedMethod.
    RemappingMethodAdapter remapVisitor = new RemappingMethodAdapter(
            mappedMethod.access,
            mappedMethod.desc,
            mappedMethod,
            remapper);
    originalMethod.accept(remapVisitor);
    return mappedMethod;
}

public static String[] getExceptions(MethodNode originalMethod) {
    if (originalMethod.exceptions == null) {
        return new String[]{};
    }

    // MethodNode.exceptions is a (raw) List of internal names.
    String[] exceptions = new String[originalMethod.exceptions.size()];
    originalMethod.exceptions.toArray(exceptions);
    return exceptions;
}
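
For completeness, a usage sketch with made-up class names: copy all methods from a mixin into a target class node, using a SimpleRemapper to rewrite references to the mixin’s own type (remap is the function shown above, assumed to be in scope).

import org.objectweb.asm.ClassReader;
import org.objectweb.asm.commons.SimpleRemapper;
import org.objectweb.asm.tree.ClassNode;
import org.objectweb.asm.tree.MethodNode;

public class MixinCopyExample {

    public static void main(String[] args) throws Exception {
        // Load the mixin and the target class from the classpath.
        ClassNode mixin = new ClassNode();
        new ClassReader("com.example.TxnMixin").accept(mixin, 0);

        ClassNode target = new ClassNode();
        new ClassReader("com.example.Account").accept(target, 0);

        // Rewrite references to the mixin's own type (internal names,
        // so slashes instead of dots).
        SimpleRemapper remapper = new SimpleRemapper(
                "com/example/TxnMixin", "com/example/Account");

        // Copy every mixin method into the target using remap(..) above.
        for (Object o : mixin.methods) {
            target.methods.add(remap((MethodNode) o, remapper));
        }
    }
}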


Buying some kickass hardware for Multiverse

April 23, 2009

This week I’m going to order my kick-ass system for Multiverse, an STM implementation I’m working on: two Intel Xeon E5520 CPUs (4 cores each plus hyperthreading, so 16 virtual processors in total!), 12 GB of DDR3 1066 MHz RAM, a super-fast 32 GB Intel SSD, and 2x 1.5 terabyte old-school disk drives. It is going to run Linux and perhaps even Solaris (I want to play with DTrace).

As soon as I have this up and running (and have moved it somewhere so I don’t have to look at it), I can run some serious performance and scalability tests. At the moment I only have a dual-core laptop for this purpose, and that is not sufficient.

And if I’m lucky, I can upgrade the CPUs in the near future to 8 cores each (resulting in 32 virtual processors!).