Liking Gradle

April 17, 2010

Last week I spent a lot of time getting Maven 2 to work the way I wanted. In most cases Maven is doable, but in the case of Multiverse I have some very specific requirements: I need to run the same tests multiple times using different configurations (javaagent vs static instrumentation), and I also need to fabricate a very special jar that integrates all dependent libraries (using jarjar) and does some pre-compilation.

The big problem with Maven is that defining complex builds through the plugin system is not that great. I spent a lot of time configuring plugins because they were not working as expected, or not working at all.

A long time ago I took a look at Gant, but ANT (even if you drive it from Groovy) is still ANT, meaning a lot of manual labor. So I wanted to give Gradle a try: the power of Groovy combined with a plugin system and the predefined build conventions known from Maven. After one day of playing with the scripts, I have most things up and running exactly like the current Maven setup (even the artifacts being deployed to the local Maven repository!), and I expect to make the switch final this week.

One of the things I really need to get started on is creating an elaborate set of benchmarks. With Maven that means fighting the framework, but with the complete freedom Gradle gives you, while still having all the facilities provided by plugins, I think it is going to be a joy to work on.

We’ll see what Maven 3 is going to bring, but I’m glad that there is a usable alternative available.


Multiverse and constructor optimizations

April 2, 2010

Another important optimization I have just implemented, and which is going to be added to the Multiverse 0.5 release, is that transactions that only create (or create and read) objects are very cheap (reads already are very cheap). A construction needs only one volatile read (for the version) and one volatile write (for the content to be stored). This means that creating transactional objects is going to be very fast and won't cause scalability problems.
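
To illustrate the idea, here is a minimal sketch (my own made-up code, not the actual Multiverse implementation): constructing a transactional reference costs one volatile read of the global clock and one volatile write that publishes the initial content together with its version.

import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch, not Multiverse code: a construct-only commit.
final class ConstructOnlyCommitSketch {

    static final AtomicLong GLOBAL_CLOCK = new AtomicLong();

    // immutable (value, version) pair, published in one go
    static final class Tranlocal {
        final Object value;
        final long version;

        Tranlocal(Object value, long version) {
            this.value = value;
            this.version = version;
        }
    }

    static final class Ref {
        // the volatile write of this field publishes value and version together
        private volatile Tranlocal active;

        Ref(Object initialValue) {
            long version = GLOBAL_CLOCK.get();              // the one volatile read
            active = new Tranlocal(initialValue, version);  // the one volatile write
        }

        Object get() {
            return active.value;
        }
    }
}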

I hope I’ll be able to inline the complete transaction for the constructor when it is very basic (just like the getter/setter optimization).


Multiverse STM: Inferring optimal transaction settings

March 15, 2010

In Multiverse there are a lot of parameters that can activate certain functionality on the transaction. Parameters like:

  1. readonly
  2. interruptible
  3. automaticReadTracking (needed for blocking operations)
  4. maxRetryCount
  5. allowWriteSkewProblem

Each additional feature adds overhead to the STM because it takes longer to process a transaction, and it also leads to increased memory usage. In Multiverse 0.4 I already added some learning functionality to the STM runtime that makes speculative choices and adapts when a choice didn't work out well. I use this for selecting a well performing transaction implementation: based on how many reads/writes a transaction encountered the previous time, the system is able to select a better fitting transaction the next time.

There is no reason not to infer the optimal settings the same way: just start with readonly and no automatic read tracking. If an update is done, the system knows that next time an update transaction is needed. The same goes for a retry: if a retry is done and automaticReadTracking is not enabled, the next transaction can run with this feature enabled. Perhaps similar techniques could be used to determine whether read tracking is useful when a transaction often runs into read conflicts.
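
As a sketch of the idea (hypothetical code, not Multiverse's actual API; the names LearningTransactionConfig, TransactionBlock and SpeculationFailed are made up): start with the cheapest settings and permanently upgrade them when the speculation fails.

final class LearningTransactionConfig {

    // speculative settings: start as cheap as possible
    private volatile boolean readonly = true;
    private volatile boolean readTracking = false;

    interface TransactionBlock<T> {
        T run(boolean readonly, boolean readTracking) throws SpeculationFailed;
    }

    // thrown when the cheap settings turned out to be insufficient
    static class SpeculationFailed extends Exception {
        final boolean updateAttempted;
        final boolean retryAttempted;

        SpeculationFailed(boolean updateAttempted, boolean retryAttempted) {
            this.updateAttempted = updateAttempted;
            this.retryAttempted = retryAttempted;
        }
    }

    <T> T execute(TransactionBlock<T> block) {
        while (true) {
            try {
                return block.run(readonly, readTracking);
            } catch (SpeculationFailed f) {
                // learn: the retried attempt, and all future transactions
                // configured by this object, run with the needed feature enabled
                if (f.updateAttempted) readonly = false;
                if (f.retryAttempted) readTracking = true;
            }
        }
    }
}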

David Dice already wrote about this in his famous TL2 paper, and I certainly think it is going to save developers a lot of time. And if someone wants to fiddle with the settings themselves, they can always override them (or perhaps even deactivate the learning system entirely). This is certainly something I want to integrate in one of the next Multiverse releases.


Developing software is like working with iron

January 7, 2010

When I talk to family about my job, it is always hard to explain what I'm doing. Yes... I write software... and no... I can't help you with your Windows problems. But some parts of software development can be compared to working with iron:

  1. When you start, you can create small parts and connect them to create bigger parts, just like with Lego or Meccano.
  2. When you have accomplished a lot, in some cases it feels good to do some fine tuning and dot the i's by getting very fine sandpaper and polishing it: just feeling proud.
  3. If there already is existing software and some parts aren't behaving the way you want, you get a crowbar and try to bend it into shape. In some cases a crowbar isn't sufficient and you need something like a sledgehammer or a blowtorch.
  4. In some cases the quality is so bad that even a sledgehammer won't help. The best thing to do then is to melt everything down and start from scratch.

Greg Young at Devnology

September 17, 2009

Yesterday I attended a Distributed Domain Driven Design presentation given by Greg Young and organized by Devnology at the Xebia office. In the past I have played with DDD concepts, but a lot of questions remained unanswered, even after asking Eric Evans when he was at the Xebia office a few years ago.

The information provided by Greg made a lot of sense, and it was nice to see that there is a better way to write domain driven applications than the current, often awkward, approach. Creating different models for different needs, and storing the event chain instead of storing the new state of an object, really made sense; it is certainly something I want to experiment with in the future. The well known “Patterns of Enterprise Application Architecture” by Martin Fowler and “Domain-Driven Design: Tackling Complexity in the Heart of Software” by Eric Evans made the first steps in enterprise domain driven design possible, but I think Greg could lead us to the next step.
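
To make the idea concrete for myself, a minimal sketch (my own made-up example, not Greg's code): the command produces an event, the event is what gets stored, and the current state can always be rebuilt by replaying the event chain.

import java.util.List;

// Hypothetical sketch of event sourcing: persist events, not the latest state.
class InventoryItem {

    static class ItemsCheckedIn {
        final int count;

        ItemsCheckedIn(int count) {
            this.count = count;
        }
    }

    private int stock;

    // command: validates and produces an event
    ItemsCheckedIn checkIn(int count) {
        if (count <= 0) throw new IllegalArgumentException();
        ItemsCheckedIn event = new ItemsCheckedIn(count);
        apply(event);
        return event; // the event, not the new stock value, is what gets stored
    }

    private void apply(ItemsCheckedIn event) {
        stock += event.count;
    }

    // rebuilding the current state means replaying the stored event chain
    static InventoryItem replay(List<ItemsCheckedIn> history) {
        InventoryItem item = new InventoryItem();
        for (ItemsCheckedIn event : history) {
            item.apply(event);
        }
        return item;
    }
}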

Apart from the technical content (very, very important to me), it was also nice to see an enthusiastic and professional speaker. I would certainly place him in the same league as Fowler and Evans, and I think we are going to hear more from Greg in the future (a book perhaps?).


Remapping a method with ASM

August 2, 2009

The last few weeks I have been working at home on Multiverse (summer vacation), a Java based STM implementation. I use ASM based instrumentation to transform POJOs (with some additional annotations) so that certain interfaces and method implementations are added.

With Multiverse 0.2 I did all the method generation by hand (manually written bytecode), which is a very time consuming and error-prone task. That is why I came up with a different approach for 0.3: create an abstract class that contains (most of) the implementation, and copy the code from that class to another (essentially the class has become a mixin). By copying the methods/fields instead of making the mixin a superclass, no limitations are imposed on the class hierarchy. Luckily ASM has some functionality for this, called the RemappingMethodAdapter. The problem is that this functionality was made for the visitor API of ASM and not for the Tree API, and I'm using the latter.

So at first I wrote a function that iterates over the bytecode and transforms it, but the problem is that this leads to more code to maintain and test. Rémi Forax of the ASM discussion group then suggested that the RemappingMethodAdapter can be used with the Tree API because the MethodNode has an accept function.

So to make a long story short: this is the code I'm using to remap a method from one class to another. I hope it helps other people struggling with the same problem:

import org.objectweb.asm.commons.Remapper;
import org.objectweb.asm.commons.RemappingMethodAdapter;
import org.objectweb.asm.tree.MethodNode;

public static MethodNode remap(MethodNode originalMethod, Remapper remapper) {
    String[] exceptions = getExceptions(originalMethod);

    // create an empty MethodNode with the remapped descriptor/signature/exceptions
    MethodNode mappedMethod = new MethodNode(
            originalMethod.access,
            originalMethod.name,
            remapper.mapMethodDesc(originalMethod.desc),
            remapper.mapSignature(originalMethod.signature, false),
            remapper.mapTypes(exceptions));

    // the RemappingMethodAdapter rewrites all type references while the
    // original MethodNode replays its instructions into the new node
    RemappingMethodAdapter remapVisitor = new RemappingMethodAdapter(
            mappedMethod.access,
            mappedMethod.desc,
            mappedMethod,
            remapper);
    originalMethod.accept(remapVisitor);
    return mappedMethod;
}

public static String[] getExceptions(MethodNode originalMethod) {
    if (originalMethod.exceptions == null) {
        return new String[]{};
    }

    String[] exceptions = new String[originalMethod.exceptions.size()];
    originalMethod.exceptions.toArray(exceptions);
    return exceptions;
}
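
As a hypothetical usage sketch (copyMixinMethods and the ClassNode arguments are mine, not part of the actual Multiverse code): combined with a SimpleRemapper, the remap function above can copy all methods of a mixin ClassNode into a target class while rewriting references to the mixin's internal name.

import org.objectweb.asm.commons.Remapper;
import org.objectweb.asm.commons.SimpleRemapper;
import org.objectweb.asm.tree.ClassNode;
import org.objectweb.asm.tree.MethodNode;

// copy all methods of the mixin into the target class, rewriting every
// reference to the mixin's internal name along the way
public static void copyMixinMethods(ClassNode mixin, ClassNode target) {
    Remapper remapper = new SimpleRemapper(mixin.name, target.name);
    for (Object o : mixin.methods) {
        MethodNode mixinMethod = (MethodNode) o;
        target.methods.add(remap(mixinMethod, remapper));
    }
}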

Buying some kickass hardware for Multiverse

April 23, 2009

This week I'm going to order my kick-ass system for Multiverse, an STM implementation I'm working on: two Intel Xeon E5520 CPUs (4 cores each + hyperthreading -> 16 virtual processors in total!), 12 GB of DDR3 1066 MHz RAM, a super fast 32 GB Intel SSD, and two 1.5 TB old school disk drives. It is going to run Linux and perhaps even Solaris (I want to play with DTrace).

As soon as I have this up and running (and have moved it somewhere so I don't have to look at it), I can run some serious performance and scalability tests. At the moment I only have a dual core laptop for this purpose, and that is not sufficient.

And if I'm lucky, I can upgrade the CPUs in the near future to 8 cores each (resulting in 32 virtual processors!).


Speaking at JFall about the JMM

November 3, 2008

On 12 November I'll be speaking at JFall about the Java Memory Model. This is my first presentation for a large crowd, but this year I have been speaking at least once a month at my employer, mostly about concurrency related subjects (low level concurrency, the Java Memory Model, java.util.concurrent, databases, architecture and concurrency, the fork/join framework, MVCC, STM). A few days later I'll be giving a similar presentation at the IB-groep. Eventually my goal is to speak at large international (Java) conferences and become a concurrency authority, which should help me get the cool assignments.

So if you want to know more about the JMM, go to my presentation and don’t hesitate to ask questions.

[edit]
If you want to see my presentation (Dutch), you can check it here:
http://www.bachelor-ict.nl/peter-veentjer


Letting the garbage collector do callbacks

July 14, 2008

Garbage collection in Java is great, but in some cases you want to be notified when an object is garbage collected. In my case I had the following scenario:

  1. I have a throw-away key (a composition of objects) and a value (statistics) in some map.
  2. When one of the components of the key is garbage collected, the key and value should be removed from the map.

The java.util.WeakHashMap was the first thing that came to mind, but there are some problems:

  1. It is not thread safe, and wrapping it in a synchronized block is not acceptable for scalability reasons.
  2. The entry (a WeakReference containing the real key) is removed when the real key has been garbage collected. In my case this isn't going to work: nobody but the WeakReference is holding a reference to my throw-away key, so it would be garbage collected (and the entry removed) immediately. The lifetime of the entry should be tied to another object, not to the key itself.

So I needed to go a step deeper. How can I listen to the garbage collection of an object? This can be done using a java.lang.ref.Reference (java.lang.ref.WeakReference for example) and a java.lang.ref.ReferenceQueue. When the object inside the WeakReference is garbage collected, the WeakReference itself is put on the ReferenceQueue. This makes it possible to do cleanup by taking items from this ReferenceQueue. This is the first big step towards listening to garbage collection.
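
A minimal sketch of just this mechanism (a made-up demo class; since System.gc() is only a hint, the demo is not deterministic):

import java.lang.ref.ReferenceQueue;
import java.lang.ref.WeakReference;

public class GcNotificationDemo {

    public static void main(String[] args) throws InterruptedException {
        ReferenceQueue<Object> queue = new ReferenceQueue<Object>();
        // nobody else references the referent, so it can be collected right away
        WeakReference<Object> ref = new WeakReference<Object>(new Object(), queue);

        System.gc(); // only a hint, so this demo is not deterministic

        // remove() blocks until a reference is enqueued (here: at most 1 second)
        System.out.println("collected: " + (queue.remove(1000) == ref));
    }
}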

The other two constraints are now easy to solve:

  1. Thread safety: use a ConcurrentHashMap implementation.
  2. Listening to an arbitrary object instead of the key: wrap that object in a WeakReference and store the key & map inside it. A thread that consumes these references from the ReferenceQueue is then able to remove the key (and value) from the map.

This weekend I created a proof of concept implementation; it can be found below.

import java.lang.ref.ReferenceQueue;
import java.lang.ref.WeakReference;
import java.util.Collection;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class GarbageCollectingConcurrentMap<K, V> {

    private final static ReferenceQueue<Object> referenceQueue = new ReferenceQueue<Object>();

    static {
        new CleanupThread().start();
    }

    private final ConcurrentMap<K, GarbageReference<K, V>> map =
            new ConcurrentHashMap<K, GarbageReference<K, V>>();

    public void clear() {
        map.clear();
    }

    public V get(K key) {
        GarbageReference<K, V> ref = map.get(key);
        return ref == null ? null : ref.value;
    }

    public Object getGarbageObject(K key) {
        GarbageReference<K, V> ref = map.get(key);
        return ref == null ? null : ref.get();
    }

    public Collection<K> keySet() {
        return map.keySet();
    }

    public void put(K key, V value, Object garbageObject) {
        if (key == null || value == null || garbageObject == null) throw new NullPointerException();
        if (key == garbageObject)
            throw new IllegalArgumentException("key can't be equal to garbageObject for gc to work");
        if (value == garbageObject)
            throw new IllegalArgumentException("value can't be equal to garbageObject for gc to work");

        GarbageReference<K, V> reference = new GarbageReference<K, V>(garbageObject, key, value, map);
        map.put(key, reference);
    }

    static class GarbageReference<K, V> extends WeakReference<Object> {
        final K key;
        final V value;
        final ConcurrentMap<K, GarbageReference<K, V>> map;

        GarbageReference(Object referent, K key, V value, ConcurrentMap<K, GarbageReference<K, V>> map) {
            // register with the queue so we are notified when the referent is collected
            super(referent, referenceQueue);
            this.key = key;
            this.value = value;
            this.map = map;
        }
    }

    static class CleanupThread extends Thread {
        CleanupThread() {
            setPriority(Thread.MAX_PRIORITY);
            setName("GarbageCollectingConcurrentMap-cleanupthread");
            setDaemon(true); // don't keep the JVM alive just for cleanup
        }

        public void run() {
            while (true) {
                try {
                    // blocks until a referent has been garbage collected
                    GarbageReference<?, ?> ref = (GarbageReference<?, ?>) referenceQueue.remove();
                    while (true) {
                        ref.map.remove(ref.key);
                        ref = (GarbageReference<?, ?>) referenceQueue.remove();
                    }
                } catch (InterruptedException e) {
                    //ignore
                }
            }
        }
    }
}
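
A small hypothetical usage example (the key string and the long[] value are made up): the entry lives exactly as long as the garbage object is strongly reachable.

GarbageCollectingConcurrentMap<String, long[]> statistics =
        new GarbageCollectingConcurrentMap<String, long[]>();

Object trackedObject = new Object();
statistics.put("com.example.Foo#counter", new long[1], trackedObject);

// as long as trackedObject is strongly reachable, the entry stays available
statistics.get("com.example.Foo#counter")[0]++;

// once trackedObject becomes unreachable and is collected, the cleanup
// thread removes the entry and get() will return null
trackedObject = null;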

I am now able to run my concurrency detector on large projects like JBoss without running out of memory.


Now officially a Xitaan

June 30, 2008

For the last 5 months I haven't done a lot of software development at work. I was given the chance to do more consultancy related tasks like performance/stability troubleshooting and audits. I could never have imagined it, but I have lost my passion for 'just' writing software. I still write software at home; at the moment I'm working on a tool that is able to detect which objects are accessed by multiple threads. One of the main challenges now is to replace AspectJ with a custom Java agent that gives me more flexibility and doesn't interfere with other AspectJ configurations. But I'm not satisfied working on projects anymore; I was an unhappy developer, and continuing down this road wouldn't lead to a bright future.

A few weeks ago I was asked to officially make the switch from the development department (XSD) to the consultancy department (XITA), and I agreed:

  1. XSD (Xebia Software Development): does projects.
  2. XITA (Xebia IT Architects): does more consultancy related tasks like audits, troubleshooting, advice, etc.
  3. XBR (Xebia Business Requirements): does a lot on the business side of IT, like changing the development process or giving business advice.

Within XITA I want to continue specializing in everything related to concurrency and distributed computing. I'm also thinking about speaking at JFall (a Dutch Java conference) about the Java Memory Model, and I'm planning to give an internal course on this subject as well.