Multiverse: Non Blocking Transactions

December 21, 2009

If an application uses traditional locks, it is quite hard to use a different model than a thread per operation, because once a thread enters a lock (a lock you don’t have any control over), the thread is ‘lost’. With IO there is a special extension (non-blocking IO) that makes it possible to bypass this behaviour, so you can have one thread serving multiple sockets. But if you use traditional concurrency control, you are on your own.

With Multiverse (a Java-based STM implementation) it is not only possible to block on multiple resources (for more information, see the retry/orelse primitives in the STM literature), but in the 0.4 release it will also be possible to use a single thread to run multiple blocking transactions. For some time the idea was in the back of my mind, and I had the gut feeling that it should be possible with STM. But I never gave it any serious thought, because there is so much else to do. Last Friday I had a day off, and apparently it was a good day, because in 30 minutes I had something up and running.

In Multiverse, a latch is used to register a transaction as ‘being interested’ in changes to transactional objects. A latch is a blocking structure that can be opened; once it is opened it can’t be closed again, and other threads can wait (block) on the opening of the latch. If a retry happens (so a RetryError is thrown), the transaction manager catches it, creates a latch and registers it at the objects the transaction has read. Once a change happens on one of those read objects, the latch is opened and the transaction can continue (technically, it is restarted). And because the transaction/transaction manager controls which latch implementation is used, there is a lot of design freedom.
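To make the mechanism concrete, here is a minimal sketch of what such a traditional blocking latch could look like. The Latch interface shown here is reconstructed from the methods used later in this post; the actual Multiverse interface may differ.

```java
// Sketch of a blocking latch based on an intrinsic lock.
// The Latch interface is reconstructed from this post; the real
// Multiverse interface may differ.
interface Latch {
    void open();
    boolean isOpen();
    void awaitUninterruptible();
}

class BlockingLatch implements Latch {

    private volatile boolean open = false;

    @Override
    public synchronized void open() {
        if (!open) {
            open = true;
            notifyAll(); // wake up all threads blocked in awaitUninterruptible
        }
    }

    @Override
    public boolean isOpen() {
        return open;
    }

    @Override
    public synchronized void awaitUninterruptible() {
        boolean interrupted = false;
        while (!open) {
            try {
                wait(); // loop guards against spurious wakeups
            } catch (InterruptedException e) {
                interrupted = true; // keep waiting, restore the flag later
            }
        }
        if (interrupted) {
            Thread.currentThread().interrupt();
        }
    }
}
```

Once opened, waiters return immediately and the latch stays open forever; that is exactly the one-shot behaviour the transaction manager relies on.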

How do non blocking transactions work?
Instead of relying on a traditional latch (one based on an intrinsic lock or a java.util.concurrent.locks.Condition), I replaced it with a different implementation:


public class NonBlockingLatch implements Latch {

    private final AtomicBoolean open = new AtomicBoolean();
    // the task queue of the executor that will re-run the transaction
    private final BlockingQueue<Runnable> tasks;
    private final Runnable task;

    public NonBlockingLatch(BlockingQueue<Runnable> tasks, Runnable task) {
        this.tasks = tasks;
        this.task = task;
    }

    @Override
    public void open() {
        if (open.compareAndSet(false, true)) {
            // instead of waking up a blocked thread, hand the restart
            // task to the executor
            tasks.offer(task);
        }
    }

    @Override
    public boolean isOpen() {
        return open.get();
    }

    @Override
    public void awaitUninterruptible() {
        // nobody ever blocks on a non-blocking latch
        throw new UnsupportedOperationException();
    }

    .....
}

This latch is registered at all the transactional objects in the transaction, and it performs a callback once it is opened. In this case the callback puts the task on the task queue of an executor, so that the transaction can be restarted. An initial version of this functionality will be added in the 0.4 release of Multiverse.
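The consuming side could look roughly like this: a few worker threads drain the same queue the latches offer into and run the restart tasks. The class and method names here are my own, not Multiverse API.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Sketch of the executor side of non-blocking transactions: worker
// threads take restart tasks from the queue that opened latches feed.
// All names are illustrative, not actual Multiverse classes.
public class RestartExecutor {

    private final BlockingQueue<Runnable> tasks = new LinkedBlockingQueue<>();

    // the queue that is handed to every NonBlockingLatch
    public BlockingQueue<Runnable> getTaskQueue() {
        return tasks;
    }

    public void start(int threadCount) {
        for (int k = 0; k < threadCount; k++) {
            Thread worker = new Thread(() -> {
                try {
                    while (true) {
                        // blocks until some latch opens and offers a task
                        tasks.take().run();
                    }
                } catch (InterruptedException shutdown) {
                    // interrupted: stop the worker
                }
            });
            worker.setDaemon(true);
            worker.start();
        }
    }
}
```

With a setup like this, the number of worker threads is fixed while the number of pending (blocked) transactions is only bounded by memory: a blocked transaction costs a latch and a task object instead of a parked thread.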

Why should you want it?
The cool thing about non-blocking transactions is that you could, for example, create 100,000 transactions while using only a few hundred threads. You could build a trading system, for example, where transactions wait for some stock price to go under or over a threshold and then buy/sell the stock using a transaction on one or more transactional resources.
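In pseudocode such a trading transaction could look roughly like this; the retry() call and the transactional types are illustrative, not the actual Multiverse 0.4 API. The point is that retry() aborts the transaction and blocks it (via a latch) until one of the transactional objects it has read changes, at which point it is re-run:

```
// Pseudocode sketch (not actual Multiverse API)
atomic {
    if (stock.getPrice() <= threshold) {
        retry(); // block until a read transactional object changes
    }
    portfolio.sell(stock, amount);
    account.deposit(stock.getPrice() * amount);
}
```

With non-blocking transactions, that retry() consumes no thread while waiting: the transaction only occupies a worker thread again once the price actually changes.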

But the mechanism isn’t completely perfect: there is a lot of room for fairness/starvation prevention, and at the moment the transaction is restarted on every change. With this implementation it is also possible that the same transaction is run in parallel, if multiple transactional resources used in the same transaction are notified. It seems I don’t need to be bored in the coming weeks of my Christmas leave.

Plans for Multiverse 0.4

December 14, 2009

The 0.3 release of the software transactional memory implementation Multiverse is almost complete. I have already started on the 0.4 release, and it will get the following transaction execution models:

  1. readonly transaction with read tracking; useful for blocking operations and also to prevent unwanted load failures once a read has been executed
  2. readonly transaction without read tracking (already available in the 0.3 release)
  3. update transaction without read tracking (reduces stress on the transaction), but it can be subject to write skews and can’t be used for blocking operations
  4. update transaction with read tracking (already available in the 0.3 release). This is useful for blocking operations but also for detecting write-skew problems. In the 0.4 release, write-skew detection will be configurable

The 0.4 release is also going to get a mechanism that selects the optimal transaction implementation based on the number of reads/writes done in a transaction.

  1. tiny: optimized for a single atomic object (completely optimized to reduce object creation as much as possible)
  2. bound length: optimized for a maximum number of atomic objects, e.g. 10 (based on an array)
  3. unbound length: optimized for bigger transactions (based on a map)

I have created some prototypes that show a big performance improvement for tiny transactions (2 to 3 times faster than growing transactions). Based on the transaction family name (if annotations are used, the family name will be inferred automatically), the system selects the optimal implementation. The system starts with tiny transactions, and if an implementation wasn’t ‘big’ enough, the transaction is aborted and a ‘larger’ implementation (or different settings) is used for the following transactions. This is completely invisible to the programmer, apart from some transaction failures in the beginning. But since transactions are retried automatically, this shouldn’t be a big problem.
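The speculative growing described above can be sketched as follows. All names here (SpeculationFailure, the per-family size map, the capacities) are my own illustration, not actual Multiverse classes:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of speculative transaction sizing: each transaction family starts
// with the smallest read-set implementation; when an attempt overflows, it
// is aborted and the family is upgraded for the next attempt.
// All names and capacities are illustrative, not actual Multiverse code.
public class SpeculativeSizing {

    enum Size { TINY, FIXED_LENGTH, UNBOUND }

    static class SpeculationFailure extends RuntimeException {}

    private final Map<String, Size> familySizes = new ConcurrentHashMap<>();

    // Run a 'transaction' that touches `reads` atomic objects, growing the
    // family's implementation until the attempt fits.
    public Size run(String familyName, int reads) {
        while (true) {
            Size size = familySizes.getOrDefault(familyName, Size.TINY);
            try {
                attempt(size, reads);
                return size; // the attempt fitted; this size sticks
            } catch (SpeculationFailure overflow) {
                familySizes.put(familyName, upgrade(size)); // retry bigger
            }
        }
    }

    private void attempt(Size size, int reads) {
        int capacity = switch (size) {
            case TINY -> 1;                      // single atomic object
            case FIXED_LENGTH -> 10;             // array-based, e.g. 10
            case UNBOUND -> Integer.MAX_VALUE;   // map-based
        };
        if (reads > capacity) {
            throw new SpeculationFailure();
        }
    }

    private Size upgrade(Size size) {
        return size == Size.TINY ? Size.FIXED_LENGTH : Size.UNBOUND;
    }
}
```

Because the learned size is keyed on the family name, only the first few executions of a family pay the abort/retry cost; later transactions of the same family start at the right size immediately.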

The following features are also planned (or already partially implemented) for the 0.4 release:

  1. TransactionalTreeMap
  2. TransactionalTreeSet
  3. Transactional primitives (IntRef, BooleanRef etc)
  4. Support for subclassing atomic objects.
  5. Preventing unwanted object creation in transactions (Tranlocal objects are only cloned for local usage when they are written, not when read)
  6. Support for 2 phase commit to make distributed transactions possible. This was requested by Jonas Boner of the Akka project that used Multiverse as the STM implementation.

Am I too stupid for @Autowired?

December 2, 2009

When I started with Spring, I finally had the feeling that I could write enterprise applications in the same powerful way as I could write normal systems. And I have written quite complex Java stuff, like expert systems, Prolog compilers and various other compilers and interpreters, and I’m currently working on Multiverse, a software transactional memory implementation. I have also written quite a lot of traditional enterprisy Spring applications (frontend, backend/batch processing). So I consider myself at least as smart as most developers.

The first application context files felt a little bit strange: finding the balance between the objects that need to be created inside the application context and the objects created in the Java code itself. But it didn’t take me long to find that balance and realise how powerful the application context is:

  1. it acts like an executable piece of software documentation where I can see how a system works, just by looking at a few XML files. I don’t need to look in the source to see how stuff is wired.
    I compare it with looking at Lego instructions: by looking at the instructions I can see what is going to be built (not that I ever followed the design). To understand what is built, I don’t need to look at the individual parts.
  2. separation of interfaces and implementations, so testing and using a lot of cool OO design stuff is a dream
  3. having the freedom to inject different instances of the same interface in different locations. This makes it possible to do cool stuff like adding proxies, logging, etc. I wasn’t forced to jump through hoops any more because of all kinds of container-specific constraints
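To illustrate the kind of explicit wiring I mean, here is a minimal application-context fragment; all bean names and classes are made up for illustration:

```xml
<!-- Hypothetical wiring: all names and classes are illustrative. -->
<beans>
    <!-- the implementation behind the service interface is explicit -->
    <bean id="orderService" class="com.example.DefaultOrderService">
        <property name="orderRepository" ref="auditingOrderRepository"/>
    </bean>

    <!-- a decorator/proxy can be slotted in without touching Java code -->
    <bean id="auditingOrderRepository" class="com.example.AuditingOrderRepository">
        <property name="target" ref="jdbcOrderRepository"/>
    </bean>

    <bean id="jdbcOrderRepository" class="com.example.JdbcOrderRepository"/>
</beans>
```

The whole dependency graph, including the decorator in the middle, is visible in one place without opening a single Java file.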

With the old versions of Spring (1.2 series) I had the feeling of total control and complete understanding.

The application context made me aware that I needed to wear 2 hats:

  1. ‘Object’ designer; actually writing the Java code and documentation
  2. ‘Object’ integrator; where I assemble complete systems based on the objects

And I loved it. I have more than 1,000 posts on the Spring forum, just because I believe in this approach.

But if I look at a lot of modern Spring applications, filled with @Component and @Autowired annotations, it feels like I have lost it all. It takes me a lot longer to realise how something works, even though I have a great IDE (the newest beta of IntelliJ) with perfect Spring integration that makes finding dependencies a lot easier. Still, I keep jumping from file to file to understand the big picture. A lot of developers I meet/work with think the auto-wiring functionality is great, because it saves them a lot of time and prevents them from programming in XML (XML was a bad choice, but that is another discussion), but somehow my brain just doesn’t get it.

So my big questions are:

  1. am I too stupid?
  2. if I try it for longer, am I going to see the light?
  3. do I expect too much? Because fewer characters need to be typed, is it inevitable that some redundant/explicit information is lost?
  4. is this some hype, and will people eventually realise that it was a nice experiment but doesn’t provide the value it promises (just like checked exceptions or EJB 2)? And in 10 years we can all laugh about it.