4 June: my first public presentation in Haren (Groningen)

May 26, 2008

My employer Xebia has organized a knowledge-sharing session in Haren (Groningen, The Netherlands) on the 4th of June (starting at 17:00) to get some exposure in the north of The Netherlands. The evening starts with a presentation about the Agile process (one of the main drivers within Xebia). But to show that we are not just about process, but also about hardcore Java, I’m giving a presentation about the Java Memory Model (JMM). After the presentations we finish with a barbecue.

Over the last few months I have been doing internal presentations more frequently (twice a month we have a knowledge-sharing evening), and in most (probably all) cases I have been talking about concurrency control in some form. In the autumn I want to speak at JFall, a Dutch Java conference, and I expect the subject to be the JMM as well. Speaking at this knowledge-sharing evening is good training for speaking to a larger audience.

So if you want to hear something about the JMM or the Agile process, or want to do some networking, send me an email and I’ll try to get you on the guest list.

The concurrency detector

May 26, 2008

One of the things I love to do is check code for concurrency issues (for example in audits). If a system has been aware of concurrency from day one, checking for concurrency problems is less hard (although it can still be very complex) because it is much clearer which objects are and are not thread-safe. But in most cases the systems I check have already been built, and thread safety didn’t always have the highest priority. To make it even more problematic, if the system contains hundreds or even thousands of classes, figuring out which objects could have problems is almost impossible.

So I started thinking about a way to assist me in finding problems; to see which objects are used by multiple threads. That is how the ‘concurrency detector’ was born. At the moment it uses AOP (AspectJ) to add the ‘concurrency detection aspect’ to classes that I’m interested in. This aspect stores which thread has touched which field of which object, and based on this information it is quite easy to infer which objects are used by multiple threads. From that point on it is manual work to see if there really are problems, but it is a lot better than having no assistance at all. From time to time it writes all information to an HTML report, and the system can be controlled through JMX (turned on/off, reset, etc.).
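The bookkeeping behind such an aspect can be sketched in plain Java. This is a hypothetical helper (the names ConcurrencyDetector, recordTouch and sharedFields are mine, not the actual tool’s), which the woven field-access advice could call; it records which thread names touched which field of which object, and reports the fields seen by more than one thread:

```java
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class ConcurrencyDetector {
	// key: "identityHash@fieldName", value: names of the threads that touched it
	private static final Map<String, Set<String>> touches = new ConcurrentHashMap<>();

	// the woven advice would call this on every field read/write
	public static void recordTouch(Object target, String fieldName) {
		String key = System.identityHashCode(target) + "@" + fieldName;
		touches.computeIfAbsent(key, k -> ConcurrentHashMap.newKeySet())
		       .add(Thread.currentThread().getName());
	}

	// fields that were touched by more than one thread: candidates for inspection
	public static Set<String> sharedFields() {
		Set<String> shared = ConcurrentHashMap.newKeySet();
		touches.forEach((field, threads) -> {
			if (threads.size() > 1) {
				shared.add(field);
			}
		});
		return shared;
	}
}
```

The real aspect of course also has to feed the HTML reporting and the JMX controls; this only shows the core inference step.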

In the near future I want to add the following stuff:

  1. more informative HTML reports: maximum number of writes/reads by a thread, maximum total number of reads/writes, etc.
  2. sortable tables in the HTML reports
  3. problem ‘hotspots’: e.g. if an object has multiple reads and writes by multiple threads, it is more likely to have concurrency problems than one with a single write and multiple reads. This helps getting ‘the biggest bang for the bug’
  4. dynamic AOP (dynamic pointcuts): to add advice to classes on the fly. At the moment I’m using load-time weaving, but I’m not very happy with it. It could interfere with existing Java agents, it only works on Java 5 (and higher), and I need to create a new jar every time I want to look at different packages/classes. It is not quite clear to me if or how AspectJ is able to deal with dynamic AOP.

If you are interested in this tool, please keep an eye on my blog. I still have to decide what I’m going to do with the product (probably make it Open Source). Without in-depth concurrency knowledge, the tool is useless anyway.

HashMap is not a thread-safe structure

May 26, 2008

In the last few months I have seen too much code where a HashMap (without any extra synchronization) is used instead of a thread-safe alternative like ConcurrentHashMap or the less concurrent but still thread-safe Hashtable. This is an example of a HashMap used in a home-grown cache (used in a multi-threaded environment):

interface ValueProvider<K, V> {
	V retrieve(K key);
}

public class SomeCache<K, V> {

	private Map<K, V> map = new HashMap<K, V>();
	private ValueProvider<K, V> valueProvider;

	public SomeCache(ValueProvider<K, V> valueProvider) {
		this.valueProvider = valueProvider;
	}

	public V getValue(K key) {
		V value = map.get(key);
		if (value == null) {
			value = valueProvider.retrieve(key);
			map.put(key, value);
		}
		return value;
	}
}
There is much wrong with this innocent-looking piece of code. There is no happens-before relation between the put of the value into the map and the get of that value. This means that a thread that receives the value from the cache is not guaranteed to see all of its fields if the value has publication problems (most non-thread-safe structures have publication problems). The same goes for the internals (the buckets, for example) of the HashMap itself: updates to the internals of the HashMap made during a put don’t need to be visible to a thread that does the get.
So it could be that the state of the cache in main memory is not in an allowed state (some of the changes may be stuck in a CPU cache), and the cache could start behaving erroneously; if you are lucky, it starts throwing exceptions. And last, but certainly not least, there is also a classic race problem: if two threads do an interleaved map.put, the internals of the HashMap can end up in an inconsistent state. In most cases an application reboot/redeploy would be the only way to fix this problem.

There are other problems with the caching behavior of this code as well. The items don’t have a timeout, so once a value gets in the cache, it stays in the cache. In practice this could lead to a webpage that keeps displaying some value, even though the value has been updated in the main repository. An application reboot/redeploy is again the only way to solve this problem. Using a Commercial Off-The-Shelf (COTS) cache would be a much safer solution, even though a new library needs to be added.

It is important to realize that a HashMap can be used perfectly well in a multi-threaded environment if extra synchronization is added. But without that extra synchronization, it is a time bomb waiting to go off.
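For illustration, here is a minimal thread-safe sketch of the same cache (assuming the same hypothetical ValueProvider interface; SafeCache is my name for it). ConcurrentHashMap gives safe publication of values and their internals, and putIfAbsent avoids the lost-update race, though the provider may still be invoked more than once for the same key:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

interface ValueProvider<K, V> {
	V retrieve(K key);
}

public class SafeCache<K, V> {

	private final ConcurrentMap<K, V> map = new ConcurrentHashMap<K, V>();
	private final ValueProvider<K, V> valueProvider;

	public SafeCache(ValueProvider<K, V> valueProvider) {
		this.valueProvider = valueProvider;
	}

	public V getValue(K key) {
		V value = map.get(key);
		if (value == null) {
			V computed = valueProvider.retrieve(key);
			// putIfAbsent makes the winner of the race visible to everyone
			V previous = map.putIfAbsent(key, computed);
			value = (previous != null) ? previous : computed;
		}
		return value;
	}
}
```

A COTS cache would still be preferable for eviction and timeouts; this only fixes the thread-safety part.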

JMM: Thank God or the Devil for Strong Cache Coherence?

May 17, 2008

The Java Memory Model is something most developers don’t understand, even though this knowledge is mandatory for creating concurrent code that works without problems on any Java platform. A memory model describes under what conditions a write done by one thread will be visible to another thread that does a read. Most developers think that a write to a normal variable, without any synchronization actions, is going to be visible to any reading thread (this behavior is called sequential consistency). They don’t realize that Java doesn’t provide this behavior out of the box, because it would prevent a lot of optimizations from being used (out-of-order execution of instructions, use of caches/registers, etc.).
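A minimal sketch of the visibility guarantee at stake (the class and method names are mine): with volatile, the write to the flag happens-before every subsequent read of it, so the spinning thread is guaranteed to terminate. Remove volatile and the JMM gives no such guarantee; the loop may legally spin forever, even though on strongly coherent hardware it usually stops anyway.

```java
public class StopFlag {

	// 'volatile' creates the happens-before edge between the write of 'stop'
	// and any later read of it; without it, the JIT may even hoist the read
	// out of the loop entirely.
	private volatile boolean stop = false;

	public Thread startSpinner() {
		Thread t = new Thread(() -> {
			while (!stop) {
				// busy-wait until another thread sets 'stop'
			}
		});
		t.start();
		return t;
	}

	public void requestStop() {
		stop = true;
	}
}
```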

The ‘nice’ thing is that code with JMM problems is, in a lot of cases, not going to behave badly on a lot of modern hardware. This is because most hardware provides very strong cache coherence (far stronger than the JMM requires), so a value written in one cache is probably going to be visible in the other caches, even though synchronization actions are missing. The fact that the problems are ‘not that bad’ or ‘have not occurred’ is an argument I hear quite often.

The question remains, of course, whether relying on the hardware to ‘fix’ a faulty application is a good approach. If the application is going to run on hardware that provides much weaker cache coherence, problems that are very hard to track down could occur. This means that the Java application is not platform independent anymore. Another problem is that not just the cache can cause memory model problems; the compiler can cause them as well. It is already possible today that a compiler optimization breaks an application, no matter how strong the cache coherence of the underlying hardware is. And no guarantees can be made that a future JVM is not going to optimize much more aggressively, so even switching to a different JVM could cause problems.

Maybe it would have been better if the hardware didn’t provide such strong cache coherence, so that writing correct concurrent code would have had a much higher priority.