By dave | June 30, 2012

Copy on write lists provide a very quick win in terms of removing synchronization. I believe that a significant use case is providing a thread-safe list for Java's listener (AKA observer) pattern. Copy on write lists assume that the list is updated infrequently and is mainly read; if that is not the case, the copying overhead may be worse than synchronizing.

java.util.concurrent.CopyOnWriteArrayList implements the List interface and, in fact, is also a random access container (it implements RandomAccess). Once you've established that copy on write is the correct way to go, you need do no more than switch to this class. That's it, really simple.

Copy on write provides thread-safe access by copying the list every time it is changed. This means that when you call get or take an iterator, you see a snapshot of the list as it was at that moment; a subsequent change made while you are iterating will not be visible to the iterator.

import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class Concurrent
{
    private List<ActionListener> listeners;
    public Concurrent()
    {
        listeners = new CopyOnWriteArrayList<ActionListener>();
    }

    public void addActionListener(ActionListener listener)
    {
        // note, no synchronized.
        listeners.add(listener);
    }

    public void removeActionListener(ActionListener listener)
    {
        // note, no synchronized.
        listeners.remove(listener);
    }

    public void fireActionEvent()
    {
        // now we fire the listener event, note again there is no need
        // to synchronize first.
        for (ActionListener listener : listeners)
        {
            listener.actionPerformed(new ActionEvent(this, 1, "HELLO"));
        }
    }
}
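To make the snapshot behaviour concrete, here is a minimal sketch (class name SnapshotDemo is mine, not part of the listener example) showing that an iterator taken before a modification never sees that modification:

```java
import java.util.Iterator;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class SnapshotDemo
{
    public static void main(String[] args)
    {
        List<String> list = new CopyOnWriteArrayList<String>();
        list.add("first");
        list.add("second");

        // The iterator captures a snapshot of the list as it is here.
        Iterator<String> snapshot = list.iterator();

        // A later modification replaces the backing array, so it is
        // invisible to the snapshot iterator.
        list.add("third");

        int seen = 0;
        while (snapshot.hasNext())
        {
            snapshot.next();
            seen++;
        }
        System.out.println("iterator saw " + seen + " elements, list now has " + list.size());
        // prints: iterator saw 2 elements, list now has 3
    }
}
```

This is also why the iterator's remove method throws UnsupportedOperationException: it operates on a stale copy.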

Using the ConcurrentMap to reduce synchronization

java.util.concurrent.ConcurrentHashMap is an implementation of the java.util.concurrent.ConcurrentMap interface backed by a hash table. This implementation can reduce thread contention by allowing more than one thread to read and write the map at the same time. Read operations generally do not lock at all; a reader only acquires a lock when instability is suspected, i.e. when get is about to return null for an entry that may be mid-update. Write operations lock only the segment of the map that contains the item being updated, so even when locking is required, it affects just one segment of the map.

Notice the word segment in the previous paragraph. This is a very important difference from a normal synchronized map and gives ConcurrentHashMap its performance edge. Instead of one lock that guards the whole map, the table is split into segments, each with its own lock. This means that in the event that locking is required, only one segment of the map gets locked; threads working in other segments carry on unblocked.

import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.ConcurrentHashMap;

public class Concurrent
{
    private ConcurrentMap<Integer, String> namesById;

    public Concurrent()
    {
        // use the constructor to adjust the capacity, by default
        // the load factor and concurrency level are usually OK.
        final int CAPACITY = 100;
        namesById = new ConcurrentHashMap<Integer, String>(CAPACITY);
    }

    public void changeNameForId(int id, String name)
    {
        // notice this is not synchronized; put returns the previous
        // value mapped to this id, or null if there was none.
        String lastValue = namesById.put(id, name);
    }

    public String findNameForUser(int id)
    {
        // notice this is not synchronized
        return namesById.get(id);
    }
}

A note about segments and hashmap sizing

Just like a normal HashMap, the default size of a ConcurrentHashMap is rather small: the default hash table has only 16 locations. Should you expect many items to be stored in the map, a good rule of thumb is to set the initial capacity to about twice the number of expected items. If you do not, the map will repeatedly grow, using the load factor to determine when it is full, and each resize costs time.

ConcurrentHashMap works by breaking the hash table into segments that have separate locks, so when a lock is required, only one segment of the table is locked. You can tweak the number of locking segments by setting the concurrency level at construction; this is defined as the estimated number of threads that will update the map concurrently. I believe that the default is correctly sized for many applications.
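If you do want to tune all three parameters, ConcurrentHashMap has a three-argument constructor. A minimal sketch, with hypothetical figures for illustration (1,000 expected entries, roughly 32 updating threads):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class TuningDemo
{
    public static void main(String[] args)
    {
        // Illustrative values, not recommendations for any real workload.
        final int capacity = 2000;        // about twice the expected item count
        final float loadFactor = 0.75f;   // the documented default
        final int concurrencyLevel = 32;  // estimated number of updating threads

        ConcurrentMap<Integer, String> map =
            new ConcurrentHashMap<Integer, String>(capacity, loadFactor, concurrencyLevel);

        map.put(1, "one");
        System.out.println(map.get(1)); // prints: one
    }
}
```

Unless profiling shows contention, the single-argument or no-argument constructor is usually all you need.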

In addition, the ConcurrentHashMap implementation tries to defend against poorly written hash codes that cluster in the same region of the table, but it is still better to ensure a good hashCode implementation when you have control of the class being used as a key.
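As a sketch of what a well-behaved key looks like, here is a hypothetical UserId class (the name and fields are mine) following the usual 17/31 mixing recipe, so that equal keys hash equally and distinct keys spread across the table:

```java
public final class UserId
{
    private final int id;
    private final String realm;

    public UserId(int id, String realm)
    {
        this.id = id;
        this.realm = realm;
    }

    @Override
    public boolean equals(Object other)
    {
        if (this == other) return true;
        if (!(other instanceof UserId)) return false;
        UserId that = (UserId) other;
        return id == that.id && realm.equals(that.realm);
    }

    @Override
    public int hashCode()
    {
        // Mix both fields so keys spread evenly across the hash table.
        int result = 17;
        result = 31 * result + id;
        result = 31 * result + realm.hashCode();
        return result;
    }

    public static void main(String[] args)
    {
        UserId a = new UserId(1, "example");
        UserId b = new UserId(1, "example");
        // Equal objects must report equal hash codes.
        System.out.println(a.equals(b) && a.hashCode() == b.hashCode()); // prints: true
    }
}
```

Making the class final and its fields immutable also keeps the hash code stable while the key sits in the map.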


