The Java synchronized keyword

The Java synchronized keyword is an essential tool in concurrent programming in Java. Its overall purpose is to allow only one thread at a time into a particular section of code, thus allowing us to protect, for example, variables or data structures from being corrupted by simultaneous modifications from different threads. This article looks at how to use synchronized in Java to produce correctly functioning multithreaded programs. Other articles in this section look at other Java 5 concurrency facilities, which have in fact superseded synchronized for certain tasks.

Using a synchronized block

At its simplest level, a block of code that is marked as synchronized in Java tells the JVM: "only let one thread in here at a time".

Imagine, for example, that we have a counter that needs to be incremented at random points in time by different threads. Ordinarily, there would be a risk that two threads could try to update the counter at the same time, and in so doing corrupt its value (or at least miss an increment, because one thread reads the present value unaware that another thread is about to write a new, incremented value). But by wrapping the update code in a synchronized block, we avoid this risk:

public class Counter {
  private int count = 0;

  public void increment() {
    synchronized (this) {
      count++;
    }
  }

  public int getCount() {
    synchronized (this) {
      return count;
    }
  }
}

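To see the Counter class in action, here is a small demonstration of our own (not from the article) in which two threads hammer a shared counter; the Counter class is repeated as a nested class so the demo compiles standalone. Because increment() is synchronized, no updates are lost:

```java
public class CounterDemo {
  // Same Counter as above, repeated here so the demo compiles on its own.
  static class Counter {
    private int count = 0;
    public void increment() { synchronized (this) { count++; } }
    public int getCount() { synchronized (this) { return count; } }
  }

  public static void main(String[] args) throws InterruptedException {
    Counter counter = new Counter();
    // Each thread performs 100,000 increments on the same counter.
    Runnable task = () -> {
      for (int i = 0; i < 100_000; i++) {
        counter.increment();
      }
    };
    Thread t1 = new Thread(task);
    Thread t2 = new Thread(task);
    t1.start();
    t2.start();
    t1.join();  // wait for both threads to finish before reading
    t2.join();
    System.out.println(counter.getCount());  // prints 200000
  }
}
```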
That's the simple overview of the most common use of synchronized. However, it's worth understanding a little about what actually happens "under the hood" because there's actually a bit more to synchronized than that.

Every Java object created, including every Class loaded, has an associated lock or monitor. Putting code inside a synchronized block makes the compiler append instructions to acquire the lock on the specified object before executing the code, and release it afterwards (whether the code completes normally or abnormally). Between acquiring the lock and releasing it, a thread is said to "own" the lock. At the point of Thread A wanting to acquire the lock, if Thread B already owns it, then Thread A must wait for Thread B to release it.
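Because each object has its own monitor, code synchronized on different objects does not contend. As an illustration of our own (the class and field names are hypothetical, not from the article), the following sketch uses two dedicated lock objects so that updates to two unrelated counters never block one another:

```java
public class TwoLocks {
  // Dedicated lock objects: each has its own independent monitor.
  private final Object fooLock = new Object();
  private final Object barLock = new Object();
  private int foo = 0;
  private int bar = 0;

  public void incrementFoo() {
    synchronized (fooLock) {  // acquires fooLock's monitor only
      foo++;
    }                         // monitor released here, even on an exception
  }

  public void incrementBar() {
    synchronized (barLock) {  // a thread inside incrementFoo() does not block this
      bar++;
    }
  }

  public int getFoo() { synchronized (fooLock) { return foo; } }
  public int getBar() { synchronized (barLock) { return bar; } }
}
```

Had both methods synchronized on this instead, a thread updating foo would needlessly block a thread updating bar.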

Thus in the example above, simultaneous calls to increment() and getCount() will always behave as expected; a read could not "step in" while another thread was in the middle of incrementing.
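To appreciate what synchronized is buying us, it is instructive to see what happens without it. The following sketch of our own (not from the article) removes the synchronization; the unprotected count++ is a read-modify-write sequence, so increments are typically lost when both threads read the same value before either writes back:

```java
public class BrokenCounterDemo {
  static int count = 0;  // no synchronization: updates may be lost

  public static void main(String[] args) throws InterruptedException {
    Runnable task = () -> {
      for (int i = 0; i < 100_000; i++) {
        count++;  // read-modify-write: not atomic
      }
    };
    Thread t1 = new Thread(task);
    Thread t2 = new Thread(task);
    t1.start();
    t2.start();
    t1.join();
    t2.join();
    // Usually prints a value below 200000; the exact number varies from
    // run to run, which is the hallmark of a race condition.
    System.out.println(count);
  }
}
```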

Synchronization and data visibility

Synchronizing also performs another important, and often overlooked, function. Unless told otherwise (via a synchronized block or the Java volatile keyword), threads may work on locally cached copies of variables such as count, updating the "main" copy only when it suits them. For the reasons discussed in our overview of processor architecture, they may also re-order reads and writes, meaning that a variable may not be updated when otherwise expected. However, on entry to and exit from blocks synchronized on a particular object, the entering/exiting thread also effectively synchronizes its copies of all variables with main memory1.
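As an illustration of the visibility guarantee (our own sketch, not from the article), consider a "stop flag" that one thread sets and a worker thread polls. Because both accesses happen inside blocks synchronized on the same object, the worker is guaranteed to eventually see the update; without the synchronization (or volatile), it could loop forever on a stale cached value:

```java
public class StopFlag {
  private boolean stopped = false;

  public void requestStop() {
    synchronized (this) {
      stopped = true;  // value is flushed to main memory on exiting the block
    }
  }

  public boolean isStopped() {
    synchronized (this) {
      return stopped;  // value is re-read from main memory on entering the block
    }
  }
}
```

A worker thread checking isStopped() at the top of its loop will therefore reliably observe a requestStop() made by another thread.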

The next page deals with some of the gorier details of this synchronization of variables with main memory. Or you may prefer to skip it and go on to the following page, which deals with methods declared as synchronized.

1. We'll discuss in a moment what that actually means, but fundamentally, locally cached copies of variables must be 'flushed' to main memory on exit, and accesses to variables inside the synchronized block cannot be re-ordered to occur outside the synchronized block.