What synchronized means to the JVM

There are many articles on what synchronized accomplishes in Java, but I had a hard time finding out why the JVM needs synchronized in the first place.

At its core, Java was born with the intent, besides “write once, run anywhere”, to be concurrent.
Running a concurrent program, where you have more than one thread running, is where unexpected conditions occur, like a “race condition”.

Let’s take a look at an example to illustrate a possible scenario, and keep in mind the image below, so it’s easier to follow the illustration:


Let’s assume we have an instance of a Counter object:
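The original listing is missing here; a minimal Counter, with field and method names that are my assumptions, might look like this. The key point is that count++ is not atomic: it is a read, an add, and a write.

```java
// A minimal counter. count++ compiles to a read-modify-write
// sequence, which is what makes the interleaving below possible.
class Counter {
    private int count = 0;

    public void increment() {
        count++; // read, add 1, write back — three separate steps
    }

    public int getCount() {
        return count;
    }
}
```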



Now, when we have the “counter” instance and thread1 increments the counter from 0 to 1, the JVM might decide not to write the change to main memory, but instead to hold the value in one of the CPU’s L caches.
There might be multiple reasons for that; one of them is that the JVM figures you will do lots of increments in the following operations, and for performance optimization reasons keeping the value in an L cache is the right move.

Let’s say that when thread1 incremented the counter to 1, the L2 cache is holding this update.

Further, let’s assume we have thread2, which happens to run on core2, and we want to increment the counter from 1 to 2 with thread2. Now thread2 doesn’t know what the current value is; it looks into its L cache, but the counter object isn’t there. Why not? Well, the L2 cache is per core, which means each core has its own L2 cache.

Since thread2 can’t find the object in its cache, it goes to main memory. And here is where it all starts.
Remember, when thread1 incremented the counter from 0 to 1, it stored the value in the L2 cache; it didn’t flush the update to main memory!
That means when thread2 goes to main memory, it finds the object in its last state there, which is counter = 0.

From here, multiple scenarios are possible, but I hope you see where this is going. To keep it simple, let’s say thread2 then increments the counter from 0 to 1 and writes it straight to main memory. Thereafter, the JVM decides to take the update from the L2 cache, the one made by thread1, and write it to main memory too.

Exactly at this point we effectively have a race condition, a lost update. Whatever the counter value was in main memory at that moment, it is now overwritten by the value from the L2 cache. Even though we incremented twice, once with thread1 and once with thread2, the counter is equal to 1.
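You can often reproduce a lost update like this on your own machine. The sketch below (class and field names are mine, not from the original) runs two threads that each increment an unsynchronized counter 100,000 times; because count++ is not atomic, the final total is frequently less than 200,000:

```java
// Two threads hammer the same unsynchronized counter.
// Increments from one thread can overwrite increments from the
// other, so the final total is often less than 200000.
class LostUpdateDemo {
    static int count = 0;

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) {
                count++; // unsynchronized read-modify-write
            }
        };
        Thread t1 = new Thread(work);
        Thread t2 = new Thread(work);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println("count = " + count);
    }
}
```

The exact result varies between runs, which is precisely what makes race conditions so unpleasant to debug.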

Now, coming back to the synchronized keyword: when the JVM sees “synchronized”, it says, in effect, “all right, I’m going to bypass the L caches and write any update straight to main memory”, and with that, other threads will “see” the latest state of the object. On top of that, synchronized also guarantees that only one thread at a time can execute the protected code, so two increments can no longer interleave in the first place.