Shared Memory Model

Java Memory Model (JMM)

The Java Memory Model (JMM) defines the concepts of main memory and working memory. It is an abstraction over the underlying hardware: CPU registers, caches, physical memory, and CPU instruction optimizations.

JMM is reflected in the following aspects:

  • Atomicity: Guarantees that an operation is not interrupted by thread context switches.
  • Visibility: Guarantees that one thread’s writes are not hidden from other threads by CPU caching.
  • Ordering: Guarantees that instructions are not affected by CPU instruction-level parallelism and reordering.
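As a concrete illustration of the atomicity point, here is a minimal sketch (class and field names are illustrative): two threads incrementing a plain int lose updates when a context switch lands between the read and the write inside `counter++`, while a synchronized block restores atomicity.

```java
public class AtomicityDemo {
    static int counter = 0;       // plain int: counter++ is read-modify-write, not atomic
    static int safeCounter = 0;
    static final Object lock = new Object();

    public static void main(String[] args) throws InterruptedException {
        Runnable unsafe = () -> {
            for (int i = 0; i < 10_000; i++) counter++;          // updates can be lost
        };
        Runnable safe = () -> {
            for (int i = 0; i < 10_000; i++) {
                synchronized (lock) { safeCounter++; }           // mutual exclusion restores atomicity
            }
        };
        Thread t1 = new Thread(unsafe), t2 = new Thread(unsafe);
        Thread t3 = new Thread(safe), t4 = new Thread(safe);
        t1.start(); t2.start(); t3.start(); t4.start();
        t1.join(); t2.join(); t3.join(); t4.join();
        System.out.println("unsafe: " + counter + " (often below 20000)");
        System.out.println("safe:   " + safeCounter);            // always 20000
    }
}
```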

Visibility

To address visibility issues:

  • Declare the variable with volatile. This prevents threads from reusing a stale copy of the value from their own working cache and forces reads to go to main memory.
  • Note that although a thread’s write to a volatile variable is visible to other threads, volatile does not guarantee atomicity.
  • Locking also solves visibility: the synchronized keyword guarantees both atomicity and visibility for a code block, but it is heavyweight in terms of performance.

volatile is suitable when one thread writes a variable and multiple threads read it.
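A sketch of that one-writer, many-readers use (names are illustrative): a volatile stop flag. Without volatile, the JIT could hoist the read of the flag out of the loop and the reader might spin forever.

```java
public class VolatileFlag {
    // volatile: the writer's update is guaranteed to become visible to the reader
    static volatile boolean stop = false;
    static boolean readerFinished = false;

    public static void main(String[] args) throws InterruptedException {
        Thread reader = new Thread(() -> {
            while (!stop) {
                // busy-wait; each iteration re-reads stop rather than a cached copy
            }
            readerFinished = true;
        });
        reader.start();
        Thread.sleep(100);
        stop = true;        // the single writer flips the flag
        reader.join();      // returns once the reader has observed the write
        System.out.println("reader finished: " + readerFinished);
    }
}
```

Note that `readerFinished` itself does not need volatile here: `join()` establishes visibility of the reader thread's writes to the main thread.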

Balking

Balking is used when a thread discovers that another thread or itself has already performed a certain task, making it unnecessary to proceed. In such cases, the thread can exit immediately.
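A minimal sketch of the balking pattern (the MonitorService name and startCount field are illustrative): the second call to start() notices the work is already in progress and returns immediately.

```java
public class MonitorService {
    private boolean starting = false;   // guarded by this
    int startCount = 0;                 // for illustration: how many monitors actually launched

    public void start() {
        synchronized (this) {
            if (starting) {
                return;                 // balk: another caller already started the monitor
            }
            starting = true;
            startCount++;
        }
        new Thread(() -> {
            // periodic monitoring work would go here
        }, "monitor").start();
    }
}
```

The check and the state change sit inside the same synchronized block, so two threads cannot both pass the check and launch duplicate monitors.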

Ordering

The JVM (and the CPU) may reorder the execution of statements as long as single-threaded results are unaffected. This is known as instruction reordering, and it can break the correctness of multithreaded programs.

To prevent instruction reordering, you can use the volatile keyword: writes before a write to a volatile variable cannot be reordered after it, and reads after a read of a volatile variable cannot be reordered before it.

The underlying mechanism of volatile is memory barriers (Memory Barrier):

  • A write to a volatile variable is followed by a write barrier, which ensures that all writes before the barrier are synchronized to main memory and are not reordered after it.
  • A read of a volatile variable is preceded by a read barrier, which ensures that all reads after the barrier load fresh values from main memory and are not reordered before it.

volatile cannot resolve instruction interleaving. It guarantees that a read sees the most recent write, but it cannot stop another thread’s operations from slipping in between a read and the subsequent write.
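A sketch of that limitation (names illustrative): even with a volatile counter, two threads running `counter++` interleave between the read and the write, so updates are lost.

```java
public class VolatileNotAtomic {
    static volatile int counter = 0;   // volatile gives visibility, NOT atomicity

    public static void main(String[] args) throws InterruptedException {
        Runnable inc = () -> {
            for (int i = 0; i < 10_000; i++) {
                counter++;   // read-modify-write: another thread can interleave in between
            }
        };
        Thread t1 = new Thread(inc), t2 = new Thread(inc);
        t1.start(); t2.start();
        t1.join(); t2.join();
        // typically prints a value below 20000 despite the volatile keyword
        System.out.println("counter = " + counter);
    }
}
```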

Double-Checked Locking

Double-Checked Locking is a technique for lazily initializing a singleton. It reduces synchronization overhead by acquiring the lock only when the instance has not yet been created. However, without extra care it suffers from an ordering problem: the write that publishes the reference can be reordered before the constructor finishes, so another thread may observe a partially constructed object.

Adding volatile to the INSTANCE variable in the double-checked locking pattern can resolve ordering problems.
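The resulting pattern looks like this (the canonical double-checked-locking singleton; the class name is illustrative):

```java
public class Singleton {
    // volatile prevents the constructor from being reordered after the reference publication
    private static volatile Singleton INSTANCE;

    private Singleton() { }

    public static Singleton getInstance() {
        if (INSTANCE == null) {                  // first check: no lock on the fast path
            synchronized (Singleton.class) {
                if (INSTANCE == null) {          // second check: under the lock
                    INSTANCE = new Singleton();
                }
            }
        }
        return INSTANCE;
    }
}
```

Without volatile, `INSTANCE = new Singleton()` can effectively execute as allocate, publish the reference, then run the constructor; a thread passing the first (unsynchronized) check could then return a partially constructed object.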

Happens-Before

Happens-Before is a set of rules defining when a write to a shared variable is guaranteed to be visible to another thread’s read of that variable. It encompasses both visibility and ordering.
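Two of those rules in action, as a sketch (names illustrative): Thread.start() happens-before every action of the started thread, and every action of a thread happens-before another thread’s join() on it returns, so neither field below needs volatile.

```java
public class HappensBeforeDemo {
    static int x = 0;          // plain fields: visibility comes from the rules, not volatile
    static int observed = -1;

    public static void main(String[] args) throws InterruptedException {
        x = 42;                                     // written before start()
        Thread t = new Thread(() -> observed = x);  // start() rule: guaranteed to read 42
        t.start();
        t.join();                                   // join() rule: observed is visible here
        System.out.println("observed = " + observed);   // prints 42
    }
}
```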

In summary,

  • Use volatile for visibility.
  • Use synchronization mechanisms for both visibility and atomicity.
  • Be cautious of instruction reordering in multithreaded programs.