Post by Ernst, Matthias
Did anyone investigate why that would be the case? Is it due to the fact
that the VM has to lock-enable *any* java.lang.Object through things like
header displacement and lock inflation/deflation? Or does GC have an
advantage over manual management of lock records?
Other than that, I cannot think of any edge ReentrantLock could have over
synchronized: both code paths are generated inline by the HotSpot
compiler, both can (and probably do) use the same suspension/resumption
mechanisms, the same atomic instruction sequences, ...
Performance is a moving target. In the first JVM, performance for
everything sucked (locking, garbage collection, allocation, you name it)
because the first JVM was a proof-of-concept and performance wasn't the
goal. Once the VM concept was proven, engineering resources were then
allocated to improve performance, and there is no shortage of good ideas
for making things faster, so performance in these areas improved and is
improving with each JVM version.
So, one factor in why ReentrantLock is faster than built-in
synchronization is that the JSR 166 team spent some effort building a
better lock -- not because the JVM folks didn't have access to the same
papers on lock performance, but because they had other priorities of
where to spend their efforts. But they will get around to it and the
scalability gap will surely close in future JVM versions.
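For readers who haven't used it, the two constructs guard a critical section in the same way; the difference is in the lock implementation underneath, not in the programming model. A minimal sketch (the class and method names here are illustrative, not from the JDK):

```java
import java.util.concurrent.locks.ReentrantLock;

// Two functionally equivalent guarded counters.
class SyncCounter {
    private long count;
    // Built-in monitor: acquire/release handled by the JVM.
    synchronized void increment() { count++; }
    synchronized long get() { return count; }
}

class LockCounter {
    private final ReentrantLock lock = new ReentrantLock();
    private long count;

    void increment() {
        lock.lock();               // explicit acquire
        try {
            count++;
        } finally {
            lock.unlock();         // must release manually, even on exception
        }
    }

    long get() {
        lock.lock();
        try {
            return count;
        } finally {
            lock.unlock();
        }
    }
}
```

Note the extra discipline ReentrantLock demands: the lock()/try/finally/unlock() idiom is required for correctness, where synchronized gets release-on-exit for free.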
Interestingly, the algorithm used under the hood of ReentrantLock is
easier to implement in Java than in C, because of garbage collection --
a C version of the same algorithm would be a lot more work and would
require more bookkeeping in the algorithm. As a result, the approach
taken by ReentrantLock makes more garbage and uses less locking than the
obvious C analogue, and it turns out that, given the current relative
cost between memory management and memory synchronization, an algorithm
that makes more garbage and uses less coordination is more scalable.
This week. Might be different next week. Performance is a moving target.
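To make the GC point concrete, here is a minimal sketch of a CLH-style queue lock, the family of algorithms behind ReentrantLock's synchronizer. This is a simplified spinning version for illustration, not the JDK's actual implementation; the interesting part is that a releasing thread cannot immediately reclaim its queue node, because the successor may still be spinning on it -- in Java the GC handles that, while a C version would need explicit, carefully timed reclamation:

```java
import java.util.concurrent.atomic.AtomicReference;

// Simplified CLH queue lock: each thread spins on its predecessor's node.
class CLHLock {
    static final class Node {
        volatile boolean locked;   // true while the owner holds or wants the lock
    }

    private final AtomicReference<Node> tail =
        new AtomicReference<>(new Node());                 // dummy initial node
    private final ThreadLocal<Node> myNode =
        ThreadLocal.withInitial(Node::new);
    private final ThreadLocal<Node> myPred = new ThreadLocal<>();

    void lock() {
        Node node = myNode.get();
        node.locked = true;
        Node pred = tail.getAndSet(node);   // one atomic op enqueues us
        myPred.set(pred);
        while (pred.locked) {               // spin only on our predecessor
            Thread.onSpinWait();
        }
    }

    void unlock() {
        Node node = myNode.get();
        node.locked = false;        // successor (if any) sees this and proceeds
        myNode.set(myPred.get());   // reuse the predecessor's node next time;
                                    // our old node is still being read by the
                                    // successor and is left for the GC --
                                    // this is the bookkeeping C would have to
                                    // do by hand
    }
}
```

The `getAndSet` on the tail is the only coordination point; everything else is local spinning, which is why this family of algorithms scales well under contention.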