Date: Thu, 6 Oct 2022 19:20:43 +0300
I am trying to revive an old proposal of mine.
Fri, Aug 20, 2021 at 08:45:10AM +0300, Marko Mäkelä wrote:
>Hi all,
>
>For a database kernel, I needed a small mutex or rw-lock with minimal
>features: no tracking of ownership, no reentrancy a.k.a. recursion for
>the basic variant. The small size of the object (32 or 64 bits) allows
>millions of the simplest mutex or rw-lock to be instantiated. One use
>case is that a hash array contains a lock and a number of pointers in
>a single cache line, such that the lock will protect the data that is
>stored in that cache line.
>
>I created a GitHub repository to demonstrate this:
>
>https://github.com/dr-m/atomic_sync
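To make the shape of such a lock concrete, here is a minimal C++11 sketch of a 32-bit mutex with no ownership tracking. The name spin_mutex is illustrative, not taken from the atomic_sync repository, and this version spins with yield() where the real code would block on the lock word via std::atomic::wait() or a futex:

```cpp
#include <atomic>
#include <cstdint>
#include <thread>

// Illustrative 32-bit mutex: no owner tracking, no recursion.
// The whole lock is one std::atomic<uint32_t>, so a lock plus a few
// pointers can share a single cache line.
class spin_mutex {
  std::atomic<uint32_t> word{0}; // 0 = unlocked, 1 = locked
public:
  void lock() noexcept {
    uint32_t expected = 0;
    while (!word.compare_exchange_weak(expected, 1,
                                       std::memory_order_acquire,
                                       std::memory_order_relaxed)) {
      expected = 0;
      std::this_thread::yield(); // real code: wait on the lock word
    }
  }
  void unlock() noexcept { word.store(0, std::memory_order_release); }
  bool is_locked() const noexcept {
    return word.load(std::memory_order_relaxed) != 0;
  }
};

static_assert(sizeof(spin_mutex) == sizeof(uint32_t),
              "the lock is a single 32-bit word");
```

Because the object is a single word, millions of them can be embedded directly in hash array cache lines, as described above.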
Attached is a PDF rendering of the current version of
https://github.com/dr-m/wg21/blob/master/trans_mutex.md, which I would
like to propose for inclusion in the standard. Unlike what I initially
had in mind, the proposed wording says nothing about the storage size
or the possibility of implementing it with C++20 std::atomic.
>I believe that the code works wherever std::atomic::wait() is available
>(currently, this includes MSVC on Windows, GCC or Clang on Linux with
>libstdc++-11). There is also a fallback implementation for C++11, using
>futex system calls on Linux or OpenBSD.
Meanwhile, the futex-based implementation has been extended to FreeBSD
and DragonFly BSD.
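As a rough, Linux-only illustration of what the C++11 fallback path relies on, the following sketch puts a waiter to sleep in the kernel on a 32-bit word and wakes it with the futex system call. The wrapper names and the futex_flag type are simplified illustrations, not the repository's actual interface:

```cpp
#include <atomic>
#include <climits>
#include <cstdint>
#include <thread>
#include <linux/futex.h>
#include <sys/syscall.h>
#include <unistd.h>

// Minimal futex wrappers: sleep while *addr == expected; wake n waiters.
static long futex_wait(std::atomic<uint32_t>* addr, uint32_t expected) {
  return syscall(SYS_futex, reinterpret_cast<uint32_t*>(addr),
                 FUTEX_WAIT_PRIVATE, expected, nullptr, nullptr, 0);
}
static long futex_wake(std::atomic<uint32_t>* addr, int n) {
  return syscall(SYS_futex, reinterpret_cast<uint32_t*>(addr),
                 FUTEX_WAKE_PRIVATE, n, nullptr, nullptr, 0);
}

// A blocking "flag": waiters sleep in the kernel until set() is called.
struct futex_flag {
  std::atomic<uint32_t> word{0};
  void wait_until_set() {
    while (word.load(std::memory_order_acquire) == 0)
      futex_wait(&word, 0); // returns immediately if word is no longer 0
  }
  void set() {
    word.store(1, std::memory_order_release);
    futex_wake(&word, INT_MAX); // wake all waiters
  }
};
```

The recheck loop around futex_wait() is what makes the lost-wakeup race benign: FUTEX_WAIT only sleeps if the word still holds the expected value.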
The implementation has also been tested with transactional memory on
POWER8 (HTM, introduced in Power ISA v2.07) and on some Intel processors
that implement the TSX-NI (a.k.a. RTM) extension of the AMD64 ISA. I am not
proposing any changes to the standard regarding transactional memory.
My proposed predicates like is_locked() allow fine-grained control of
lock elision when using transactional memory.
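The elision pattern that such a predicate enables can be sketched as follows. This is an illustration under stated assumptions, not the repository's interface: the tiny_lock type and the elided() helper are hypothetical, the RTM path compiles only where the intrinsics are available (__RTM__), and otherwise the code falls back to taking the lock for real:

```cpp
#include <atomic>
#include <cstdint>
#include <thread>
#if defined(__RTM__)
# include <immintrin.h>
#endif

// Illustrative one-word lock exposing an is_locked() predicate.
struct tiny_lock {
  std::atomic<uint32_t> word{0};
  bool is_locked() const { return word.load(std::memory_order_relaxed) != 0; }
  void lock() {
    uint32_t e = 0;
    while (!word.compare_exchange_weak(e, 1, std::memory_order_acquire)) {
      e = 0;
      std::this_thread::yield();
    }
  }
  void unlock() { word.store(0, std::memory_order_release); }
};

// Run f() under the lock, eliding the lock transactionally when possible.
template <typename F>
void elided(tiny_lock& l, F f) {
#if defined(__RTM__)
  if (_xbegin() == _XBEGIN_STARTED) {
    if (l.is_locked())  // subscribe to the lock word: the transaction
      _xabort(0xff);    // aborts if a non-transactional holder appears
    f();
    _xend();
    return;
  }
#endif
  l.lock();             // fallback path: acquire the lock for real
  f();
  l.unlock();
}
```

Reading is_locked() inside the transaction adds the lock word to the read set, so a concurrent non-transactional lock() aborts the transaction; this is the fine-grained control referred to above.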
Similar code (including lock elision for some critical sections of some
locks, based on performance testing) has been running in production as
part of a widely deployed database kernel since November 2021. A number
of concurrency bugs due to the incorrect use of the locks have been
found and fixed since then, but no bugs have been attributed to the lock
implementation itself.
With best regards,
Marko Mäkelä
Received on 2022-10-06 16:20:49