Date: Fri, 26 May 2023 14:01:18 +0000
One feature that is missing from the C and C++ standards (at least in a portable way that does not involve undefined behavior) is the ability to perform lock-free atomic accesses using a type that does not have the same size as the object (or subobject) being pointed to, such as in the following:
// Assumes #include <atomic>, <cstdint>, and <utility>.
// ptr is aligned on a std::atomic_ref<std::uint32_t>::required_alignment boundary.
// Atomically increments the 32-bit value at ptr unless its low 16 bits are all
// ones; returns the previous value and whether the increment occurred.
std::pair<std::uint32_t, bool> SomeFuncThatDoesU32AtomicAccess(std::uint8_t* ptr) {
  std::uint32_t u32_bits = lock_free_atomic_load_32(ptr, std::memory_order_relaxed);
  for (;;) {
    if ((u32_bits & 0xFFFFu) == 0xFFFFu)
      return std::make_pair(u32_bits, false);
    const auto new_u32_bits = static_cast<std::uint32_t>(u32_bits + 1u);
    // On failure, u32_bits is updated to the currently stored value and the
    // loop retries.
    if (lock_free_compare_exchange_weak_32(ptr, u32_bits, new_u32_bits,
                                           std::memory_order_seq_cst,
                                           std::memory_order_relaxed))
      return std::make_pair(u32_bits, true);
  }
}
Note that memcpy cannot be used in the above code: memcpy's accesses are not guaranteed to be atomic, memcpy performs an unconditional write (so it cannot express a compare-exchange), and on platforms that support threading the result would be a data race.
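For illustration, a memcpy-based load would be a hypothetical sketch like the following (the name non_atomic_load_32 is made up here); it is not a usable substitute because the byte-wise copy is not a single atomic access, so a concurrent writer to the pointed-to bytes makes it a data race:

// Hypothetical sketch (assumes #include <cstring> and <cstdint>); NOT a substitute:
std::uint32_t non_atomic_load_32(const void* ptr) {
  std::uint32_t u32_bits;
  std::memcpy(&u32_bits, ptr, sizeof(u32_bits));  // byte-wise, non-atomic copy
  return u32_bits;  // undefined behavior if another thread writes concurrently
}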
lock_free_atomic_load_32 is equivalent to the following, except that ptr can point to any valid memory location where at least sizeof(std::uint32_t) bytes are readable and whose alignment is at least std::atomic_ref<std::uint32_t>::required_alignment (including the case where any of the accessed bytes belong to a volatile object and the case where ptr points to read-only memory):
std::uint32_t lock_free_atomic_load_32(const volatile void* ptr,
                                       std::memory_order order) {
  std::atomic_ref<std::uint32_t> u32_atomic_ref(
      *reinterpret_cast<std::uint32_t*>(const_cast<void*>(ptr)));
  return u32_atomic_ref.load(order);
}
lock_free_compare_exchange_weak_32 is equivalent to the following, except that ptr can point to any valid memory location where at least sizeof(std::uint32_t) bytes are writable and whose alignment is at least std::atomic_ref<std::uint32_t>::required_alignment (including the case where any of the accessed bytes belong to a volatile object):
bool lock_free_compare_exchange_weak_32(volatile void* ptr,
                                        std::uint32_t& expected,
                                        std::uint32_t desired,
                                        std::memory_order success,
                                        std::memory_order failure) {
  std::atomic_ref<std::uint32_t> u32_atomic_ref(
      *reinterpret_cast<std::uint32_t*>(const_cast<void*>(ptr)));
  return u32_atomic_ref.compare_exchange_weak(expected, desired, success,
                                              failure);
}
Adding lock-free atomic operations that can operate on any valid, suitably aligned pointer, such as lock_free_atomic_load_32, lock_free_atomic_store_32, lock_free_compare_exchange_weak_32, and lock_free_compare_exchange_strong_32, would address issues that the std::memcpy and std::bit_cast operations do not.
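The store and strong compare-exchange variants named above are presumably analogous to the weak compare-exchange shown earlier; a sketch under the same "any valid, suitably aligned pointer" relaxation (these bodies are illustrative, not part of any existing standard):

void lock_free_atomic_store_32(volatile void* ptr,
                               std::uint32_t desired,
                               std::memory_order order) {
  std::atomic_ref<std::uint32_t> u32_atomic_ref(
      *reinterpret_cast<std::uint32_t*>(const_cast<void*>(ptr)));
  u32_atomic_ref.store(desired, order);
}

bool lock_free_compare_exchange_strong_32(volatile void* ptr,
                                          std::uint32_t& expected,
                                          std::uint32_t desired,
                                          std::memory_order success,
                                          std::memory_order failure) {
  std::atomic_ref<std::uint32_t> u32_atomic_ref(
      *reinterpret_cast<std::uint32_t*>(const_cast<void*>(ptr)));
  return u32_atomic_ref.compare_exchange_strong(expected, desired, success,
                                                failure);
}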
Having such functions would also allow C and C++ programmers to eliminate type punning in cases where std::memcpy cannot be used because the memory might be concurrently accessed in a lock-free atomic manner.
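As a hypothetical usage example of the kind of type-punning elimination meant here (the function and parameter names are assumptions for illustration):

// header_bytes points to a suitably aligned buffer that other threads may
// update with lock-free 32-bit atomic stores.
std::uint32_t ReadHeaderLength(const volatile std::uint8_t* header_bytes) {
  // Instead of *reinterpret_cast<const std::uint32_t*>(header_bytes) (type
  // punning) or std::memcpy (a data race here), use the proposed function:
  return lock_free_atomic_load_32(header_bytes, std::memory_order_acquire);
}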