Re: Fwd: Synchronization of atomic notify and wait

From: language.lawyer_at <language.lawyer_at_[hidden]>
Date: Wed, 20 Jul 2022 19:59:32 +0500
On 20/07/2022 15:08, Marcin Jaczewski via Std-Discussion wrote:
> Wed, 20 Jul 2022 at 10:46, <language.lawyer_at_[hidden]> wrote:
>>
>> On 20/07/2022 13:17, Marcin Jaczewski wrote:
>>> How could it be possible that, after unblocking, it loads the old value?
>>
>> Do you want me to describe a hypothetical implementation? Ok.
>> Weak cache model. The main thread is on Core 0, the non-main one on Core 1. Each has its own L1 cache. The main thread loads false and blocks by putting Core 0 into a sleep state.
>> The non-main thread writes true, which lives only in its L1 cache, and calls `notify`, which sends an IPI to Core 0; the main thread wakes up, reads false from its own L1 cache, and blocks again.
>>
>> Of course, the non-main thread could "publish" its L1 cache, or the main thread could ask for an update, but why do they have to do that, from the C++ abstract machine's POV? What do they violate if they don't?
>>
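For concreteness, here is a minimal C++20 sketch of the wait/notify pattern the quoted scenario is about (the function names, the default seq_cst orderings, and the use of notify_one are my own choices, not taken from the original question):

    #include <atomic>
    #include <thread>

    std::atomic<bool> flag{false};

    void waiter() {            // the "main thread" in the scenario
        // Blocks as long as the value it reads compares equal to false.
        flag.wait(false);
        // The question is whether the implementation sketched above (wake on
        // the IPI, re-read a stale false from the local L1, block again,
        // possibly forever) is a conforming implementation of this call.
    }

    void notifier() {          // the "non-main thread"
        flag.store(true);      // the store the waiter is supposed to observe
        flag.notify_one();     // unblocks the waiter
    }

    int main() {
        std::thread t(waiter);
        notifier();
        t.join();
    }
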
>
> Hold on, could I even do any atomic operation on this system?

Why not? If there are ways to flush/update the caches.

> The whole point of atomics was to bypass caches.

Don't think that was the point.

> And the different flavors of atomic operations are for reducing the amount of cache that gets invalidated.

Like, relaxed flushes L1 to L2 only, acq-rel also flushes L2 to L3, and seq-cst flushes L3 to RAM?

> Simply by analogy: if we replace this load with an increment, even relaxed operations should be consistent.

RMW operations have special properties: https://timsong-cpp.github.io/cppwp/n4861/atomics.order#10
You can't just replace the load with an RMW, draw some conclusions, and then swap the RMW back to a load while keeping the conclusions.
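A sketch of the difference, with relaxed orderings and a no-op fetch_add standing in for the increment (both my choices, just for illustration):

    #include <atomic>
    #include <cstdio>
    #include <thread>

    std::atomic<int> x{0};

    int main() {
        std::thread w([] { x.store(1, std::memory_order_relaxed); });

        // A plain load is constrained only by the coherence rules plus the
        // non-normative "stores should become visible to loads within a
        // reasonable amount of time" recommendation, so observing a stale 0
        // here is not, by itself, non-conforming.
        int a = x.load(std::memory_order_relaxed);

        // An atomic RMW shall read the last value in x's modification order
        // that precedes its own write ([atomics.order] p10). That is the
        // special property linked above, and it does not carry over to plain
        // loads.
        int b = x.fetch_add(0, std::memory_order_relaxed);

        w.join();
        std::printf("%d %d\n", a, b);
    }
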

Received on 2022-07-20 14:59:36