Date: Wed, 20 Jul 2022 12:08:53 +0200
Wed, 20 Jul 2022 at 10:46, <language.lawyer_at_[hidden]> wrote:
>
> On 20/07/2022 13:17, Marcin Jaczewski wrote:
> > How could it be possible that after unblocking it loads the old value?
>
> Do you want me to describe a hypothetical implementation? Ok.
> Weak cache model. The main thread is on Core 0, the non-main one on Core 1. Each has its own L1 cache. The main thread loads false and blocks by putting Core 0 into a sleep state.
> The non-main thread writes true, which only lives in its L1 cache, then calls `notify`, which sends an IPI to Core 0; the main thread wakes up, reads false from its own L1 cache and blocks again.
>
> Of course, the non-main thread could "publish" its L1 cache, or the main thread could ask for an update, but why would they have to do it, from the C++ abstract machine's POV? What do they violate if they don't do it?
>
Hold on, could I even do any atomic operation on this system? The whole
point of atomics is to bypass caches, and the different flavors of
atomic operations exist to reduce how much cache has to be invalidated.
If this machine were to implement the C++ memory model, it would need
to insert memory barriers to guarantee that the actual value is loaded.
Even a relaxed access would need something like this.
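For reference, the pattern under discussion is roughly the following (a
minimal C++20 sketch; the variable names are my assumptions, not taken
from any earlier message):

#include <atomic>
#include <thread>

std::atomic<bool> flag{false};

int main() {
    std::thread other([] {
        flag.store(true);   // the write the waiter is expected to observe
        flag.notify_one();  // the wake-up (the IPI in the scenario above)
    });
    flag.wait(false);       // the main thread blocks while the value is still false
    other.join();
}

The whole question is whether `wait(false)` here is guaranteed to
return after that store.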
> > the load is atomic and the value is already changed
>
> What does "value already changed" mean? That there is an assignment of "true" in the modification order of the atomic object? Yes, there is. But each thread has the right to move through the modification order of an atomic object at its own pace, unless it is constrained by some coherence relations.
>
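To illustrate that "own pace" point (a small sketch of my own, not from
the quoted message): successive relaxed loads may lag behind the latest
write, but they can never go backwards in the modification order.

#include <atomic>
#include <cstdio>
#include <thread>

std::atomic<int> x{0};

int main() {
    std::thread writer([] {
        x.store(1, std::memory_order_relaxed);
        x.store(2, std::memory_order_relaxed);
    });
    int prev = 0;
    for (int i = 0; i < 5; ++i) {
        int v = x.load(std::memory_order_relaxed);
        std::printf("observed %d\n", v);
        // read-read coherence only guarantees v >= prev; v may stay at 0
        prev = v;
    }
    writer.join();
}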
I am not an expert in C++ atomics, but intuitively it needs to see the
new value, as the load is atomic, even if relaxed.
Simply by analogy, replace this load with an increment: even relaxed
operations should be consistent.
Take 100 threads that each increment an atomic value 10 times. After
all the threads end, the final value should be exactly 1000.
How could this work if any load got an old value? The result would be
999 or less, and that would not be atomic at all.
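A sketch of that analogy (my own example, assuming relaxed
read-modify-writes; not code from anywhere in this thread):

#include <atomic>
#include <cassert>
#include <thread>
#include <vector>

int main() {
    std::atomic<int> counter{0};
    std::vector<std::thread> threads;
    for (int t = 0; t < 100; ++t)
        threads.emplace_back([&] {
            for (int i = 0; i < 10; ++i)
                counter.fetch_add(1, std::memory_order_relaxed);
        });
    for (auto& th : threads)
        th.join();
    // atomic RMWs always act on the latest value in the modification
    // order, so this holds even with relaxed ordering
    assert(counter.load() == 1000);
}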
> > as it "happens before" a call to `notify`.
>
> And how is the unblocking of the waiting thread related to the call to `notify`? Does it synchronize with it, like a mutex unlock with a lock (https://timsong-cpp.github.io/cppwp/n4861/thread.mutex.requirements.mutex#11)?
The synchronization is more with the value in the atomic variable than
between `notify` and `wait`; technically `notify` could be a no-op and
`wait` a busy loop.
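To spell that out (a toy sketch under that assumption; `toy_wait` and
`toy_notify_one` are made-up names, not what any real standard library
does):

#include <atomic>

// spin until the value differs from `old`; each iteration re-reads
// the atomic itself, so no notification is needed for correctness
template <class T>
void toy_wait(const std::atomic<T>& a, T old) {
    while (a.load() == old) {
        // busy loop
    }
}

// intentionally a no-op: the spinning loads in toy_wait do all the work
template <class T>
void toy_notify_one(std::atomic<T>&) {}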
Received on 2022-07-20 10:09:05