Date: Mon, 25 Nov 2024 16:02:46 -0500
It would help if the optimizationists hadn't decided that they hated
volatile and done their best to make it both useless and necessary. In
particular, they have taken the implementation-defined behavior the
standard allows for volatile, which was meant for variables sitting too
close to each other in memory (that is, given volatile char v[3];, it may
not be possible to implement reads or writes of v[1] without also touching
v[0] or v[2], so the standard just says "implementation-defined" and
expects compilers to deal with it as best they can), and used it to
eliminate volatile accesses altogether. For example, in
void f() { static volatile bool b; if (b) { do_something(); } },
there are compilers (Microsoft's is one, I believe) that will eliminate the
test of b and the call to do_something() from the compiled code entirely,
which is extremely annoying when that test is meant to be activated from
inside a debugger by writing to b. Similarly, the optimizationists working
on gcc decided that the standard's rules for storing floating-point values
to variables could normally be ignored, so it is necessary to make those
variables volatile double.
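
As an aside, here is a minimal, self-contained sketch of that
debugger-trigger pattern (the do_something body and the driver loop are
made up purely for illustration); a conforming compiler has to load b on
every call, and the complaint above is that some instead fold the volatile
read to a constant false and drop the branch:

    #include <cstdio>

    static void do_something() { std::puts("diagnostic path taken"); }

    void f() {
        static volatile bool b = false;  // meant to be flipped to true from a debugger
        if (b) {                         // must be a real load of b on every call
            do_something();
        }
    }

    int main() {
        for (int i = 0; i < 3; ++i)
            f();                         // attach a debugger and write b = true to take the branch
    }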
On Mon, Nov 25, 2024 at 1:27 AM Tiago Freire via Std-Discussion <
std-discussion_at_[hidden]> wrote:
> Well, I have good news for you.
>
> Volatile isn’t deprecated anymore, they went back on that decision.
>
>
>
> The bad news is, this still will not work for what you want to do with it.
>
> This really has been studied to death: “volatile” is useless as a
> multithreaded synchronization primitive, it is mostly misused by developers
> who don’t understand what “volatile” actually does and how modern CPUs
> work, and the valid use cases are so rare that “volatile” causes a
> disproportionate amount of burden; you would be better off with an
> alternative solution, which is why it had been deprecated to begin with.
>
> The problem is that in the rare cases where you need it, you really need
> it, and no alternative solution exists that would properly replace it,
> hence why they went back on deprecating it. But I suspect that when
> someone starts to work on “special types with volatile operations” instead
> of “volatile variables”, those cases will eventually be covered and
> volatile will still go the way of the dodo.
>
>
>
> In a world where you have multiple CPUs with dedicated caches and
> instruction re-ordering, the days when “your code is executed predictably,
> instruction after instruction, and has the observable side effects one
> would naively read from the source code” are long gone, and they are
> likely never coming back.
>
>
>
>
>
> *From:* Std-Discussion <std-discussion-bounces_at_[hidden]> *On
> Behalf Of *J Decker via Std-Discussion
> *Sent:* Sunday, November 24, 2024 8:35 PM
> *To:* std-discussion_at_[hidden]
> *Cc:* J Decker <d3ck0r_at_[hidden]>
> *Subject:* Re: [std-discussion] Deprecation of volatile - Implications
> for existing code
>
>
>
>
>
>
>
> On Sun, Nov 24, 2024 at 9:52 AM Nate Eldredge <nate_at_[hidden]>
> wrote:
>
> I'm afraid your ideas about `volatile` with respect to multi-threading are
> about 14 years out of date. You're reading an object in one thread while
> it's being concurrently written in another, and it's not `std::atomic`.
> Since C++11, that's a data race and your program has undefined behavior.
> Under the standard, `volatile` doesn't save you; `std::atomic` is the only
> conformant way to do it. (And `std::atomic` does in fact achieve
> everything you want here; for instance, it inhibits optimizations like
> "read the value once and then never read it again.")
>
>
>
>
>
> I'm reading a pointer, not an object, so when is a register-sized
> load/store/move not atomic?
>
>
>
> "It is unclear how to use such weak guarantees in portable code, except
> in a few rare instances, which we list below." from
>
> "In case you wonder why volatile does not work for multi-threading, see
>
> https://wg21.link/N2016 "
>
>
>
> ... 1) I am one of those instances. 2) Sorry, it wasn't clear - it's quite
> clear to me, but then in the 90's I had the advantage of a logic analyzer
> connected to every signal pin of a 386 processor (which was before there
> was even in-CPU cache), and I intimately understand word tearing when such
> register-sized values are not on a register-sized boundary. I can't see how
> processors would devolve into a stream of bits or something such that
> volatile for 'read always' and 'execute in the order I specified in the
> code' wouldn't cover it... the first bit read would either be from the
> previous value or the next, and all subsequent bits likewise - maybe
> there's a Gray-code possibility, but then that's something the hardware is
> doing to you, and maybe you should lock at a higher level and use a
> different object for the list of data.
>
>
>
> In many cases the thread-safety of my atomics implementation goes unused -
> on reflection, with regard to this issue, the lock is really just a
> write-lock, with lock-free reads; and in many cases the object isn't
> strictly required to be volatile, because there's a code lock around it;
> but then volatile still needs to be specified in the type because... PLIST
> is a similar type, but it stores only pointers to things, or pointer-sized
> values.
>
>
>
> #define ever ;;
> CRITICALSECTION cs; // assume inited...
>
> void f( PLIST **outList ) {
>   PLIST list = NULL;
>   outList[0] = &list;
>   EnterCriticalSection( &cs );
>   for( ever ) {
>     if( !list ) { // there's nothing in this code path that would change this value,
>                   // so it's a candidate for read-never optimization, even without any locks on the object.
>       LeaveCriticalSection( &cs ); sleep( forever ); EnterCriticalSection( &cs ); continue;
>     }
>     /* do something with the data in the list */
>     // assuming you CAN get here, because the read-again of list didn't happen.
>   }
> }
>
>
>
>
>
> void g( void ) {
>   PLIST *useList;
>   uintptr_t data = 3;
>   PTHREAD thread = ThreadTo( f, &useList );
>   AddLink( useList, &data );
>   WakeThread( thread );
>   // and later doing other work with this list... which in many cases
>   // ends up being an address of the volatile - but not ALL cases...
> }
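
As an aside: under the C++11 memory model the same wait-for-a-link pattern
can be written with std::atomic on the list head rather than relying on
volatile; a rough sketch, with the PLIST/AddLink/WakeThread machinery
replaced by hypothetical stand-ins (the atomic wait/notify calls need
C++20):

    #include <atomic>
    #include <cstdint>
    #include <thread>

    struct Node { std::uintptr_t data; Node* next; };

    std::atomic<Node*> head{nullptr};    // stand-in for the PLIST head

    void f() {
        Node* n;
        // Every pass re-reads head; the load cannot be hoisted out of the loop.
        while ((n = head.load(std::memory_order_acquire)) == nullptr)
            head.wait(nullptr);          // block until head is no longer null
        /* do something with n->data */
    }

    void g() {
        static Node node{3, nullptr};
        std::thread t(f);
        head.store(&node, std::memory_order_release);  // the AddLink()
        head.notify_one();                             // the WakeThread()
        t.join();
    }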
>
>
>
> It's possible that your implementation provides some guarantees about
> `volatile` in multithreading beyond what the standard gives, but if so,
> that's between you and them; nothing to do with ISO C++.
>
>
>
> Now, this doesn't directly answer your question about `volatile` on
> parameters and return types, but it does rather invalidate the use case you
> proposed.
>
>
>
> The intended uses for `volatile` in this day and age are things like
> memory-mapped I/O, where your goal is to just make the machine perform a
> single load or store instruction. (Again, you may *think* that's exactly
> what you want for concurrent multithread access, but the standard doesn't
> promise that it will work.) So if you can propose an example of that kind,
> we could get back on track. However, for memory-mapped I/O, you almost
> never want to define an actual object of `volatile`-qualified type. You
> would work through pointers to such types, but the pointers themselves
> would not be `volatile`.
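
(To make that concrete, a minimal sketch of the memory-mapped I/O shape;
the address and register layout are invented:)

    #include <cstdint>

    // Hypothetical device data register at a fixed bus address. The object
    // reached through the pointer is volatile; the pointer variable itself
    // is an ordinary, non-volatile pointer.
    volatile std::uint32_t* const uart_data =
        reinterpret_cast<volatile std::uint32_t*>(0x40001000u);

    void put_byte(std::uint8_t b) {
        *uart_data = b;                                   // exactly one store
    }

    std::uint8_t get_byte() {
        return static_cast<std::uint8_t>(*uart_data);     // exactly one load
    }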
>
>
>
> 'but the standard doesn't promise that it will work' - interpretations of
> the standard mixed a bunch of correlated, but maybe not causally related,
> things together to complicate the issue.... maybe better wording would
> make it less subject to interpretation, so you don't have to argue with
> each implementation that decides A is tied to B and then decides to break
> it (well, these objects didn't always have volatile in the type either,
> but as optimizers got too eager and broke too much, they just inherited
> it, probably about 15 years ago - right when volatile became unusable for
> all the things that did work and do work). But apparently this is around
> the time when they start telling me that 'the sky is green, not blue' -
> that what YOU (me, from you) see and have working isn't actually working.
>
>
>
>
>
>
>
>
>
> On Nov 23, 2024, at 19:03, J Decker via Std-Discussion <
> std-discussion_at_[hidden]> wrote:
>
>
>
>
> https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2018/p1152r0.html#parret
>
>
>
>
>
> I have some datatype, which is a volatile list. What makes it volatile is
> that other threads can change the values associated with the list without
> any other threads being aware of the change. Primarily, what happens when
> the objects are not defined as volatile is that the optimizers can read the
> value early in a function and then never read it again. The only meaning of
> volatile *I* knew of was 'read always'. Writes and locks and other
> unrelated things mentioned in the paper just confuse the issue. What I
> need is just a way to tell compiler optimizers 'hey, don't be smart; if the
> code needs the value, get the value from the place specified.' Writes
> could be cached - but that's something the programmer would do. It
> (volatile) really has nothing to do with atomics.
>
> --
> Std-Discussion mailing list
> Std-Discussion_at_[hidden]
> https://lists.isocpp.org/mailman/listinfo.cgi/std-discussion
>
Received on 2024-11-25 21:02:53