Re: [std-proposals] Fwd: set_new_handler extension

From: Phil Bouchard <boost_at_[hidden]>
Date: Wed, 31 May 2023 10:49:36 -0400
Yes, I understand what you are saying, but the race conditions you get with current containers give a treacherous sense of security as well, depending on their frequency.


-- 
Phil Bouchard
Founder & CEO
T: (819) 328-4743
E: phil_at_[hidden] | www.fornux.com
8 rue de la Baie | Gatineau (Qc), J8T 3H3 Canada
This communication (and/or the attachments) is intended for named recipients only and may contain privileged or confidential information which is not to be disclosed. If you received this communication by mistake please destroy all copies.

On May 30, 2023, at 10:22 PM, Sebastian Wittmeier via Std-Proposals <std-proposals_at_[hidden]> wrote:

Re: [std-proposals] Fwd: set_new_handler extension

I am not sure you understood the point of most of the replies: the container level is the wrong granularity for thread synchronization.

 

Compare it to database transactions. Why should the user of a relational database have to deal with them, when every SQL command could implicitly span its own transaction and be done with it?

 

Because you want and need to combine a number of commands into one transaction, and those combinations are application-specific. Should the flight reservation be automatically canceled if the payment fails? Different tables are involved, and the C++ standard committee cannot know that beforehand.

 

C++ could be extended with ways to simplify creating those middle layers, but you cannot spare the programmer from writing the actual application in the end.

 

A container where every call is thread-safe by itself is a solution that is nearly never needed (admittedly, I have no statistical data proving that). So it probably does not warrant having classes for it in the standard library, especially since it would also give a false sense of security that the surrounding code is thread-safe.

 

It is simple to write your own wrapper, access classes, or functions if you need that feature.

 

Sebastian


 

-----Original Message-----
From: Phil Bouchard via Std-Proposals <std-proposals_at_[hidden]>
Sent: Wed 31.05.2023 03:40
Subject: Re: [std-proposals] Fwd: set_new_handler extension
To: Tony V E <tvaneerd_at_[hidden]>; std-proposals_at_[hidden];
CC: Phil Bouchard <boost_at_[hidden]>;


On 5/29/23 16:18, Tony V E wrote:
> Step one for fixing the thread safety problems of sharing data between
> threads,
> is to not share data between threads.

Easier said than done, and you can't cover all cases this way. My goal is
not to reinvent the wheel, which would be the most counter-productive
approach possible.

Nothing prevents the committee from creating a new thread-safe namespace
containing all of these nanoseconds-slower but safer utilities.

> This also works great for GPUs.
>
> Adding thread-safe containers just encourages the wrong approach.
>
>
>
> On Sun, May 28, 2023 at 3:00 PM Phil Bouchard via Std-Proposals
> <std-proposals_at_[hidden] <mailto:std-proposals_at_[hidden]>>
> wrote:
>
>
>
>     On 5/28/23 14:33, Jason McKesson via Std-Proposals wrote:
>      > On Sun, May 28, 2023 at 12:19 PM Phil Bouchard <boost_at_[hidden]
>     <mailto:boost_at_[hidden]>> wrote:
>      >>
>      >>
>      >>
>      >> On 5/28/23 12:08, Jason McKesson via Std-Proposals wrote:
>      >>
>      >>> Remember: the problem is that users *forget*. But your solution
>      >>> doesn't make it impossible to forget. There are many circumstances
>      >>> where common access patterns *require* the user to take special
>     action
>      >>> in order to avoid data races. Because this requires manual
>      >>> intervention, users must identify these scenarios and *remember* to
>      >>> take that intervention.
>      >>>
>      >>> Therefore, people can still forget to do this. And therefore,
>     all the
>      >>> bugs are still there.
>      >>>
>      >>> Your proposed solution provides only the illusion of safety, not
>      >>> actual safety. A solution that works only 60% of the time without
>      >>> *correct* manual intervention is, for most users, no better than a
>      >>> solution that works 0% of the time. It still requires the user
>     to be
>      >>> mindful of their interactions with the container, to be heavily
>     aware
>      >>> that the container is shared and to treat it specially. One code
>      >>> change can still turn previously functional code into broken code.
>      >>>
>      >>> So your solution is not only inefficient, it is ineffective.
>      >>
>      >> Well, first, I would increase the 60% up to 90%.
>      >
>      > What's your basis for that "90%" guess? I have no data to back up
>      > "60%", but I could believe a number as low as 30% depending on what
>      > the code is doing with the container. The thing you're not getting is
>      > that these kinds of issues crop up in a myriad of ways. Something as
>      > simple as:
>      >
>      > ```
>      > container.push_back(20);
>      > container.push_back(40);
>      > ```
>      >
>      > May need a lock.
>      >
>      > And what's worse is... it may *not*. You cannot know, by inspection,
>      > if this code is doing the intended thing or not. This can only be
>      > determined by examining the context surrounding it.
>      >
>      > The best advice I was ever given about thread-safe code boils down to
>      > this: if your "thread-safe code" is not obviously, *provably*
>      > thread-safe, then it isn't thread-safe code.
>      >
>      > The fact that we do not know if the above code is wrong means that it
>      > *is wrong*. And "solutions" to "thread-safety" that do not always
>      > force you to write correct code... are not solutions to thread
>     safety.
>      >
>      >> If 90% is insufficient for you and 0% is your preference, then
>      >> that is your personal choice, not everybody else's.
>      >
>      > Even if we were to believe your 90% number, that's still 10% of every
>      > interaction with any container that's broken. That makes debugging
>      > *harder*, not easier. The problem will manifest itself less
>     often, and
>      > thus you are more likely to ship broken code to users.
>      >
>      > And then misplace the blame on other code due to being told that the
>      > container you're using is "thread safe" and therefore the bug you
>      > wrote is actually someone else's problem.
>      >
>      > Put a different way, 90% of a cat is not 90% as good as a whole cat.
>      > It's a bloody, Godawful mess. The problem with "mostly" solutions is
>      > that "sometimes" happens way too often to just pretend that it
>      > doesn't.
>      >
>      > There's a reason why five-9s reliability requires *five* 9s.
>
>     Yes, you're right about the 100%. It's just like astrophysics: either
>     you're fully 100% right, or no one will use your 90%-right theory.
>
>     That's what C++ Superset will fix: possible thread-unsafe crashes and
>     conditional expressions. The rest will be up to the programmer.
>
>     But military systems cannot afford crashes, so they use static
>     memory, and even that is not fully bulletproof. So the C++ Superset
>     solution is already better to a certain degree. What you just
>     mentioned is purely at the algorithmic level and does not involve
>     possible thread-unsafe crashes.
>
>
>
>     --
>     Std-Proposals mailing list
>     Std-Proposals_at_[hidden] <mailto:Std-Proposals_at_[hidden]>
>     https://lists.isocpp.org/mailman/listinfo.cgi/std-proposals
>     <https://lists.isocpp.org/mailman/listinfo.cgi/std-proposals>
>
>
>
> --
> Be seeing you,
> Tony



Received on 2023-05-31 14:49:54