Date: Sun, 28 May 2023 20:11:19 +0200
On 28/05/2023 19.40, Phil Bouchard wrote:
>
>
> On 5/28/23 13:30, Federico Kircheis via Std-Proposals wrote:
>> On 28 May 2023 16:51:37 UTC, Phil Bouchard via Std-Proposals
>> <std-proposals_at_[hidden]> wrote:
>>>
>>>
>>> On 5/28/23 12:22, Phil Bouchard via Std-Proposals wrote:
>>>>
>>>> Besides, the aforementioned condition just involves a non-const
>>>> member call in a compound statement so that can easily be automated
>>>> with C++ Superset.
>>>
>>> Correction: it's a simple matter of automating the addition of an
>>> explicit lock() mechanism for each condition using a container
>>> instance:
>>>
>>> if (std::lock_guard<std::mutex> lock(container.mutex());
>>>     container.empty())
>>> {
>>>     container.insert(....)
>>> }
>>>
>>>
>>
>> But then, what advantage does the mutex in the container provide?
>
> Well I would keep the header I wrote the way it is so that the layer and
> all dependencies stay thread-safe. The user-defined conditions on top of
> that would require explicit locks of the recursive mutex.
>
> Yes you might lose random nanoseconds along the way but it's really
> not important compared to the 100% thread safety benefits you gain.
You might have missed what I and others wrote.
(And in my experience you do not only lose time waiting on and
synchronizing threads, which depending on the environment is far more
than nanoseconds, but also create a lot of code bloat and gigantic
binaries with this approach.)
You might have prevented data races, but you did not gain thread safety.
On the contrary, thread issues are more difficult to reproduce and diagnose.
Actually, you did not even fix all data races, as I mentioned in the
previous mail presenting the (IMHO) superior approach.
If you get a reference to an element in the container, and the container
changes, oops.
Coincidentally, your example does not have any function for getting the
data...
>> Why not use an external mutex and drop at least half of the containers
>> to choose from, and compose independent features (being a container
>> and being thread safe)?
Federico
Received on 2023-05-28 18:11:29