Date: Mon, 25 Aug 2025 17:22:04 +0200
2025-08-25 06:54, Jan Schultke:
>> I wonder, are there any inherent reasons why we shouldn't have such
>> function templates in the standard library? If so, I won't pursue
>> writing a proposal. If it's just a case of "no one wanted them badly
>> enough to write a proposal", I may go ahead.
>
> I don't know the history, but how would those even be implemented?
The way I've done it is very similar to how gcc's `std::lock` is
implemented.
* gcc's std::lock locks the first lockable, then try_locks the rest.
* If a try_lock fails, it returns the index of the lockable that
  couldn't be locked. std::lock then unlocks the first lockable,
  rotates the lockables so that the failing one becomes the first,
  and starts over.
My implementation uses try_lock_until on the first lockable instead. If
that fails, the whole algorithm has failed; otherwise it try_locks the
rest, and if one of those fails it rotates like gcc's std::lock and
starts over with a new try_lock_until against the same deadline.
> The obvious issue is that if you try_lock_for one second, and spend
> 0.99s on the first of three locks, you may just run out of time before
> having locked them all. Should try_lock_for be per-lock, or should it
> pre-compute and act like try_lock_until for each of the locks?
The way I've implemented it, there's only one try_lock_until at a time,
then a try_lock on the rest.
> It would also seem much better if you could begin a try_lock_until
> request for all mutexes simultaneously instead of blocking on one
> mutex since that makes it more likely you will "make it in time", but
> that would require the underlying mutexes to all have some
> asynchronous locking API. Perhaps that would be the more important
> first step in this process, not the try_lock_until wrapper.
Interesting idea, but it feels like this could be prone to temporary
deadlocks while the algorithm waits for the timeout. If std::lock used
this approach, the unlock+try_lock step, which is crucial for that
algorithm to avoid deadlocks, would be missing.
Best regards,
Ted Lyngmo
Received on 2025-08-25 15:22:09