Date: Mon, 30 Mar 2026 13:07:08 +0200
On 29/03/2026 12:42, Jonathan Wakely via Std-Proposals wrote:
>
>
> The appropriate uses of assume are different from the appropriate uses
> of assertions. They are not two sides of the same coin.
>
> See https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2020/p2064r0.pdf
> <https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2020/p2064r0.pdf>
>
> Assertions should be used liberally and at API boundaries. Assumptions
> should be used carefully based on profiling and measurement of where
> they improve codegen, not just spammed in place of all assertions.
>
>
I don't really agree with the distinctions made in that paper. But
perhaps that is just my background and the kind of programming I do.
The standards must be general, and must err on the side of conservative
choices. However, I think it is important to understand there are
different viewpoints here, and different needs.
To me, it is a natural thing that a function has a pre-condition and a
post-condition. The caller guarantees that the pre-condition is
fulfilled. The function can assume that the pre-condition is fulfilled,
and uses that to ensure the post-condition is fulfilled. The caller can
then assume that the post-condition is fulfilled. That is the whole
point of the process - it is what gives meaning to programming. A
"function" is something that, given suitable inputs, produces suitable
outputs. (To handle side-effects, we can pretend that "the world before
the call" is an input and "the world after the call" is an output. That
is not very practical for writing pre and post conditions in C++, but
it's fine for talking about the theory.)
I am hugely in favour of being able to have the language (compiler,
run-time support, etc.) check for correctness at compile time, as much
as practically possible. For run-time checks, a balance must always be
found - it is good that it is easy for the programmer to enable checks,
but also important that checks can be disabled easily. For a lot of C++
code, efficiency is not particularly important - but for some code it is
vital. So having compiler options (standardised where possible and
practical) to modify these balances is a good idea. Even the best
programmers make mistakes, and tools that help find these mistakes are
always helpful.
However, I feel that a lot of the discussion about contracts puts
things in the wrong place, and misunderstands what such function
specifications mean in different places. In particular, it is about
responsibilities.
The pre-condition of a function specifies the caller's responsibility.
The P2064 paper says the function cannot assume the pre-condition is
correct, because it is code written by someone else that determines if
it holds. It is, IMHO, precisely because the code is written by someone
else that the function author should be able to assume the pre-condition
holds. It is not their job to get the function inputs correct. It is
not the function author's job to hand-hold the caller, or figure out if
the caller has done a good job or not. It is not the job of the hammer
to determine if the user is likely to hit their thumb rather than the
nail - or if the user is trying to hammer in a screw. The function
author should be free to assume the pre-condition holds - likewise, the
compiler optimiser can assume it holds true.
On the caller side, it is the caller author's job to make sure the
pre-condition is fulfilled. If it needs to be checked at run-time (and
such checks can be vital in development and debugging, and often worth
the cost even in final releases) then it should be done on the caller
side. After all, if the pre-condition is not satisfied, it is the
caller code that is wrong - not the function implementation.
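To sketch what I mean (names and functions here are purely illustrative,
not from any real API): the callee states its pre-condition and assumes
it; the caller, who is the one that can actually know whether it holds,
does any run-time checking on its own side, e.g. with a plain assert
that vanishes under -DNDEBUG:

```cpp
#include <cassert>
#include <cstddef>

// Hypothetical callee: assumes 0 <= i < n, and never re-checks it.
int element(const int* data, std::size_t n, std::size_t i) {
    // pre: i < n  (the caller's responsibility)
    return data[i];
}

// Hypothetical caller: it alone knows where n comes from, so the
// check belongs here - and is compiled out in release builds.
int last_element(const int* data, std::size_t n) {
    assert(n > 0);                  // caller-side check of its own obligation
    return element(data, n, n - 1); // pre-condition provably satisfied here
}
```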
The inverse applies to the post-condition. The caller code can assume
the post-condition is true (unless the caller has messed up and not
satisfied the pre-condition). The function implementation is
responsible for satisfying the post-condition, and therefore any checks
should be done at that point.
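Correspondingly, if a run-time check of the post-condition is wanted, it
belongs inside the implementation, where the obligation lies. A minimal
sketch (using a plain assert as a stand-in for a checked "post", and a
tolerance I have picked arbitrarily for illustration):

```cpp
#include <cassert>
#include <cmath>

// Sketch: the implementation verifies its own post-condition before
// returning; the check disappears under -DNDEBUG.
double square_root(double x) {
    // pre: x >= 0 -- assumed here, checked (if at all) by the caller
    double y = std::sqrt(x);
    assert(std::abs(y * y - x) < 0.001); // post-condition, checked by the callee
    return y;
}
```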
Getting this wrong is a waste of everyone's time. It is a waste of the
developer's time, whether they are implementing the caller or the
callee. It is a waste of run-time on both sides. It can ruin the
analysability of code. Suppose you have this function:
double square_root(double x)
    pre (x >= 0)
    post (y : std::abs(y * y - x) < 0.001);
When treated correctly, this is a pure function. There are no
side-effects. It is a complete function - it gives a correct result for
any valid input. There are no exceptions. Implementations can be
efficient, calls can be optimised (such as moving it around other code,
eliminating duplicates, compile-time pre-calculation, etc.).
Correctness analysis by tools or humans is straightforward, both for the
function itself and for caller code. There is no undefined behaviour in
the function - a call to "square_root(-1)" is undefined behaviour in the
caller.
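To make the caller-side view concrete (a sketch, with std::sqrt standing
in for the contracted implementation above): a correct caller can treat
the call as a pure value - here the argument is non-negative by
construction, so the pre-condition is provably met and the optimiser is
free to compute the call once, outside the loop:

```cpp
#include <cmath>

// Stand-in for the contracted square_root from the example above.
double square_root(double x) { return std::sqrt(x); } // pre: x >= 0

// The argument s * s is non-negative by construction, so the
// pre-condition holds, the call is pure, and it can be hoisted
// out of the loop (by the programmer or the optimiser).
double scale_all(const double* v, int n, double s) {
    double k = square_root(s * s); // pre-condition satisfied for any s
    double sum = 0.0;
    for (int i = 0; i < n; ++i)
        sum += v[i] * k;
    return sum;
}
```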
But if the implementation cannot assume the pre-condition is true, this
is all gone. At best, you now have UB in the function, because you have
declared that it is possible to call the function with a negative input.
At worst, the function implementation now comes with a check leading
to a logging message, program termination, a thrown exception, or some
other such effect. Now the function implementer has to think about how
to handle incompetent callers. Callers have to think about how the
function interacts with other aspects of the code - the function may
crash the program, or interact badly with threading.
If the function implementer cannot trust code to call it correctly, and
function callers cannot trust function implementers to code correctly,
then the whole concept of programming falls apart. Every function
becomes a paranoid code snippet that must double-check and triple-check
everything, including the function calls used to check the function calls.
There are, of course, points in code where you do not trust others. You
don't trust data coming in from outside. You don't trust caller inputs
at API boundaries, at least for "major" functions or where the
consequences of errors can be significant. But if you can't trust
internal code and function calls, everything falls apart.
And if "pre" and "post", along with contract assertions, cannot be
assumed to be satisfied (without optional checks to aid debugging), then
they are IMHO pointless in almost all code. I would prefer to be able
to add these freely in all sorts of code, even when I control both
caller and callee - specifications written in C++ are preferable to
specifications written as comments. I had hoped that C++26 contracts
would let me write clearer code, have better static checking, have
optional run-time checking on chosen functions while debugging, and lead
to more efficient generated code. I had hoped it would lead to
programmers being clearer about their responsibilities - focusing more
on getting their own code right, rather than how they should deal with
other people's mistakes.
C++ is, of course, a language designed for the needs of a huge number of
programmers with a huge variety of needs and wants - any single
developer is going to have things they like and things they dislike
about it. But I do wonder if contracts, assertions and assumptions have
hit the best balance here.
Received on 2026-03-30 11:07:18
