Re: [SG12] p1315 secure_clear

From: Jens Maurer <Jens.Maurer_at_[hidden]>
Date: Sat, 25 Apr 2020 09:40:42 +0200
On 25/04/2020 06.08, JF Bastien wrote:
> On Fri, Apr 24, 2020 at 4:21 PM Jens Maurer <Jens.Maurer_at_[hidden] <mailto:Jens.Maurer_at_[hidden]>> wrote:
> On 25/04/2020 00.12, JF Bastien via SG12 wrote:
> > Hello SG12/UB folks,
> >
> > I'd like to start a discussion about p1315 <http://wg21.link/p1315> secure_clear. Please see the paper's history on github <https://github.com/cplusplus/papers/issues/67>.
> >
> > Here's what I'd want SG12's help on: assume that there's a need for some sort of "secure clearing of memory", how do we fit this into the abstract machine? What behavior do we specify, what do we leave open, while meeting the stated security goals?
> Over the past years, there have been occasional proposals
> that want to escape the abstract machine, for various reasons:
> N4534: Data-Invariant Functions (revision 3)
> http://open-std.org/JTC1/SC22/WG21/docs/papers/2015/n4534.html
> P0928R1 Mitigating Spectre v1 Attacks in C++
> http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2020/p0928r1.pdf
> P1315 secure_clear is another such proposal.
> All the proposals above are silent on the vital question of
> how to change the as-if rule of the abstract machine such
> that the desired semantics / restrictions are realized,
> optimizations can continue to be applied widely,
> conformance of a particular implementation can be decided,
> and we don't give up the power of the generality of the
> as-if rule. (The standard is silent on particular
> optimization techniques, which is good.)
> I'm strongly opposed to adding such facilities without
> changing the abstract machine description in the core
> language section. Some hand-waving in the library
> section is not enough.
> Until we have a viable specification approach for such
> matters, my view is such proposals are dead-on-arrival.
> (As a side note, we've been occasionally struggling with
> the semantics of volatile in the context of the abstract
> machine, too.)
> I agree with the above. This is the author's first proposal, and I wouldn't expect most seasoned WG21 members to be able to do what you describe. I'm hoping that folks would be able to help Miguel out here, not necessarily by writing the wording but by offering guidance. The papers you provide are a good start on what's been missing in the past. I think Miguel's paper should mention them. It would be useful to go back to their discussions to understand what was suggested there. Are there concrete ideas of what would work?

In order to solve these issues satisfactorily, we probably need a
research effort approximately on the same level as improving our
memory model for multi-threaded operations. This was a multi-year
effort involving top-notch experts in the trade. (Thank you very
much for your help!) This is not something that happens on the
level of or during processing by EWG, but way before that, for
example in a Study Group. (I understand that's why this comes
before SG12 now.)

Note that I'm the author of "N4534: Data-Invariant Functions
(revision 3)", and if I had viable ideas for addressing this,
I would have at least mentioned them in the paper.

The status quo is that the abstract machine specification is clear that
secure_clear can be eliminated by a compiler everywhere, because the
values it writes are indeterminate values, thus any read yields
undefined behavior. Thus, you can't observe whether secure_clear
was actually executed or not. If you can't observe something, the
compiler is free to nix it.

I understand this is not constructive input, but such is life.
Vaguely similar situation: I can argue and convince people that
a given mathematical proof is flawed without offering a working
proof for the theorem on the table.

> Hubert suggests that relying on volatile would be acceptable handwaving, do you think so as well?

If *all* accesses to the secret data (including accesses before
the secure_clear invocation) are volatile, I believe volatile
would be good enough.

I suspect, however, that users want all the optimizations they
can get for the operations preceding the secure_clear call,
and that takes volatile mostly off the table.

> > For example:
> >
> > * If we clear "memory" then we're not clearing registers, stack copies, caches, etc. What, if anything, should we say?
> Since secure_clear takes trivially copyable arguments,
> the compiler is free to make arbitrary additional copies
> on the stack, in registers, or elsewhere. Clearing
> just one of the instances is not enough to achieve the
> stated use-cases of this function. A security feature
> that doesn't reliably deliver should not exist in the
> first place.
> This is strongly debated. When I talk to compiler / language folks I hear exactly what you say. When I speak to security folks I hear something completely different: they agree with your point, yet they say that clearing the memory location is the primary source of leaks, and they have (or want) other means to address the other secondary leaks. An outcome of this severe disagreement is that security folks often deride C++ for being insecure and not wanting to fix "obvious" issues. I'd like to run this paper to its natural conclusion, whether that's some inclusion in C++, or its death with strong justification as to why it died (so we can stop having this particular debate).

I suspect there are different engineering views at work here.

From a standardization perspective, ISO requires us to clearly
state the criteria to judge whether an implementation conforms
to the standard. And that's good: If you can't tell whether a
screw conforms to ISO 261 or not, ISO 261 serves no purpose.

I'm arguing that, under the status quo of the abstract machine,
an implementation replacing secure_clear with memset is conforming.
I believe the paper strives to make such an implementation
non-conforming. So, from a standardization perspective, what
would be the words to cause such replacement to be non-conforming?

The other view is that of the practitioner who says "but look, the
following code does what I want on my implementation" --- and
he might actually be right. There are lots of things that
work in practice, but have unspecified or undefined behavior
in the standard. And that's totally fine; people doing this
(ideally) buy into the non-portability knowingly.
(Non-portability applies to both "different compilers"
as well as "next version of your compiler". Example: Type-based
aliasing violations were fairly widespread in open-source
software until gcc introduced type-based alias optimizations,
at which point the software simply broke.)

Bridging the gap between these two views (both in mindset and
in wording) is exactly what makes writing a proposal hard.

Obviously, when in a WG21 context, I have the standardization
viewpoint. I'm happy to (and do) take the other viewpoint
when writing software in my daytime life.

Coming back to your original question: An unsatisfactory,
but nonetheless valid, response would be "we understand the
need, but we weren't smart enough to figure out a way
to specify what you wanted in the context of C++".
This also was the answer to multi-threading in C++ for years:
It sort-of worked on lots of individual platforms, but it
took substantial effort to actually specify it in the standard.

> I don't want to argue this, I'm trying to state facts on various perceptions, and the effects that come out of this. Maybe the paper can discuss this dichotomy in more details to save everyone some time?

Certainly. It's a bit sad to see the paper not mention previous
discussions that happened in various forums.

> > * How do we talk about calling secure_clear right before deallocation functions in such a way that memory is still cleared?
> There is a need to prevent certain compiler optimizations
> from happening, preferably within an abstract model instead
> of naming them individually.
> That's an interesting approach. Miguel, would you be able to do this?
> > * The current paper doesn't say what value is stored (unlike memset_s). What's the best way to do this?
> I don't think this matters, since reading the cleared memory
> afterwards would read indeterminate values and thus be
> undefined behavior.
> > How should we talk about the feature so it best fits in C++? What should we change about the abstract machine to make it happen?
> Ideas welcome.
> Hey that's what *my* email was about! :-p

First of all, we need actual wording in the paper. The current not-quite
wording under "proposal" is not specific enough for poking holes.

Then, maybe something like this would focus the attention a bit:

 - Add a bullet in [intro.abstract] p6 to say secure_clear is
"strictly according to the rules of the abstract machine".
(This raises secure_clear to the level of a system call or similar,
removing the "can be replaced with memset" argument.)

 - Add a LWG-level specification somewhere in [utilities] or so,
with copious wording-level notes why "secure_clear" is not secure
(see below).

The remaining issues then are at least the following:

 - The compiler can move secret writes to after the secure_clear:

  secure_data = x_secret;

  // The compiler can add arbitrary writes to secure_data here, including
  // using values containing x_secret.

  // destructor on secure_data runs here

 - The compiler can store secret data on the side:

  f(secure_data); // might make a copy into the "parameter area" (which I just invented)
  secure_clear(secure_data); // leaves the parameter area untouched


Received on 2020-04-25 02:43:51