On Fri, Apr 24, 2020 at 11:11 PM JF Bastien <cxx@jfbastien.com> wrote:
On Fri, Apr 24, 2020 at 4:52 PM Nevin Liber via SG12 <sg12@lists.isocpp.org> wrote:
On Fri, Apr 24, 2020 at 6:21 PM Jens Maurer via SG12 <sg12@lists.isocpp.org> wrote:
Since secure_clear takes trivially copyable arguments,
the compiler is free to make arbitrary additional copies
on the stack, in registers, or elsewhere.  Clearing
just one of the instances is not enough to achieve the
stated use-cases of this function.  A security feature
that doesn't reliably deliver should not exist in the
first place.
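
(For illustration, a minimal sketch of the problem being described; the names here are illustrative, not from the paper.  Passing a trivially copyable key by value creates a second instance that a later clear of the original never touches, and the clear itself is a candidate for dead-store elimination:)

    #include <array>
    #include <cstring>

    struct Key { std::array<unsigned char, 32> bytes; };

    void use_key(Key k) { /* ... */ }   // by value: the callee's frame
                                        // now holds a second instance

    void f(Key& key) {
        use_key(key);                      // copy #1 (more may exist in
                                           // registers or spill slots)
        std::memset(&key, 0, sizeof key);  // clears only the original, and
                                           // the compiler may drop this as
                                           // a dead store anyway
    }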

Taking a step back and ignoring performance concerns, if secure_clear only worked on trivially copyable volatile objects, would that be sufficient?  If so, would some kind of volatile_ref<T> class (similar to atomic_ref<T> vs. atomic<T>) work?  Just brainstorming here.  Without some way to indicate that we have memory that we want to eventually securely clear, I don't see a way to solve this.
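
(To make the brainstorm concrete, here is a purely hypothetical sketch of what a volatile_ref<T> might look like; no such type exists in the standard, and the name and interface are invented here:)

    #include <cstddef>
    #include <type_traits>

    template <class T>
    class volatile_ref {
        static_assert(std::is_trivially_copyable_v<T>);
        volatile T* ptr_;
    public:
        explicit volatile_ref(T& obj) : ptr_(&obj) {}
        void clear() {
            // Volatile stores may not be elided, so these writes happen.
            auto p = reinterpret_cast<volatile unsigned char*>(ptr_);
            for (std::size_t i = 0; i != sizeof(T); ++i)
                p[i] = 0;
        }
    };

(Note this only guarantees that the named object's bytes are written; it says nothing about the additional copies Jens describes.)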

An earlier iteration of this paper had "secure_val". Maybe a survey of what types of values memset_s is used for would help answer your question?

Yes.

Also, if memset_s were always available, why is it not sufficient to mitigate the threat?  Why do indeterminate values (instead of, say, zeros or some other bit pattern) need to be written?
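
(For reference, memset_s is the optional C11 Annex K function; it is not part of standard C++, and many C implementations never shipped Annex K.  Its defining property, per C11 K.3.7.4.1, is that a call may not be optimized away:)

    #define __STDC_WANT_LIB_EXT1__ 1
    #include <string.h>   // provides memset_s only if the implementation
                          // supports Annex K

    void wipe(unsigned char (&password)[64]) {
        // Unlike memset, this call must be evaluated strictly according
        // to the abstract machine, so the zero stores are performed.
        memset_s(password, sizeof password, 0, sizeof password);
    }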

My other question:  even if we can somehow guarantee no additional copies inside a C++ program (whatever that means), there are always things outside of our control (demand paging by the OS, debuggers, etc.).  Given that we cannot cover all data leakage scenarios, do we still want this?

I've tried to answer this in my previous email. The discussions in the room also covered this extensively. Do you think the paper needs to explain this better? I'm guessing so, since you ask.

Yes.

More importantly, the threat this is supposed to mitigate needs to be described in much greater detail.  Right now, it looks like the only threat it mitigates is one on a single-core machine with a von Neumann architecture and a not-too-aggressive compiler, and to me that is not a solution worth standardizing.

Now, if we can guarantee that all copies of the data that the program can see are obliterated, across all cores, registers and caches, that could be a solution worth standardizing.  And if not, what can we reasonably guarantee, and is that sufficient to mitigate the threat if the developer takes other reasonable precautions (such as turning off paging so that the memory is never written to the backing store)?
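
(One such precaution, sketched here with POSIX mlock, which is outside the standard's scope; the function below is illustrative only:)

    #include <sys/mman.h>   // POSIX mlock/munlock
    #include <cstddef>

    void with_locked_secret(unsigned char* buf, std::size_t n) {
        if (mlock(buf, n) != 0)
            return;  // could not pin the pages; handle as an error
        // ... generate and use the secret in buf ...
        // ... clear buf by whatever reliable mechanism we settle on ...
        munlock(buf, n);  // pages may be swapped out again after this
    }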

While I think volatile is a good start, I don't think it is sufficient, because we really want this effect to be seen across all cores and caches.  My vision would be that if we were to stop the world after this function returns, we could not see the old contents anywhere (ignoring cases where the user copied part or all of the data before clearing it).

I believe the fear that people will use secure_clear naively is well justified, and if we are going to provide it, we need to make it as robust as we can within our constraints (we cannot dictate what hardware and OS vendors will do), and point out exactly what we aren't covering.
--
 Nevin ":-)" Liber  <mailto:nevin@cplusplusguy.com>  +1-847-691-1404