Re: Cache Alloc - 1.3x to 13.4x faster

From: Jason McKesson <jmckesson_at_[hidden]>
Date: Tue, 27 Jul 2021 11:53:33 -0400
On Tue, Jul 27, 2021 at 11:21 AM Phil Bouchard via Std-Proposals
<std-proposals_at_[hidden]> wrote:
>
> So did you guys have a chance to try it out? Any questions?

My main question is this. If I care about performance, and I'm in some
hot-loop code where I need a stack/queue, and that stack/queue needs
to be so large that a stack buffer or a fixed-size globally allocated
buffer won't do (a case with so many qualifiers that it is a rarefied
circumstance)... why would I *ever* use `std::list` for this?
`std::list` is almost always the *wrong* type if performance matters
to you.
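
For context, the types I would reach for by default in that situation
are the standard adaptors over contiguous (or block-contiguous)
storage, not a node-based list. A quick sketch, nothing
proposal-specific, just standard containers (the function name is only
for illustration):

    #include <queue>
    #include <stack>
    #include <vector>

    void hot_loop_sketch() {
        // Stack adaptor over contiguous storage: amortized O(1)
        // push/pop, elements packed together, no per-element allocation.
        std::stack<int, std::vector<int>> work;
        work.push(42);
        work.pop();

        // Queue adaptor over std::deque (the default container):
        // fixed-size blocks, far fewer allocations than one node
        // per element.
        std::queue<int> pending;
        pending.push(7);
        pending.pop();
    }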

Consider this case. Each "page" allocation for a simple
`std::list<int>` will be *much* larger than it needs to be. That
substantially damages the cache locality of accessing the elements.
This is more important for a queue than a stack, but it still matters
when you're actually *doing something* in the hot loop.
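
To put rough numbers on that: the struct below is my own illustration
of the node layout a typical 64-bit implementation uses, not the actual
node type of any particular standard library.

    #include <cstdio>

    // A doubly-linked list node for int: two link pointers plus the
    // payload, padded to pointer alignment.
    struct HypotheticalListNode {
        HypotheticalListNode* prev;
        HypotheticalListNode* next;
        int value;
    };

    int main() {
        std::printf("payload: %zu bytes\n", sizeof(int));                   // 4 on common platforms
        std::printf("node:    %zu bytes\n", sizeof(HypotheticalListNode));  // ~24 on a 64-bit ABI
    }

That's several bytes of link bookkeeping for every byte of payload,
before you count heap metadata or the fact that consecutive elements
need not be anywhere near each other in memory.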

`std::list` is good for maybe one or two use cases, all of which are
primarily about insertion/removal from the *middle*, not from the
ends.
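
For completeness, this is the sort of thing I mean; `splice` and
`erase` are standard `std::list` API (the function names here are just
for illustration):

    #include <list>

    // O(1) removal at an arbitrary position; every other iterator
    // stays valid, which vector/deque cannot promise.
    void remove_middle(std::list<int>& l, std::list<int>::iterator where) {
        l.erase(where);
    }

    // Relinks nodes from src into dst at pos: no copies, no allocations.
    void move_all(std::list<int>& dst, std::list<int>::iterator pos,
                  std::list<int>& src) {
        dst.splice(pos, src);
    }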

A dedicated stack or queue object that knows what it's doing will beat
it every single time.
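
To make "knows what it's doing" concrete, here is a minimal sketch of a
dedicated queue over one contiguous, geometrically grown buffer. The
class name and growth policy are my own assumptions, not anything from
the proposal, and it assumes T is default-constructible and movable,
which is fine for a sketch:

    #include <cstddef>
    #include <utility>
    #include <vector>

    template <typename T>
    class RingQueue {
        std::vector<T> buf_;    // one contiguous allocation
        std::size_t head_ = 0;  // index of the oldest element
        std::size_t count_ = 0; // number of live elements

        void grow() {
            std::vector<T> bigger(buf_.empty() ? 16 : buf_.size() * 2);
            for (std::size_t i = 0; i < count_; ++i)
                bigger[i] = std::move(buf_[(head_ + i) % buf_.size()]);
            buf_ = std::move(bigger);
            head_ = 0;
        }

    public:
        void push(T value) {
            if (count_ == buf_.size()) grow();  // amortized, not per element
            buf_[(head_ + count_) % buf_.size()] = std::move(value);
            ++count_;
        }

        bool pop(T& out) {
            if (count_ == 0) return false;
            out = std::move(buf_[head_]);
            head_ = (head_ + 1) % buf_.size();
            --count_;
            return true;
        }

        std::size_t size() const { return count_; }
    };

Every element lives next to its neighbors and the only allocations are
the occasional regrow; this is the sort of baseline a list-plus-allocator
combination has to beat.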

Why is an allocator-based solution to this problem (which has only
nebulously been specified, BTW) a good idea? That's the question you
haven't answered as of yet.
