Date: Mon, 6 Apr 2026 17:17:33 -0700
> Why are you using lower_bound then insert, why not just insert the node
> handle directly?
At this point you are optimizing a very synthetic benchmark. As much as I
appreciate all the effort and feedback, the integers in my sample code are
merely stand-ins for much more complex objects and much more complex use
cases.
Programmers don't always know if a node will be constructed or not, and may
want to do different things depending on whether it has been. The example I
posted doesn't actually do that, but this pathway was the one I was trying
to exercise.
> Why are you using vector::reserve(n) followed by vector::resize(n), why
> not just resize?
Habit. I am used to reserving arrays before I use them. Sorry.
> You can replace your entire StackArena and Map and pool of Map::node_type
> with this:
>
> alignas(std::max_align_t) char buf_[kArenaCapacity];
> std::pmr::monotonic_buffer_resource bufres_{buf_, sizeof(buf_),
>     std::pmr::null_memory_resource()};
> std::pmr::unsynchronized_pool_resource res_{&bufres_};
> std::flat_map<int, int, std::less<int>, std::pmr::vector<int>,
>     std::pmr::vector<int>> map_{&res_};
I appreciate the usage advice. That certainly looks a lot saner than what I
came up with.
In practice I suspect I would still be using a custom allocator, because I
would want telemetry, and because I would be trying to merge allocations
into larger blocks to avoid hitting per-allocator limits. Hitting the
limits of individual fine-grained allocators tends to make your program
crash constantly.
In all honesty what I am getting from this conversation is that I should
delete the red-black tree code from the research project I was working on
because it sucks for real-time use.
And humorously, as it happens, Ubuntu 24.04.4 LTS doesn't even have
std::flat_map yet.
> This will use a stack buffer (not malloc) and maintain a free list of
> deallocated nodes in an object pool.
Again, this is a synthetic example. A lot of embedded projects run with an
8-32 KB stack, so dumping all of your allocations on the stack will not
work.
Received on 2026-04-07 00:17:47
