Date: Sun, 21 Jun 2020 08:32:36 -0700
On Sun, Jun 21, 2020 at 7:16 AM JeanHeyd Meneide via Boost
<boost_at_[hidden]> wrote:
> ...neither have allocators built in so I
> can't really customize how this works without hijacking global new and
> delete, but Zach has made clear his distaste for the allocator world
> ...
> - Allocators are shit!
> ...
> "who died and made you King of my string...memory allocation?
> ...
The inability to control how and where memory is allocated is a
significant obstacle to using ANY container in a concurrent network
server. In every case, the first step of improving the performance of
such a program is to reduce the number of calls to allocate memory.
Often, simply reducing memory allocations is sufficient to give the
program the desired performance.
A typical asynchronous network program follows a predictable cycle of
operations in a thread:
1. Data is received from the remote peer
2. Memory is allocated to perform the operation
3. The operation is performed
4. Allocated memory is freed
5. Control of the thread is returned to the I/O subsystem
It is in steps 2 and 4 that a custom allocator is used to optimize
the I/O cycle. The techniques are simple (a short sketch follows the list):
1. Reuse a previously allocated block of memory
2. Utilize the stack for storage (preferred when the storage
requirements for the operation are bounded)
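
For example, here is a minimal sketch (my own, not code from Boost.Text)
of both techniques using the standard polymorphic memory resources: a
fixed stack buffer backs a std::pmr::monotonic_buffer_resource, and
release() recycles the same block on every cycle instead of touching the
global heap:

    #include <array>
    #include <cstddef>
    #include <memory_resource>
    #include <string>

    int main()
    {
        // Technique 2: bounded, stack-based storage for the operation.
        std::array<std::byte, 4096> buf;

        // Allocations come out of buf; the null upstream means we find
        // out (via an exception) if the bound is ever exceeded, rather
        // than silently falling back to the heap.
        std::pmr::monotonic_buffer_resource mr(
            buf.data(), buf.size(), std::pmr::null_memory_resource());

        for (int cycle = 0; cycle < 3; ++cycle) // stand-in for the I/O loop
        {
            {
                // Per-operation storage drawn from the stack buffer.
                std::pmr::string data("bytes received from the peer", &mr);
                // ... step 3: perform the operation on `data` ...
            }   // step 4: "freeing" is a no-op for a monotonic resource

            // Technique 1: reset the resource so the same block is
            // reused on the next cycle, with no calls to operator new.
            mr.release();
        }
    }

With containers that do not expose their allocator, there is no way to
route their allocations through a resource like this.
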
Without the ability to control the allocator, these optimizations are
not possible with the containers in Boost.Text.
I have not looked deeply enough into Boost.Text, nor am I
knowledgeable enough about Unicode to write a competent review.
However, were I to write such a review I would REJECT the library for
the sole reason that it does not support allocators and thus cannot be
used optimally in a network program.
If the author decides to add allocator support, I suggest that rather
than using an Allocator template parameter, the container instead accept
a `boost::container::pmr::memory_resource&` as a constructor parameter
(or `std::pmr::memory_resource&` if available). This allows the container
to be implemented as an ordinary class instead of a class template.
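
As a rough illustration (the class name and members below are
hypothetical, not Boost.Text's actual interface), the
constructor-parameter approach could look like this:

    #include <memory_resource>
    #include <string>

    // Hypothetical "text" class: an ordinary class, not a class template.
    class text
    {
    public:
        // Callers choose the memory resource at construction time; the
        // resource must outlive the text object.
        explicit text(std::pmr::memory_resource& mr =
                          *std::pmr::get_default_resource())
            : str_(&mr)
        {
        }

        void append(char const* s) { str_ += s; }
        char const* data() const noexcept { return str_.data(); }

    private:
        std::pmr::string str_; // all allocations go through the chosen resource
    };

    int main()
    {
        std::pmr::monotonic_buffer_resource pool;
        text t(pool);   // allocations routed through `pool`
        text u;         // falls back to the default (new/delete) resource
        t.append("hello, ");
        t.append("world");
    }

The cost is a virtual call per allocation, but the allocator stays out
of the type, which is what you want for a vocabulary type like a string.
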
Thanks