std-proposals

Re: Cache Alloc - 1.3x to 13.4x faster

From: Phil Bouchard <boost_at_[hidden]>
Date: Fri, 30 Jul 2021 01:09:05 -0400
On 7/27/21 11:25 PM, Phil Bouchard via Std-Proposals wrote:
>
>
> On 7/27/21 2:46 PM, Jason McKesson wrote:
>
>> There is nothing especially "compile-time" about your allocator. And I
>> don't see why I would use it when
>> `std::pmr::unsynchronized_pool_resource` exists. And if the virtual
>> overhead of the type really bothers you, you can easily wrap it in a
>> proper allocator type that hard-codes the resource type, so that
>> devirtualization will take place.
>>
>> Also, shouldn't the goal be to improve performance? Beating the
>> generic memory allocator could be a means to achieve that, in certain
>> cases, but wouldn't it make more sense to identify specific poorly
>> performing cases and target them? Your proposal is a solution in
>> search of a problem.
>
> So upon further investigation, comparing it with
> boost::fast_pool_allocator, cache_alloc:
>
> - wins the 1st race by far (straight allocations);
>
> - equals the 2nd one (allocs + deallocs);
>
> - fails the 3rd one because of the destructor.
>
> https://github.com/philippeb8/cache_alloc/blob/main/README.md
>
>
> I'm not sure yet what's wrong with the destructor, but if I find
> anything valuable down the road I'll let you know.
>
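
To clarify what the three races measure, here is a rough, hypothetical
illustration of the categories using std::list with an arbitrary
allocator template A; the actual benchmark is the one in the README
linked below, and run_races, the container choice and the loop sizes
here are only placeholders:

#include <chrono>
#include <cstddef>
#include <iostream>
#include <list>
#include <memory>

// Illustration only: approximate the three races with std::list and an
// allocator template A: (1) straight allocations, (2) allocations
// followed by deallocations, (3) teardown of a populated container.
template <template <class> class A>
void run_races(std::size_t n)
{
    using clock = std::chrono::steady_clock;

    auto t0 = clock::now();

    std::list<int, A<int>> grow;            // race 1: straight allocations
    for (std::size_t i = 0; i < n; ++i)
        grow.push_back(int(i));

    auto t1 = clock::now();

    std::list<int, A<int>> churn;           // race 2: allocs + deallocs
    for (std::size_t i = 0; i < n; ++i)
        churn.push_back(int(i));
    while (!churn.empty())
        churn.pop_back();

    auto t2 = clock::now();

    grow.clear();                            // race 3: teardown (destructor path)

    auto t3 = clock::now();

    std::cout << "alloc:    " << std::chrono::duration<double>(t1 - t0).count() << " s\n"
              << "a + d:    " << std::chrono::duration<double>(t2 - t1).count() << " s\n"
              << "teardown: " << std::chrono::duration<double>(t3 - t2).count() << " s\n";
}

int main()
{
    run_races<std::allocator>(1'000'000);
}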

I've got a correct implementation now, as I'm using my own list, and I
removed the unrealistic >= 10 MB buffers. The complexity remains
constant, but the allocation routines are more CPU-cycle intensive, so
there is no longer a big difference with boost::fast_pool_allocator as
the results have shifted:

https://github.com/philippeb8/cache_alloc/blob/main/README.md

So anyway, if I can get anything better, I'll let you know.
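
Regarding wrapping std::pmr::unsynchronized_pool_resource in a
non-polymorphic allocator type: below is a minimal sketch of what I
understand that suggestion to mean. pool_allocator is just a
placeholder name (it is not part of cache_alloc or the standard), and
whether the compiler actually devirtualizes the do_allocate call still
depends on what it can prove about the resource's dynamic type.

#include <cstddef>
#include <list>
#include <memory_resource>

// Sketch: an allocator that hard-codes the resource type instead of
// going through std::pmr::polymorphic_allocator.
template <class T>
struct pool_allocator
{
    using value_type = T;

    explicit pool_allocator(std::pmr::unsynchronized_pool_resource * r) noexcept
        : resource(r) {}

    template <class U>
    pool_allocator(pool_allocator<U> const & other) noexcept
        : resource(other.resource) {}

    T * allocate(std::size_t n)
    {
        return static_cast<T *>(resource->allocate(n * sizeof(T), alignof(T)));
    }

    void deallocate(T * p, std::size_t n) noexcept
    {
        resource->deallocate(p, n * sizeof(T), alignof(T));
    }

    std::pmr::unsynchronized_pool_resource * resource;
};

template <class T, class U>
bool operator==(pool_allocator<T> const & a, pool_allocator<U> const & b) noexcept
{
    return a.resource == b.resource;
}

template <class T, class U>
bool operator!=(pool_allocator<T> const & a, pool_allocator<U> const & b) noexcept
{
    return !(a == b);
}

int main()
{
    std::pmr::unsynchronized_pool_resource pool;
    std::list<int, pool_allocator<int>> l{pool_allocator<int>(&pool)};

    for (int i = 0; i < 1000; ++i)
        l.push_back(i);
}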


Regards,

-- 
*Phil Bouchard*
Founder & CTO
C.: (819) 328-4743
Fornux <http://www.fornux.com>

Received on 2021-07-30 00:09:10