std-proposals

Re: Flash Alloc - 3x faster

From: Phil Bouchard <boost_at_[hidden]>
Date: Wed, 28 Jul 2021 09:56:59 -0400
On 7/27/21 11:40 PM, Phil Bouchard via Std-Proposals wrote:
>
> As I was saying in another post, cache_alloc does better in 2 of the 3
> cases but fails the last one.
>
> I'll review all that when I have time and if I find something valuable
> then I'll come back here.
>
Actually I know exactly what I did wrong. I'll fix that tonight.


>
> Thanks for your feedback,
>
> --
>
> *Phil Bouchard*
> Founder & CTO
> C.: (819) 328-4743
>
> Fornux Logo <http://www.fornux.com>
>
>
> On 7/27/21 4:18 PM, DBJ wrote:
>> Phil, please add your allocator here https://godbolt.org/z/drEnYera1
>> and report back, if that is OK to ask of you.
>>
>> Please note the benchmarking infrastructure in use and the allocator
>> interface required so that it can be used as a template argument to
>> std::vector. I might dare to call that code "realistic benchmarking".
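>>
>> For illustration, here is a minimal sketch of the allocator shape
>> that std::vector accepts as a template argument (the names below are
>> placeholders for this sketch, not the code in the benchmark):
>>
>> #include <cstddef>
>> #include <new>
>> #include <vector>
>>
>> // Minimal allocator shape accepted by std::vector as a template
>> // argument; the strategy here is plain ::operator new, i.e. a
>> // stand-in for illustration only.
>> template <typename T>
>> struct minimal_allocator
>> {
>>     using value_type = T;
>>
>>     minimal_allocator() noexcept = default;
>>
>>     template <typename U>
>>     minimal_allocator(minimal_allocator<U> const &) noexcept {}
>>
>>     T * allocate(std::size_t n)
>>     {
>>         return static_cast<T *>(::operator new(n * sizeof(T)));
>>     }
>>
>>     void deallocate(T * p, std::size_t) noexcept
>>     {
>>         ::operator delete(p);
>>     }
>> };
>>
>> template <typename T, typename U>
>> bool operator==(minimal_allocator<T> const &, minimal_allocator<U> const &) noexcept
>> { return true; }
>>
>> template <typename T, typename U>
>> bool operator!=(minimal_allocator<T> const &, minimal_allocator<U> const &) noexcept
>> { return false; }
>>
>> int main()
>> {
>>     std::vector<int, minimal_allocator<int>> v;
>>     for (int i = 0; i < 1000; ++i)
>>         v.push_back(i);
>> }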
>>
>> Kind regards...
>>
>> On Sat, 24 Jul 2021 at 20:44, Phil Bouchard <boost_at_[hidden]> wrote:
>>
>> Yeah, I updated it again (disabled page_t initialization), so in
>> general it's more like 3x faster, which is good if you require low
>> latency (finance, the gaming industry, ...). That's why we all use
>> C++ after all, no?
>>
>>
>> --
>>
>> *Phil Bouchard*
>> Founder & CTO
>> C.: (819) 328-4743
>>
>> Fornux Logo <http://www.fornux.com>
>>
>>
>> On 7/24/21 12:18 PM, Phil Bouchard via Std-Proposals wrote:
>>>
>>> Interestingly, if I increase LOOP_SIZE, the overall time taken is
>>> lower, and thus it is faster. Also, please keep DATASET_SIZE at 1
>>> because I haven't tested it with other sizes.
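>>>
>>> Roughly, those knobs drive a timing loop of the following shape (a
>>> sketch only; the exact harness is the one in the repository, and the
>>> allocator under test is swapped in as the second template argument):
>>>
>>> #include <chrono>
>>> #include <cstdio>
>>> #include <vector>
>>>
>>> #define LOOP_SIZE 1000000   // timed iterations
>>> #define DATASET_SIZE 1      // elements pushed per iteration (keep at 1)
>>>
>>> int main()
>>> {
>>>     auto const start = std::chrono::steady_clock::now();
>>>
>>>     for (int i = 0; i < LOOP_SIZE; ++i)
>>>     {
>>>         std::vector<int> v;   // swap in the allocator under test here
>>>         for (int j = 0; j < DATASET_SIZE; ++j)
>>>             v.push_back(j);
>>>     }
>>>
>>>     auto const stop = std::chrono::steady_clock::now();
>>>     std::printf("%lld ms\n", static_cast<long long>(
>>>         std::chrono::duration_cast<std::chrono::milliseconds>(stop - start).count()));
>>> }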
>>>
>>> I'll follow up later this weekend; meanwhile I've put the code here:
>>>
>>> https://github.com/philippeb8/Flash-Alloc
>>>
>>>
>>> --
>>>
>>> *Phil Bouchard*
>>> Founder & CTO
>>> C.: (819) 328-4743
>>>
>>> Fornux Logo <http://www.fornux.com>
>>>
>>>
>>> On 7/24/21 6:19 AM, DBJ wrote:
>>>> https://godbolt.org/z/T4qc5o8Mb
>>>>
>>>> that turns out to be many times slower than std::allocator<> ...
>>>>
>>>> I must be doing something wrong?
>>>>
>>>> On Sat, 24 Jul 2021 at 09:40, Phil Bouchard via Std-Proposals
>>>> <std-proposals_at_[hidden]> wrote:
>>>>
>>>> And here's a more generic one that is 10x faster for
>>>> straight allocations.
>>>>
>>>> Anyway, my point is that apparently the rebind oddity has
>>>> been removed from the C++20 standard but not from my
>>>> system headers... So perhaps adding a similarly ultra-fast
>>>> allocator such as this one to the standard library would be
>>>> constructive.
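>>>>
>>>> For reference (a sketch, not the attached allocator): since C++11,
>>>> std::allocator_traits can synthesize the rebound type from the
>>>> allocator template itself, so a user allocator needs no rebind
>>>> member of its own, and C++20 dropped the rebind member from
>>>> std::allocator as well:
>>>>
>>>> #include <cstddef>
>>>> #include <memory>
>>>> #include <type_traits>
>>>>
>>>> // A user allocator with no rebind member: std::allocator_traits
>>>> // derives the rebound type from the template itself.
>>>> template <typename T>
>>>> struct no_rebind_alloc
>>>> {
>>>>     using value_type = T;
>>>>
>>>>     T * allocate(std::size_t n)
>>>>     {
>>>>         return std::allocator<T>{}.allocate(n);
>>>>     }
>>>>
>>>>     void deallocate(T * p, std::size_t n) noexcept
>>>>     {
>>>>         std::allocator<T>{}.deallocate(p, n);
>>>>     }
>>>> };
>>>>
>>>> static_assert(std::is_same_v<
>>>>     std::allocator_traits<no_rebind_alloc<int>>::rebind_alloc<long>,
>>>>     no_rebind_alloc<long>>);
>>>>
>>>> int main() {}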
>>>>
>>>>
>>>> Regards,
>>>>
>>>> --
>>>>
>>>> *Phil Bouchard*
>>>> Founder & CTO
>>>> C.: (819) 328-4743
>>>>
>>>> Fornux Logo <http://www.fornux.com>
>>>>
>>>>
>>>> On 7/23/21 10:23 PM, Phil Bouchard via Std-Proposals wrote:
>>>>>
>>>>> Greetings,
>>>>>
>>>>> Given that the default memory allocator is known to be slow, it
>>>>> came to my attention that if we collect more information at
>>>>> compile time regarding not only the type being allocated but
>>>>> also the container type and the usage frequency, then we can
>>>>> achieve much higher performance.
>>>>>
>>>>> In the attached example, if we use a queue then we can speed up
>>>>> the overall allocation time by 7x!
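>>>>>
>>>>> As a rough picture of the idea (a sketch, not the attached code):
>>>>> when every allocation a container makes has one compile-time-known
>>>>> size, it can be served from a simple free list that skips the
>>>>> general-purpose allocator on the hot path:
>>>>>
>>>>> #include <cstddef>
>>>>> #include <list>
>>>>> #include <new>
>>>>> #include <queue>
>>>>>
>>>>> // Sketch: hand out fixed-size blocks from a per-type free list and
>>>>> // fall back to ::operator new when it is empty.  Freed blocks are
>>>>> // kept for reuse rather than returned to the system.
>>>>> template <typename T>
>>>>> class pool_allocator
>>>>> {
>>>>>     union node { node * next; alignas(T) unsigned char storage[sizeof(T)]; };
>>>>>
>>>>>     static inline node * free_list_ = nullptr;
>>>>>
>>>>> public:
>>>>>     using value_type = T;
>>>>>
>>>>>     pool_allocator() noexcept = default;
>>>>>     template <typename U> pool_allocator(pool_allocator<U> const &) noexcept {}
>>>>>
>>>>>     T * allocate(std::size_t n)
>>>>>     {
>>>>>         if (n != 1)
>>>>>             return static_cast<T *>(::operator new(n * sizeof(T)));
>>>>>         if (free_list_)                      // fast path: reuse a freed block
>>>>>         {
>>>>>             node * p = free_list_;
>>>>>             free_list_ = p->next;
>>>>>             return reinterpret_cast<T *>(p);
>>>>>         }
>>>>>         return static_cast<T *>(::operator new(sizeof(node)));
>>>>>     }
>>>>>
>>>>>     void deallocate(T * p, std::size_t n) noexcept
>>>>>     {
>>>>>         if (n != 1) { ::operator delete(p); return; }
>>>>>         node * q = reinterpret_cast<node *>(p);  // keep the block for reuse
>>>>>         q->next = free_list_;
>>>>>         free_list_ = q;
>>>>>     }
>>>>> };
>>>>>
>>>>> template <typename T, typename U>
>>>>> bool operator==(pool_allocator<T> const &, pool_allocator<U> const &) noexcept
>>>>> { return true; }
>>>>> template <typename T, typename U>
>>>>> bool operator!=(pool_allocator<T> const &, pool_allocator<U> const &) noexcept
>>>>> { return false; }
>>>>>
>>>>> int main()
>>>>> {
>>>>>     // A queue over std::list so that every allocation is one node
>>>>>     // size; the attached example may use a different container.
>>>>>     std::queue<int, std::list<int, pool_allocator<int>>> q;
>>>>>     for (int i = 0; i < 100000; ++i) q.push(i);
>>>>>     while (!q.empty()) q.pop();
>>>>> }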
>>>>>
>>>>>
>>>>> Regards,
>>>>>
>>>>> --
>>>>>
>>>>> *Phil Bouchard*
>>>>> Founder & CTO
>>>>> C.: (819) 328-4743
>>>>>
>>>>> Fornux Logo <http://www.fornux.com>
>>>>>
>>>> --
>>>> Std-Proposals mailing list
>>>> Std-Proposals_at_[hidden]
>>>> https://lists.isocpp.org/mailman/listinfo.cgi/std-proposals
>>>>
>>>
>
-- 
*Phil Bouchard*
Founder & CTO
C.: (819) 328-4743
Fornux Logo <http://www.fornux.com>

Received on 2021-07-28 08:57:05