Re: [std-proposals] Fwd: set_new_handler extension

From: Phil Bouchard <boost_at_[hidden]>
Date: Sun, 9 Apr 2023 14:57:38 -0400
On 4/9/23 10:34, Thiago Macieira wrote:
> On Saturday, 8 April 2023 22:01:08 -03 Phil Bouchard wrote:
>> It is license-based so I can easily activate a demo license for a few
>> months in Linux, MacOS or Windows. Right now it's based on Clang 12 and
>> I'll need to update it but for a demo it's good enough.
>
> That's not the point. Since I do C++ in my day job, this would count as work.
> I can't accept any licensing terms on behalf of the company other than Open
> Source ones. A free-as-in-beer but non-standard licence wouldn't work because
> the corporate bureaucracy would need to get involved.

Well, a demo license is as free as anything Open Source.

I am already in touch with Intel executives, but there is a huge gap
between software engineers/architects and executives. So if I can get
your approval, then I can escalate it myself.

I respect Intel a lot, as they are not only hardware-oriented but
pro-software as well. When I presented the Python -> C++
source-to-source compiler, I noticed afterwards that they already have
their own Python optimizer (though not a compiler).

What people don't understand is that if the code is not fully compiled,
you cannot prevent it from being reverse-engineered, cracked, and so
on. So the code can easily be pirated.

>>>> It is 3.6 times faster under Linux so I expect an even greater standard
>>>> deviation under Windows.
>>>
>>> Faster than what? What workloads? Under what conditions? Compared to what
>>> other solutions?
>>
>> Faster than the standard malloc() for consecutive allocations
>> specialized for thread-local FIFO containers.
>
> How about other replacement malloc()s? How does it compare to an obstack?

I'll investigate and let you know this week.

> And what's the trade-off? Look for it, because it must be there. For example,
> is it thread-safe? How well does it scale over to hundreds of threads? Our
> data shows that the typical cloud VM size is under 48 vCPUs, but your work may
> not be "typical cloud" and may very well be one of those that use all 240+
> logical processors of modern CPUs. In fact, one of my To-Do items for the next
> quarter is to deal with scaling an application to 960 logical processors.

It is thread-safe because it is thread-local. So it depends on the use
case, but if you only need thread-local FIFO containers then you can
easily make use of it.

It could be used for non-thread-local algorithms as well. I studied
parallel programming back in school, so I can refresh my memory on the
subject if you want. I'm surprised that so few of us know the real
parallel programming algorithms using hypercubes and the like.

>>> That isn't to say he's wrong; he does have some valid points. But he says
>>> isn't gospel. Heck, what Bjarne says isn't gospel either.
>>
>> That is why I hope to contribute to help improve C++ from my own
>> experience. I seriously believe we need to turn the page on memory bugs
>> in 2023 and scale up the framework and design of C++ so we can write
>> more complex code at a higher level more easily.
>
> No dispute there. It's the *how* to go about it that is the issue. If it were
> easy, it would have been done. Other, memory-safer languages have some other
> trade-offs that have kept C++ and even C with their lower-level memory
> management relevant.

If I can establish a business partnership with a corporation, then we
can move forward. My initial goal was to Open Source it back in 2005,
but the source-to-source compiler was not trivial to implement, so I
changed my mind and patented the Open Source part at the same time.

I did talk to the Carbon language people at the same time, but honestly
I think it will end up as just another programming language if they
insist that everything must be free and dismiss any commercial
integration:
https://github.com/carbon-language/carbon-lang/issues/2693

I don't understand why they have to change the syntax of the language
yet again. Once more, they are fixing problems that add no value.

>> Python offers that but its core design is hopeless and is much slower
>> than C++ by far. Nonetheless the AI industry chose Python because of that.
>
> A lot of Python AI code is a Python wrapper around C++, C and even assembly
> core. NumPy's core vector math is written in direct assembly.

Just the Python front-end, and the glue between Python and the native
libraries, slow everything down as well. And I won't even mention using
Python for parallel programming, because I have already tried:

- Take a few minutes to process:
https://github.com/philippeb8/grc/blob/main/main.py

- Versus the following C++ code that takes a few seconds to process:
https://github.com/philippeb8/grc/blob/main/src/main.cpp


-- 
*Phil Bouchard* <https://www.linkedin.com/in/phil-bouchard-5723a910/>
CTO
T: (819) 328-4743
E: phil_at_[hidden]| www.fornux.com <http://www.fornux.com>
8 rue de la Baie| Gatineau (Qc), J8T 3H3 Canada
This communication (and/or the attachments) is intended for named
recipients only and may contain privileged or confidential information
which is not to be disclosed. If you received this communication by
mistake please destroy all copies.

Received on 2023-04-09 18:57:40