Date: Sat, 10 Aug 2024 09:32:16 -0400
On 8/10/24 06:05, Sebastian Wittmeier via Std-Proposals wrote:
>
> Understood so far and at least partly agree.
>
> Why not use a library feature providing those functions?
>
> What part needs to be extended in C++ to be able to write code running
> on a single node of a hypercube?
>
> What abstractions are needed in the core language?
>
> Also, are the same abstractions needed for local data processing (32 to
> 10^3 to 10^5 threads in the CUDA world) and for huge data centers? From a
> theoretical standpoint it is simpler to formulate, but CUDA, for example,
> has been quite successful for many years with its separation into grid,
> block, warp, and thread. Perhaps one should keep some separation of the
> hierarchies instead of unifying everything into one framework, losing any
> way for the algorithms to specifically profit from the parallel hardware
> features.
>
> Not all problems can be translated into a hypercube.
No, but sorting algorithms are used so often, and hypercubes handle them so
well, that a few nodes could be dedicated to those tasks alone.
> Your framework also
> would have to allow the combination of different topologies within one
> algorithm for different scale levels, e.g. for locally calculating an
> FFT-like function within a hypercube.
Basic algorithms like sort could simply take template parameters that
represent the hardcoded topology.
> To be efficient those topologies would have to be analyzed at
> compile-time to generate the optimal code, or with some kind of JIT
> compiler.
>
> IMHO I would assume that there is of course general interest in extending
> C++ for parallel programming, but before standardization there should be a
> proven implementation out there. This seems much too complex to be
> introduced theoretically by committee alone.
>
> Best,
>
> Sebastian
-- Phil Bouchard, Founder & CEO, Fornux <https://www.fornux.com/>
Received on 2024-08-10 13:32:18