
std-proposals


Re: [std-proposals] : Re: [PXXXXR0] Add a New Keyword 'undecl'

From: Kim Eloy <eloy.kim_at_[hidden]>
Date: Tue, 16 Dec 2025 14:01:59 +0000
Thank you for the insightful discussion on performance, object lifetimes, and compiler optimisations. Reading the exchange prompted a thought regarding higher-level abstractions that could benefit from, and provide concrete use cases for, the low-level mechanisms under consideration.

The conversation centres on enabling more efficient memory reuse and locality by giving programmers finer control over an object's storage duration. This is a powerful capability. It strikes me that one family of abstractions that could leverage such control effectively is stateful stream processing, which is common in financial systems, IoT, and event-driven applications. In these domains, data arrives as a continuous flow, and processing often involves windowing, temporal joins, and stateful aggregation. Performance there depends not just on algorithmic efficiency but critically on managing the lifecycle of intermediate state and buffers to minimise allocations and maximise cache locality.

My own work in this area led to "semantic-cpp", a header-only library for C++17. Its design was driven by the need to process ordered sequences, such as market data feeds or sensor readings, with a focus on temporal awareness and minimal overhead. A core concept is that every element carries a signed temporal index, enabling native support for sliding/tumbling windows and time-aware operations. More pertinent to this discussion, its streams are lazy until materialised (e.g., via ".toWindow()" or ".toStatistics()"), and it is deliberately "post-terminal," allowing operations to continue after a reduction. This model naturally encourages patterns where intermediate state has clear, scoped lifetimes, which could align well with proposals for more explicit lifetime control.

While the library currently achieves performance through standard C++17 means, the kind of optimisations you are discussing, where overlapping or complex lifetimes hinder memory reuse, could significantly benefit such abstractions. For instance, if a compiler could more aggressively reuse memory for the intermediate state in a chain of windowed operations, based on clear programmer hints or scope annotations, it would elevate the efficiency ceiling for these high-level patterns.

Perhaps the broader question is whether the standard library's future might encompass more sophisticated data-flow or stream-composition abstractions. If so, the low-level mechanisms for lifetime optimisation being debated now would be a critical foundation. Libraries like "semantic-cpp" (or similar concepts in proposals like "std::execution") could serve as concrete test cases for how well those foundations support real-world, stateful streaming workloads.

I appreciate your time and the valuable technical discussion.

Get Outlook for Android<https://aka.ms/AAb9ysg>
________________________________
From: Kim Eloy <eloy.kim_at_[hidden]>
Sent: Tuesday, December 16, 2025 9:50:43 PM
To: std-proposals_at_[hidden] <std-proposals_at_[hidden]>; SD SH <Z5515zwy_at_[hidden]>
Cc: Sebastian Wittmeier <wittmeier_at_[hidden]>
Subject: Re: [std-proposals] : Re: [PXXXXR0] Add a New Keyword 'undecl'

Have you all considered adding a semantic data stream processor?
semantic-cpp is a header-only, high-performance stream processing library for C++17 that combines the fluency of Java Streams, the laziness of JavaScript generators, the order mapping of MySQL indexes, and the temporal awareness required for financial, IoT, and event-driven systems.
https://github.com/eloyhere/semantic-cpp

Get Outlook for Android<https://aka.ms/AAb9ysg>
________________________________
From: Std-Proposals <std-proposals-bounces_at_[hidden]> on behalf of Sebastian Wittmeier via Std-Proposals <std-proposals_at_[hidden]>
Sent: Monday, December 15, 2025 11:20:43 PM
To: std-proposals_at_[hidden] <std-proposals_at_[hidden]>; SD SH <Z5515zwy_at_[hidden]>
Cc: Sebastian Wittmeier <wittmeier_at_[hidden]>
Subject: Re: [std-proposals] : Re: [PXXXXR0] Add a New Keyword 'undecl'


Some of the performance gain comes from more localized usage, which has better cache behaviour.



For simple types the optimizer can already do it.

For complex types, that optimization is often not the bottleneck.


If the lifetimes are serial or LIFO, this can often be handled directly with scopes today, instead of a new keyword.
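The serial/LIFO case can be sketched with plain scopes; once the first array's lifetime ends at its closing brace, the compiler is free (though not required) to reuse the same stack slot for the second array:

```cpp
#include <array>
#include <cassert>

// Two serial lifetimes expressed with plain block scopes. No new
// keyword is needed: 'a' is dead before 'b' is born, so an optimizer
// may place both arrays in the same stack storage.
int serial_scopes() {
    int sum = 0;
    {
        std::array<int, 64> a{};  // first lifetime begins
        a[0] = 1;
        sum += a[0];
    }                             // first lifetime ends here
    {
        std::array<int, 64> b{};  // second lifetime may reuse a's slot
        b[0] = 2;
        sum += b[0];
    }
    return sum;
}
```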

If the scopes overlap, that kind of memory reuse does not work as well.





For the feature to make a performance difference, the lifetimes have to be ended prematurely.



Why can't we use local std::optional types?

Couldn't an optimizer make those as efficient as direct types?
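A minimal sketch of the std::optional idea: reset() ends the contained object's lifetime before scope exit, which an optimizer could in principle treat as freeing that storage for reuse, much like the proposed keyword:

```cpp
#include <array>
#include <cassert>
#include <cstddef>
#include <optional>

// Ending a local object's lifetime early with std::optional instead of
// a new keyword: after reset(), the contained buffer is dead even
// though the enclosing scope has not ended.
int optional_early_end() {
    std::optional<std::array<std::byte, 64>> buf;
    buf.emplace();               // lifetime of the buffer begins
    (*buf)[0] = std::byte{0xFF}; // ... use the buffer ...
    buf.reset();                 // lifetime ends here, mid-scope
    // From this point the optimizer knows the buffer is dead and could,
    // in principle, reuse its storage for later locals.
    std::array<int, 8> small{};
    small[0] = 42;
    return small[0];
}
```

Whether current optimizers actually perform this stack-slot reuse is a separate question; the sketch only shows that the lifetime can be ended mid-scope with existing language facilities.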



Or are there cases that would be UB with direct types (which the optimizer could therefore assume never happen and exploit), but that are well-defined with std::optional, so the optimizer could not make use of them there?



-----Original Message-----
From: SD SH <Z5515zwy_at_[hidden]>
Sent: Mon 15.12.2025 16:08
Subject: Re: [std-proposals]: Re: [PXXXXR0] Add a New Keyword 'undecl'
To: std-proposals_at_[hidden];
CC: Sebastian Wittmeier <wittmeier_at_[hidden]>;
> But then (if drop or undecl is implemented), the stack would fragment.
The scope restriction we discussed (on Friday) can avoid this problem. That way, the addresses of objects on the stack are known and fixed.

> (if it is just one, there is no performance gain in freeing before the end of the scope)
Assume `undecl` ends the object in this example:

```
alignas(64) std::byte arr[64];
// ... use arr; having just been touched, it usually stays in cache
undecl arr; // end the lifetime of the former 'arr'
int arr[8]; // the storage of the former 'arr' can then be overwritten, and the
            // new 'arr' accessed, without waiting for memory
// If the former 'arr' were not ended via `undecl`, the processor would
// sometimes have to wait for main memory, hurting performance.
```
The point is that this can sometimes yield better performance.

Received on 2025-12-16 14:02:07