On Thu, Sep 26, 2019, 4:02 AM Jefferson Carpenter via Std-Proposals <std-proposals@lists.isocpp.org> wrote:
As tempting as it is to add new and powerful features to the language,
for various reasons from simplifying existing code to 'keeping up' with
other languages out in the field, it's a dangerous and bloody art to do so.

That's true to some extent. From my point of view, C# adds far too much syntactic sugar from time to time (like the ??, ??=, ?. and ?[] operators, which just monkey-patch the underlying issue of having null values all over the place; while they shortened the code, a friend of mine really didn't like them - like you, he argues for simplicity, and after a chat with him I had to agree). On the other hand, I would never place C++ features like templates (instead of generics), concepts, or reflection/metaclasses in the same bucket (while programming in C# I missed value semantics the most, and templates right after that). While the latter add quite a lot of new things, they fix missing pieces of metaprogramming and simplify my reading and writing of code. You could argue that generating code with outside tools is simpler, but I'd argue back that any two people would pick different generators, so learning one would be harder for me - the same as "here's a simpler home-made container that you should learn at your new job" compared to "we're using the standard containers you already know from your previous job".

Going back to C# for comparison: while they were adding syntactic sugar, they missed the await version of the foreach loop, which forced us to while-loop over all database result containers with our custom async iterators, making the code harder to read. You could still write the code with the existing tools, but it was far longer and more convoluted.
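
To make the concepts point concrete, here is a minimal sketch (assuming C++20; Number and twice are just names I'm making up for illustration) of a constraint that reads like ordinary code instead of an enable_if incantation:

#include <concepts>

template <typename T>
concept Number = std::integral<T> || std::floating_point<T>;

template <Number T>
T twice(T v) { return v + v; } // the constraint is visible right in the signature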

The C++ spec may define the syntax and evaluation semantics of the
language, but what breathes life into it is the compiler.  The more
complicated the spec is, the more complicated the compiler must become.
We're fortunate to have a diverse set of c++ compilers from the
proprietary to the open source, but the more features get added, the
higher the initial cost to creating a new viable c++ compiler project.

Agreed.

Slow and steady wins the race.  There does not exist a language feature
(except maybe value semantics) that will not, at some point in the near
or far future, become invalidated by cleaner ways of doing the same
thing.  Adding too many features kills languages.

Not adding them does the same (it's just harder to see due to all the legacy code that is still being maintained, giving a false impression of a living language). While I do agree that not every feature should be added, I do feel that the rate of adding them, and the choice of what to add, is balanced quite nicely by the C++ standard committee.

Optimally, the rate
at which features are added should be equal to the rate at which they
are deprecated and removed, over long time spans.

This is hard for at least two reasons. 1) Old code doesn't always get a facelift while new code is added, and backward compatibility is one of C++'s strengths. 2) Quite often in C++ one feature builds on top of others, and one of the biggest advantages of C++ for me is that there is rarely a big magic feature that I can't reason about and build up from smaller building blocks. Admittedly that is my biased opinion, but so is the whole question of what rate of change is slow or fast enough for features.

This ensures that
compiler writers will not have to do too much up-front work to learn to
maintain a compiler or to write a new one.

To pick on coroutines (although they certainly can be useful), co_await,
co_yield, and co_return can be implemented as library functions
abstracting and making platform-independent the existing functions
setjmp and longjmp, with only the overhead of keeping a reference to
some state holding the continuation in the calling code and the
continuation in the coroutine.  Additionally, while coroutines make
serial asynchronous operations easier to write, they cannot do the same
for parallel asynchronous operations without potentially pessimizing
performance.  In "x = co_await y; x2 = co_await y2;" x2 cannot be
assigned before x has been assigned - even if y2 completes before y -
unless the compiler can come up with a proof that such re-ordering does
not change the visible behavior of the program.

When I came back to C++ from C#, the thing I missed the most was async/await, so I consider coroutines a bad feature to pick on.

As for what you wrote, I doubt that's actually the case, but for some crazy reason people give the same kind of example for C# as well...

Task<X> x = y();          // start y, but don't await it yet
auto x2 = y2();           // start y2 too; both are now in flight
co_await wait_all(x, x2); // suspend once, until both complete

Nobody forces you to await immediately... You are awaiting an X, but nobody forces you to throw away the promise (a Task is just a packaged std::future/promise, which to some extent was harder for me to understand in C#, since they ignore the building blocks and go straight to the Task).
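
To make the building-blocks point concrete, here's a minimal sketch of the same start-now-await-later idea using only std::async/std::future (fetch_a and fetch_b are made-up stand-ins for slow operations):

#include <chrono>
#include <future>
#include <thread>

int fetch_a() { std::this_thread::sleep_for(std::chrono::milliseconds(20)); return 1; }
int fetch_b() { std::this_thread::sleep_for(std::chrono::milliseconds(10)); return 2; }

int main() {
    auto fa = std::async(std::launch::async, fetch_a); // starts immediately
    auto fb = std::async(std::launch::async, fetch_b); // starts immediately
    // Both requests are now in flight; we only block at get().
    return fa.get() + fb.get();
}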

This is a C++ified version of how you'd write it in C#, and even there most of the people I worked with didn't know or think about it - when I introduced it, some of them no longer understood what the code did, since the tutorials hadn't taught it... But I had to do it in order to get a big performance boost in one part of the code (going from 20 min to 2 min as resource requests were parallelized - some resources were even being fetched multiple times - which was achieved with a LINQ/ranges-like API, custom-written due to the missing async foreach, among other things).

Instruction reordering
and the as-if rule already indicate that the semantics of C++ place too
much emphasis on the sequencing of operations over the dependency graph
of data.

As excited as I am for new C++20 features including concepts and
coroutines, I'm more excited for the deprecation of most of volatile.  I
want the language to be well understood as-is so that it can be reasoned
about, the real pain points identified, and the spec slowly changed to
meet new needs with the patience of a community that deserves to
continue to beat until the last program is written and the last rock is
dust.

What's new is a house on top of building blocks, so throwing away the building blocks and placing the house on magic makes it harder for me to learn and use (my mental model builds from parts and doesn't cope well with situations where the parts are so missing or hidden that I can't connect them). I'm OK with deprecating/removing std::auto_ptr in favour of better replacements, but I would not be thrilled if somebody just decided to deprecate classes because functional programming is "the way to go", or future/promise/thread because task/async/await is the way to go (or functors because lambdas exist...).
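
For example (a minimal sketch, assuming C++14 for std::make_unique), std::unique_ptr expresses the same ownership intent std::auto_ptr tried to, just without the copy-that-secretly-moves surprise:

#include <memory>
#include <utility>

int main() {
    auto p = std::make_unique<int>(42);
    // auto q = p;          // does not compile: copying is disallowed
    auto q = std::move(p);  // ownership transfer must be spelled out
    return q ? *q : 0;
}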

I agree that features that change the way you think should mostly be preferred (and the standard committee does a great job of that), but I would not want new magic that deprecates primitive building blocks on the grounds that "this way is better, you shouldn't use the building blocks anymore".

Just my biased perspective on the topic.

Regards,
Domen