Re: [std-proposals] Labelled parameters

From: Alejandro Colomar <une+cxx_std-proposals_at_[hidden]>
Date: Sat, 3 Jan 2026 21:51:14 +0100
Hi Frederick,

On Sat, Jan 03, 2026 at 07:09:29PM +0000, Frederick Virchanza Gotham via Std-Proposals wrote:
> On Saturday, January 3, 2026, Jason McKesson wrote:
>
> > I find it both curious and apropos that you frame using LLMs as equivalent
> > to an *addiction.*
>
> That's not the comparison I was trying to make. The comparison I was trying
> to make is that in some situations, when you disallow something, the result
> is that it still happens just as much as beforehand, but just that it's
> done more secretly. The exceptions to this are:
> a) You have a way of enforcing it
> or:
> b) You hit a moral nerve

I don't think this is necessarily true. People who contribute here
have some reputation, and lying unnecessarily can cause all of that
reputation to vanish.

In a project I maintain, the Linux man-pages project, we've agreed on a
no-AI policy. Someone made the same argument as yours on the mailing
list, and I warned that anyone caught doing that would never be allowed
to contribute again, and that I would make sure their reputation burns,
just as I'd do with any contributor who is dishonest in any other way.

So far, nobody has attempted to use it dishonestly.

And WG21 has a way of enforcing it: ISO will probably kick out anyone
who is ever found using AI inappropriately.

One might gamble on the fact that it can be difficult to prove. In a
project that I maintain, which is not a democracy, I'll ban someone on
high suspicion. One might also gamble on whether ISO would ban anyone
on high suspicion alone.

But why would anyone capable of writing a proposal by hand risk their
reputation? Of course, you'll have people who write only one proposal,
who are unknown, or who even use a false name. But I also expect that
the use of AI in such proposals will be evident enough, and the quality
low enough, that they won't pass.

> I'm not denying that some people will stop using an LLM the moment they
> learn that they're not allowed to -- these people are a small minority
> though.
>
> The point I'm making is that the disallowing of using LLM's to write papers
> has had little to no effect on whether or not LLM's are being used to write
> papers.
>
> > You never try to argue that LLM usage makes for better output.
> >
> > I wonder why that is.
>
> I don't see the relevance of this. Honestly if there's someone out there
> who writes their idea on a cave wall at sunset on the first full moon
> following the spring equinox, and comes back at dawn to collect the answers
> etched anonymously in ancient Greek on the cave walls in bat dung, I really
> don't care where the information came from. I take a paper at 'face value',
> so to speak. If a paper is poorly written then it's poorly written,
> irrespective of who or what wrote it.
>
> Even though I don't see the relevance of your above
> question-formed-as-a-statement, I'll answer it like this:
>
> I currently pay monthly for ChatGPT version 5 and I have it configured in
> "Long Thinking" mode, and in the profile settings I configure it to assume
> that I'm a very intellectual very intelligent programming expert so that it
> can give me very technical answers with very minimal explanation. In
> general the quality of the answers I get from it is very very high, nowhere
> near what I would call 'slop'.

I find it interesting when people admit to trusting the quality of AI
(or of other humans, FWIW). To me, it says a lot about where their
threshold for quality is. "Tell me who you go with and I'll tell you
who you are."

> I was able to implement the chimeric pointer in the GNU g++ compiler
> because ChatGPT taught me how to . . . it was a long conversation spanning
> many hours over many days, with me hitting Rebuild dozens of times and
> waiting a half-hour for each rebuild, but I got it working in the end, and
> I wouldn't have pulled it off without the assistance of an LLM.
>
> Let me be clear though, I'm not actually advocating for the use of LLM's,
> nor am I saying that they're lack-lustre. I'm simply just pointing out that
> people all over the world are using them for all sorts of stuff -- in many
> cases secretly because they're disallowed. So I will always take it with a
> pinch of salt if someone presents a paper and says that they didn't use an
> LLM. It's easy for me to accept the pinch of salt because it doesn't bother
> me where the information came from. I can at least half-understand though
> why it would bother some other people -- people who are averse to LLM's --
> even though I myself don't share in the aversion.
>
> I think the No. 1 main reason to disallow LLM's is to prevent the committee
> becoming overwhelmed with hundreds and thousands of papers. I mean if I
> paid a few grand a month for an online LLM, and instructed it to constantly
> crawl internet forums and mailing lists and newsgroups and chat rooms, and
> to write a few papers a day and email them to Nevin to get a paper number,
> then I could quickly clog up the system. That's probably the No. 1 reason
> to disallow LLM usage.


Have a lovely night!
Alex

-- 
<https://www.alejandro-colomar.es>

Received on 2026-01-03 20:51:36