Date: Sat, 3 Jan 2026 19:09:29 +0000
On Saturday, January 3, 2026, Jason McKesson wrote:
>
>
> I find it both curious and apropos that you frame using LLMs as equivalent
> to an *addiction.*
That's not the comparison I was trying to make. My point was that in some
situations, when you disallow something, it still happens just as much as
before -- it's merely done more secretly. The exceptions to this are:
a) You have a way of enforcing the ban
or:
b) You hit a moral nerve
I'm not denying that some people will stop using an LLM the moment they
learn that they're not allowed to -- but these people are a small minority.
The point I'm making is that disallowing the use of LLMs to write papers
has had little to no effect on whether LLMs are actually being used to
write papers.
> You never try to argue that LLM usage makes for better output.
> I wonder why that is.
>
I don't see the relevance of this. Honestly, if there's someone out there
who writes their idea on a cave wall at sunset on the first full moon
following the spring equinox, and comes back at dawn to collect the answers
etched anonymously in ancient Greek on the cave walls in bat dung, I really
don't care where the information came from. I take a paper at 'face value',
so to speak. If a paper is poorly written then it's poorly written,
irrespective of who -- or what -- wrote it.
Even though I don't see the relevance of your above
question-formed-as-a-statement, I'll answer it like this:
I currently pay monthly for ChatGPT version 5 with "Long Thinking" mode
enabled, and in the profile settings I configure it to assume that I'm an
expert programmer so that it gives me very technical answers with minimal
explanation. In general the quality of the answers I get from it is very
high, nowhere near what I would call 'slop'.
I was able to implement the chimeric pointer in the GNU g++ compiler
because ChatGPT taught me how to . . . it was a long conversation spanning
many hours over many days, with me hitting Rebuild dozens of times and
waiting a half-hour for each rebuild, but I got it working in the end, and
I wouldn't have pulled it off without the assistance of an LLM.
Let me be clear though: I'm not actually advocating for the use of LLMs,
nor am I saying that they're lacklustre. I'm simply pointing out that
people all over the world are using them for all sorts of stuff -- in many
cases secretly, because they're disallowed. So I will always take it with a
pinch of salt if someone presents a paper and says that they didn't use an
LLM. That's easy for me to accept because it doesn't bother me where the
information came from. I can at least half-understand why it would bother
some other people -- people who are averse to LLMs -- even though I don't
share that aversion.
I think the No. 1 reason to disallow LLMs is to prevent the committee from
becoming overwhelmed with hundreds or thousands of papers. I mean, if I
paid a few grand a month for an online LLM, and instructed it to constantly
crawl internet forums, mailing lists, newsgroups, and chat rooms, and to
write a few papers a day and email them to Nevin to get a paper number,
then I could quickly clog up the system.
Received on 2026-01-03 19:09:32
