Date: Sat, 3 Jan 2026 11:44:06 -0500
On Sat, Jan 3, 2026 at 6:57 AM Frederick Virchanza Gotham via
Std-Proposals <std-proposals_at_[hidden]> wrote:
>
>
>
> On Friday, January 2, 2026, Sebastian Wittmeier wrote:
>
>> But to fill the cracks -- i.e., to directly incorporate AI output into parts of the proposal -- is definitely forbidden by ISO.
>
>
> I'm interpreting your use of the verb 'to incorporate' here as "to copy and paste".
>
> This past month I was calmly arguing with a meditation teacher that opiate dependants should be allowed to bring their opiate substitute medication with them on residential silent meditation courses. I made several points which I feel were quite convincing, but I stressed the final one as the most meritorious:
> "If you disallow them from bringing these medications, then instead of declaring them, they will sneak them in and hide them in their pillowcases."
>
> The situation we have here with large language models is very similar:
> "If you disallow people to use large language models in preparing their papers, then they'll conceal the fact that they used a large language model in preparing their paper."
I find it both curious and apropos that you frame using LLMs as
equivalent to an *addiction*: people just can't stop themselves from
making slop, so better to let them slop it out.
> Now I know that there are a lot of honest, forthright people here on the mailing list who will follow the rules and won't use a large language model if it is disallowed -- death before dishonour and all that.
Or just, you know, having respect for the time of others by not
feeding them garbage.
> And I'm not doubting that there are members of society who always follow all the rules and never put a foot wrong -- I count myself as one of them -- I mean, if I were a taxi driver, I'd declare every single fare on my income tax forms.
>
> But for those of us who live in the real world and weren't born yesterday, I reckon that fewer than 10% of papers produced this past year were composed without some copy-pasting from a web browser tab navigated to openai.com. I've never done it myself and I never will, because my integrity forbids it, but I reckon that many people do.
Your purely anecdotal assertion that everyone else is doing it is
not the "meritorious" argument you think it is.
It's so interesting how you argue for this. It's not about whether
LLMs will actually make proposals better than before. It's not even
about how much time they would save people writing proposals. It's
*purely* that people will do it even if you don't let them: that it
should be allowed solely because we cannot reliably stop them from
doing it. You never try to argue that LLM usage makes for better
output.
I wonder why that is.
