Date: Sat, 3 Jan 2026 11:57:03 +0000
On Friday, January 2, 2026, Sebastian Wittmeier wrote:
> But to fill the cracks = to directly incorporate AI output into parts of
> the proposal - is definitely forbidden by ISO.
I'm interpreting your use of the verb 'to incorporate' here as "to copy
and paste".
This past month I was calmly arguing with a meditation teacher that
opiate-dependent people should be allowed to bring their opiate substitute
medication with them on residential silent meditation courses. I made
several points which I feel were quite convincing, but I stressed my final
point, which I feel was the most meritorious:
"If you disallow them from bringing these medications, then instead of
declaring them, they will sneak them in and hide them in their
pillowcases."
The situation we have here with large language models is very similar:
"If you disallow people from using large language models in preparing
their papers, then they'll conceal the fact that they used a large
language model in preparing their paper."
Now I know that there are a lot of honest, forthright people here on the
mailing list who will follow the rules and won't use a large language model
if it is disallowed -- death before dishonour and all that. And I'm not
doubting that there are members of society who always follow all the rules
and never put a foot wrong -- I count myself as one of them -- I mean if I
were a taxi driver, I'd declare every single fare on my income tax forms.
But for those of us who live in the real world and weren't born yesterday,
I reckon that fewer than 10% of papers produced this past year were
composed without some copy-pasting from a web browser tab navigated to
openai.com. I've never done it myself and I never will, because my
integrity forbids it, but I reckon that many people do.
And so a new human skill is born: the skill of providing the best input
to a large language model in order to get the optimal output. Over the
coming years, individual humans will hone these skills. And who knows,
similar to how we can buy "organic vegetables" at the supermarket, maybe
one day we'll be able to download or buy "organic software" written
entirely by humans. As with the vegetables, though, these 'organic'
products will probably be smaller and have fewer features (features being
analogous to nutrients here).
I'm not saying that the ISO should change its policy about the use of large
language models -- I'm just saying that we should occasionally glance at
the elephant in the room. People are using large language models for
_everything_ nowadays, including their marital vows and their loved ones'
obituaries. I was at a public event recently for a bereavement charity and
they had trouble with the projector, so I went over to try to help, and I
saw that the web browser had loads of tabs open asking ChatGPT what to say
at the event, and specifically which terms and phrases to avoid using.
Everyone's doing it.
