It would take about a day for some student to realize you can instruct one of the LLMs to operate the computer screen for you and have it type and fake-edit the document. The tip would spread among the cheaters, and the metric would become harder to judge on its own.
reply
Typing as a service is a whole cottage industry on Etsy.
reply
That's certainly one way to abstractly automate a task: Just pay someone else to do it. (This is a concept that regular people employ every day in the real world.)

Another way to automate this particular task: some typewriters have serial or parallel ports for connecting to a computer. For a student skilled in the art of using the bot, making one of these typewriters the output target is not a daunting task at all.

Like this: https://chatgpt.com/share/69e405db-1b44-83ea-baf3-6af41fe577...
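A rough sketch of what that could look like, assuming a typewriter (or daisy-wheel printer) that accepts plain ASCII over a serial connection. The port name, baud rate, and pacing below are guesses; check the machine's manual:

```python
# Hypothetical sketch: drive a serial-connected typewriter as the output
# device for generated text. Assumes the machine accepts plain ASCII.
import time

def type_text(text, port, cps=10):
    """Write text to a file-like serial port one character at a time,
    pacing output so the mechanism can keep up (cps = chars/second)."""
    for ch in text:
        port.write(ch.encode("ascii", errors="replace"))
        if ch == "\n":
            port.write(b"\r")  # many typewriters want CR after LF
        time.sleep(1.0 / cps)

# With pyserial (assumption: pip install pyserial, device on /dev/ttyUSB0):
#   import serial
#   with serial.Serial("/dev/ttyUSB0", 9600) as port:
#       type_text("Dear Professor,\n", port)
```

Since `type_text` only needs a `.write()` method, you can dry-run it against any file-like object before pointing it at real hardware.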

reply
Even Microsoft Word stores revision history inside .docx files, and that’s been used to expose plagiarism. I heard about one case where a student took an existing paper (I believe from a previous year/student) and pasted it into Word. They then edited it just enough to make it look different.

However, they didn’t remove the embedded revision history from the .docx file they submitted, so that went about as well as you’d expect.
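The check is easy to reproduce by hand: a .docx file is just a ZIP archive, so authorship metadata and any tracked-changes markup can be read straight out of its XML parts. A minimal sketch (the part names and element names are standard OOXML; the regexes are a shortcut where a real tool would parse the XML):

```python
# Sketch: pull authorship traces out of a .docx without opening Word.
import re
import zipfile

def docx_traces(path):
    """Return authorship metadata and a count of tracked-changes markup
    found inside a .docx archive."""
    with zipfile.ZipFile(path) as z:
        core = z.read("docProps/core.xml").decode("utf-8")
        doc = z.read("word/document.xml").decode("utf-8")
    creator = re.search(r"<dc:creator>(.*?)</dc:creator>", core)
    modifier = re.search(r"<cp:lastModifiedBy>(.*?)</cp:lastModifiedBy>", core)
    return {
        "creator": creator.group(1) if creator else None,
        "last_modified_by": modifier.group(1) if modifier else None,
        # <w:ins>/<w:del> elements are tracked insertions/deletions
        "tracked_changes": len(re.findall(r"<w:(ins|del)[ >]", doc)),
    }
```

A mismatch between `creator` and the submitting student, or a nonzero `tracked_changes` count, is exactly the kind of trace that gave the case above away.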

reply
Are you sure about that? I could easily see this happening with a web document link, but for a .docx file, change tracking is off by default and pretty obtrusive. Basic metadata would be fine; formatting might be quirky, but that's not exactly a smoking gun...
reply
Hmm, I have some old daisy-wheel printers in the closet that I've been meaning to strip down for stepper motors, maybe I should refurb them instead :-)
reply
In general I love the idea of turning printers into typewriters. I've been thinking about how to do it with an inkjet printer.
reply
Arms race...

Oh look, there's an LLM trained on keylogger data to spew slop at your personally predicted error rate; bonus points if it identifies over USB as a keyboard.

reply
You should look up the history of the Loebner Prize [1]. There’s a shocking amount of technological development in some chatbots that went toward simulating mistakes and typing patterns to make them seem more human-like.

In some of the later Loebner competitions, when text was transmitted to the judges character by character, the bot would even simulate typos followed by backspacing on screen to make it look more realistic.

[1] https://en.wikipedia.org/wiki/Loebner_Prize
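The trick is simple to reproduce in spirit. A toy sketch (the error rate and the keyboard-neighbor map are invented for illustration, not taken from any actual Loebner entry):

```python
# Toy sketch: emit text character by character, occasionally inserting a
# wrong "nearby" key followed by a backspace, so the stream looks
# human-typed. Neighbor map and default error rate are made up.
import random

NEIGHBORS = {"a": "sq", "e": "wr", "o": "ip", "t": "ry", "n": "bm"}

def humanized_stream(text, error_rate=0.05, rng=None):
    """Yield the characters (typos and '\b' backspaces included) that a
    fake human typist would transmit."""
    rng = rng or random.Random()
    for ch in text:
        if ch.lower() in NEIGHBORS and rng.random() < error_rate:
            yield rng.choice(NEIGHBORS[ch.lower()])  # hit a wrong key
            yield "\b"                               # then backspace it
        yield ch
```

Replaying the stream on a terminal (honoring the backspaces) reproduces the original text, while the raw character timing and corrections are what the judge sees.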

reply
Wow, it feels like the Loebner Prize went away right at the dawn of the LLM era. Is that correlated?
reply
Yeah I definitely think LLMs contributed to its demise. To be honest, nobody in academic AI circles took it very seriously, because it kind of devolved into a contest over who could create the most convincing illusion of intelligence.

Participants spent more time polishing the natural-language-parsing side and pre-programming elaborate backstories for their chatbots' bios, among other psychological tricks. In the end, the whole competition was more impressive as a social-engineering exercise, since the real goal kinda became: how can I trick people into thinking my chatbot is a human?

But the chatbot transcripts from previous competitions still make for fascinating reading.

reply
Goodhart's Law vs the Turing Test! Can our humans accurately evaluate intelligence, or will they be fooled by fakes? Live this Sunday!
reply
I think it would be great for it to be revived with a different premise.
reply
>because it kind of devolved into a contest over who could create the most convincing illusion of intelligence.

Isn't that really what all these AI companies are doing too? It sure seems like it is.

reply