I didn't expect AI to write 95%+ of my code, but here we are.
I can't say whether I feel worried or not. I'm now trying to gamify manual coding by reviewing and editing one random file from my work codebases each day, and I still do occasional leetcode problems and katas.
But overall I enjoy coding less for sure. I can't think of the last time I spent heavy time focusing on a refactor or lower level design abstractions.
I don't think I will be still coding 5 years from now. The joy is just not there.
Why would I put effort into reading something that had no effort put in by the author?
This guy needs an editor, AI or otherwise.
These people's writing is usually incoherent and they are very proud of it. If you've ever read a bad new-age self-help book you've probably encountered writing like this.
Good writers understand that writing is about communication. The initial act of writing (ie, word puke) is worthless. What matters most is a piece of writing's ability to communicate clearly.
This writing is usually pleasant, concise, and clear.
There's something to the idea that if the writer is writing with the intention of publishing, the piece should be edited. But if you're writing for yourself, and happen to simply keep your writings somewhere public, another person's desire for you to edit more is a measurement of that person's sense of entitlement.
I have about as much desire to read some publisher's edited version of Anne Frank's diary as you appear to have to read the original.
> But the manuscript that Otto Frank pitched to Dutch editors didn’t contain his daughter’s entire diary. Anne herself had begun editing large swathes of her diary with publication in mind after hearing a radio broadcast that called on Dutch people to preserve diaries and other war documents. Otto respected some of those editorial decisions, but overlooked others – for example, he included material about Anne’s crush on annexe dweller Peter van Pels.
https://www.history.com/articles/anne-frank-diary-hidden-pag...
> Frank’s candid words on sex didn’t make it into the first published diary, which appeared in English in 1952. Though Anne herself edited her diary with an eye to publication, the book—released eight years after her death from typhus in the Bergen-Belsen concentration camp at age 15—contained additional cuts. These were only partially restored in 1986, when a critical edition of her diary was published. Then, in 1995, an even less censored version, including a passage on Frank’s own body previously withheld by her father, was published.
https://research.annefrank.org/en/gebeurtenissen/b0725097-67...
> In response to Minister Bolkestein's appeal on 28 March 1944 on Radio Oranje to keep wartime diaries and letters, Anne Frank decided to rewrite her diary into a novel: "Imagine how interesting it would be if I published a novel of the Secret Annex, from the title alone people would think it was a detective novel."
> Anne rewrote and edited her diary on loose sheets of duplicator paper. On Saturday 20 May 1944, she wrote: "Dear Kitty, At last after much contemplation I have begun my 'the Secret Annex', in my head it is already as finished as it can be, but in reality it will be a lot slower, if it ever gets finished at all." Anne's rewritten version, known as Version B, ends with the diary entry of 29 March 1944.
- the article, clearly expressing the intent of its own mistakes and contextualizing them in the era of LLM-borne "perfect" text
This is not the beauty of writing. Everyone's writing needs editing. The "raw unedited emotions" are not something anyone wants to read, and this article is no exception.
The author tells us that English is their fourth language, which is certainly impressive. However, their writing is messy and poorly constructed. It's difficult to read, and not at all enjoyable. The choice is not between doggerel like this and empty LLM perfection.
I guess it's OK if you enjoy reading someone expressing himself without communicating anything valuable and well produced. It's kind of like people who enjoy stream-of-consciousness poetry or unhinged personal blog posts. It's fine.
But most of us (I think) read for our own gain, expecting substantial / stimulating text that is ideally well researched and serves a clear purpose.
Something like that needs an editor, effective proofreading, and quite some time of work and rework.
Five years ago, I probably would have been annoyed by the same.
I have nothing against LLMs for proofreading. I'm actually using one now to fix my grammar because English is my second language. I won't let it change my points, though... it's just for cleaning up without having to spend 3x the time on a comment, editing out minor mistakes.
I'm aware this might make my posts feel less natural, but I think it's a good middle ground.
This is a specifically funny question because every Masaokis video is better than every MrBeast video
Compare thoughts on this notion of AI and authenticity in writing to the way things like auto-tuners and sequencers have been perceived in the music world.
Just as there are some esoteric corners of the jazz space where musicians seem to try to emulate a sequencing machine and play perfect notes, will there be writers trying to emulate the clean AI performance? :-/
I kind of hope the anti-AI-writing stuff passes and we can focus on what makes writing good or bad again instead of “this is clearly AI” posted in response to every blog. I actually don’t care if it’s AI but I do care if it’s worth reading and pleasant to read.
I do care if it's AI. It makes it automatically not worth reading imo
Honestly not sure anymore.
> Please don't complain about tangential annoyances—e.g. article or website formats, name collisions, or back-button breakage. They're too common to be interesting.
https://news.ycombinator.com/newsguidelines.html
Clearly doesn't really move the needle much but sometimes it helps to tap the sign at people.
Charitably, we are all on our own timelines of getting to HTML zen, and it's hard not to shout from the rooftops when it clicks for you and you have your plain-text RSS setup on Gnus all chugging along nicely.
The fussiest part of the whole site setup was getting light/dark mode to work in what I thought was the most obvious way. To me, if a website has light and dark modes, it should default to the user's device's preferred color scheme, and as an added bonus for users with JS enabled, you might also have a toggle button. But by default, the theme just started in light mode no matter what until you clicked the toggle button, and it also didn't bother to hide the button when JS is disabled.
Same with the button for the built-in search feature; it would be visible even if it couldn't work. It's not that it was terribly hard to modify the theme and fix this – add `class="nojs"` to the body HTML, add a JS one-liner to remove nojs, and add a CSS rule to hide the buttons if they're inside `body.nojs`. It was just disheartening to see that this was the theme's default. Anyone making a website these days has to make extra effort to support what should be considered normal browser behavior.
Had to?
I would go a step further, in fact, and when I’m writing something creative, I may choose to avoid whatever the autocomplete is suggesting as the next word (although I have it disabled in most contexts). People have a tendency to fall into grooves in their writing/speaking and this kind of acts as a reminder to not do that,³ although I’m far from immune myself (looking at my comment history, it’s upsetting to see the same verbal tics repeated when I have something to say).
⸻
1. If you don’t know a word well enough for it to come to mind when you’re looking for a word for something, you may not know it well enough to use it in your writing.²
2. Cue the people who will disagree. Suffice it to say that I occasionally will use a thesaurus to pull up a word that’s just out of reach, especially as my brain gets older and weaker, but even that I try to avoid.
3. When I got my MFA, there was a visiting writer who had published a creative writing book which was largely based on his former students’ transcriptions of his lectures. During the lecture he gave, even though he was speaking extemporaneously, he would speak word-for-word whole paragraphs from the book.
I don't think cheating is the right word here (ironically), which I think you are kind of acknowledging by putting it in quotes.
Based on your footnote, it sounds like you are more concerned that using a thesaurus is more likely to end with a worse result, since you are likely to use the incorrect word, or to use the word incorrectly.
This sounds more like the opposite of cheating; cheating is about unfairly getting a better result, but this concern is more about accidentally getting a worse result.
- this is subjective and evidence seems to point to the opposite in my view. In reality most people who think they communicate better with AI don't actually read what the AI has written for them and just puke it out on the world, expecting their readers to do the work.
The AI almost always writes boring, repetitive garbage and very, very often includes redundant information. But saying it creates more efficient communication is a great excuse for being sloppy and lazy.
I have a deep knowledge of the information and have done the process we’re doing on two previous projects, but organizing all the stories would have been an absolute nightmare. I still spent half a day on this; I’d guess the fatigue from the boring parts would have made it take a week or maybe two, just because I was doing the parts I enjoy (knowing things and describing them) and was able to offload the parts I’m not great at (using a lot of boilerplate language to organize the info I knew into scrum stories). Then I had a meeting, reviewed the stories with my coworkers, we had a discussion, deleted two or three that we determined weren’t necessary, and fixed up one or two where I’d provided insufficient information about some context surrounding coloring of a page.
It burned through a ton of Opus 4.6 tokens, looked through a ton of code (mostly that I’d written, pre-LLM), but has been amazing for helping me move into a lead position where grooming stories and being organized has always been my weakest point.
Also, when I wrote a postmortem for a deploy that had some issues, I wrote it all by hand. You have to know when the tools help and when they will hinder.
Can you please share what and how gets degraded? Sometimes I don't like a phrase it selects, but it's not common
Yes, LLMs reduce the individual charm of prose, but the critique itself carries a romantic notion that we all loved the idiosyncratic failures of convention and meaning which went into highly identifiable personal styles, and which often go missing from LLM-edited work.
I'd contend this is not true. Even professional authors go to an editor who identifies things that need to be fixed. As the author of the text and knowing what it should be, it can be difficult to read what you wrote to find those mistakes.
> In exchange for that your audience gets 3-4x length normalized bullshit to read instead.
This is not at all what is implied by having an AI act as an editor. Editing means identifying misplaced commas, incorrect subject-verb agreement (e.g., counts), and incomplete ideas left in as sentence fragments.
You appear to be implying that the author is giving the AI agency to create the content rather than using it as a tool that acts as a super-charged Grammarly.
Yes, and these people are good at it. What’s your point?
If you need grammar checking, there are thousands of apps including word processors, web browsers and even most mobile devices that will check your inputs for grammar and spelling mistakes as you type. All of that without burning down the rainforests or neutering your thesis.
There are plenty of pre-LLM tools that can fix grammar issues.
> Can you please share what and how gets degraded?
I'm not the person you asked, but IMO LLMs suck the style and voice out of the written word. It is the verbal equivalent of photos that show you an average of what people look like, see for example:
https://www.artfido.com/this-is-what-the-average-person-look...
As definitionally average the results are not bad but they are also entirely unremarkable, bland, milquetoast. Whether or not this result is a degradation will vary, of course, as some people write a lot worse than bland.
(Of course, that may well be exactly what you're looking for if you're writing an audit report or something.)
This sounds like an ESL issue. LLMs are good at proofreading ESL-written English text. They are not as good at proofreading experienced English writers.
- spelling
- grammar, or weird grammar, as English is not my native language
- proofreading and finding things that do not make sense in terms of sentence structure
I do not use it for ideas, discussing the writing, or anything else, because that defeats the purpose of writing it myself (creative writing).
> "..but maybe it's a good thing that most of us don't allow this technology to reframe our thoughts."
No, you're not the only one experiencing this: I too had the same concerns as you: with every new thought, every new creation, I had to ask the AI's opinion, as if I were no longer able to judge, to decide, without consulting the AI (...just to be safe, you never know...).
The only way to regain your creative ability is to write down your thoughts yourself, read, reread, rewrite, correct, express your opinion...
What AI can't do is convey emotions.
"the Whispering Earring" – https://gwern.net/doc/fiction/science-fiction/2012-10-03-yva...
point being: it depends on how you use it. if you offload critical thinking to ai, you will probably (slowly) atrophy your critical thinking muscles. if you offload some bullshit boilerplate or repetitive tasks or whatever, giving you more time overall to do the critical thinking part, you will be fine.
What I mean is that, as someone with lots of experience, I don't worry about no longer learning the basics as much as someone in their 20s or 30s maybe should.
Not sure what you mean by quickly. Back when I was in racing shape, if I stopped my training plan for as little as two weeks, (probably less actually, but I'm being conservative here) I would have a measurable drop in fitness.
Now, as someone who regularly walks the dog and bikes to work, I've got "less to lose" and probably wouldn't deteriorate as much.
And that's not really a hard bar to clear if you look at how people write comments online (including places like GitHub).
Anyone that uses punctuation, and capitalises words, probably automatically gets past the 70% confidence line.
It’s not nondeterministic
you can probably do the shannon entropy calculation yourself if you understand what the evaluation algorithm is
That said…if the evaluator is non-deterministic, then there’s no value in the estimate anyway
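For what it's worth, the Shannon entropy calculation mentioned above is mechanical once you fix a symbol distribution. A minimal character-level sketch in Python (the function name is mine, not from any particular detector):

```python
import math
from collections import Counter

def shannon_entropy(text: str) -> float:
    """Empirical Shannon entropy of text, in bits per symbol:
    H = -sum(p * log2(p)) over the observed character frequencies."""
    counts = Counter(text)
    total = len(text)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())
```

A string of one repeated character scores 0 bits per symbol; a string split evenly between two characters scores exactly 1. Real detectors compute this over model token probabilities rather than raw characters, but the arithmetic is the same, which is why the calculation itself is deterministic for a fixed input and evaluator.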
FWIW, your comment history here does not look like AI at all to me, and I think I have a very (maybe too?) high sensitivity to AI slop.
I really don't see how this can be possible unless they're accepting abysmal recall? Perhaps I'm missing something fundamental here, but the idea that AI and non-AI assisted text can be separated with "nearly 0 false positives" just says to me that it's really just a filter for the weakest, most obvious AI generated text. Is that valuable?
I really doubt those tools are good for anything
the amount of "that is obvious ai slop" comments i see on mine or other people's genuine non-ai writing has discouraged me from sharing anything more than roughly a paragraph for probably the rest of my life.
Both magazines and books are valid forms of information consumption and books are not the only way to improve your writing, reading, and understanding of the world.
If you limit yourself to stuff from maybe five years ago or older, yeah it's going to be human-written and human-edited (ghostwriting still possible).
“AI is one possible reference for my actual writing”. Generate info and perspectives, but only ever write stuff yourself. Something about this forces me to stay in my own “writing voice”, at least personally, across the various places I use AI tech. I think of the tech like a chess engine: engines are better than any human player, but I use them to gain perspective rather than to cheat. Otherwise, why bother playing chess?
For now, I just keep scrolling until I find something from before 2020, which is much more likely to be purely human-made and edited.
As English is not my first language, I do run into a problem where the line between fixing my clumsy sentence and rewriting my thought is very thin. The same goes for writing a "boring" technical explanation versus more approachable content. I'm getting pushback on both.
Any native English speaker who doesn’t live under a rock is very accustomed to reading and hearing English from non-native speakers and familiar with the common quirks and mistakes. English is quite forgiving as a language, we understand you. When in doubt, simplify it.
it's a couple mutually-conflicting languages in a trenchcoat; forgiveness and flexibility are perhaps its defining properties.
To the broader issue: "polish" (in any language) is only valuable insofar as it makes the ideas clearer, attests to innate qualities of the author and/or the investment of their time, or carries its own aesthetic value. As LLMs make (a certain kind of polish) cheap to produce, the value of the middle category attenuates to nothing.
this work is paramount. Without clear evidence of human filtering, a long, well formatted message/PR/doc is likely to reduce my estimate of the value/veracity/relevance of its content.
For years, even before LLMs, there have been trends of varied popularity to, for lack of a better word, regress - intentionally omitting capitalization, punctuation, or other important details which convey meaning. I rejected those, and likewise I reject the call to omit the emdash or otherwise alter my own manner of speaking - a manner cultivated through 30+ years of reading and writing English text.
If content is intellectually lacking, call that out, but I am absolutely sick of people calling out writing because they "think it's LLM-written". I'm sick of review tools giving false positives and calling students' work "AI written" because they used eloquent words instead of Up Goer Five[0] vocabulary.
I am just as afraid of a society where we all dumb ourselves down to not appear as machines as I am of one where machine-generated spam overtakes all human messaging.
That should leave you with media sources like nyt and your local library, which seems healthier to me. And maybe it might encourage a new type of forum to emerge where there is some decentralized vetting that you are a human, like verifying by inputting the random hash posted outside the local maker space.
I hope editorial departments everywhere are taking careful notes on the ars technica fiasco. Agree there's room for some kind of quick "verified human" checkmark. It would at least give readers the ability to quickly filter, and eliminate all the spurious "this sounds like vibeslop" accusations.
It does not resemble that. It is usually grammatically correct writing, but it is also pretty ineffective: bad writing with good grammar.
Let's grab a few books off the shelf (literally).
Douglas Adams' The Hitchhiker's Guide to the Galaxy has four emdashes on the very first page:
> It is also the story of a book, a book called THGTTG - not an Earth book, never...
Isaac Asimov's classic The Last Question: three emdashes on the first page (as printed in The Complete Stories, Volume I)
> ...they knew what lay behind the cold, clicking, flashing face -- miles and miles of face -- of that giant computer.
Mark Z. Danielewski, House of Leaves: Three emdashes on page 1
> Much like its subject, The Navidson Record itself is also uneasily contained -- whether by category or lection.
Robert Caro, Master of the Senate: Five emdashes on page one
> Its drab tan damask walls...were unrelieved by even a single touch of color -- no painting, no mural -- or, seemingly, by any other ornament
Other pages 1s:
* Murakami - 1Q84: 1
* Murray/Cox - Apollo: 1
* Meadows - Thinking in Systems: 1
* Dostoyevsky - The Brothers Karamazov (Pevear/Volokhonsky translation): 4
* Caro - The Power Broker: 5
* Hofstadter - Godel, Escher, Bach - 3
Honestly, when I started this post I expected to have to dig deeper than page 1. The emdash is an important part of English-language literature and I reject the claim that we should ignore all writing that contains it.
Secondarily, I think there's a part of the discourse missing: the presence of a syntactic emdash in a sentence on the internet is not itself a strong signal of LLM-writing - but the presence of an actual emdash glyph (—) should raise some eyebrows, esp. in fora that aren't commonly authored in rich text editors (here, twitter, ...)
(option-underscore, or option-shift-dash if you prefer to think of it that way)
On iOS, you can type it by simply holding down on the "dash" button then selecting the em-dash from the list of options it presents. It may also correct double-dash to em-dash a lot of the time, not sure.
I have used the correct em-dash everywhere I can for over a decade, which amounts to nearly everywhere.
And I've definitely used it when I can't remember that one stinking word that I know exists and is perfect for this occasion.
"hey robot give me every word even mildly related to $SOME_SENSE_ON_THE_TIP_OF_MY_TOUNGE" is a wildly satisfying and underrated experience.
So much content is just straight copy/pasted from the LLM now. Articles, blog posts, linked in posts, reddit comments, etc. Even just using the LLM for 'editing' tends to shift the voice to an obvious LLM voice when used naively. It is getting worse too. Last week a co-worker sent me a screenshot of Claude for me to review their "work", which was just whatever Claude made up.
Usually, if something is very obviously unfiltered LLM output, I just stop reading.
I do use LLMs for writing myself. They are useful, but are poor authors.
You're trading ability and competence for convenience.
The structured thing you mention is the one that bugs me most. I genuinely think that most human writing would be improved by having more of the "signposts" that LLMs overuse. Headings, context-setting sentences, bullet points where appropriate, etc. I was doing "list of bullet points with boldfaced intro for each one" before the LLMs were. But because the LLMs are saturating their writing with it, we'll all learn to take it as a sign of glib superficiality and inauthenticity, and typical good human writing will start avoiding everything of that kind, and therefore get that little bit harder to read. Alas.
And I was just noticing that my home-built blog render pipeline produces dumb quotes and that was embarrassing to me. Needs to be fixed.
(Counterpoint, dumb quotes are 7-bit clean and paste nicely... Hmm.)
I wrote a plugin for my blog that converts all hyphens (surrounded by whitespace) into em-dashes.
https://blog.nawaz.org/posts/2025/Dec/a-proclamation-regardi...
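The linked post has the details; the core transformation is a one-line substitution. A minimal Python sketch of the same idea (the function name is mine, not from the linked plugin):

```python
import re

def emdashify(text: str) -> str:
    # Replace a lone hyphen surrounded by whitespace with an em dash (U+2014),
    # leaving hyphenated words like "well-known" untouched.
    return re.sub(r"(?<=\s)-(?=\s)", "\u2014", text)
```

For example, `emdashify("a win - or a loss")` returns `"a win — or a loss"`, while hyphens inside words pass through unchanged.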
(That Wikipedia table shows that too by the way.)
> And though the operation was done in secret, a new fashion sweeps the court: Bandages wrapped around everyone’s buttocks.
Just like hand made items are popular for their imperfections.
Personally, I'll just continue to use my own voice. I try to correct spelling and grammar mistakes, and proof-read my writing before posting.
It's not perfect, and my writing can at times be idiosyncratic, but it's my voice and it's all I've got left.
But don't be mistaken in thinking that those mistakes make it better, it just makes it mine.
eg: https://ids.si.edu/ids/deliveryService?id=SAAM-2011.6_1
from: https://americanart.si.edu/artwork/mandara-79001 https://www.museumofglass.org/ltlg
I want real humans giving real human opinions, not AI giving its best guess at the most "rewarding" weighted opinion.
should be:
>Although 80% of the content was my own writing, the fact that it was run through an LLM engine for grammar and vocabulary cross-checking meant that it failed the "probably written by AI" metric, and it was rejected.
1. 80 % -> 80%
2. in -> through
3. a LLM -> an LLM
4. enginee -> engine
5. cross-check -> cross-checking
6. cross-checking, -> cross-checking (removed the comma)
7. made it failed -> meant that it failed, (or "made it fail" depending on whether you want to preserve the past tense or preserve the word "made")
8. probable -> probably
9. by AI " -> by AI"
10. ; and it was -> , and it was (no need for a semicolon when linking with a conjunction like "and", and I would consider another word or phrase such as ", and, as a result, it was rejected" to emphasize the causal relationship between the clauses)
That's ten corrections that are fixing straightforward typos and/or grammar and vocab mistakes in one sentence. Most are fairly objective, though I can understand different opinions on 2, 7, or maybe 10.

Relying on AI for editing seems to have atrophied the author's writing if that is what he or she thinks is worth publishing on a blog like this. I would suggest practicing editing your own work and not even thinking about passing it through AI (especially when you were told not to use any AI!) to edit for a while. Given that English is not your first (or even second or third) language, I would also suggest having a native speaker with some demonstrable writing skill review your writing and give feedback on how to make it more idiomatic. For example, writing being "run through an LLM" rather than "run in an LLM" is a relatively subtle difference compared to the others, and it's very, very common for preposition mistakes like this to show up when writing in another language than your first. I am still hopeless with French prepositions.
1. There was a lot of slop pre-AI. In fact I’d say the majority of published writing was bad, formulaic, and just written to manipulate your emotions. So in some sense, I don’t really think pre-AI slop had more value. It’s just cheaper to make now.
2. AI has prompted me to study more off-beat writers that followed the rules of language a little less frequently. This includes a lot of people from circa 1890-1970, when experimenting with form was really in vogue.
3. Which brings me to my third point, which is that no matter how much the AI actually knows about writing, the person prompting it is limited by their own education and knowledge of writers. You can’t say, “make me a post in the style of Burroughs” if you don’t know who Burroughs was, or what his writing style was. So in a sense there is an increased importance to being educated about writing itself. Without it you’re limited in your ability to use AIs to write stuff and in your awareness of how much your non-AI written work is influenced by AI writing.
I've never been surprised at AI writing. Emotion is the biggest part of communication, and these grey boxes have none.
AI always seems so verbose and wordy.
I get that the mainstream ones have been RLHF'd to death, but surely there must be others that are capable?
This is called Hemingway because he was apparently good at communicating efficiently which made him a popular author.
I never passed any AI writing as my own. I would feel utterly awful. Also, I love tweaking words until they sound perfect.
The number of people who just nonchalantly admit that AI writes their messages is honestly scaring me.
First of all, they will make substantive changes you didn’t intend. The meaning will get changed, errors will be introduced. Tone will be off, and as the author says, your voice will disappear. There is no single “correct” way to write something. And voice and tone are conveyed with grammatical and usage variation. Don’t give that up to a robotic average.
Secondly, you will never improve, or even maintain, your own writing skills if you don’t actively engage with the suggested changes. You also won’t fully realize half the purpose of writing, which is to understand the topic better yourself. Doing the work of editing your piece will help you understand the subject even better. If you just let the machine “fix” your errors, you’ll become a worse writer and less of an expert over time.
Ha. Well I guess you did, _this time_.
Can we not just ask an AI to correct our spelling mistakes and leave the rest alone?
How is the author complaining about the quality of their own writing while admitting to not even bothering reading what they wrote, let alone editing it?
(Also, why would using an LLM-based grammar checker trigger an AI writing detector? Did it end up rewriting substantial parts of the original submission?)
And that's, I think, a valid choice; you can choose to use all the tools and make something grammatically and stylistically as close to perfect as possible, but who would want to read something so dry? That's for formal writing, and blog posts are not formal.
Not reading what you write smells more like laziness.
Same thing for spell checks, grammar checks, and even AI usage. If you use things lazily, the result will be lazy as well.
Instead of asking for an AI tool to write your thoughts in your place, you can write it yourself and ask it to criticize your text, instruct it to not rewrite anything, only give you an overall picture of text clarity, sentiment, etc.
But that of course would require more work. Asking ChatGPT to produce a text based on a lazily written, bullet point list of brainfarts is probably easier.
Plus, "lazy" would actually be just using AI to edit the writing.
LLMs can't really do that. They can help you produce a correct sentence where you struggle to create your own, but they do not have the capability to do what you suggest.
LLMs definitely can do this. The output tends to be overly positive though, claiming that any sort of rough draft you give them is "great, almost ready for publishing!". But the feedback you can get on clarity, narrative flow, weak spots... _is_ usually pretty good.
Now, following that feedback to the letter is going to end up with a diluted message and boring voice, so it's up to you to do with the feedback whatever you think best.
I never ask the LLM to evaluate my text in terms of being good or bad. Instead I try something like this:
"In this section I tried to explain X, I intended to sound in Y and Z fashion, and I want a reader to come out with ateast W impression. Is the text achieving these goals? Do I communicate my ideas clearly and consisely, or are they too confuse and meandering?"
I typically get useful feedback. I preface this by specifically asking it not to rewrite anything, only to point out the bits it finds faulty and explain why.
Of course the prompt is different if I am writing, for example, technical documentation, or if it is an attempt at creative writing.
I used it many times for exactly this, with good results. It points out ambiguous constructs, parts that are dissonant with the tone I intend, etc.
I have no idea why you think that LLMs can't do that lol
There's nothing magical about a long text you write yourself vs a stream of reddit comments in a thread. It's all sentiment analysis on text. It can extract ambiguity, see how ideas are connected in the context, categorize and summarize, etc.
You should try it and see it for yourself. Feed it some large text of a single author and ask it to do those things, see if the results are satisfactory.
> you can choose to use all the tools and make something gramatically and stylistically as close to perfect, but who would want to read something as dry
If it is dry, then it is not stylistically perfect. By definition, dry writing is just imperfect writing. Stylistically perfect writing does not have to be dry, and usually is not.
What happens here is that people say "stylistically perfect" when they mean "followed bad stylistic advice".
I do not mean this comment as a kick against AI. It is very good for some stuff and less good for other stuff. What annoys me is someone calling output superior while actually complaining about it being inferior.
Hey, maybe that LLM needs to be used differently to achieve actually good writing results.
The problem is that it has a pretty high false positive rate. Maybe it thinks it's AI because there are absolutely no spelling mistakes. Or maybe you're French and you use Latin-root words in English that are considered "too smart" for the average writer.
And the problem is that people run those tools, see "80% chance to be written by AI", and instead of considering that 20% is high enough to consider you don't know, will assume it's definitely written by AI.
Grammarly has recently started rewriting whole paragraphs. I have been having to reject more and more of these "prompts", where in the past I would accept them almost by default because they actually were grammar checks.
Personally, I would recommend they simply use any old editor with spellchecking enabled. That suffices for most writing where you just want to keep your own voice. To me, the red squiggly line just means I should edit that word myself. In the rare case where I'm stumped on the spelling I'll look at the suggested edit, but never as a matter of course.
Computers, digital text, and digital distribution of information have made writing and thought cheap. And as we are surely all aware, humans rarely value what is cheap, whether in money or in effort and the qualities that follow from it. What people seem reluctant, or perhaps unable, to acknowledge is that predating the current AI slop was what could be called human slop: low-quality, low-effort, careless output that was cheap; regardless of whether AI slop now outperforms it.
It is why you are justified in pointing out that even in a post complaining about AI slop, the human has apparently abandoned what would have been common practice in just the recent past: using basic spellcheckers, or simply reviewing what was written, and deliberately practicing the art and skill of writing, grammar, and sentence structure.
No one is perfect, and that is also part of what makes anything human: somewhat inexplicable, random variation. However, it takes a certain refinement before unique human character becomes a positive quality rather than just humans being sloppy ... human slop.
https://www.literaturelust.com/post/what-writers-need-to-kno...
> Every NYT bestseller from 1960 to 2014 falls in the seventh-grade level spread, from 4th to 11th.
> ...
> Since 2000, only 2 bestsellers have scored higher than 9th-grade readability.
> ... ...
> The bestselling authors of our time are writing at the 4th-grade level.
> > "8 books tie for the lowest score," a 4.4, just above 4th-grade level. Prolific, well-known authors with huge sales: James Patterson, Janet Evanovich, and Nora Roberts.
> These three authors have written a combined total of 419 books.
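Grade-level scores like the ones quoted above typically come from the Flesch-Kincaid grade formula: 0.39 × (words/sentences) + 11.8 × (syllables/words) − 15.59. A minimal sketch; the syllable counter here is a crude vowel-group heuristic of my own, not the exact tool the article used:

```python
import re

def flesch_kincaid_grade(text: str) -> float:
    """Estimate the U.S. school grade level of a text.

    Grade = 0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59
    """
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    if not words:
        return 0.0

    def syllables(word: str) -> int:
        # Crude heuristic: count runs of consecutive vowels, minimum one.
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    total_syllables = sum(syllables(w) for w in words)
    return (0.39 * len(words) / sentences
            + 11.8 * total_syllables / len(words)
            - 15.59)
```

Short sentences built from one-syllable words score near (or below) zero, which is how a bestseller full of punchy dialogue can land at a 4th-grade level.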
We see the same thing in how people dress. People used to write "respectably", and they used to dress the same, and in TV interviews they spoke with great care and deliberation.
Then we threw all of it down the toilet.
The article here is still full of AI slop, and so many people in the comments are defending the author. Blows my mind.
you are missing the writing era, which is gone. whatever we have now will slowly congeal into cold grue that will get a name or names
the madness of being chastised for speakerphoning and disturbing people gulping the slop
what do we call that?
What it is going to be is a 'Slop Decade' - a much better label if you insist on having one.
"Save during the summers and you'll make it through the winters".
Several subreddits became AI slop submission repositories and their human engagement dwindled. Some subreddits that were inundated with AI slop implemented policies that ban it, and it seems to work well.
Strict no slop policies work, and surprisingly, so do rules that require AI submissions to be tagged as AI. Forcing slop slingers to tag their slop does a good job at discouraging said slop, it turns out that admitting your slop is slop is embarrassing or something.
Or maybe there'll be the elite enjoying the world, while the rest of us have to work manual labor. But at least it'll be AI systems ensuring our compliance!