/rant
Harder to fake.
See also this video from Nate B Jones: https://youtu.be/FDkvRl1RlT0?si=WUK2WJTXvKAWKD0r
> Writing documentation is arduous and a little painful, which as it turns out is a good thing as it incentivizes the writer to be as succinct as possible.
It takes more effort to be brief, even for humans. Good documentation writers were always brief.
So like ATS checkers for resumes, I find myself needing an AI checker for my text.
Ultimately, we will have AI write everything for another AI to parse, which will be a massive waste of energy. If only there was some agreed-upon set of rules, structures, standards, and procedures to facilitate a more efficient communication...
If I was your manager, and you sent me your seventeen-page AI-generated thing because you think I'm just gonna summarize it anyway and expect something long: you misread me.
I make a point, all the time, to everyone who won't listen: don't send me walls of text. I'm not gonna read them. I'm gonna ignore them and close your bug reports until you've spent the time to make them short and legible enough for me to understand. If you use AI for that, I don't care. But it better be short, make actual sense when I read it, and hold up when I verify it. If I wanted to just ask AI, I'd do it myself. You have to "value add" to the AI if you want to be valuable yourself.
The only time I send something longer is if it’s a postmortem for some prod issue, which I write by hand.
I use AI every day, often multiple agents at once, but the skill is knowing when it's appropriate and when I need to be the one thinking really hard about something.
I just type what I want to say and hit send. YOLO
Made me smile. Perhaps the new term for a hand-written human reply, i.e. "I didn't use AI", is "I YOLOed it".
It will probably take a couple hundred years but I'm pretty sure I'm right about this :)
API or die /s.
Seriously, though, fuck that shit!..
I feel the loss of this signal acutely. It's an adjustment to react to a 10-30 page "spec" chock-a-block with formatting and ASCII figures as if it were a verbal spitball ... because these days it likely is.
man, I see this on Jira: a PM or BA is like "yeah, I'll write that AC for you" and in comes a giant bullet list stuffed with emojis and checkmarks
I've noticed Claude does far fewer listicles than ChatGPT. I suspect they don't blindly follow supervised-learning feedback from chats as much as ChatGPT does. I get an Apple-vs-Google design feel from those two companies: Apple tends not to obsess over interaction data, instead relying on design principles, while Google just tests everything and has very little "taste."
In general I feel like the data approach really blinds people to the obvious problem that "a little" of something can be preferable while "a lot" of the same is not. I don't mind some bullet points here and there but when literally everything is in bullet points or pull quotes it's very annoying. I prefer Claude's paragraph style.
I suppose the downside is that using "taste" like Apple does can potentially lead a product design far, far away from what people want (macOS 26), more so than a data approach, whereas a data approach will not get it so drastically wrong but will never feel great.
I also much prefer the output of Claude at present.
Turns out you can get away with a lot when you have a quasi-monopoly on an addictive product, and you buy out your realistic competitors...
> Claude does not use emojis unless the person in the conversation asks it to
All of the PMs I interacted with across companies started using Notion for everything at the same time. Filling Notion documents with emojis was the style of the time.
This slightly pre-dated AI tools becoming entirely usable for me.
Notion-core
Somehow they must have been over-represented in the training data (or something in the tokenising/training/other processes magnifies the effective presence of punctuation) because I don't remember them being that common and LLMs seem to love spewing them out. Or perhaps it is a sign of the Habsburg problem: people asked LLMs to produce README files like that because they'd seen the style elsewhere, it having spread more organically at first, and the timing was just right for lots of those early examples to get fed back into training data for subsequent models.
How quickly we become reverse centaurs.
it's literally their job to ship functional product features...
Indeed. I've spent my professional career seeking out positions at companies of increasing prestige and technical renown, each with a higher reputation for professionalism and performance than the last. And yet this invariant has held in every position.
As far as I can tell, the only difference between each company has been the quality of the manager I was supposed to please, which I have noticed (perhaps predictably) is not strongly correlated with the company's reputation or success.
I usually differentiate between real managers who exist to make decisions, versus those who manage people. The latter are “overseers” not managers.
Who cares about features or functionality, or whether they even know what "functional" means in that case?
That's how it looks more and more...
Just give me normal bulleted items, I can read.
I like them even more on code comments. It tells _precisely_ how much effort went into the pull request, so I don't spend time reviewing lazy work.
I propose that what you enjoy is having a token of the appearance of effort, easily constructed and easily observed and easily suitable for low-effort handling of these proxy objects for actual work.
They’re saying that the emoji usage is telling them that very little effort was put into the PR and that they’ll treat it accordingly.
My apologies, sincerely.
(If only the message I was responding to had had emojis and checkmarks for me to efficiently process it!!!!)
Instead he didn’t read it at all, and just threw the whole thing at Claude Code as a big prompt. The result was… interesting!
They put up a PR with all the obvious tells, the markdown table of files that changed, the description that basically parrots back things the human obviously wanted them to stress in the task (“this implements a secure, tested (no regressions) implementation of a Foo…”), and the code is an absolute mess of one-off functions placed in any random file with no thought to the way the codebase is actually organized.
Then I give feedback after spending like an hour going through their 2000-line change, and back comes an update with a very literal interpretation of my feedback that clearly doesn't understand what I was even saying. Complete with code comments that parrot back what I said ("// Use the expected platform abstractions for conversion (not bespoke methods)").
Reviewing coworkers' PRs feels like I'm just talking to the LLM directly at this point, but with more steps and less control over the output.
Some people have put me on their blacklists after these interactions, sure, but they're the exact people I don't want to work with again. The important thing here is that I've never done someone else's work for free.
The laziness is offloading work down the line.
Ideally AI would minimize excessive documentation. "Core knowledge" (first principles, human intent, tribal knowledge, data illegible to AI systems) would be documented by humans, while AI would be used to derive everything downstream (e.g. weekly progress updates, changelogs). But the temptation to use AI to pad that core knowledge is too pervasive, like all the meaningless LLM-generated fluff all too common in emails these days.
EVERYONE (engineers, pms, managers, sales) uses Claude Code to read and write Google Docs (google workspace mcp). Ideas, designs, reports. It's too much for one person to read and, with a distributed async team, there's an endless demand for more.
So for every project there's always one super Google Doc with 50 tabs and everyone just points their claude code at it to answer questions. It's not to be read by a human, it's just context for the agent.
The economic reality check is going to be devastating. It won't be a crash of AI as a tech; it will be a crash of every 'AI native' company that no longer even knows what its product is.
These companies have enough market power that they can afford to be ineffective. So they were. And now they are ineffective in novel ways.
So, I approach it in good faith, but I do get upset when people say "I'll ask Claude." You need to be the intermediary; I can also prompt Claude and read back the result. If you hire an employee to do work on your behalf, you are responsible for their performance at the end of the day. That's what an AI assistant is: the buck stops with you. But I don't think people understand that, or understand that they aren't adding value. At some point you have to use your brain to decide if the AI is making sense; that's not really my job as the code/doc reviewer. I want to have a conversation with you, not your tooling, basically.
So, what you are saying is that I should fire the bottom N% of underperforming agent instances?
You know, like employers do as opposed to taking any responsibility?
The dude is just acting like a manager with a technical employee (agent) who does the hands-on work. If you are upset about this you should be hopping mad about the whole manager-director-VP-SVP hierarchy above this dude.
You're saying this as if it's some rebuttal ad absurdum, when it's absolutely the case: when the higher layers don't understand what they do, we have a problem with that too, and that's been true since forever. Remember Dilbert and Office Space, and making fun of the ignorant middle managers and execs?
In this case, what we're complaining about is coders not understanding the code they ship (because some AI wrote it and they don't bother to review it or guide the AI fully).
Well put. I generally skip AI-generated PR descriptions for this reason as they tend to miss the forest for the trees. Sometimes a large change can be explained by a short yet information-rich description ("migrate to use X instead of Y", "Implement F using pattern P") that only a human could and should write.
My young "AI native" coworker opens PRs with three-screen slop descriptions. I flagged that "I know he ain't reading all that, and therefore I ain't reading all that," so he should just give a max half-screen overview. I expect the PR description to make sense, be correct, and have been reviewed by the person opening the PR. You can still use agents for that, but with shorter descriptions there's at least a chance it's not complete BS.
I used to have a colleague (senior engineer) who never cared to write a single line in Pull Request descriptions, as if other people had to magically know what he meant to achieve with such changes.
Now? His PRs have a full page description with "bulleted summaries of bulleted summaries"!
Minimum word counts are the greatest disservice high school and college have ever done to future communication skills. It takes people years to unlearn this in the workplace.
Max word counts only please. Especially now with AI making it so easy to produce fluff with no signal.
In college, I took a constructive writing course because I thought "Hey, easy A!" After the second or third week, the professor told me that, while the class had a word minimum, I would also be given a separate word maximum. She said I needed to learn brevity and simplicity, before anything else.
The point being: I was able to cruise through high school with my longwindedness as a cheat code, never stressing about minimum lengths, despite my writing being crap in other ways.
Although I have regressed in the two decades since, it helped me a good deal. I am grateful to that professor for doing that.
Good for thinking through a concept but unsalvageable in the edit phase. Easier to throw away and rewrite now that you know what to say.
Nowadays I like conversation as an ideating step. Talk to a bunch of people, try to explain yourself until they get it, see what questions they ask. Sometimes in HN threads like this :)
Then write it down.
You get super high signal writing where every sentence is load bearing. I’ve had people take my documents and share them around the company as “this is how it’s done”
It can take weeks of work to produce a 500 word product vision document. And then several months to implement, even with AI.
Me too. Try speech-to-text one day; you may find that you'll use 2x the words you do with a typed vomit draft. I was surprised.
Don't you get dinged as a slow performer? Management expects x5 speed on everything now that AI is available.
No because the document is not the work. Management wants someone to figure out the solution to their problems. The document is just a step in solutioning.
Without the doc, others would have to re-do all that work if you get hit by a bus. Or you’d be stuck in endless meetings conveying the vision instead of figuring out the next problem.
Document length is inversely proportional to the quality of your thinking/insight. When you create fluff, everyone can see you didn’t do the work.
If your boss asks you for specific documents and expects a quick turnaround, and you regularly take 3 weeks or whatever to produce them, then yeah probably.
If your boss generally leaves you alone to find and solve problems on your own, then probably not.
"I have made this letter longer than usual, only because I have not had time to make it shorter." - Blaise Pascal
Brevity is an art, and it is hard.
I've gotten better at phrasing myself adequately in one go. Rote mechanical memorization has also made writing itself cheaper. (read my username)
I can now yap quite adequately over text, yet I regularly find AIs at minimum 2x as verbose as my preferred phrasing after manual word mashing.
An odd tradeoff of my verbal-based writing seems to be that I am a fairly slow reader. I read aloud in my head, albeit a bit faster than I could speak, but I still hear the words as an internal monologue.
When discussing this a few times with friends, I've learned how different everyone's experiences are when bridging thoughts=>speaking, thoughts=>writing, thoughts=>typing, and text=>thoughts (or even text=>understanding).
Copying is nearly everywhere (patents, graphic design, business), although in other areas it is often applauded and less obviously deceptive.
We talk about countries copying; Japan, for example, was notorious for it. I think the underlying motivation there is ownership: greedy people feeling they own everything (art and technology). "We own that and you stole it from us," along with the entitlement of never recognizing when they copy others themselves.
Since "write an essay" can be anything from three paragraphs to a 50 page paper and the teacher probably doesn't think either is the appropriate response to the task, some sort of numerical guide is a good starting point, even if a fairly wide range is a better guide than just a minimum...
(plus actually there are real world work tasks involving composing text that fits within a certain word range, and since being concise and focused isn't AI text generation's strong suit, I'm not sure those work tasks will disappear...)
My high school professors had a really good solution to this:
Minimum word lengths but you have to write the essay in class by hand. You have 2 periods.
Some of us still write a lot but having limited time and space (4 pages) really put a hard limit without saying so. In higher classes they started saying “I’m gonna stop reading after 3 pages so make sure you get to the point”
The grading was thorough and harsh. In college I was never graded harder on writing. My writing and comprehension abilities improved dramatically over that period of time.
Demanding that students read minds is not a good strategy. Specifying the expected length, and checking for it, is a good strategy. The teacher should also check for other things: whether paragraphs logically follow, grammar, sentence structure, you name it. But don't make them guess.
Subject yourself to a classroom of kids that you must teach to write, and throw out minimums. Will some students do fine? Sure, of course. And what of the others who turn in one sentence? Who never grow? Who have to go into math class and hear their idiot parents say "why are you learning that, we have calculators"?
Strawman argument; the correct thing to do is not to throw out minimum word count and leave it at that, rather to emphasize the role of brevity and concision while still being sufficiently thorough.
It's widely understood that LOC is a poor measure for many coding purposes, so it shouldn't be controversial that word count is an equally flawed measure for prose.
I hated it in high school, but I think I understand it better now. Part of the problem is they never explained the "why" or the "how", just the requirement. I wasn't able to write anything more than a page or two without extreme difficulty until college, when the requirements went up to 30 pages.
In theory, someone who can write a 30 page paper could effectively distill it down to a short memo when needed, summarizing their primary point(s). Someone who can only write short memos would have a hard time writing something longer one day if/when required. I was trying to do a knowledge transfer one day, opened up Word, and just typed 20 pages on everything I knew about a tool we used heavily, but wasn't documented anywhere. I don't think I could have done that before I was forced to write those longer papers in college.
John Nash's Ph.D. Thesis is notorious for being short: it's still 27 pages (typed, with hand-written equations and a whopping total of two citations) but that's an order of magnitude below average. On the other hand, most of us don't invent game theory.
Students used to minimum-word-count essays sometimes have to do some self-retraining to realize that the expectation is that you have more that you want to say than you have room to say it, and the game is now to figure out how to say more in fewer words.
Same as lines of code, etc.
I certainly wish more teachers encouraged parsimony and penalized fluff and bullshittery, but I'd be surprised to find them doing it outside of some narrow cases where the point is just to make you write something at all.
They generally want to encourage their students to engage with the topic at a certain level and practice the thinking needed to research, structure, and implement an argument of a certain length. They want you to put at least 5 pounds of idea in the 5-10 pound idea bag.
If you're convinced you've hacked word economy and satisfied the assignment except for this goshdarnpeskyminimumwordcount, you're probably misunderstanding the lesson the instructor is willing to read through a bunch of bad writing to impart and cheating yourself.
People being people, and managing meaning there is no outcome where everyone is happy: this is why I am not going to be a manager. I just wanted the OP's honest opinion on how to solve it, or whether it is even solvable.
His explanation: I don't want to read more than that, and you should be able to fit all the most important details in one page.
Great lesson.
When I started my PhD I was already really good at typesetting with LaTeX. I started to bring in fully typeset works in progress for my supervisor to read through. These proofs often had fatal flaws. He asked me to stop typesetting until after the work had been verified because it looked too convincingly correct due to being typeset.
That was about 15 years ago but I've never forgotten it. Drafts should look like drafts. Scrappy work and proofs of concept should look as such. Stop fucking with people by making your bullshit, scrappy ideas look legit. Progress is a cooperative effort. It's not about trying to make people say yes.
A huge AI signal to me is not em dashes, not emoji, not even the "not X, it's Y" construction which oh god I'm falling into the trap right now aren't I.
It's a combination of these factors plus a tendency to fluff out the piece with punchy but vague language, often recapitulating the same points in slightly reworded ways, that sounds like... an eighth grader trying to write an impressive-sounding essay that clears the minimum word limit.
Did the bright sparks who trained these things just crack open the printer paper boxes in their parents' homes filled with their old schoolwork, and feed that into the machine to get it started?
Search engines only show a snippet of the content and that always looks convincing. It's the whole content that is off and, unfortunately, a few seconds/minutes can pass before you realize it (If you ever do).
Even though real humans write like that when writing documents, they never did that in informal messaging.
The length itself is not an indicator per se, but you can sense when it is not honest. If others do not have a sense for it, it seems like complaining about something new.
This is not adding value for anyone except people whose function is to look busy, and people trying to avoid their busy work.
In the future everyone will have a bot and our bots will just handle all interactions
It's a sort of leverage: "I spend 5 minutes prompting so that you can spend 30 minutes reviewing." Not gonna happen, LLM buddies.
The bulk of pretty much everything is fluff. Not just workplace artifacts.
In many ways this is the root of all complexity.
“Anything more than the truth would be too much.”
- Robert Frost