So let me start from @jbarrow's comment: "AI written, generated from the codebase."
My actual learning process looked like this:
1. I walked through the nano-vLLM codebase, asking Claude Code some high-level questions to warm up.
2. Then I asked detailed questions one by one, let it explore, and double-checked the code myself. As someone without an ML background, I sometimes needed hours to understand a single concept.
3. Once I felt I understood enough, I started drawing Excalidraw diagrams to explain what I had learned.
Does this count as "generated from the codebase"? I don't think so.
Where we might disagree is the writing process.
As a non-native English speaker, my workflow looks like this:
1. Write a short paragraph (<100 words), then ask my writing agent to "fix this for readability and grammar."
2. Review the output. *If it changes any technical meaning, I correct it.* I consider this a responsible way to write a tech blog.
3. Move to the next paragraph.
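If it helps to make that concrete, here's a minimal sketch of the loop, using the OpenAI Python client purely for illustration; the model name, prompt wording, and example paragraph are placeholders, not my actual setup:

```python
# Minimal sketch of the paragraph-by-paragraph editing loop described above.
# Assumes the official `openai` Python client; model, prompt, and input text
# are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

EDIT_PROMPT = (
    "Fix this paragraph for readability and grammar. "
    "Do not change its technical meaning."
)

paragraphs = [
    "First draft paragraph, under 100 words...",
]

for draft in paragraphs:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": EDIT_PROMPT},
            {"role": "user", "content": draft},
        ],
    )
    edited = response.choices[0].message.content
    # Step 2 happens outside the code: a human compares draft vs. edited
    # and reverts any change in technical meaning before moving on.
    print("--- draft ---\n", draft)
    print("--- edited ---\n", edited)
```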
Is this "AI-written"? I'd call it "AI-assisted." Every idea in every sentence is mine. Honestly, things like "em dashes" never stood out to me when reviewing. I suspect that's common for non-native speakers.
I wrote this comment the same way. The LLM fixed 14 grammar mistakes that I think would distract readers more than any LLM-ish phrasing.
That said, I'm open to suggestions on how to improve my writing process :)
To be honest, most native readers wouldn't register grammar errors, full stop.
I guess I have more awe for people who speak a foreign language at all than for anything piped through some agent malarkey.