This matters less for text (including code) because you can always directly edit what the AI outputs. I think it's a lot harder for video.
I wonder if it would be possible to fine-tune an AI model on my own code. I've probably got about 100k lines of code on GitHub. If I fed all that code into a model, it would probably get much better at programming like me, including matching my commenting style and all of my little obsessions.
Talking about a "taste gap" sounds good. But LLMs seem like they'd be spectacularly good at learning to mimic someone's "taste" in a fine-tune.
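For what it's worth, the tooling for this already exists. Here's a minimal sketch of what a LoRA fine-tune on your own repos could look like, assuming the Hugging Face transformers/peft/datasets stack; the model name and the "my_repos" path are placeholders I picked for illustration, not recommendations:

```python
# Minimal sketch: LoRA fine-tune of an open code model on your own repos.
# Assumes the Hugging Face transformers / peft / datasets stack; the model
# name and the "my_repos" path are placeholders.
from pathlib import Path

from datasets import Dataset
from peft import LoraConfig, get_peft_model
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

MODEL = "bigcode/starcoder2-3b"  # any open-weights code model would do

tokenizer = AutoTokenizer.from_pretrained(MODEL)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token  # many code models lack a pad token

model = AutoModelForCausalLM.from_pretrained(MODEL)

# LoRA freezes the base weights and trains small adapter matrices, which is
# what makes "fine-tune on ~100k lines" feasible on a single GPU.
model = get_peft_model(model, LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM"))

# Your corpus: every source file becomes one training document.
texts = [p.read_text(encoding="utf-8", errors="ignore")
         for p in Path("my_repos").rglob("*.py")]
dataset = Dataset.from_dict({"text": texts}).map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=2048),
    remove_columns=["text"],
)

Trainer(
    model=model,
    args=TrainingArguments(output_dir="style-lora", num_train_epochs=3,
                           per_device_train_batch_size=1),
    train_dataset=dataset,
    # mlm=False gives standard causal-LM labels (predict the next token).
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
).train()
```

At ~100k lines, an adapter like this would mostly pick up surface style (comment voice, naming, formatting quirks) rather than deep architectural habits, but that surface style is arguably a big chunk of the "taste" in question.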
How can I claim what I said in the comment above? Because I've spent the past week producing something very high quality with Grok. Has it been easy? Hell no. Could anyone just pick up and do what I've done? Hell no. It requires things like patience, artistry, and taste.
The current tech is soulless in most people's hands, and in this context it should stay confined to a narrow range of uses. The last thing I want to see is low-quality slop infesting the web. But hey, that's not what the model producers want: they want to maximize tokens.
With Opus 4.6, I'm seeing that it copies my code style, which makes code review incredibly easy, too.
At this point, I've come around to the view that writing code yourself is really just for education, so you learn the gotchas of architecture and support. And maybe for setting up the beginnings of an app, so the LLM has something that makes sense to you to mimic, for easy reading.
And all that does mean fewer jobs, to me. Two guys instead of six or more.
All that said, there's still plenty to do in infrastructure and distributed systems, optimizations, network engineering, etc. For now, anyway.