That's fair, and something I've observed too. I wish I had written "the rest of us shouldn't freak out and quit software today".
But here's another data point: at the biotech I work for, writing good code has never been the bottleneck. I actually told my boss that a paid Claude subscription vs the free one wouldn't add that much value, because even if it took every piece of code or algorithm we've ever written and 10x-ed the hell out of them, we'd still be bottlenecked by the biology and physics, which dictate that we wait 24 days for our histology assay pipeline.
I have a hunch most fields outside of software are this way. And I'm personally not planning to quit anytime soon.
We were doing that over at Vowel a few years back. Unfortunately, it didn't pan out, because you're competing directly against Zoom, Google Meet, Microsoft Teams, etc. They are all (slowly) catching up to where we were as a scrappy startup 4 years ago.
It was truly game-changing to have all of your meetings in an easily searchable database. Even as a human.
It works really well.
Maybe this rephrase will help: the proposed solution is to render all knowledge explicit.
I'm not sure.
If it is collected via preferences, then it isn't necessarily something that can be communicated (except in the LLM's latent space).
That still feels tacit to me.
To simplify that argument: the relationship between King and Queen in the Word2Vec latent space can easily be labelled explicitly.
But the relationship between Napoleon and Tsar Alexander I also exists and encodes much of the tacit knowledge about their relationship, yet isn't as easily labelled (e.g., Google AI Mode says "Napoleon I and Tsar Alexander I had a volatile 'bromance' that shifted from mutual admiration to deep animosity, acting as a defining conflict of the Napoleonic Wars").
Word2Vec is a very simple model. In a more complex LLM, that deeper knowledge can be queried by asking questions, but you can never capture it all. Isn't that what "tacit knowledge" is?
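To make the "explicitly labellable" part concrete, here's a minimal sketch of the classic Word2Vec analogy arithmetic. The vectors below are hand-picked toy values (not real Word2Vec weights) arranged so a "royalty" direction and a "gender" direction exist; the point is just that the King/Queen relationship reduces to a vector offset you can name, whereas the Napoleon/Alexander relationship has no such clean axis.

```python
import numpy as np

# Toy 3-d "embeddings" (hypothetical, not trained weights):
# dim 0 ~ royalty, dim 1 ~ gender, dim 2 ~ noise.
emb = {
    "king":  np.array([0.9,  0.8, 0.1]),
    "queen": np.array([0.9, -0.8, 0.1]),
    "man":   np.array([0.1,  0.8, 0.0]),
    "woman": np.array([0.1, -0.8, 0.0]),
    "tsar":  np.array([0.8,  0.7, 0.3]),
}

def nearest(vec, exclude=()):
    """Return the vocabulary word whose embedding has the highest
    cosine similarity to vec, skipping any words in exclude."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return max((w for w in emb if w not in exclude),
               key=lambda w: cos(emb[w], vec))

# The explicit, labellable relationship: king - man + woman ~= queen.
target = emb["king"] - emb["man"] + emb["woman"]
print(nearest(target, exclude={"king", "man", "woman"}))  # → queen
```

With real embeddings the same offset trick works for relations that have a simple shared direction (gender, capital-of, plural); the "tacit" relations are the ones that don't decompose into any single nameable offset.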
Full transparency has a cost, and we cannot afford it.
Slack is kinda there with Salesforce: you can already do a lot on Agentforce and in Slackbot, but the two aren't integrated just yet, and Slackbot doesn't support group chats/channels. One interesting question in this will be: who has superiority, the boss, client, analyst, or developer?
Non-ambiguous is like a first-semester algorithms class in university.
Also, we are seeing a cultural shift around that as well. Now people bring "AI notetakers" to Zoom calls without even asking for your permission. People are already acting like privacy laws don't exist anymore, so it's going to be even easier for the AI lobby to take them down now. Just like piracy normalized copyright infringement, opening the path to the current rulings around "fair training".