> Models are starting to get good at ambiguity

That's fair, and something I've observed too. I wish I had written "the rest of us shouldn't freak out and quit software today".

But here's another data point: At the biotech I work for, writing good code has never been the bottleneck. I actually told my boss that a paid Claude subscription wouldn't be much more valuable than the free one, because even if it took every piece of code or algorithm we've ever written and 10x-ed the hell out of it, we'd still be bottlenecked by the biology and physics, which dictate that we wait 24 days for our histology assay pipeline.

I have a hunch most fields outside of software are this way. And I'm personally not planning to quit anytime soon.

reply
Ok, but your job is clearly not a good sample for a "job most mortals work at".
reply
> Soon, all meetings will be recorded, transcribed and stored in a well-indexed place for the agents to search when faced with ambiguity (free startup idea here!)

We were doing that over at Vowel a few years back, unfortunately it didn't pan out because you're competing directly against Zoom, Google Meet, Microsoft Teams, etc. They are all (slowly) catching up to where we were as a scrappy startup 4 years ago.

It was truly game-changing to have all of your meetings in an easily searchable database. Even as a human.

reply
Tacit knowledge is, by definition, not recorded in any of these systems. This proposes to solve the problem of tacit knowledge by getting rid of it. It is not clear to me whether that solution is possible or desirable.
reply
The labs are spending hundreds of millions of dollars hiring people doing many fairly random (but economically valuable) tasks to collect this tacit knowledge for RL.

It works really well.

reply
It ceases to be tacit as soon as it is collected.

Maybe this rephrase will help: the proposed solution is to render all knowledge explicit.

reply
> It ceases to be tacit as soon as it is collected.

I'm not sure.

If it is collected via preferences, then it isn't necessarily something that can be communicated (except in the LLM's latent space).

That still feels tacit to me.

To simplify that argument: the relationship between King and Queen in the Word2Vec latent space can easily be labelled explicitly.

But the relationship between Napoleon and Tsar Alexander I also exists, and it encodes much of the tacit knowledge about their relationship, yet it isn't as easily labelled (e.g., Google AI Mode says "Napoleon I and Tsar Alexander I had a volatile 'bromance' that shifted from mutual admiration to deep animosity, acting as a defining conflict of the Napoleonic Wars").

Word2Vec is a very simple model. In a more complex LLM, that deeper knowledge can be queried by asking questions, but you can never capture all of it. Isn't that what "tacit knowledge" is?
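To make the "easily labelled" part concrete: the King/Queen relationship is just vector arithmetic in the embedding space. A minimal sketch with made-up toy vectors (not real Word2Vec embeddings, which are learned and typically 100-300 dimensions) of the classic analogy:

```python
import numpy as np

# Toy 3-d "embeddings", invented for illustration only.
vecs = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.2, 0.1]),
    "man":   np.array([0.1, 0.8, 0.0]),
    "woman": np.array([0.1, 0.2, 0.0]),
}

def cosine(a, b):
    # Cosine similarity between two vectors.
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# The famous analogy: king - man + woman should land near queen,
# because the king->queen offset is the same "gender" direction
# as man->woman. That offset is the explicit label.
target = vecs["king"] - vecs["man"] + vecs["woman"]
best = max(vecs, key=lambda w: cosine(vecs[w], target))
print(best)  # → queen
```

The point of the comment above is that the Napoleon/Alexander relationship also lives somewhere in the latent space, but it is not a clean single offset you can name like "royal + female", which is why it stays closer to tacit knowledge.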

reply
So self-chosen total surveillance and transparency so your fav LLM can be better?
reply
Could always use a local LLM for stuff like that. One of my relatives works for one of the big audit firms and that's what they do.
reply
Sure. Still, from what he said, your company wants every communication from you stored somewhere, ready for analysis. I don't think unfiltered data acquisition is good; my interpretation and decision making are also part of my work. Also, meetings may include personal details that I would never share on the record.

Full transparency has a cost, and we cannot afford it.

reply
Why record at all when it can be built in real time as the meeting is going on?

Slack is kinda there with Salesforce: you can do a lot already in Agentforce and in Slackbot, but the two aren't integrated just yet, and Slackbot doesn't support group chats/channels. One interesting aspect of this will be: who takes priority, the boss, the client, the analyst, or the developer?

reply
deleted
reply
In coding, the ambiguity is very, very limited and constrained compared to any non-dev job that involves decision making.
reply
That's not even close to being the case. It's literally a series of ambiguous questions and strategic decisions.

Non-ambiguous coding is like a first-semester algorithms class at university.

reply
Unfortunately, you can't record meetings in many jurisdictions, including court sessions. Hence we have to rely, for worse or perhaps even for better, on human-driven note taking.
reply
You're underestimating the AI lobby here. They're eating away at copyright laws, something that seemed impossible just a couple of years ago. Screwing over privacy laws is just the next step.

Also, we are seeing a cultural shift around this. People now bring "AI notetakers" to Zoom calls without even asking for your permission. People are already acting as if privacy laws don't exist, so it's going to be even easier for the AI lobby to tear them down now. Just as piracy normalized copyright infringement, opening the path to the current rulings around "fair training".

reply
Such invasive practices are pretty disgusting, but I don't think they will become pervasive. Once this spreads, AI vendors and abusive companies will be held accountable. There is also an obvious conflict: the surveillance will likely be very selective. Programmers have to record everything, while middle managers get to sign off on everything, and senior management will of course do whatever they like while having full insight into the data. This will create even more backlash. And of course the social culture will turn stone cold and hostile overnight with such installations.
reply
Thanks for the downvote, anon. It's an inconvenient conversation.
reply
I disagree, but it wasn't me who downvoted, just so you know.
reply
Yeah, I wasn't accusing you. It was likely someone who disagreed, and I can deal with that.
reply