> It's like the CEO coming down to tell everyone they'll be making sure all the programmers are using regular expressions enough, and tracking time spent engaging with regular expressions, or they'll be counting how many breakpoints they're setting in their debuggers per week.

That's because they weren't sold regex as a service by a massive company, while also being reassured by everyone that any person not using at least one regular expression per line of code is effectively worthless and exposes their business to the threat of immediate obsolescence and destruction. They finally found a way to sell that same kind of FOMO to a majority of execs in the software industry.

reply
> even if they immediately throw the output in the metaphorical garbage bin.

Gotta be careful if you do that though; e.g. Copilot can monitor 'accept' rate, so at a bare minimum you'd have to accept the changes, then immediately back them out...

reply
In a couple years, we'll have office workspaces equipped with EEG helmets that you must wear while working, to measure your sentiment upon seeing LLM-generated code. The worst performers get the boot, so you better be happy!
reply
I wonder if Copilot can write a commit and backout routine for them.
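Something like this would probably do it (a hypothetical sketch, assuming the 'accept' metric counts accept events rather than what actually survives in the repo):

    import subprocess

    def run(*args: str) -> None:
        """Run a git command, failing loudly on error."""
        subprocess.run(["git", *args], check=True)

    def commit_and_backout(message: str = "apply AI suggestion") -> None:
        run("add", "-A")                    # stage the accepted suggestion
        run("commit", "-m", message)        # the 'accept' has already been counted
        run("revert", "--no-edit", "HEAD")  # immediately back it out, on the record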
reply
If you use AI to back it out, sounds like you’ve found an infinite feedback loop for those metrics.

Did industrial psychology die out as a field? Why do we keep reinventing the wheel when it comes to perverse incentives? It's like being on a scrum team where the big bosses expect the average velocity to go up every sprint, forever, but the engineers are the ones deciding the point totals on tickets.

reply
From a management perspective I would be highly skeptical of token leaderboards. You are incentivizing people to piss away company money for uncertain rewards.

I mean… throw some docs into the context window, watch it explode. Repeat that a few times with some multi-step workflows. Presto, hundreds of dollars in “AI” spending accomplishing nothing. In the olden days we’d just burn the cash in a waste paper basket.
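Back-of-the-envelope, with made-up but plausible numbers (say $3 per million input tokens):

    # Every turn of a multi-step workflow re-sends the whole context,
    # docs included. All numbers below are assumptions for illustration.
    PRICE_PER_TOKEN = 3 / 1_000_000  # dollars per input token (assumed)
    DOCS_TOKENS = 80_000             # docs pasted into the context window
    TURNS = 50                       # steps per "multi-step workflow"
    REPEATS = 20                     # times the exercise is repeated

    cost = DOCS_TOKENS * TURNS * REPEATS * PRICE_PER_TOKEN
    print(f"${cost:,.2f}")           # $240.00 -- one person, accomplishing nothing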

reply
My company doesn’t enforce AI usage, but for those who choose to use it, every month they highlight the biggest users. It’s always non-tech people who absolutely don’t understand how LLMs work and just run a single chat for as long as possible, until our system cuts them off and forces them into a new chat context.
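That tracks with how the billing works: for typical chat APIs, every message re-sends the entire history, so one never-reset chat grows roughly quadratically in total tokens. A rough illustration (the per-turn size is an assumption):

    TOKENS_PER_TURN = 500  # assumed average size of one exchange

    def total_tokens(turns: int, reset_every: int | None = None) -> int:
        # The full history is re-sent on every turn; resetting the chat
        # starts the history over from zero.
        total, history = 0, 0
        for t in range(turns):
            if reset_every and t % reset_every == 0:
                history = 0
            history += TOKENS_PER_TURN
            total += history
        return total

    print(total_tokens(200))                  # one long chat:       10,050,000
    print(total_tokens(200, reset_every=20))  # reset every 20 turns: 1,050,000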
reply
"Can't fix stupid"
reply
What's stopping someone from just having the AI churn out garbage all day long? Or, like, putting your AI into plan mode with extra-high reasoning and having it churn for 10 minutes to make a microscopic change in some source file. Repeat ad infinitum.
reply
> What's stopping someone from just having the AI churn out garbage all day long?

In my case it's morality.

reply
Interesting consideration, 'mandates' and all. Definitely in the 'toss the output' camp here. I think I'll see 'morality' leave when $EMPLOYER fires 'professional discretion'... forcing usage and, ultimately, debasing the position.

edit: A peer comment said it well, IMO. The consequences aren't really yours. Also: something, something, Goodhart's Law.

reply
I would argue that making the company experience the consequences of its choice of metrics / mandates is in fact a moral imperative.
reply