Surely with all of these ridiculous developer productivity gains enabled by AI, they should finally be able to fix all of these ancient issues quickly and clean up the backlog.
Nope, “workforce reduction” thanks to AI again. This charade is getting boring.
So nothing really changes in terms of product development velocity, it’s just headcount reduction.
But that’s not what their own marketing strategy communicates.
Have any of the companies who went all in on AI gotten better at their jobs because of it?
You have never interacted with Jira?
What hope do slop-maker users have, then?
On the other hand, LLMs seem perfect for triage and finding duplicates, so it's still surprising that they've let it get this bad.
(Source: I build tooling around Claude Code and have spent hours swimming in the GitHub issues based on downstream user feedback)
If investor fears are that AI makes GitLab's business less valuable, including this in their "GitLab Act 2" announcement makes a whole lot of sense:
> The agentic era multiplies demand for software. Software has been the force multiplier behind nearly every business transformation of the last two decades. The constraint was the cost and time of producing and managing it. That constraint is collapsing. As the cost of producing software collapses, demand for it will expand. Last year, the developer platform market used to be measured in tens of dollars per user per month, this year it is hundreds/user/month and headed to thousands. Not only is the value of software for builders increasing, but we believe there will be more software and builders than ever, and we will serve an increasing volume of both.
Wrote a bit more about this on my blog: https://simonwillison.net/2026/May/11/gitlab-act-2/
That's how I interpret the move, too.
>The agentic era affords GitLab the largest opportunity in our history as a company, and we're making the structural and strategic decisions to meet it
>Operationally, we grew into a shape that was right for the last era and isn't right for this one
To meet their largest opportunity ever, they believe they need fewer resources. I'm not sure I understand how that follows.
>We're rewiring internal processes with AI agents, automating the reviews, approvals, and handoffs to speed us up
Is this another instance of "we create code twice as fast and the bottleneck is review, so YOLO, no more bottleneck"? I've yet to see a convincing justification for this. If anything, if you're going full throttle, all the more reason to watch the steering wheel, no?
That said, 8 layers of management is a lot of management, and every line of the message seems like leadership truly believes they are sinking in bureaucracy. Let's see how unneeded those 3 layers they're cutting were.
Seems like a fair assessment. Maybe they should start by getting rid of the people who put that structure in place?
Bottom-level teams are merged to form larger teams.
At GitLab's team size, that means every manager has 2-3 reports? Yeah, I'd be cutting layers too.
> GitLab has at most eight layers in the company structure (Associate/Intermediate/Senior, Manager/Staff, Senior Manager/Principal, Director/Distinguished, Senior Director, VP/Fellow, Executives, Board).
> [...] You can skip layers but you generally never have someone reporting to the same layer (Example of a VP reporting to a VP).
So they're counting the board of directors as a layer above the CEO.
I'm speculating, but they probably also have an unbalanced tree - you'll often see the IT security chief reporting directly to the CEO (because it's important to keep on top of, and they need authority to do their job) but only having 50 people below them in the org chart.
In some corporations you also sometimes get almost-nonexistent ranks created to smooth over a reorganisation. If a level 5 bureaucrat decides to merge the departments of two of their level 4 bureaucrats, they could demote one of them. Or they could make one into a level 4.5 bureaucrat.
I never really got why they need to be a public company in the first place.
I wonder if they have 5-10 employees per manager at the bottom of the org chart, but a lot of middle managers and manager-like titles mixed through the middle.
If anyone has a VP-level position open, I'm willing to send you my resume. There is a salary level at which I am willing to do work entirely without shame.
Eight layers total
The GP miscalculated it.
Still. Not a huge fan of this announcement or the general ways the landscape is evolving these days.
I'm aware that the defective code was not written by AI but nonetheless, GitLab is what stands between many small organizations and their most precious resources. I was fortunate that 2FA stopped the damage, but what's going to happen the next time? What if my organization is permanently damaged because we taught the machines to go fast and break things, too [1]?
[1] VPN is an option but we're a non-profit with a number of non-technical users, so admittedly we're caught balancing security against making things harder to do. As much as WireGuard is awesome, there's still a barrier.
I would love to help a non-profit, so I'm curious: what are your thoughts on authentik/Authelia and similar tools? Could they help with any of the use cases you're describing? I'd love to have a more in-depth discussion!
Also, thanks for working at a non-profit. I'm not entirely sure what yours does, but thanks to you and all the other hard-working people at non-profits for making the world better!
Also their diffing: they use "..." diffing, and ".." apparently isn't available in their GUI. For a git diffing tool I found this very odd.
Having said that, UI gripes aside, it works fine as a less complicated replacement for github.
[0] https://gitlab.com/gitlab-org/gitlab/-/work_items/588806
I can't seem to get past this - all these decisions (and a work-force reduction :() are the result of a few days of pondering? I've had stomach aches that have lasted longer ..
> The agentic era multiplies demand for software. Software has been the force multiplier behind nearly every business transformation of the last two decades. The constraint was the cost and time of producing and managing it. That constraint is collapsing. As the cost of producing software collapses, demand for it will expand. Last year, the developer platform market used to be measured in tens of dollars per user per month, this year it is hundreds/user/month and headed to thousands. Not only is the value of software for builders increasing, but we believe there will be more software and builders than ever, and we will serve an increasing volume of both.
Also notable that the workforce reduction they describe doesn't appear to target engineers - they're "nearly doubling the number of independent teams" in R&D and "removing up to three layers of management in some functions".
What is this based on? The only thing I can think of is AI coding tools but only a few companies do it properly. I don't see gitlab capturing any of that spending
Also the whole "removing layers". Today's prof g market video was about the topic. Afaik it was the Coinbase CEO telling the same. Do these people get together to discuss their talking points? Or are they signalling to investors?
If GitLab thinks they are as famous as GitHub, I don't know what to say. They should have at least positioned themselves as a better GitHub alternative.
None of these visionaries and thought leaders have ever had an original idea in their lives; they just ape each other.
Users want a product that delivers the value they are looking for, VCs are looking for infinite AI scale, these do not meet. So founders need to present two different values and visions, one for customers and one for VCs.
In a small early stage company you can pretty easily hide each side from the other so you can deliver value to your customers while dancing the VC dance, but as you get larger its harder.
I think founders will endure and VCs will calm down at some point, but there is going to be some suffering along the way.
Oh, and have you heard that they built Claude Code with only 20 people? (Ignore the head start of 12 years of AI research expertise, and that Anthropic now has thousands of developers.)
It’s not clear at all this is the wrong move.
They simply don't have (or didn't have) the skills to scale. They were talking about using Ceph to run things (which gives you an idea of how green their infra team was).
It's slow, large, excessively complex, and not that resilient to failure.
You either want a bunch of NFS machines backed by ZFS on NVMe, with a central jumping-off point that allows sharding (this is critical so that one or more NFS servers can fuck up without killing access to everything else).
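A minimal sketch of that jumping-off-point idea (server names made up): stable-hash the repo path to pick a shard, so a dead NFS server only takes out its own slice of repos rather than everything:

```python
import hashlib

# Hypothetical shard map: each entry is an independent NFS server backed by ZFS.
SHARDS = ["nfs-01", "nfs-02", "nfs-03", "nfs-04"]

def shard_for(repo_path: str) -> str:
    """Stable hash so a given repo always lands on the same NFS server."""
    digest = hashlib.sha256(repo_path.encode()).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]

print(shard_for("gitlab-org/gitlab"))
```

Real deployments would use consistent hashing so adding a shard doesn't remap everything, but the fault-isolation property is the same.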
Or, pay the money and use GPFS
https://docs.github.com/en/enterprise-cloud@latest/admin/dat...
I have no doubt GitLab has too many employees and can benefit from being a more focused company, but it's tiring reading these layoff posts so chock full of buzzwords. I guess they're desperately hoping if they prognosticate about AI enough it will placate the investors.
The Maillard reaction is very possible in microwaves, but it requires microwave-specific crockery. I think the vision was possibly killed by people not wanting to maintain a second set of crockery.
See here for a fun write-up: https://www.lesswrong.com/posts/8m6AM5qtPMjgTkEeD/my-journey...
Perhaps we can liken these auxiliary advances to agents and harnesses in the analogy. In the end, despite the unbridled optimism from certain backers, we never solved the fundamental issue with microwaves: that they use electromagnetic waves for cooking, and that electromagnetic waves have certain undesirable properties for this application.
[0] https://americanhistory.si.edu/collections/object/nmah_10880...
I understand that a lot of people don't have much choice, but I use mine (actually a 4-in-1 now, after the old one burst into flames and had to be replaced; it's somewhat useful as a second oven).
It just made me realize why I don't have those fond memories of my mom's cooking. When we got our first microwave she went all in on vibe cooking, and it took her years to realize how dumb that was.
I hope my kid doesn't get the same kind of memories about my weekend projects.
You are obviously right and I see examples of it everywhere.
E.g I asked Claude opus 4.7 (the latest/greatest) the other day “is a Rimworld year 60 days?”. The reply (paraphrased) “No, a Rimworld year is 4 seasons each of 15 days which is 60 days total”.
Equally, it gets confused about what is a mod or vanilla since it is just predicting based on what it read on forums, which are clearly ambiguous enough (to a dumb text predictor).
Yes. A RimWorld year is 60 days, split into four 15-day quadrums (Aprimay, Jugust, Septober, Decembary), each corresponding to a season.

Can you imagine how silly they'd look when everyone realised?
If pointing out the flawed approach to making something more productive isn't productive, then what do you consider to be productive?
> Less than a decade ago the idea that a computer could take a fuzzy human-readable description and turn it into executable code was science fiction
COBOL was sold to people on the idea that anyone could create something with a fuzzy human-readable description that would result in executable code. That was back in the '60s.
What lessons did we learn?
1) Leaving things to the people who make fuzzy human readable descriptions turns out to be a terrible way to have things implemented.
2) Slowly and deliberately thinking things through before, during, and after implementation always leads to better results.
It's a lesson that keeps needing to be re-learned by people who don't/can't look at things through a historical lens.
It was the same with COBOL, as it was with programming in spreadsheets in the '80s, as it was with the no-code movement in the '00s, as it is now again with LLMs in the '20s, and as it will be again with a future generation in the '40s.
---
> As is the ability to write long form text, and be so hard to distinguish from real that placing an em dash in your text will cause an uproar on this forum.
Long form text generation that is hard to distinguish from human authored text also goes back to the 60s.
That's when we got the first instances of the Eliza effect.
> You can describe things by their fundamental functions and make many things sound elementary but I find it counter productive given the capabilities we've seen from this technology
The capabilities we've seen are:
- Text prediction/generation
- Inducing the Eliza effect
Your attempt at an analogy will make sense when someone tries to install a house as middle management at some company.
To believe that first you would have to ignore tool calling, ReAct loops, and the whole agent feature. That would be silly.
How?
It all still functions with text prediction
Wilful ignorance can't be fixed. As the saying goes, you can lead a horse to water but you can't make it drink. I can point you to ReAct loops and tool-calling and agent-based systems. If after being pointed those you still choose to be stuck on the "it's just text prediction" then that's a problem you are creating for yourself, and only you can get unstuck on a problem of your own making.
>> Wilful ignorance can't be fixed. As the saying goes, you can lead a horse to water but you can't make it drink. I can point you to ReAct loops and tool-calling and agent-based systems. If after being pointed those you still choose to be stuck on the "it's just text prediction" then that's a problem you are creating for yourself, and only you can get unstuck on a problem of your own making.
Woof, you're sounding mighty aggressive for someone with such a fundamental misunderstanding of the technology you are defending. Have you ever actually implemented a system around an LLM, or do you practice ~~voodoo~~ "prompt engineering"?
> I can point you to ReAct loops and tool-calling and agent-based systems.
Those are all implemented - quite literally - by parsing the *text* that the LLM *autocompletes* from the prompt.
Tool calling? The model emits JSON as it autocompletes the prompt, and the json is then parsed out and transformed into an HTTP call. The response is then appended to the ongoing prompt, and the LLM is called again to *autocomplete* more output.
"ReAct loops" and "agent based systems" are the same goddamn thing. You submit a prompt and parse the output. You can wrap it up in as many layers as you want but autocomplete with some additional parsing on the output is still fucking autocomplete.
If you're going to make such strong assertions, you should understand the technology underneath or you'll come off looking like an idiot.
No. Code assistants determine which tool they can execute to meet a specific goal. They pick the tool, then execute it (meaning they build command-line arguments, run the command-line app, analyze the output, and assess the outcome) as subtasks.
And they do it as part of ReAct loops. If the tool fails to run, code assistants can troubleshoot problems on the fly and adapt how they call the tool until they reach the goal.
Those literally work with text prediction.
If you take the text prediction out of it, nothing happens.
You stick a harness around a text predictor which then triggers the text predictor.
If you think I am missing something then please do point it out.
LLMs are the most successful form of neural network we have, and that's because they are token prediction machines. Token predictors are easy to train because we're surrounded by written text - there's data nicely structured for use as training data for token prediction everywhere, free for the taking (especially if you ignore copyright law and robots.txt and crawl the entire web).
We can't train an LLM to have a more complex internal thought loop because there's no way to synthesize or acquire that internal training data in a way where you could perform backprop training with it.
Even "train of thought" models are reducing complex thoughts to simple token space as they iterate, and that is required because backprop only works when you can compute the delta between <input state> and <desired output state>. It can't work for anything more complicated or recursive than that.
image: https://mataroa.blog/images/b5c65214.png
but it says that there are 3 e's in strawberry ;)
Now this is literally something which occurs because of it being text autocomplete and the inherent issue of token based Large language models. So you are literally right :D
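For what it's worth, the counting itself is trivial once you work at the character level; the difficulty is purely that token-based models rarely see a word as a sequence of letters (a tokenizer might split it into chunks like "str" + "awberry"). A two-line illustration:

```python
word = "strawberry"
# Counting characters directly: trivial in code, hard for a model that
# sees the word as subword tokens rather than individual letters.
print(word.count("e"))  # prints 1
print(word.count("r"))  # prints 3
```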
My point is that AI has its issues and its plus points (just like text autocomplete, though some suggest it's autocomplete on steroids).
The issue to me feels like we are hammering it in absolutely everything and anything, perhaps it should be used more selectively, y'know, like perhaps a tool?
> Mark this prediction it will happen
But historically this is a very strong predictor of a poor prediction.
Gemini: There is *1* "e" in the word "strawberry".
Seems fine
See: https://fediverse.zachleat.com/@zachleat/116529994444529036
This is like saying that somebody speaking Chinese is just playing the Chinese Room [1] experiment. The only reason it's less immediately obviously absurd here is because the black box nature of LLMs obfuscates their relatively basic algorithmic functionality and lets people anthropomorphize it into being a brain.
This is not quite accurate. The human lips, throat etc have evolved to be better at producing speech, which indicates that it's not that recent. And that it was a factor in the success of groups who could do it better than others.
It likely started "no later than 150,000 to 200,000 years ago."
sources:
https://en.wikipedia.org/wiki/Origin_of_speech#Evolution_of_...
I think, therefore I am. You parrot, therefore you are... ?
Last year this level of ignorance and cluelessness was amusing. Nowadays it's just sad and disappointing. It's like looking at a computer and downplaying it as something that just flips switches on and off.
It will be interesting in the next few years. Assuming we won't be in the 3rd world war thanks to the USA and will have much bigger concerns.
You're grossly inflating the level of contribution from your average software developer. Are we supposed to believe that the same people who generated the high volume of mess that plagues legacy systems are now somehow suddenly exemplary craftsmen?
Also, it takes a huge volume of wilful ignorance and self delusion to fool yourself into believing that today's vibecoders are anyone other than yesterday's software developers. The criticism you are directing towards vibecoding is actually a criticism of your average developer's output reflecting their skill and know-how once their coding output outpaces or even ignores any kind of feedback from competent and experienced engineers.
What I see is a need to shit on a tool to try to inflate your sense of self-worth.
The ones who never acknowledge a mistake even if the process is crashing; the ones who put "return true" in a test so that the test doesn't execute and will insist that you broke their code if you remove the return true and when the test actually runs it fails; the ones who read a blog post about some new thing and decide we need to do like that; the ones who will write code that fails and then be nowhere to be seen when there is customer support to do.
Trying to portray everyone who ever used a tool as the incompetent cohort is an exercise in self-delusion.
Gitlab has been strapped for cash and desperately seeking a buyer to cash out for years.
If anything, the LLM revolution represents an opportunity that Gitlab is failing to capitalize upon. They have a privileged position to develop pick axes for this gold rush, but apparently they are choosing to dismiss themselves from the race altogether.
Gitlab's decision is being taken in spite of LLMs, not because of them. Enough of this tired meme.
If it scrapes HN, it works. Ironically, that's why I'm here.
I feel like that overstates the point quite a bit. There's a lot that's similar: neurotransmitter release is stochastic at the vesicle level, ion channels open and close probabilistically, post-synaptic responses have noise. A given neuron receiving identical input twice doesn't produce identical output. Neither brains nor LLMs have a central decider that forms intent and then implements it. In both, decisions emerge from network dynamics; they're a description of what the system did, not a separate cause (see Libet's experiments).
Now pretty clearly there's a lot that's different, and of course we don't understand brains enough to say just how similar they are to LLMs, but that's the point: it's an interesting thought experiment and shutting it down with a virtual eyeroll is sad.
I claim that a modern frontier LLM can be given simple instructions that make it impossible for a person to reliably distinguish it from a person over a bidirectional text-only medium.
This one stood out to me:
>Machine-scale infrastructure. [...] Git itself wasn't designed for that load, and bolting AI onto platforms not built for agents is the biggest mistake of this era. [...] Git itself is being reengineered for machine scale.
Git itself is so far down the list of bottlenecks that do or could hamper LLM-driven development, even projecting years into the future...
Models will only get better with time, not worse.
Demand will keep rising.
It's unlikely, but not impossible: model collapse would mean that subsequent models get worse over time, not better.
1. AI free training sets no longer exist. This might degrade quality, although some claim that it will not.
2. Cost. Right now they are burning a lot of money to convince people it's good. But they might not be able to keep it up forever and need to increase prices (which few will want to pay) or degrade the quality to save money.
I don't know, I've seen more big organizations that have a dysfunctional amount of middle management and "meetings about meetings" than ones that truly benefit from that culture.
Tons of middle management that makes no decisions whatsoever.
Every time you ask a question, they delegate, until you end up at person 1 again and they just can't decide anything.
It's like they all have decision paralysis.
This, like virtually all layoffs, is for economic reasons. Of course you can't say that because that reflects poorly on your growth and makes your investors uneasy and yadda yadda yadda. But what do investors like? Hm? AI!
Oh! Oh!!! This is strategic, you see, so we can use even more AI, yes yes that's right mhm.
They do at the org level. That's not news for anyone who has worked at upper management level in corporations. Rule no. 1 is you keep your mouth shut about anything there. And of course it's for economic reasons; it's a business, not a charity to provide lifelong employment for employees who aren't aligned with management goals. Management tells stories depending on who asks. Levels below execute them (by identifying those who aren't aligned).
If anyone at GitLab management is reading this: getting your microservices to run fully stateless in a Kubernetes cluster should be the #1 goal. No disclaimers about potential risk. It's been 5+ years. Get it together. Stop bolting on minor package management features no one is going to end up using anyway.
Forgejo is great.
I'm not saying you should never self-host your git server, but it's not for everyone.
Arguments against self hosting have to change as our SaaS overlords are decaying in front of our very eyes.
I get self-hosting for security, compliance, and retention reasons, but for almost everything else it seems questionable for any use I would consider normal.
Setting aside the whole "I'm not going to pretend otherwise" bit, which reads suspiciously like Claude: I don't understand how this is supposed to make employees feel any better. No one knows what's going on, and through talking we'll figure it out? Mmmmmmhmmmmmm.
For some people it might actually be worth it, not to solve anything but to talk to someone. It still sucks anyway.
One of the really interesting things about GitLab was that not only did they have employees in a large number of countries but they also published their employee handbook which helped show quite how much work it was to support that:
https://handbook.gitlab.com/handbook/people-group/employment... lists 18 countries right now. I guess they're losing 5 of those.
Here's a permalink to the current version of that page https://gitlab.com/gitlab-com/content-sites/handbook/-/blob/... since it mentions that "Diversity, Inclusion & Belonging is one of our core values" and so is likely to be updated pretty soon!
They even used to have a public payroll.md page detailing how payroll worked in multiple countries - they moved that into their private docs a few years ago but the last public version is here: https://gitlab.com/gitlab-com/content-sites/handbook/-/blob/...
UPDATE: I got the countries piece wrong. The linked OP says:
> Reduced operational footprint: We’re reducing our country footprint because operating in nearly 60 countries does not allow us to give every team member a great experience. We anticipate reducing the number of countries by 30% focused on geos where we have only a handful of people or fewer. Team members who are in good standing and would like to relocate are welcome to do so. We'll continue to serve customers in those markets through our partner network where appropriate.
I said they operated in 18 countries, so clearly my impression was outdated and incorrect.
Also "We anticipate reducing the number of countries by 30% focused on geos where we have only a handful of people or fewer" suggests to me that it's a 30% cut to countries with "only a handful of people", not a 30% cut to countries overall.
Are they going to rectify this by laying these people off?
Yeah, sure. A couple of years ago it was Covid overhiring.
You know the one thing that is never ever going to be given as a reason for layoffs? The growing salary-productivity gap.
Yes, letting some LLMs "plan, code, review, deploy" will for sure improve quality and depth of innovation you ship.
New values: Speed with Quality, Ownership Mindset, Customer Outcomes.
In other words, work harder, not smarter, and no more DEI.
The ball is right there, bouncing alone in front of the goal, and they just have to position themselves as "we're the stable ones" to score that market when the exodus inevitably happens.
Nope, full throttle and stimulants, just because.
So many things they could be doing, to make people buy into their services. For example they could simply run campaigns about how they promise to never use customer and user repositories for AI training. Or they could show better uptime statistics. Their CI language is better than Github's too.
If anyone gave me a choice between Gitlab and Github, I would go with Gitlab. But if I had additionally the choice to use Codeberg, I would choose that.
Maybe they are just not looking to grow. If they made such a statement, that would actually be a pleasant surprise. No hunger for "infinite exponential growth", just to impress investors? Great! That's a fat plus in my book!
Gitlab pricing was bonkers. It always felt like their sales team were trying to play gotcha with us over the years with pricing schemes that would milk us for money.
Their pitch is not to you, the dev. But, to the investor class. We are in this funny place in the market where you can make more money by catering to the investor class than to customers. In other words, an upside down world.
I understand the meaning, however, in that they're well positioned by having the company name and domain name, same general way that non-technical people will pay wordpress.com to host their blog/small website because it's very easy, rather than DIYing it or paying a 3rd party.
"Editions There are three editions of GitLab:
GitLab Community Edition (CE) is available freely under the MIT Expat license. GitLab Enterprise Edition (EE) includes extra features that are more useful for organizations with more than 100 users. To use EE and get official support please become a subscriber. JiHu Edition (JH) tailored specifically for the Chinese market."
Personal opinion, but I think a great deal of the people who are presently overloading github with one person created vibe coded projects would be just fine with the "CE" feature set.
I find it a bit concerning that this piece focuses so much on customers and shareholders... I know I don't pay, but perhaps someday I will, and I am learning GitLab and applying at large orgs as a GitLab consultant. All because of CE... So I hope it will stay. It is a nice and very complete on-ramp to EE.
I have to regularly use Azure DevOps and the whole platform is painful, and now is rotting on the vine. I hear there is internal strife at Microsoft between Azure DevOps and GitHub products.
The American corporation and its values are anathema to craftsmanship. You can ******* a **** all you want, it's never going to turn into gold, but your hands will be covered in crud.
We've all heard the joke about two people running from a bear: you don't have to outrun the bear, just the other person.
This is a race to the bottom. We shall see who wins.
> Interpersonal excellence: individuals who are good humans, embrace diversity, inclusion and belonging, assume good intent and treat everyone with respect
Every IC ought to use the present day as the opportunity to build a nimble competitor to their old employer (or whatever industry incumbents they want).
They're literally setting themselves up for this.
Were I to have crafted this post, it would have included things like
"We ask our employees, customers and investors time to prove ourselves to you again as we re-commit to listening to our stake-holders and ensure our organization is properly re-positioned to execute our continued plans to deliver the best possible service..."
But instead it comes across as "someone read an article about Amazon's two-pizza team rule and we figured there were worse things to try."
Having been in some of these values meetings, I really imagine it went like this: someone wanted speed, and someone else wanted quality. Sorry, I mean Speed and Quality. Many people said there is a tradeoff between those two things, and only one thing can be first.
Some brilliant businessman: "I know, we'll combine them. We want Speed _and_ Quality." Thus, "Speed with Quality." Tada!
Values are a tradeoff: only one thing can be first. Trying to duck that is stupid.
Also "our velocity is 3x higher than it would be in the imaginary invisible universe where we made worse decisions 6 months ago" is impossible to measure, whereas "we cut a bunch of corners and shipped a piece of garbage on an arbitrary deadline" is very measurable.
Let's pick: Speed-Quality
Errrh... Let's forget about: Price
I've noticed that the more a company pushes on ownership the more difficult it is to actually execute it.
Every company I've worked at hammers the "ownership" idea and I hate it so much. It's how they drive a culture where employees are expected to invest themselves into "owning" a problem space that can be taken from them at any moment. It's how they trick you into doing extra work that's not in your job description.
Unless you're ACTUALLY an owner, don't be fooled by an "ownership" value.
It's the norm at Big Tech these days. Directors and VPs take all the glory if it goes well while ICs, team leads, and people managers get all of the blame if it doesn't. When the charlatans get exposed, they bounce on to the next company with their charlatan friends. Rinse and repeat while swapping RSUs for index funds, retire with >$10m before 50. If we stopped allowing this to work in our industry, it wouldn't be such a common thing. Unfortunately, with how everything is these days, these people are getting hired on vibes and bravado.
The part I'd missed was that as middle management he didn't have any real authority himself... you live and you learn, I guess.
How? Did the bozo get butthurt over being exposed?
All the responsibility is still yours though.
One must really wonder if they ever hear themselves talking or read their own prose. Maybe they do, but simply don't care at all?
I think the same group of management consultants does the rounds of the industry, and in short order every company is using the same duplicitous language of ownership, design thinking, customer-first mindset, cloud first, cloud native, AI native, enterprise 2.0... and on and on it goes.
Does anyone know what caused this?
Very weird to include socially awkward geek in there. But my guess would be that 99% of dev teams do not have a trans or furry member.
I’ve been in the business and seen a ton of hires on vibes. DEI actually asked people to expand the talent search, not hire anyone unqualified (which is what the anti-DEI folks are desperate to have us believe it did).
I predict some major EEO lawsuits will eventually bring the pendulum back in the other direction because my sense is that the return to vibes hiring (and RIF-ing) is resulting in very actionable discrimination cases.
> my sense is that the return to vibes hiring (and RIF-ing) is resulting in very actionable discrimination cases.
Your sense? Based on what? With respect, it seems like the hiring managers you were complaining about above weren't the only ones operating mostly on vibes.
I’ve worked with several excellent “just leave me alone” sysadmin types.
Perhaps I'm missing something here.
To me "individual contributor" means anyone who is NOT: A (technical) "Lead", "Chief", "Architect", or (possibly) "Staff" anything, and has no management or team-leader responsibilities.
It's not like (most) hiring managers put "not a team player" in the pro column.
For example: someone not always looking into your eyes while talking can be perceived as "rude". Same for wearing noise-canceling headphones in a talk-heavy environment. Oh, you don't drink alcohol during the "optional" Friday-afternoon company mixer? That's just weird. Want to have a day off for Eid rather than Christmas? Wellll, you did ask for it six months in advance and we did approve it already, buuuuut Dave planned a last-minute meeting which conflicts with the mandatory team meeting, so we moved the mandatory team meeting onto your day off... We'll just pay the hours you spent doing first-line support during Christmas in cash, okay?
https://onlinelibrary.wiley.com/doi/10.1111/padr.12641
heres an article that discusses how inflated diversity could possibly be a cause of social tension. the article's abstract concludes with a shrug ('too many factors!') but it does provide links to research papers arguing both for and against this case.
on the surface it seems pretty clear to me. behaviour is encoded in genetics. if one were surrounded by the same group for a few thousand years, they would share a common base of encodings, therefore social behaviours could be assumed to a higher degree. reference behavioural encodings drastically diverge across cultures (as embodied by religious value sets, or at a different meta level, the idea of low trust vs. high trust societies). based on this drastic divergence, predictions made about one's neighbour scale downwards in accuracy relative to increased cultural diversity.
so i see that jacking up societal entropy leads to lowered societal cohesion. but thats just my stance and id love to hear yours.
diverse, millennia-old, genetically encoded behavioural structures exist in our shared reality. id love to discuss this idea and the exact types of behaviours that can be encoded, down to the generational timespans required for encoding. that way we can talk about my idea in objective good faith.
'its all in your head' isnt objective good faith. applying the golden rule, you clearly accept bad faith ... man you couldnt tolerate a dissenting idea even momentarily before bringing out social ostracization and logical fallacies! sounds pretty similar to the behaviour of a racist, were you projecting?
that was said facetiously. im not trying to accuse you of anything, rather to show how it feels to be accused. to conclude i think its pretty easy to predict what my neighbours are eating for dinner at home and pretty hard in the city so youre gonna have to try a bit harder to convince me that the evidence of my eyes and ears is wrong.
The goal should be to hire the best team for the use case, regardless of gender/race/culture/background.
It was never trumping skill. This is just a willful rewrite of history perpetuated for some political goal.
The goal was always to ensure that skill had adequate opportunity to be displayed without bias.
See all the Falsehoods Programmers Believe About Names/Addresses/Birthdays/Phone Numbers/Time Zones/etc, for example. Do you want a backend engineer who designs a 64-character ASCII text field for legal name and have everyone nod in agreement, or would you rather have one who knows that it isn't going to work for their cousin "Pablo Diego José Francisco de Paula Juan Nepomuceno María de los Remedios Cipriano de la Santísima Trinidad Ruiz y Picasso"?
> it's really hard to make a case for why DEI concerns should trump traditional evaluation metrics for skill
It doesn't. The goal of DEI has always been to attract a diversity of perspectives, all else being equal. Nobody ever proposed choosing a woefully unqualified diverse candidate over an obviously-qualified Generic White Guy. The only people who would oppose that would be the unqualified Generic White Guy who just happens to be the nephew of the CEO's golf buddy.
Hiring someone in the off chance that their ethnicity gives them some unique critical unknown unknown that will pop up half a decade down the line resides in the same mental space as a programmer writing `if (5 == i)` in case a future programmer accidentally deletes an =. It's just speculative defensiveness whose efficacy is simply not well established by actual research. And, in my view, just works to confound actual signals that, evidently, gitlab and other employers feel get unfairly overshadowed when emphasizing explicitly pro-diversity hiring policies.
We should just get a representative sample of the population and give them equal say in the design of the plane, engines, etc.
https://www.mckinsey.com/featured-insights/diversity-and-inc...
Landing page:
https://www.mckinsey.com/featured-insights/diversity-and-inc...
It's obvious why this is the case if you sit down and think about it. Echo chambers of like-minded individuals can't understand customers as well as a workforce of people who represent the diversity of those customers.
This isn't just diversity of race or gender, it's also diversity of thought and background.
Also critical and under-emphasized: the E and I in DEI, equity and inclusion. Power distance and lack of inclusion can railroad companies into giving the people with the most power the most influence on decisions, rather than giving the best ideas a chance to breathe.
In business a classic example might be "men designing women's clothing." How are you going to understand your customers if none of your employees and leadership resemble those customers? Perhaps you can figure it out and make some decent products but your competitor who has more diversity in their workforce is likely to outperform you, which is exactly what McKinsey's studies have demonstrated.
I will also point out that the only reason anyone started questioning this obviously true business concept and changing opinions into being against DEI is because the Republican Party's strategists figured out that they could appropriate and leverage the term "DEI" and attach it to the latent reactionary racism that much of the US still holds dear.
You can get away with saying "I don't like DEI" in public but if you say "I don't like black people" or "I don't think women should get hired for important roles" [1] that is obviously not acceptable, even though a large percentage of Americans feel that way. Right wing media twisted a largely innocent term into a useful dogwhistle.
[1] https://journals.sagepub.com/doi/10.1177/1532673X251369844
You might not like it, but this is what peak performance looks like.
Okay, I'll bite. Why is it a strength, and why is it the greatest strength?
All people are equal, so it shouldn't matter if you have an all Asian team, an all black team, or any mix of all races.
When there is a team like that, there is invariably sniping about how "X only hire their countrymen".
And all people aren't the same; you want a mix of minds and skills for most types of work. I'd totally hire someone who couldn't do that much directly but was fun to be around, connected introverts whose ideas have some (potential) synergies, and generally made the group more productive overall.
Especially in business, the actual (not the managerial) judgment is the collective judgment on the whole group's output and actions by the market. Forging a high-performing group out of different people is not the same as maximizing the median metric on some individual test of skill. Like quality, it's a bit undefinable, though unmistakable when you experience it.
It’s not like all surgeons and astronauts were white males for a long time out of inherent superiority.
Corporate DEI was never real. There's no "push against" it, simply because there was never a genuine push for it. Large companies don't have moral values - if they did their CEOs wouldn't be billionaires.
That’s totally illegal and discriminatory but companies were not facing consequences for it under the Biden administration. The constant injection of DEI politics all over society - at work, in movies, in ads, etc - led to a backlash and personally I think it is one of the things that led to someone like Trump being re-elected. And this administration is very against DEI ideology. That’s one reason corporations quickly abandoned it - they didn’t want to face legal scrutiny now.
Another is that DEI culture produced no positive results, as expected. Companies already had incentives to hire the best employees they can. If you change that with other incentives thrown in, it’ll make things worse. And ten years after DEI began to appear everywhere, it was obvious it produced no benefit at best, and led to worse teams at worst.
Another reason is simply that a lot of the activists pushing this type of ideology grew out of the activist age group. And I think many of them likely don’t hold those beliefs as strongly anymore. But either way, younger people are different. Especially young males who are more conservative.
All of that and other things has led to DEI being removed or at least de-emphasized.
Tell this to the people enjoying unearned privilege under DEI policies.
But you don't have to dislike yourself to recognize systemic unfairness that you benefit from and want to help change it.
1. https://fortune.com/2026/02/13/costco-defies-trump-on-dei-bu...
GitHub is already the main platform for random open-source projects, and that's unlikely to change any time soon. GitLab's selling point is essentially "Github, but not by Github". They would do Just Fine offering a highly-restricted free account for the handful of hobbyists who care enough about leaving GH but don't care enough to go to Forgejo & friends and for the people doing evaluations, offering free credits to the few high-profile FLOSS projects who accidentally end up on GL-the-SaaS instead of self-hosted GL, and for the rest just focusing on paid corporate customers.
Where do you find those, seriously? That might’ve been the case a couple of years ago, where they’ve gaslighted people and played on their feelings, but now gloves are off. AI bros are literally posting about lack of sleep, dopamine hits, vibe coding on a toilet/walk/watching TV, FOMO is through the roof everywhere, prophesying doom of SE, etc.
Employees tasked with doing 10x more work with less help don't even have to feel bad about it happening. It'll also create employment opportunity in disrupting their old employer.
These companies are willingly signing up to become IBM.
Of course, once you have a big incident, then the value of more human review becomes obvious.
I seriously don't know how people are working like this now. I'm on my ass looking for work and in the last month it feels like everyone has completely lost their minds.
At least companies like Coinbase made principled stances against forced DEI and employee activism earlier than everyone else. Doing it now seems weird because if it does become mandated again, they're going to look so phony.
Mandates? There is this weird revisionist history that DEI was a Biden era invention that all these companies were forced to roll out in January 2021. These programs were simply the latest evolution of prolonged and steady cultural shifts. I remember attending events trying to promote diversity in the computer science department when I was in college 20+ years ago. Killing DEI isn't wiping out four years of progress, it's attempting to wipe out decades.
The obvious decline started around 2010; coincidentally also the era of the rise of SJW-ism and nontechnical derailing drama. Once the diversity quotas started appearing, the inevitable results were obvious.
Whether you are left or right, the objective truth is that a Democrat added DEI mandates and a Republican removed them. I didn't say anything about whether that is right or wrong, but the fact that companies seemingly embraced DEI and then abandoned it so quickly once a Republican removed it means they really didn't care about DEI at all; it was all phony. It just goes to show you that when they start praising themselves for being "moral", it's not because they actually care, it's because they are forced to, and they don't give a shit about anyone.
It doesn't make sense for it to be 40% of their values, especially if they're losing money (or very close to it).
I am not sure if you had implied it, but that would align with my experience as well: places that tout diversity were the worst places to work (as someone who is seen as 'diverse'), while the ones that treated everyone the same and expected everyone to pull their weight were the best.
I absolutely despise people treating me differently because of who / what I am rather than doing good work. I will take mildly inappropriate good-nature jokes over head pats every day of the week.
I highly doubt it considering that you can’t even spell it right you incompetent pillar
(Saying this as a strong advocate for diversity and inclusion, lest there's confusion)
That said, some management people say it's important for a large company to write down the values that they actually practice. I can see several reasons why it's good, but I haven't ever seen anybody go and do it, so IDK.
DEI isn't mandatory, so an org heavily invested in DEI training probably had serious issues in the first place (whether they end up on the other side at the end of the trainings being another question)
That's different from putting it as a core value though. Most companies have some kind of "make more money with less resources" stated value, and I don't think we see it as an issue ?
Also, idk why people view quotas as all of "diversity". I've literally never worked at a place that considered this but I see people mention them all the time on the internet.
Of course, it's statistically most likely that any individual would belong to the much larger latter group, but stats like that only apply to other people, right?
Worse, it's a zero-step thinker's solution. Step zero is a merit-based system; step one is for the people with motels on Boardwalk and Park Place to ensure they can never lose again by rigging the system to ignore merit in favor of capital.
I'm not a random variable, I'm a specific human. Predicting future outcomes needs to take into account my personal traits. Otherwise you get into absurdities like "statistically speaking, when you join a family reunion, 15% of the people you see there will be Indians, and another 15% Chinese".
Someone I'm close to is going through this right now. They work at a place that officially highly values "inclusion", and their employer's website is dripping with virtue-signaling language related to it. But that someone is disabled, and in fact there's nobody at the organization who owns accessibility issues. Disability accommodations are haphazard, and often not timely. Why? Because no one owns them. They just get punted to an internal employee affinity group of disabled people who don't have a real chain of command, a real budget, or even a real prerogative to do accessibility work, let alone meaningful power. Many of its members are routinely chastised by their bosses whenever they dedicate any time to solving access problems within the company. "That's not what we pay you for", "that's not your job", "I need you on this other thing", etc.
Meanwhile the organization receives public accolades from meaningless business press organizations as a "great place to work" or even "great place to work for people with disabilities".
I think it's fine for companies to value diversity, and to value it publicly. A little virtue signaling is fine, as a treat; it may actually repel nasty people, encourage good behavior, or make employees feel more welcome sometimes. That stuff is good.
But there's also a real possibility that a company making diversity an explicit value results in lots of energy going into activities that let that company's executives pat themselves on the back about how good they are without actually doing much for inclusion. I wouldn't take any sizeable company's stated values too seriously, including that one.
Then again I don’t even know what it means for something to be a core value. What is the practical upshot of “collaboration” being a core value of a company? Were people not collaborating before?
Yeah I think they're mostly useless. At least you definitely don't get core values by just declaring that they are your motto. For example Amazon is pretty widely agreed to have customer satisfaction as a core value. They didn't get it by saying "Our core values are customer satisfaction...".
Essentially, what's happening here is that this right wing political media saw an opportunity to latch onto resentment of employees whose companies were just trying to change employee behavior for the better.
Companies are well aware that implementing DEI successfully will financially outperform other companies who don't. McKinsey has found this to be true repeatedly. But of course, people don't really want to hear these kinds of things and a lot of socially conservative people don't like being told that they need to learn how to interact with that queer looking person they'd rather just avoid. When Jim and Bob want to hire a new employee they just want to hire another Jim or another Bob and be left alone.
You know how your company puts meetings on your calendar where they preach about wellness and exercise and stuff like that? Just because they are annoying meetings doesn't mean they're wrong. You should focus on your wellness and exercise. Same deal with DEI: it's obviously beneficial to everyone, but America has a whole lot of people who really don't want it.
We are within the same lifetime as full blown segregation, redlining, of women being disallowed from opening bank accounts without spousal approval. There are people still alive from that era. Your great-great-grandparent may have been alive during legal racial slavery.
Re-read the thread. They made a joke about acronyms.
secondary comment - 'suck it fascist'
third comment - 'fascism and communism would both get rid of dei'
whereabouts did the room get misread?
Also, in the current environment, I don't see how anyone can look around and argue that merit-based hiring is a norm anywhere. Even at hotspots of anti-DEI, "merit" often means "friend of a friend" or similar.
And that the discussed-to-death diversity hiring quotas are not its entirety, or even necessarily a part of it.
Merit being not a threshold but a range in actuality probably also plays a role (along with what utter theater the typical job interview really is).
> I wouldn't want to be hired based on something so meaningless.
But that's kinda the point of it all, isn't it? That it's supposed to be empowering the disadvantaged / marginalized. If your background does not put you at a disadvantage, there's nothing to compensate for, then it would indeed be meaningless. But if there is, and you made it, then that is by definition extraordinary. So it is meaningful.
There's definitely a question about whether they'd be stealing your thunder by this, but I'll leave that to an actual aficionado of the topic. Not exactly the expert on all this.
Tough crowd.
[0] and funnily enough, I agree! I just also think that if you believe there's a way out of this that isn't racist, you're a moron.
I don't know, looks like you're quite the natural yourself. You both manage to be ashamed of your ethnicity, and hate another.
Is this how fucking retarded the MMIWG2SLGBTQQIA+ [0] is these days, where you can't assume the man with a big swinging dick is a man, while you divine the contents of other people's souls without pause?
Go to hell, bigot.
[0] https://www.cbc.ca/news/politics/gazan-mmiwg2slgbtqqia-pushb...
What for? You seem to be enough of a victim already.
Or sorry, do you have a preferred slur?
---
It's incredible how far the culture war has rotted the North American mind. I literally just joined in to offer my understood perspective to the guy, which I don't even necessarily find right (as I explicitly highlighted), but I do appreciate facets of.
But oh no, John Convenient-Idiot-Illiberal saw the right trigger words and had to spiral into a tirade with their sob story. You sure showed us dude. Hope that middle class money affords you a therapist. You sure could fucking use one.
> The planning is happening openly, including a voluntary separation window. That creates real uncertainty for our team over the next few weeks, but we believe the outcome will be better for it.
There's no good way to execute lay-offs; my preference would be to rip the band-aid off. What use is it to do it in the open, unless they plan on having gladiatorial matches to keep your job? Otherwise it's just a painful game of Duck Duck Goose.
The mediocre people who dread looking for a new job during a hidden recession aren't going to leave. They can't afford the risk of not being able to find a new place of employment before the severance pay runs out.
It's not that different from making it part of the process in the first place.
Neither of these groups are valuing long term expertise
What Gitlab is announcing here is that employees need to apply for a separation, at a yet-to-be-determined time under still-unknown terms, without a guarantee of acceptance, in the next 7 calendar days. Much different and just so much worse.
Plenty of time to whip up a dead man's switch.
Two big red flags here.
First git itself is distributed and built for scale.
I guess they mean “gitlab” instead of “git”. But such a huge mistake would never go unnoticed.
Are they going to rebuild git??
Secondly: a big rebuild of a monolith into services. Firstly, there is nothing wrong with a Modulith. Secondly, a “rebuild” will cause a lot of busy work without immediate value for customers.
And first of all: this announcement is driven by the stock price, not AI. The claimed productivity increase from AI is inflated because they want their stock price up.
Sell Gitlab stock while you can. The leadership team has no clue what they are doing.
Sadly, non-engineering leaders buy into this dogma. AI is very useful but in my experience doesn't 10x if you don't YOLO it.
There are different dimensions of "scale" - like handling large monorepos, orders of magnitude more commits, tighter latency requirements (for agentic use, e.g. agentic history navigation)...
It makes you produce 10x more errors if you YOLO it ;) especially at a scale even remotely comparable to gitlab's :/
Doesn't really inspire the greatest of confidences when they are literally dropping the ball on one of the greatest opportunities as github is being ensloppified.
Sometimes I wonder if I am more passionate about my $7/yr VPSs and the websites running on them than $7 billion companies are (GitLab has a market cap of $4.36 billion; the enterprise value is $3.10 billion [0], to be exact).
"Move fast and break things" should work when you have 1000 users on your website, not 1000 full-on enterprises (probably more for gitlab).
> I guess they mean “gitlab” instead of “git”. But such a huge mistake would never go unnoticed.
> Are they going to rebuild git??
These comments make me realize again how you all (those of you who were alive then, I mean) must have felt during the pets.com and dotcom mania. Some of these sentences read almost like Onion video titles. It's all so weird at a certain point. I am unsure how to feel about this.
I wish them the best of luck with that plan. Middle management is where the institutional knowledge sits on how to actually get shit done despite challenges & broken processes/systems.
It's an even worse plan than eliminating juniors.
They don't cause the broken processes. They are the symptom of a broken executive process. A fish rots from the head down, and the people at the top get exactly the kind of company that they ask for.
Really? In my experience it's the rank-and-file employees who have this knowledge of how to get on with it without ceremony and politics. And the broken processes and politics are created BY the middle managers.
Having to rewrite all my CI will suck but will be worth it.
- when you see the word substrate in corporate speak, you know where that’s from…
My manager has started speaking like this. He recently showed a slide which had the words AI and Quantum next to each other.
That's true, but it's interesting how FizzBuzz was said to be the bête noire of the average dimwitted software developer, and how much cutting-edge engineering organizations used to emphasize code in their recruitment processes.
If writing code is being replaced by "engineering judgement" it's going to need a much smaller cohort of developers. Too many opinions spoil the broth, after all.
Could someone explain it?
If you have a lot of new stuff to build, and if you're not currently losing money, why start a new initiative with a layoff?
My guess is they are doing this to prep for an acquisition. Probably by an AI company or Datadog or similar.
Yes, and the people who are all-in on agentic AI are, in practically every example I’ve seen, not that. They’re the jackasses giving Claude root access to their prod DB and then writing a blog post about how much they’ve learned from their mistake.
> Agents open merge requests in parallel, trigger pipelines around the clock, and push commits at a rate no human team ever did. Git itself wasn't designed for that load, and bolting AI onto platforms not built for agents is the biggest mistake of this era. We're doing a generational rebuild of the underlying infrastructure to handle agent-rate work as the default. Git itself is being reengineered for machine scale. The monolith is giving way to modern, API-first, composable services. And agent-specific APIs are being built so agents can act as first-class users of the platform, not as bolted-on consumers of human-shaped interfaces
Is there any broader consensus or information on this? Git doesn't scale? is being rebuilt for agents?! Monoliths are out and services are back? Humans are second class citizens now (human shaped interfaces - bad!!)?
What the hell are they planning to do in there at Gitlab?!
> GitLab’s six core values are Collaboration, Results for Customers, Efficiency, Diversity, Inclusion & Belonging, Iteration, and Transparency, and together they spell the CREDIT we give each other by assuming good intent. We react to them with values emoji and they are made actionable below.
Since those terms don't speak for themselves individually, it's worth seeing what they're supposed to mean to get a sense of what GitLab is forsaking now. Each section is actually pretty lengthy, so you should go look and skim for yourself.
Here's the page: https://handbook.gitlab.com/handbook/values/
And here's an archive from yesterday, for when that changes: https://web.archive.org/web/20260510150031/https://handbook....
GitLab's "internal" workings are surprisingly public, so you can just look at the git history yourself: https://gitlab.com/gitlab-com/content-sites/handbook/blob/ma...
GitHub is publicly destroying itself in a desperate attempt to realize Microsoft's AI dreams, and as its main competitor your response is... to do the same?
Rather than going for a "Humans first, robot assistants welcome" approach which promises to deliver things like stability, reliability, trustworthiness, and human connections, they decide to go all-out on firing the humans and letting bots handle things like code review while explicitly shifting the existing human-first company values towards making the remaining humans responsible for the bot's mistakes.
They could've chosen to market themselves as the sane safe haven for the GitHub exodus. Instead they chose to go down in history like Google abolishing "Don't be evil". But hey, I bet chanting "AI! AI! AI!" (albeit quite late to the game) will deliver a very solid lukewarm increase in shareholder value!
Like, I know there are actual reasons and incentives here for the ever-present AI pivot. But I think they're stupid and short-sighted incentives.
"So...you decided to throw away what distinguished you from your faster, more stable competitor?"
I guess someone will be selling enterprises something that lets them say, "We're doing AI too!" Might as well be gitlab?
Email me subject “gitlab” if interested - thomas@ our domain (I am the cofounder)
Reduce the workforce by 30%. I don't know, dude, you didn't convince me.
There are a lot of cool things happening among Gitea/Forgejo, Tangled, and Radicle, but I doubt the latter two have any significant usage beyond OSS hobby projects. I'm not sure if the former two do, either.
Gitlab is a terrible company, period.
Source: I'm ex-GitLab
They seem to be mostly reducing manager headcount and claim to be prioritising engineering.
On top of that, their redesign sounds interesting - they want to adapt the platform itself (and concept) to deal specifically with how AI "users" will code and submit changes (and the rate and interaction model that implies) vs humans. We'll see how this plays out, but this doesn't sound like a bad idea to me at all (assuming humans of course still get priority).
"We're firing a bunch of people because we think we don't need them anymore due to AI and we'll make more money without them."
There are times when businesses must fire people to stay afloat and it's a business that objectively needs to exist. This isn't one of them, so don't waste everyone's time with your BS, please.
Until I got to "One platform, three modes." and my brain just pattern-matched "AI slop" and the entire post dissolved into meaninglessness for me.
I don't know if I can stop my mind reaching this conclusion. I'm sure someone at GitLab made some effort to carefully edit the post... But that it wasn't entirely rooted in a human who'd worked out how this stuff goes, but clearly had lots of AI writing it out... Just made my instinct go "this isn't worth paying attention to after all".
The planning is happening openly, including a voluntary separation window. That creates real uncertainty for our team over the next few weeks, but we believe the outcome will be better for it.
Not even the balls to do the deed yourself. This reads like Shrek's "Some of you may die, ... but that is a sacrifice I am willing to make." "Act 2", for crying out loud; get out of town.
"We over-hired, we're ram-packed full of managers pinging each other on Slack all day and need to cut costs to sustain our operation. We think GitHub's shit and we want to be a nimble org with a fighting chance at eating their lunch. We're also gonna provide 1000 free runner hours/mo to open source projects that move from GitHub to gitlab, and we're gonna make project namespaces on gitlab.com a first class thing like GitHub did"
Ah, yes, finally GitLab will have the same uptime levels as GitHub.
"We did nothing wrong, but ended up in the wrong shape!"
If I had any inkling of giving GitLab a try, this killed it.
Software stocks won't win long-term if their value proposition is "we vibe code now".
It's Ruby, which is pretty horrific, but I still think there was probably something not quite right in your setup, because it isn't normally that slow.
>Once approved, our new bonus program will give every team member who isn’t on an incentive compensation plan or bonus plan today, the opportunity to earn a cash bonus based on their individual performance, targeting 10% of salary, awarded at their manager’s discretion.
LOL. So basically buckle up and do what you're told and grind. And hope your manager likes you or you'll get nothing.
Uh, if this is what I think it means, I wouldn't trust using a product where their company thinks that approvals for reviews can be automated.
As an aside, none of these announcements even attempt to make sense.
GitLab's TAM is exploding, demand is through the roof, LLM tooling is making each IC more productive, and to capitalize on this moment GitLab is
... "transparently restructuring" by asking employees to quit so they don't have to lay off as many...
Hmm, does the CEO of — checks notes — “GitLab” know what Git is?
Funny enough it’s not the agentic pivot or AI injection that’s sending me running, though, but the dropping of DEI from their values. Queer folk are still out here fighting tooth and nail for basic opportunities to put roofs over our heads, PoC still out here getting harassed and harmed by cops, disabled folk still struggling for basic accommodations so they can contribute rather than languish. DEI isn’t something you pick up when the popular movement swings towards it as a method of convenience, it’s a value you have to live by especially when times are tough and countries harass you for it.
Fuck you, GitLab.
What we are witnessing so far has been just the tech world's reaction. As typical companies catch on to the agentic era, we're going to see more layoffs. A part of it may be due to "unlocked productivity", but more of it will be to make space in their ranks for hiring a more AI-native workforce, which will also be scarce at the beginning.
I think we should get ready to see a very different kind of talent war, and at a scale and pace never seen before.
You can always tell when the title is incredibly vague or bereft of details (e.g. "An update about our product") that it's going to be some flavor of either lay-offs, shutting down, or other enshittification.
I think you need to explain it as if it's a bash script; otherwise I don't think you understand it.
(Ironically, even if this article were the prompt, I don't think an agent would code it up the way you are thinking)
What can go wrong.
Imagine if gcc / clang decided to let agents implement new features without a lot of checking..
Now GitLab announces it will have to fire people - the AI slop cuts away at financial gains here.
AI slop is killing everything.
almost like a copy of my post :) https://news.ycombinator.com/item?id=47982975
We've seen these tech waves several times - C and COBOL instead of ASM, CAD/4GL, template generation, Visual Basic and the likes (good old Delphi), Java (which allowed a lot of mid-inept people to write compilable, non-immediately-crashing programs), the spread of Python, and now AI. Every time we have an expansion of the industry, and every time glorious promises which get delivered on modestly. The point here is that they get delivered on.
And with AI I suppose it will be similar, though much better than before. In those previous waves the human brain was the limit. This time we throw that limit away from the start: nobody will be able to comprehend the sheer amount of AI-generated code. Yes, that approach will hit some limit down the road too, of course...
... so where's the delivery?
I have no doubt that AI is making some programmers quite a bit more productive. But if it is even 10% as good as all the marketing claims, we should be seeing an explosion of new tech startups, and a huge increase in feature shipping rate and number of bugs closed. Why isn't this obviously happening? Where's the next Dotcom Boom or Cloud SaaS Explosion?
What I am seeing instead is million-line AI slop pet projects whose sole "user" is its developer, and large companies falling over each other to enshittify their products. If there's no genuine user value being delivered, who's going to pay for those thousand-dollar-per-month developer tools?
I see it isn't your first rodeo :) So: in the Dotcom era, companies needed huge financing for hardware, and that money was the main limiter. In the Cloud SaaS era, small teams with relatively small financing (mostly for salaries) were able to deliver large - AirBnb, Uber, WhatsApp, ... - and the employees, their brain abilities, and their ability to work together were the main limiter. Now with AI we don't have these limiters. I'd say the slopped-up Claude Code and OpenClaw are examples of the new wave, which is just starting.
>large companies falling over each other to enshittify their products.
Oh, yes, each wave the software is even more sh.tty than before, and this time I think we're really in for a shock to our imagination of how sh.tty it can get. All these datacenters here, and later in space, will need some slop to churn through :)
My bet is that we won't have software as a static set of bits existing for more than one execution. I think we'll have Just-In-Time software. An ephemeral kind. It will be generated on the fly for a specific task and discarded afterwards. That will keep those datacenters busy, at least for some time.
Another storyline I expect, with some horror, is the merging of the coming boom of actual physical robots with the boom of AI-slopped software; that should be fascinating :)
It would be irresponsible to treat it as completely ephemeral though; clever tooling would make it easy when you remember "I already solved this issue 3 months ago, let me pull that back and reuse it."
What terrifies me is doing it with the current slopbox user experience. From a UI perspective, it's a clumsy system that discourages developing mastery in favour of guesswork and gacha. (When you said the wrong thing in a classic command line, it at least told you so rather than trying to stagger along with it.) And as an executing tool, it's simply sluggish-- once you've expressed what you want, Claude takes minutes to do what a regex does in milliseconds.
I wonder if the latter is fixable-- pre-configure the bot to generate answers as reusable code instead of slowly pumping out the changes itself.
For years I've been telling people that every office worker should be able to do at least some programming, just to avoid ever having them spend several days manually repeating the same handful of steps on a large set of data.
I can 100% see AI taking over this market. Teaching office workers to write half-decent prompts is probably easier than teaching office workers Python. But you don't need a $1000/month subscription to write barely-good-enough-to-run-once one-off scripts, and you can't build a business solely on ad-hoc scripts.
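The kind of barely-good-enough one-off script being described might look like this in Python: merge a folder of CSV exports and drop duplicate rows instead of doing it by hand for days. The file pattern and the "email" key column are made up for illustration.

```python
import csv
import glob

def merge_dedup(pattern: str, out_path: str, key: str = "email") -> int:
    """Merge every CSV matching `pattern`, keeping the first row seen per `key`."""
    seen, rows, fieldnames = set(), [], None
    for path in sorted(glob.glob(pattern)):
        with open(path, newline="") as f:
            reader = csv.DictReader(f)
            fieldnames = fieldnames or reader.fieldnames  # take header from first file
            for row in reader:
                if row[key] not in seen:
                    seen.add(row[key])
                    rows.append(row)
    with open(out_path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fieldnames)
        writer.writeheader()
        writer.writerows(rows)
    return len(rows)

# e.g. merge_dedup("reports/*.csv", "merged.csv")
```

Ten lines of stdlib, no subscription required - which is exactly the point.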
> the employees, their brain abilities and their ability to work together were the main limiter. Now with AI we don't have these limiters
Was it? Don't we?
There has never been a shortage of college kids willing to throw together MVPs. Sure, hacking together the bare minimum of business logic with auto-generated Rails code and a $20 Bootstrap template during a hackathon is being replaced by an afternoon talking an AI into generating a Tailwind-styled SPA in whatever Javascript framework is fashionable this week, but what does it really change? Writing MVP-level code was never the hard part.
The hard part is the engineering behind making it scalable, extendable, and durable. That's still staying the same: you're now just giving the prompt to an AI rather than a junior dev. If anything, having to deal with inept managers now sending full-blown AI slop proposals rather than blabbering a handful of buzzwords and leaving the professionals to fill in the rest is going to slow down our ability to work together.
Things like long discussions over formatting that should just be enforced by linters, pushing non-idiomatic patterns despite official docs and tooling recommending otherwise, or turning simple problems into meetings scheduled “for next week”, "in two weeks", "let's have a meeting and invite everyone" instead of just fixing the issue and opening a PR. Which sometimes takes 10 minutes!
At some point it starts to feel like responsiveness and initiative are treated as threats rather than strengths. Autonomy and ownership matter a lot more than people realize. Wonder what that'll look like!
I've done some organizational consulting in the past, often trying to help companies understand why their employees don't trust management. I suspect the powers that be thought that post was decent, and I think the GitLab survivors will likely ignore most of it. I don't know anything about what's going on there, but if you told me GitLab employees were made MORE nervous by that post than LESS, I would not be surprised.