We are in a future that nobody wanted.
Their business is buying good products and turning them into shit while wringing every cent they can out of them. Always has been.
They have a grace period of about 2-4 years after acquisition where interference is minimal. Then it ramps up. How long a product can survive once the interference begins largely depends on how good senior leadership at that product company is at resisting the interference. It's a hopeless battle, the best you can do is to lose slowly.
See also their moves in the gaming industry.
I mean, that's obviously not the case, but it's weird that it happened twice!
Who could have POSSIBLY foreseen any kind of dire consequences?
The person(s) who wanted this want Azure to get bigger and have prioritized Azure over Windows and Office, and their share price has been growing handsomely.
‘Microslop’, perhaps, but their other nickname has a $ in it for a reason.
Some people wanted this future and put untold amounts of money into making it happen. Hint: one of them is a rabid Tolkien fan.
Nor deserved.
Jokes aside, it's all because of hyper financial engineering. Every dollar, every little cent, must be maximized. Every process must be exploited and monetized, and a small group of people are essentially driving all of this across the world, in every industry.
It stings to have this happen as we're putting a lot of effort specifically into the core product, growing teams like Actions and increasing performance-focused initiatives on key areas like pull requests where we're already making solid progress[1]. Would love if you would reach out to me in DM around the perf issues you mentioned with diffs.
There's a lot of architecture, scaling, and performance work that we're prioritizing as we work to meet the growing code demand.
We're still investigating today's outage and we'll share a write up on our status page, and in our February Availability Report, with details on root cause and steps we're taking to mitigate moving forward.
I don’t think GitHub cares about reliability if it does anything less than that.
I know people have other problems with Google, but they do actually have incredibly high uptime. This policy was frequently applied to entire orgs or divisions of the company if they had one outage too many.
(See also: Windows, Internet Explorer, ActiveX, etc. for how that turned out)
It's great that you're working on improving the product, but the (maybe cynical) view I've heard more than anything is this: when faced with the choice of improving the core product that everyone wants and needs, or bolting on functionality that no one wants or needs and that is actively making the product worse (e.g. PR slop), management is too focused on the latter.
What GitHub needs is a leader who is willing and able to say no to the forces enshittifying the product with crap like Copilot, but GitHub has become a subsidiary of Copilot instead and that doesn't bode well.
It could be, some people are just terrible at their job. Lots of teams have low quality standards for their work.
Maybe that still comes down to leaders but for different reasons. You can ship useless features without downtime.
I understand that the 'updating the part of the page that's changed' functionality is now dramatically slower, more unresponsive, and less reliable than the 'reload the entire thing' approach was, and it feels like browsing the site via Citrix over dial-up half the time, but look, sacrifices have to be made in the name of making things better even if the sacrifice is that things get worse instead.
React allows this? I didn't realize I needed React to do this when we used Java and JS to do it 20 years ago. I also didn't realize I needed React when we used Scala and generated JS to do it 10 years ago. JFC, the world didn't start when you turned 18.
They need to start rolling back some of their most recent changes.
I mean, if they want people to start moving to self hosted GitLab, this is gonna get that ball rolling.
My previous org had an on-prem version hosted on a local VM. It was extremely fast. We set up another VM for the runners, and one for storing all the Docker containers. The thing I've seen people do is use the VM they put their GitLab instance on for everything, which ends up bogging things down quite a bit.
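For anyone curious, the runner side of that split is roughly the sketch below. This is a hedged example, not the commenter's actual setup: the hostname, token, and description are placeholders I made up, and it assumes a self-managed GitLab instance on its own VM with `gitlab-runner` installed on a second, dedicated VM (so CI jobs never compete with the instance itself for CPU and I/O).

```shell
# Run on the dedicated runner VM, NOT on the VM hosting GitLab itself.
# Placeholders: gitlab.internal.example (your instance's hostname) and
# REPLACE_ME (a registration token from the instance's admin/CI settings).
sudo gitlab-runner register \
  --non-interactive \
  --url "https://gitlab.internal.example" \
  --registration-token "REPLACE_ME" \
  --executor "docker" \
  --docker-image "alpine:latest" \
  --description "separate-vm-runner"
```

The third VM the comment mentions would then host the container registry (or an equivalent Docker storage backend), keeping image pushes and pulls off the GitLab VM's disk as well.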
I feel this is just the natural trajectory for any VC-funded "service" that isn't actually profitable at the time you adopt it. Of course it's going to change for the worse to become profitable.
> It's owned by Microsoft.
I see no contradictions here.
If you have a captive audience, you can get away with making the product shittier because it's so difficult for anyone to move away from it, both from an engineering standpoint and because of network effects.
And then many of the UI changes people have been complaining about are related to things like Copilot being forcibly integrated, which is very much in the "Microsoft expects to gain a profit by encouraging its use" camp.
It's pretty rare that companies make a UI bad because they want a bad UI; it's normally a second-order effect of other priorities, such as promoting other services or encouraging more ad impressions.
it's almost as if Microsoft bought it, isn't it?