This is probably the first time my self-hosting move felt vindicated literally the day after I finished the migration; a very pleasant feeling. Usually it takes a month or two before I get here.
I host Forgejo on a single NUC alongside a bunch of other stuff in Proxmox, and the page loads in 6ms! Immich is not quite as fast, but still a ton faster than Google Photos.
I’ve got a nice, powerful Minisforum on my desk that I bought at Christmas and haven’t even switched on.
Setting up Forgejo + runners declaratively is probably ~100 lines in total, and it doesn't matter if I forget how it works; I just have to spend five minutes reading to catch up when I come back in six months to change or fix something.
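Roughly, the shape of those lines on NixOS looks like this. This is a hypothetical sketch, not anyone's actual config: the domain, port, token path, and runner labels are all placeholders.

```nix
{ config, pkgs, ... }:
{
  # Forgejo itself -- settings map onto app.ini sections.
  services.forgejo = {
    enable = true;
    settings.server = {
      DOMAIN = "git.example.internal";  # placeholder
      HTTP_PORT = 3000;
    };
  };

  # The runner registers against the local instance; the registration
  # token is supplied out of band via tokenFile.
  services.gitea-actions-runner.instances.default = {
    enable = true;
    name = "nuc-runner";
    url = "http://localhost:3000";
    tokenFile = "/run/secrets/forgejo-runner-token";  # placeholder path
    labels = [ "debian-latest:docker://node:20-bookworm" ];
  };
}
```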
I think the trick to avoid getting tired of it is trying to just make it as simple as humanly possible. The less stuff you have, the easier it gets, at least that's intuitive :)
I run both right now, but I am in the process of just running NixOS on everything.
NixOS really is that good, particularly for homelabs. The module system and ability to share them across machines is really a superpower. You end up having a base config that all machines extend essentially. Same idea applies to users and groups.
One of the other big benefits, particularly for homelabs, is that your config is effectively self-documenting. Every quirk you discover is persisted in a source controlled file. Upgrades are self-documenting too: upstream module maintainers are pretty good about guiding you towards the new way to do things via option and module deprecation.
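Concretely, the "base config all machines extend" pattern is just a shared module that every host imports. An illustrative layout (file paths and option values are made up, not a real config):

```nix
# modules/base.nix -- imported by every host
{ pkgs, ... }:
{
  time.timeZone = "UTC";
  services.openssh.enable = true;

  # Shared admin user across all machines
  users.users.admin = {
    isNormalUser = true;
    extraGroups = [ "wheel" ];
  };

  environment.systemPackages = with pkgs; [ git htop ];
}
```

Each host's configuration then just does `imports = [ ../modules/base.nix ];` and overrides whatever differs.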
No matter the tool, manage your environment in code and your life becomes much easier. People start with ClickOps for the initial hit, get addicted, and end up in a packed closet with a one-way ticket to Narnia.
This happens in large environments too, so not at all just a home lab thing.
A NixOS config is a bit different because it’s lower level and is configuring the OS through a first-party interface. It is more like extending the distro itself as opposed to configuring an existing distro after the fact.
The other big difference is that it is purely declarative vs. a simulation of a declarative config a la Ansible and other tools. Again, because the distro is config aware at all levels, starting from early boot.
The last difference is atomicity. You can (in theory) rely on an all-or-nothing config switch as well as the ability to roll back at any time (even at boot).
On top of all this are the niceties enabled by Nix and nixpkgs. Shared binary caches, run a config on a VM, bake a live ISO or cloud VM image from a config (Packer style), the NixOS test framework, etc.
I'm still usually under 10% CPU usage and at 25% RAM usage unless I'm streaming and transcoding with Jellyfin.
It's been fun and super useful. Almost any old laptop from the past 15 years could run a setup like this and solve several home computing needs with little difficulty.
My setup is roughly the following.
- Dell OptiPlex mini running Proxmox for compute; Unraid NAS for storage.
- Debian VM on the Proxmox machine running Forgejo and Komodo for container management.
- Monorepo in Forgejo for the homelab infrastructure. This lets me give Claude access to just the monorepo on my local machine to help me build stuff out, without needing to give it direct access to any of my actual servers.
- Claude helps me build out the deployment pipeline for VMs/containers in Forgejo Actions, which looks like:
- Forgejo runner creates NixOS builds => Deploy VMs via Proxmox API => Deploy containers via Komodo API
- Separate VMs for:
  - gateway for reverse proxy & authentication
  - monitoring with a Prometheus/Loki/Grafana stack
  - general-use applications
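The pipeline described above could be sketched as a Forgejo Actions workflow like the following. Everything here is hypothetical: the runner label, secret names, and API endpoints are placeholders, not the real setup.

```yaml
name: deploy
on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: nixos  # assumed runner label
    steps:
      - uses: actions/checkout@v4

      - name: Build NixOS configuration
        run: nix build .#nixosConfigurations.app-vm.config.system.build.toplevel

      - name: Deploy VM via Proxmox API
        run: |
          # Placeholder host/node/VM id; token format per Proxmox API docs
          curl -fsS -X POST \
            -H "Authorization: PVEAPIToken=${{ secrets.PROXMOX_TOKEN }}" \
            "https://proxmox.example.internal/api2/json/nodes/pve/qemu/100/status/start"

      - name: Deploy containers via Komodo API
        run: |
          # Endpoint elided; depends on the Komodo deployment in question
          curl -fsS -X POST \
            -H "Authorization: Bearer ${{ secrets.KOMODO_KEY }}" \
            "https://komodo.example.internal/..."
```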
Since storage is external with NFS shares, I can tear down and rebuild the VMs whenever I need to redeploy something. All of my Docker Compose files and Nix configs live in the monorepo on Forgejo, so I can use Renovate to keep everything up to date.
Plan files, kanban board, and general documentation live adjacent to Nix and Docker configs in the monorepo, so Claude has all the context it needs to get things done.
I did this because I got tired of using Docker templates on Unraid. They were a great way to get started, but it's hard to pin container versions and still keep them up-to-date (Unraid relies heavily on the `latest` tag). I've been moving stuff over to this setup bit by bit and really enjoying it so far.
The problem is that people never stop tinkering and keep trying to make their homelab better, faster, etc. But its purpose is not to be a system you keep fine-tuning (unless that's what you're actually doing it for); its purpose is to serve your needs as a homelab.
The best homelabs are boring in terms of tech stacks, imo. The unfortunate paradox is that once you do get into homelabs, it's hard to get out of the mentality of constantly trying out new stuff.
My "homelab" is basically Linux + NFS, with standard development tools.
I think the most important thing for me is that I choose when I have time to upgrade; it's no longer forced upon me. That's why I prefer to depend on myself rather than third-party services for things that are essential. There have been so many times I've had to put other (more important) things on hold because some service somewhere decided to change something, and to get stuff working again I needed to migrate. I just got so tired of not being in control of that schedule.
There’s only one solution to this.
Quit your job.
The number of consistent issues I've had with anything GitHub-related lately is crazy. Even just browsing their site is difficult sometimes, with slow loads that often just hang entirely.
That said, I've got Linux and macOS set up with a Mac Mini (using a Claude-generated Ansible task file), but configuring a Windows VM seemed a bit painful. You didn't happen to find anything to simplify the deployment process here, did you?
No, unfortunately not; the Windows VM setup plus the Forgejo Windows runner was the most painful thing for me to set up, no doubt. It's just such a hassle to get things working reliably; even getting logs out of it was trouble... To be fair, my Mac Mini was manually set up at first, with Nix layered on top, while the Windows machine is 100% automated, so it's not an entirely fair comparison; automating the Mac Mini setup would probably be similarly harsh. But it's a mishmash of Nix for configuring and booting the VM, XML files for the "autounattend" setup, ps1 bootstrapping scripts, and a .cmd script for finalizing. A big mess.
I do need a good backup solution though, that’s one thing I’m missing.
Immich automatically dumps its DB every day; for Forgejo I have a little script that runs as part of the Backrest backup and does a pg_dump of the database before the backup runs.
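For reference, the shape of that kind of pre-backup hook; this is a sketch with an assumed dump directory, database name, and retention count, not the actual script:

```shell
#!/bin/sh
# Runs as a pre-backup hook: dump the Forgejo Postgres DB into the
# directory that Backrest is about to snapshot. Paths and the DB name
# are assumptions for illustration.
set -eu

DUMP_DIR=/var/backups/forgejo
mkdir -p "$DUMP_DIR"

# Compressed, dated dump; restore with `gunzip -c <file> | psql forgejo`
pg_dump forgejo | gzip > "$DUMP_DIR/forgejo-$(date +%F).sql.gz"

# Keep only the last 7 local dumps; the backup repo holds longer history
ls -1t "$DUMP_DIR"/forgejo-*.sql.gz | tail -n +8 | xargs -r rm --
```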
It works great; I even had to do disaster recovery on it once and it went smoothly.
The downside with that is it misses one of the key purposes of GitHub: posturing for job-hunting/hopping. It's another performative checkbox, like memorizing Leetcode and practicing delivery for brogrammer interviews.
If you don't appear active on GitHub specifically (not even Codeberg, GitLab, nor something else), you're going to get dismissed from a lot of job applications, with "do you even lift, bro" style dissing, from people who have very simple conceptions of what software engineers do, and why.
I mostly use Forgejo for my private repos, which are free on GitHub but come with many limitations. One month I burned all my private CI minutes on the 1st due to a hung Mac runner. Love not having to worry about that now!
I sometimes wonder if my coursemates back in the day, who automated commits to private repos just to keep the green boxes packed, actually got any mileage out of it.
Edit: to the "do you even lift bro", the response becomes "yeah man, I've built my own gym - oh, you go to Planet Fitness? Good luck."
6 years early [0] and you have better uptime than GitHub.
“There were 1 billion commits in 2025. Now, it's 275 million per week, on pace for 14 billion this year if growth remains linear (spoiler: it won't.)
GitHub Actions has grown from 500M minutes/week in 2023 to 1B minutes/week in 2025, and now 2.1B minutes so far this week.”
Source: GitHub COO on April 3, 2026. https://x.com/kdaigle/status/2040164759836778878
https://thenewstack.io/github-will-prioritize-migrating-to-a...
Curious, because for a long time we as an industry maintained that reliability and brand value are business-critical, but it seems like they matter very little nowadays.
Happy to be corrected about my perception too.
I'm pretty sure it still does - I used it at a previous job, and somewhere I interviewed recently said they use GitHub (given their size and being in a somewhat regulated industry, I can't imagine they rely on github.com).
What I would like to see is a combined uptime for "code services", basically Git+Webhooks+API+Issues+PRs, which corresponds to a set of user workflows that really should be their bread & butter, without highlighting things you might not care about (Codespaces, Copilot).
A service's availability is capped by its critical dependencies; this is textbook SRE stuff (see Treynor et al., The Calculus of Service Availability). Copilot may well sit off to the side of it (and has the worst uptime, dragging everything down), but if Actions depends on Packages, then Actions can be "up" while in reality the service is not functional. If your release pipeline depends on Webhooks, then you're unable to release.
The obvious one is git operations: if you don't have git ops then basically everything is down.
So; you're right about Copilot, but the subset you proposed (Git+Webhooks+API+Issues+PRs) has the exact same intersection problem. If git is at one nine, that entire subset is capped at one nine too, no matter how green the rest of it looks.
And to be clear: git operations are sitting at 98.98% on the reconstructed dashboard linked above[1]. That is one nine. GitHub stopped publishing aggregate numbers on their own status page, which... tells you something.
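The intersection argument is easy to check numerically. A workflow that needs every service in a subset to be up is only as available as their product, assuming independent failures (optimistic, since real outages correlate). The 98.98% for git comes from the thread; the other figures here are hypothetical stand-ins:

```python
# Combined availability of services that must ALL be up for a workflow.
# Independence is assumed, which makes this an optimistic upper bound.
availabilities = {
    "git": 0.9898,       # one nine, per the reconstructed dashboard
    "webhooks": 0.999,   # hypothetical
    "api": 0.999,        # hypothetical
    "issues": 0.999,     # hypothetical
    "prs": 0.999,        # hypothetical
}

combined = 1.0
for a in availabilities.values():
    combined *= a

print(f"{combined:.4f}")  # → 0.9858, below the weakest single member
```

So even generous numbers for the other four services leave the subset capped by git's one nine.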
With that set, I wasn't proposing a group of totally independent services; I was talking about a set of things that I think represent pretty core services for GitHub users. If Git is dragging the rest of those down, fine; PRs are useless without it. In fact it is worse than some, but it's not the worst of that group, and it is still a lot better than the dregs of Actions and Copilot.
Having said that, the numbers are of course terrible: two nines on a couple of things and one nine on everything else would be bad for a startup; it's an utter embarrassment for a company that's been doing this for over a decade.
So based on their own reporting, the uptime number should be 99.31%. Which means only about 6 additional hours of downtime and they'd fall below 99.0%.
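That "~6 hours" figure checks out if the reporting window is 90 days (an assumption; status pages typically show a 90-day view):

```python
# Downtime margin before 99.31% uptime drops below 99.0%,
# assuming a 90-day status-page window.
window_hours = 90 * 24          # 2160 hours in the window
current_uptime = 0.9931
threshold = 0.99

margin_hours = (current_uptime - threshold) * window_hours
print(round(margin_hours, 1))   # → 6.7 hours
```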
Also, looks like people might be pummelling the SourceHut servers looking for an alternative: https://sr.ht/ is down. (Edit: was down when I wrote that, back up now).
> We have resolved a regression present when using merge queue with either squash merges or rebases. If you use merge queue in this configuration, some pull requests may have been merged incorrectly between 2026-04-23 16:05-20:43 UTC.
We had ~8 commits get entirely reverted on our default branch during this time. I've never seen a github incident quite this bad.
These don't really look any different than past incidents which have red bars on their respective days, except maybe that those tended to be several hours.
What do the green bars even mean? Are they changed to non-green retroactively if people complain enough? As far as I can tell, literally none of the previous green days have any incident shown in the mouse-over, but there are multiple for today only. So I kind of have to assume the mouse-overs are conveniently "forgotten", or all incidents become non-green and they just don't bother informing anyone on the same day. Either way seems intentionally misleading.
Good riddance; I hope it completely destroys them.
No fuss instant refund of my unused subscription (£160) appreciated.
Only Pro (without plus) can be paid annually for some reason.
I used all the 'Premium Requests' every month on (mainly) Opus 4.5 & 4.6. From what I've read on here it seems I was probably a rather unprofitable customer - it felt like a steal.
Sure, if you're out to reach the most people, gain stars, or otherwise attract "popularity" rather than just sharing and collaborating on code, then I'd understand what you mean. But then I'd question your motivation first; that's a deeper issue than which SCM platform you use.
I think it is time that Microsoft lets go of GitHub. They are handling it too poorly.
The first one I've built is a little ASCII hangout for Claude @ https://clawdpenguin.com but threads like this make me want to build it for Github too.