… you don't have to update the UEFI entries every time the kernel updates. (I guess you might if you build a kernel with CONFIG_EFI_STUB and place the new kernel under a different filename than the one the UEFI boot entry points to, but I was under the impression that that's a fairly unusual setup, and I thought most of us booting with EFI were doing so through GRUB.)
rEFInd is _so_ much simpler: one EFI entry, one text config file on the EFI partition, nothing that needs to change when the kernel updates, and no massive pile of templating and moving parts to mysteriously break and dump you at an impenetrable GRUB “rescue” shell.
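For flavor, a manual stanza in refind.conf looks roughly like this (rEFInd usually auto-detects kernels on the ESP without any stanza at all; the paths, label, and UUID here are made up):

```
# refind.conf -- example manual boot stanza; paths/UUID are illustrative
menuentry "Linux" {
    icon    /EFI/refind/icons/os_linux.png
    loader  /vmlinuz-linux
    initrd  /initramfs-linux.img
    options "root=UUID=xxxxxxxx-xxxx rw quiet"
}
```

Drop the file on the ESP next to the rEFInd binary and you're done; kernel updates just overwrite the files it points at.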
Does anyone have an opinion on iSCSI vs NBD?
https://forums.gentoo.org/viewtopic.php?p=4895771&sid=f9b7ac...
https://github.com/NetworkBlockDevice/nbd/issues/93
Whether that’s the case with the latest version, I don’t know, but it’s something you might test if you choose to try it.
nbdinfo nbd://server
nbdcopy nbd://server:2001/ nbd+unix:///?socket=/tmp/localsock
https://github.com/NetworkBlockDevice/nbd/blob/master/doc/ur...
And it's very, very fun.
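If you want to try serving something, a minimal export in /etc/nbd-server/config looks roughly like this (the export name and image path are my own invention):

```
# /etc/nbd-server/config -- minimal example; names and paths illustrative
[generic]
    user = nbd
    group = nbd

[rootimg]
    exportname = /srv/nbd/root.img
    readonly = false
```

and then on the client, something like `nbd-client -N rootimg server.example.com /dev/nbd0` gives you a block device to play with.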
I have recently upgraded my house to 10Gbps Ethernet, with only one room still stuck at gigabit, and unfortunately, it's my main office. I'm working on getting the drop there now (literally, just taking a break here).
Even once I'm done, accessing an iSCSI drive over 10GbE will be 4-8 times slower than a local NVMe drive, but it will sure be a lot better than it was!
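Back of the envelope, with numbers I'm assuming (~1.25 GB/s usable on 10GbE, ~5 GB/s sequential for a decent PCIe 4.0 NVMe drive, protocol overhead ignored, which only makes the network case worse):

```python
# Rough sequential-throughput comparison; the drive and link speeds
# below are my own assumptions, not measurements.
ten_gbe = 10 / 8    # 10 Gbit/s line rate ~= 1.25 GB/s
gige = 1 / 8        # plain gigabit     ~= 0.125 GB/s
nvme = 5.0          # typical PCIe 4.0 NVMe SSD, ~5 GB/s sequential

print(f"local NVMe vs 10GbE iSCSI: ~{nvme / ten_gbe:.0f}x faster")   # ~4x
print(f"10GbE vs old gigabit link: ~{ten_gbe / gige:.0f}x faster")   # ~10x
```

So the 4x figure is the low end; a ~7 GB/s drive gets you toward the 8x end, and either way the jump from gigabit is the bigger win.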
Ideally, I could run VMs on the NAS and have great performance, but that's another hardware upgrade...
NVMe-oF is the protocol with the least overhead for network drives; with a proper setup you lose only 10-20% in latency compared to a local disk, even with Intel Optane. Throughput should be nearly identical.
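With the TCP transport you don't even need RDMA NICs; connecting with nvme-cli goes roughly like this (the address, port, and subsystem NQN are placeholders for whatever your target exposes):

```shell
# Load the NVMe/TCP transport and attach a remote NVMe-oF namespace.
# Address, port, and NQN below are made-up placeholders.
modprobe nvme-tcp
nvme discover -t tcp -a 192.168.1.50 -s 4420
nvme connect -t tcp -a 192.168.1.50 -s 4420 -n nqn.2024-01.com.example:nas1
```

After that the namespace shows up as an ordinary /dev/nvmeXnY device.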
To make this actually work well, consider adjusting your switches' QoS settings to carve out a priority VLAN for the iSCSI traffic.
The caveat was that you needed a read-only root, which meant freezing the OS; anything that needed changing was either stored in a RAM disk (that you had to set up) or in a per-host NFS area (kind of like overlayfs, but not).
If you needed to update the root dir, you chrooted into it and ran the (yum) update.
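The update dance on the server side was essentially this sketch (the image path is an assumption, i.e. wherever you serve the shared root from):

```shell
# Sketch: update a read-only network root by chrooting into the
# server-side copy. /srv/roots/compute is an assumed path.
mount --bind /proc /srv/roots/compute/proc
mount --bind /dev  /srv/roots/compute/dev
chroot /srv/roots/compute yum -y update
umount /srv/roots/compute/dev /srv/roots/compute/proc
```

Clients then pick up the changes on their next remount/reboot, since nothing on their side was writable to begin with.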
Looks like ZFS is only used to store the image on the server, though. I was expecting this to be more interesting because of that.
Wouldn't that need a local disk?
Then Anaconda or whatever OS installer picks up and installs the OS in a PXE install sequence when there is a local disk.
Hmmh? I haven't done so in years, but configuring multi-boot used to be considerably easier than disk-less operation.
You can install a prettier-looking boot selection menu like rEFInd, but the default works just as well, and I think the mainstream distros all set up Secure Boot too. On my PC it was very easy; on my (8-year-old) laptop I had to add some Secure Boot keys, and the BIOS was very confusing, using terms that didn't seem to match what they should have been.
My setup has worked almost entirely flawlessly and survived updates from both OSes. The only issue has been “larger” Windows feature updates putting Windows back as the first OS in the boot list, but that happens maybe once or twice a year? And it's a quick BIOS change to fix the order.
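For anyone hitting the same key dance: on shim-based distros the supported route is enrolling a Machine Owner Key rather than fighting the firmware menus directly (MOK.der here stands in for whatever cert you signed your kernel or modules with):

```shell
# Queue a Machine Owner Key for enrollment; shim's MokManager
# asks you to confirm it with a password on the next reboot.
mokutil --import MOK.der
```

That avoids ever touching the confusingly-labeled BIOS key screens.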
The Linux NTFS resizing code also has a tendency to trigger data corruption. Not really Linux's fault, but it's a good reason to do partitioning from inside Windows, which can be a pain already.
Another issue I've run into is Windows creating a very small (~300MiB) EFI partition that barely fits the Windows bootloader, let alone a Linux bootloader and kernel. You can resize and recreate the partition of course, but reconfiguring Windows to use a different boot partition is a special kind of hell I try to avoid.
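For what it's worth, the core of repointing Windows at a recreated ESP is one command from an elevated prompt, after assigning the new EFI partition a drive letter (S: here) in diskpart; it's everything around that step that's hellish:

```
rem S: is the letter assigned to the new EFI system partition in diskpart
bcdboot C:\Windows /s S: /f UEFI
```

That rebuilds the BCD store and copies the Windows boot files onto the new partition.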
If Linux corrupts someone's files, it is 100% Linux's fault and absolutely unacceptable.
There are some exceptions (some hardware from Microsoft doesn't trust the third party certificate used, for instance, and Red Hat Enterprise has their own root of trust if you opt into that), but they're very rarely ever an issue.
SFP28 might be cheap enough now too, I'm not sure...
I have been waiting for such a feature for like 15 years now. Without it, ZFS is just a fad and a useless filesystem (all that complexity for NOTHING).
ext2 for the win! still
--
0: https://klarasystems.com/articles/troubleshooting-zfs-common-issues-how-to-fix-them/