Teams, Office (especially online), OneDrive, SharePoint, Azure, GitHub, LinkedIn: all of them have become very shitty and partially unusable lately, with an increasing number of weird bugs and problems.
OP wasn't suggesting it was, just that the lack of quality in one significant area of the company's output leads to a lack of confidence in other products that they release.
It does sound hard, and it might require homomorphic encryption with hardware help for any memory access, after the code has also been verified as unaltered through (uncompromised) hardware attestation.
This. A while ago a build of Win 11 tailored for the Chinese government, called "Windows G", was shared/leaked. It had all the ads, games, telemetry, anti-malware and other bullshit removed, and it flew on 4GB of RAM. So Microsoft CAN DO IT if they actually want to; they just don't want to for regular users.
You can get something similar yourself at home by running the debloat tools out there, but since they're not officially supported, either you'll break future Windows updates or future Windows updates will break your setup, so it's not worth it.
https://www.windowscentral.com/software-apps/windows-11/leak...
So they are not incentivized to keep Win32_Lean_N_Mean, but instead to put up artificial limits on how old the hardware that can run W11 is allowed to be.
I have no insider knowledge here; it's just a thing that gets talked about around major Windows releases, historically.
This was most evident back in the 90s when they shipped NT4: extremely stable, as opposed to Win95, which made the infamous BSOD a household name. But Win95 supported everything hardware-wise, whereas NT4's HW support was on par with Linux (i.e. almost nothing from the cheap vendors).
Citation needed, since that makes no logical sense. You want to sell your SW product to the lowest common denominator to increase your sales, not to a market of HW that people don't yet have. Sounds like FUD.
>but instead to put up artificial limits on how old of hardware can run W11
They're not artificial. POPCNT / SSE4.2 became a hard requirement starting with Windows 11 24H2 (2024), though that only rules out quite old CPUs, and only Intel 8th gen and up have well-functioning support for Virtualization-Based Security (VBS), HVCI (Hypervisor-protected Code Integrity), and MBEC (Mode-Based Execution Control). That's on top of TPM 2.0, which isn't actually a hard requirement or a feature everyone uses; the other ones are far more important.
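If you're curious whether a given box clears the 24H2 instruction bar, here's a minimal sketch using the GCC/Clang builtins (just a local check, not how Windows itself tests it):

    #include <stdio.h>

    int main(void) {
        __builtin_cpu_init();  /* populate the runtime CPU feature flags */
        /* 24H2 treats both of these instruction set extensions as hard requirements */
        printf("POPCNT : %s\n", __builtin_cpu_supports("popcnt") ? "yes" : "no");
        printf("SSE4.2 : %s\n", __builtin_cpu_supports("sse4.2") ? "yes" : "no");
        return 0;
    }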
So at what point do we consider HW-based security a necessity instead of an artificial limit? With the ever-increasing number of vulnerabilities and attack vectors, you've got to rip the band-aid off at some point.
A key difference between regular software and Windows is that almost nobody buys Windows, they get it pre-installed on a new PC. So a new PC purchase means a new Windows license.
Are they as important as stated? Microsoft says so. Everyone here loves and trusts them, right?
What is missing here that was present when this same computer was running Windows 10?
Yes, you can bypass the HW checks to install it on a Pentium 4 if you want; nothing new here.
>What is missing here that was present when this same computer was running Windows 10?
All the security features I listed in the comment above.
This computer had the security features that you listed while it was running Windows 10, and now that it is running Windows 11 it is lacking them?
(I'm not trying to be snarky. That's simply an astonishing concept to me.)
> > What is missing here that was present when this same computer was running Windows 10?
> All the security features I listed in the comment above.
I'm running 11 IoT Ent LTSC on some T420; it runs pretty okay.
In their intended applications, which might or might not be the ones you need.
The slowness of the filesystem that necessitated a whole custom caching layer in Git for Windows, or the slowness of process creation that necessitated adding “picoprocesses” to the kernel so that WSL1 would perform acceptably (and still wasn’t enough for it to survive): those are entirely due to the kernel’s architecture.
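For reference, that caching layer is exposed as a Git for Windows specific knob (on by default in recent installers, as far as I know), alongside the stock untracked cache:

    git config core.fscache true          # Git for Windows only: caches file-status lookups
    git config core.untrackedCache true   # stock git: caches untracked-file scans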
It’s not necessarily a huge deal that NT makes a bad substrate for Unix, even if POSIX support has been in the product requirements since before Win32 was conceived. I agree with the MSR paper[1] on fork(), for instance. But for a Unix-head, the “good” in your statement comes with important caveats. The filesystem is in particular so slow that Windows users will unironically claim that Ripgrep is slow and build their own NTFS parsers to sell as the fix[2].
[1] https://lwn.net/Articles/785430/
[2] https://nitter.net/CharlieMQV/status/1972647630653227054
https://github.com/Microsoft/WSL/issues/873#issuecomment-425...
Not true. There are more and more cases where Windows software, written with Windows in mind and only tested on Windows, performs better atop Wine.
Sure, there are interface incompatibilities that naturally create performance penalties, but a lot of stuff maps 1:1, and Windows was historically designed to support multiple user-space ABIs; Win32 calls are broken down into native kernel calls by kernel32, advapi32, etc., much like how libc works on Unix-like operating systems.
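A trivial sketch of that layering (file name invented, error handling minimal):

    #include <windows.h>
    #include <stdio.h>

    int main(void) {
        /* One documented Win32 call. kernel32 doesn't do the work itself:
           CreateFileW packages the arguments and calls NtCreateFile in
           ntdll.dll, which issues the actual system call into the NT
           kernel, much like a libc wrapper around a Unix syscall. */
        HANDLE h = CreateFileW(L"test.txt", GENERIC_READ, FILE_SHARE_READ,
                               NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
        if (h == INVALID_HANDLE_VALUE) {
            printf("CreateFileW failed: %lu\n", GetLastError());
            return 1;
        }
        CloseHandle(h);
        return 0;
    }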
Also, as far as my (very limited) understanding goes, there are more architectural performance problems than just filters (and, to me, filters don’t necessarily sound like performance bankruptcy, provided the filter in question isn’t mandatory, un-removable Microsoft Defender). I seem to remember that path parsing is accomplished in NT by each handler chopping off the initial portion that it understands and passing the remaining suffix to the next one as an uninterpreted string (cf. COM monikers), unlike Unix where the slash-separated list is baked into the architecture, and the former design makes it much harder to have (what Unix calls) a “dentry cache” that would allow the kernel to look up meanings of popular names without going through the filesystem(s).
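Roughly, as I understand it (volume number invented for the example):

    Win32 path : C:\Users\me\src\app.c
    NT path    : \??\C:\Users\me\src\app.c

    The object manager resolves the \??\C: symbolic link to something like
    \Device\HarddiskVolume3 and hands the leftover "\Users\me\src\app.c" to
    that device's parse procedure (the filesystem driver stack) as an
    uninterpreted suffix, rather than walking it component by component
    through a shared name cache the way Unix does.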
From there, it hits the MFT, finds the specific record for the file, loads that MFT record, and ultimately returns the FILE_OBJECT to the I/O Manager, and it bubbles up the chain back to (presumably) Win32. The MFT is just a linear array of records, which cover both files and directories (a directory record is essentially just a record with directory = true).
Obviously simplified. Windows Internals will be your friend, if you want to know more.
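If it helps, a purely conceptual sketch of one of those records (names and layout invented for illustration, not the on-disk format):

    /* Conceptual only; see $MFT documentation / Windows Internals for the real thing. */
    #include <stdint.h>

    struct mft_record_sketch {
        char     magic[4];      /* "FILE" */
        uint16_t flags;         /* in-use bit, is-a-directory bit, ... */
        /* ...header bookkeeping omitted... */
        uint8_t  attributes[];  /* packed attribute list: $STANDARD_INFORMATION,
                                   $FILE_NAME, then either a resident $DATA
                                   (tiny files live right here in the record)
                                   or a non-resident $DATA pointing at clusters;
                                   directories carry index attributes instead */
    };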
[1] https://www.kernel.org/doc/html/latest/filesystems/path-look...
[2] I was under the impression that it could look up an entire path at once when I wrote my grandparent comment; it seems I was wrong, which on reflection makes sense given you can move directories.
[3] https://www.kernel.org/doc/html/latest/filesystems/path-look...
But there's another issue, which is what cripples Windows for dev! NTFS has a terrible design flaw: small files, under 640 bytes, are stored inline in the MFT. The MFT ends up with serious lock contention, so lots of small-file changes are slow. This screws up anything Unixy, and git in particular, horribly.
WSL1 was built on top of that problem, which was one of the many reasons it was slow as molasses.
Also why ReFS and "dev drive" exist...
Ext4 also stores small (~150 B) files inside the inode[1], and so do a number of other filesystems[2]. NTFS was unusually early to the party, but if you’re right that it’s problematic there, then something else must also be wrong (perhaps with the locking?) to make it so.
[1] https://www.kernel.org/doc/html/latest/filesystems/ext4/inli...
[2] https://en.wikipedia.org/wiki/Comparison_of_file_systems#All..., the “Inline data” column.
If even MS-internal teams would rather avoid it, it doesn't seem like a great offering. https://news.ycombinator.com/item?id=41085376#41086062
Remember, I said the _file system_ was just fine. It's that extensible architecture above all file systems on NT that causes grief.
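You can see that extra layer on any Windows box: running "fltmc filters" from an elevated prompt lists the minifilters stacked above every volume, and WdFilter (Defender) sits in there whether you asked for it or not. Trimmed, illustrative output:

    C:\> fltmc filters

    Filter Name                     Num Instances    Altitude    Frame
    ------------------------------  -------------  ------------  -----
    WdFilter                                    5   328010         0
    storqosflt                                  1   244000         0
    FileInfo                                    5    40500         0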
The only method to 'turn off' Defender is to use a Dev Drive, which enforces ReFS, and even then you only get async Defender; it's not possible to disable it completely.
> Example use cases include:
> * Running unmodified Linux programs on Windows
> * ...
That won't work if the unmodified Linux program assumes that mv replaces a file atomically; NTFS can't offer that.
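That's the classic write-temp-then-rename pattern; a minimal sketch of what such a program relies on:

    #include <stdio.h>

    int main(void) {
        /* Write the new contents to a scratch file first... */
        FILE *f = fopen("config.tmp", "w");
        if (!f) { perror("fopen"); return 1; }
        fputs("new contents\n", f);
        fclose(f);

        /* ...then rename() over the target. POSIX requires this replacement
           to be atomic: other processes see either the old "config" or the
           new one, never a missing or half-written file. */
        if (rename("config.tmp", "config") != 0) { perror("rename"); return 1; }
        return 0;
    }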
You can read more if you wish in 'Inside the Windows NT File System' by Helen Custer, page 15.
A comment like yours is just like saying: "I know of one buggy open-source project, so why would I trust that other open-source project? The open-source community has burned all possible goodwill."
There is no CEO of open source, there are no open-source shareholders, there are no open-source quarterly earnings reports, there are no open-source P&G policies (with or without stack ranking), and so on.
Still, the fact that it's open source is a good thing. People can now take that code and make something better (ripping out the AI, for example) or just use bits and pieces for their own totally unrelated projects. I can't see that as anything but a win. I have no problem giving shitty companies credit where it's due, and they've done a good thing here.