They need to introduce something below the Standard license targeting the Neo. What I'd personally consider is:
- Standard gets 16 GB vRAM (to perfectly target the base MacBook Air). But leave it at 4-6 vCPUs to not compete with the Pro (still for general computing, not power-users)
- New "Lite" tier with 8 GB vRAM max for the Neo (4 vCPUs), increasing to 12 GB vRAM if the Neo's RAM does.
Then you target an $89 one-time-purchase price point for the "Lite" tier. Essentially three plans, targeting your three major demographics: budget, standard, and pro/power-user.
[1] https://samhenri.gold/blog/20260312-this-is-not-the-computer...
You took what I said out of context and then replied to something else. Running Parallels on a Neo is a novelty. Parallels is both what the thread is about AND what my reply was expressly about.
Nobody can reasonably read what I wrote, in context, and believe I was referring to the computer itself as a novelty.
These won't run Crysis, but they don't need to.
You can give it less. It may refuse to install, but even without using any workarounds, you can change the assigned RAM after installing and it will not refuse to boot. The minimum for Windows Server 2025 is 2 GB, and it’s basically the same OS (just with less bloat).
While I have a preference for VirtualBox I'd say I'm hypervisor agnostic. Really any way I can get this to work would be super intriguing to me.
I use VMware Fusion on an M1 Air to run ARM Windows. Windows is then able to run x86-64 executables, I believe through its own Rosetta 2-like implementation. The main limitation is that you cannot use x86-64 drivers.
Similarly, ARM Linux VMs can use Rosetta 2 to run x86-64 binaries with excellent performance. For that I mostly use Rancher or podman, which set up the Linux VM automatically, and then use it to run Linux ARM containers. I don't recall if I've tried to run x86-64 Linux binaries inside a Linux ARM container; it might be a little trickier to get Rosetta 2 to work. It's been a long time since I tried to run a Linux x86-64 container.
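For what it's worth, the quick way to sanity-check this setup (assuming Docker Desktop on Apple silicon with its Rosetta emulation option enabled; podman machines have an equivalent setting):

```shell
# Run an amd64 image on an ARM host; the Linux VM's binfmt_misc handler
# (Rosetta 2, when enabled) transparently translates the x86-64 binaries.
docker run --rm --platform linux/amd64 alpine uname -m
# should report x86_64 even though the host is arm64
```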
I used to use VirtualBox a lot back in the day. I tried it recently on my Mac; it's become pretty bloated over the years.
On the other hand, this GUI for QEMU is pretty nice [1].
I've run amd64 guests on M-series CPUs using QEMU. Apple's Rosetta 2 is still a thing [1] for now.
Also is it possible to convert an existing x86 VM to arm64 or do I just have to rebuild all of my software from scratch? I always had the perception that the arm64 versions of Windows & Ubuntu have inferior support both in terms of userland software and device drivers.
I’ve got that one and I’ve yet to feel limited.
I have a current gen MacBook Pro for work configured with stupid amounts of ram and I feel no difference in terms of fluidity at all.
This is them confirming that the CPU has enough virtualization support that they can virtualize, rather than emulate, the guest OS.
The best Windows laptop you can buy is still a MacBook.
That was until I realized how many reports are coming from people talking about their work laptops loaded with endpoint management and security software. Some of those endpoint control solutions are so heavy that the laptop feels like you've traveled back in time 15 years and you're using a mechanical hard drive.
Some of this is not _just_ a corporate problem. Why would WinZip have an auto-run application and tray application in the first place? Every single app seems to think it needs one, and it's a classic tragedy of the commons. Perhaps on a virgin Windows install, your app with autorun and a tray icon will be more responsive. But when 20 other apps pull that same trick, no one wins.
This is actually one of the reasons I'm not excited at the idea of Linux defeating Windows. If it did, corporations would just start crapping up Linux the way they've crapped up Windows.
I use a corporate Windows VDI at work, so the experience is understandably subpar there, but it is still horrible on high-end hardware. Took me half a day just to herd it through update after update, while avoiding linking it to a Microsoft account despite its protests.
It's literally used to run only Steam and Firefox, and it still sucks compared to the ease of install/management of Linux. Ubuntu LTS took me about an hour to set up dual boot, apply updates, install Steam, and every other software and tool I use daily.
Why is Windows 11 still so clunky in 2026? It doesn't feel like the flagship product that many bright minds have improved for three decades. Why are hobbyists and small companies outperforming Microsoft's OS management?
Not nine different, only somewhat overlapping pieces of software from companies that were competitors. Nine equivalent products. I guess Defender made ten.
At the same time, as someone with a well maintained Windows gaming rig, I don't like spending time in the OS these days. Something about transparently doing stuff that puts money in their pocket while inconveniencing me gives me the ick.
Microsoft also puts a lot of crap into a default install that you may want to disable. Windows 11 with some judicious policy editor settings isn't so awful.
If your Explorer context menu is taking more than a split second to load, there's something wrong with your hardware.
My first experience of Windows 11: trying to download a file through Firefox caused the entire UI on my 18-core 10980XE to freeze for the full duration of the download.
Reverted back to Windows 10 immediately and the problem went away.
Windows 11 is full of spyware from the Mothership
I think that Apple has gotten so used to having fast storage in their machines that the newer OSes basically don’t work on spinning rust.
My home laptop is even faster.
https://browser.geekbench.com/v6/cpu/17011372
This was the latest UTM in the App Store, so native Hypervisor.Framework access for arm64 Windows acceleration.
Also, it isn't 2-3x faster; stop with the made-up nonsense, please. Just checked, and my 3-year-old AMD laptop is on par with the Neo Geekbench score I found online (slower in single-core but faster in multi-core), not 2-3x slower.
I have a PC with a 10+ year old 256GB SATA Samsung SSD that's still in top shape, but that's different because that drive has those 256GB split over several NAND chips inside, so wear is spread out and shuffled around by the controller to extend lifespan. But when your entire wearable storage is a single soldered chip, I'm not very optimistic about long term reliability.
I still think it's a great machine, but I think all these worries about NAND dying really haven't come to fruition, and probably won't. I have about a hundred plus of various SSD Macs in service and not one has failed in any circumstance aside from a couple of battery issues (never charged and sat in the box for 2 years, and never off the charger).
1. How do you know nothing happened? Define nothing in this case. Do Mac users check and report their SSD wear anywhere?
2. Didn't the OG 256 GB M1 have two 128 GB NAND chips instead of one 256 GB chip, meaning better wear resistance?
NAND is still the same wearable part that regular x64 laptops have. Apple doesn't use some magic industrial-grade parts but the same dies that Samsung, Micron, and SK hynix ship to x64 OEMs, and those are replaceable for a reason: they eventually fail.
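On the "do Mac users even check their wear" question above: you can check it yourself. A sketch, assuming Homebrew's smartmontools is installed (the exact attribute names vary by drive and firmware):

```shell
# Dump SMART/NVMe health data for the internal disk; on Apple silicon
# the internal NVMe is usually /dev/disk0.
brew install smartmontools
smartctl -a /dev/disk0 | grep -iE 'percentage used|data units written'
# "Percentage Used" is the drive's own estimate of consumed endurance
```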
The MacBook Neo is for students, grandparents, travel, etc.
Hell, even if it dies after 6 years, it was still a better experience than using a $500-600 Windows PC, and the cost comes out to ~$8/month spread over 6 years.
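The per-month figure roughly checks out (assuming a ~$600 purchase price, which is my guess, not a quoted spec):

```python
# Amortize a one-time purchase over an assumed 6-year lifespan.
price_usd = 600          # assumed purchase price
months = 6 * 12          # 72 months
print(round(price_usd / months, 2))  # 8.33
```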
Do you think SSD drives are replaceable for no reason? Just because M1 Macs aren't failing left and right doesn't mean their NAND won't fail.
I can't in good faith buy a machine with soldered wearable parts. That's like buying a car with soldered brake pads because "in 6 years average users don't feel like they need changing".
I still have laptops from 20 years ago (an IBM ThinkPad) that work fine, simply because you can swap their drives for fresh ones. How many M1 Macs will still be functional in 20 years?
I thought wear leveling worked at the page/block level, not the chip level? On an SSD, if there was a failure of an entire chip, you're still screwed.
I can see this could be a weaker spot in the durability of this device, but certainly it still could take a few years of abuse before anything breaks.
An outdated study (2015), but in line with the "low-end SSDs" I mentioned.
https://techreport.com/review/the-ssd-endurance-experiment-t...
No, it doesn't. Most 1 TB drives are rated for around 600 TBW, so enough to overwrite the drive 600 times, nowhere near 300k cycles. If you search for specs of the NAND chips used in SSDs, you'll find they're rated for cycles on the order of hundreds to thousands, still nowhere near "300k".
https://www.techpowerup.com/ssd-specs/crucial-mx500-4-tb.d95...
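To put numbers on that, a quick back-of-the-envelope using the 600 TBW / 1 TB figures from the comment above (check your own drive's spec sheet; these are just the illustrative values):

```python
def full_drive_writes(tbw_tb: float, capacity_tb: float) -> float:
    """How many times the TBW rating lets you overwrite the whole drive."""
    return tbw_tb / capacity_tb

# A typical 1 TB consumer drive rated for 600 TBW:
print(full_drive_writes(600, 1))  # 600.0 -- not 300,000
```

Per-cell program/erase cycle ratings are a separate number again, but they sit in the hundreds-to-thousands range for modern TLC/QLC NAND, consistent with the drive-level figure.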