Hacker News

I think we'd see a serious boom in Windows on ARM if ARM vendors had a performance benefit to offer and Microsoft implemented a backwards compatibility layer, as Apple has in this case. The difference here is in the offering: Microsoft was making a tablet that just isn't competitive and has no ability to run x86 software, while Apple is offering much more, backwards compatibility and all.

Until then there's just no reason to bother with ARM, and vendors and users alike can see that.

Only recently has Microsoft announced a backwards compatibility layer. If some hardware vendors step up to Apple's level of performance, there might be a serious shot now: https://www.extremetech.com/computing/315733-64-bit-x86-emul...



> Microsoft implemented a backwards compatibility layer as Apple has in this case

Windows-on-ARM already has 32-bit x86 emulation - and 64-bit x64 emulation is coming soon:

* https://docs.microsoft.com/en-us/windows/uwp/porting/apps-on...

* https://www.extremetech.com/computing/315733-64-bit-x86-emul...

I haven't looked too deeply, but it seems to use the same WoW (Windows-on-Windows) mechanism that's been present in Windows NT going back to at least Windows XP 64-bit Edition (no, not 2005's "Windows XP x64", but the original 2001 Windows XP for Intel Itanium IA-64).

> The WOW64 layer of Windows 10 allows x86 code to run on the ARM64 version of Windows 10. x86 emulation works by compiling blocks of x86 instructions into ARM64 instructions with optimizations to improve performance. A service caches these translated blocks of code to reduce the overhead of instruction translation and allow for optimization when the code runs again. The caches are produced for each module so that other apps can make use of them on first launch.

Looks like Windows dynamically translates code in DLLs/EXEs on a JIT/on-demand basis and caches each translated run of code.
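To make the quoted mechanism concrete, here's a toy sketch of that caching scheme: blocks are translated on first execution and the result is cached per module, so re-runs (or other apps loading the same DLL) skip the translator. This is purely illustrative; the names and the structure are mine, and the real WOW64 translation layer is of course far more involved.

```python
# Toy model of block-based dynamic translation with a per-module cache.
# Not real WOW64 internals -- just the shape of the idea described above.

translation_cache = {}   # (module, block_start) -> "translated" block
translation_count = 0    # counts how many times we actually ran the translator

def translate_block(module, block_start, x86_bytes):
    """Stand-in for compiling a run of x86 instructions into ARM64."""
    global translation_count
    translation_count += 1
    return f"arm64<{module}:{block_start}:{len(x86_bytes)}>"

def execute_block(module, block_start, x86_bytes):
    key = (module, block_start)
    if key not in translation_cache:          # cold: JIT-translate and cache
        translation_cache[key] = translate_block(module, block_start, x86_bytes)
    return translation_cache[key]             # warm: reuse the cached translation

# First execution of each block translates; repeats are cache hits.
execute_block("photoshop.exe", 0x1000, b"\x55\x89\xe5")
execute_block("photoshop.exe", 0x1000, b"\x55\x89\xe5")  # cache hit, no retranslation
execute_block("user32.dll", 0x2000, b"\x90\x90")
print(translation_count)  # 2 -- two unique blocks; the repeat cost nothing
```

The per-module keying is what lets a second app that loads the same DLL benefit from translations produced by the first, which is the "first launch" optimization the docs mention.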


Running x86 code via Rosetta on the M1 MacBook Air is significantly faster than it runs on the previous generation. With the Surface Pro X, that isn't remotely the case. It's a pretty major gap.

This is from the Verge's review of the Surface Pro X:

> The worst part is that even if an app installs, it doesn’t mean you’re going to have a great experience. Photoshop installs and opens just fine on the Surface Pro X, but the usability of it is terrible. https://www.theverge.com/2019/11/6/20950487/microsoft-surfac...

Maybe it's improved since then? Everything I've heard suggests there's a pretty big performance penalty for running x86 code on Windows on ARM. Even if there is a significant performance hit on the M1, its performance gains are so good that they erase the difference.

The only way an emulation layer works is if the new platform is fast enough to erase that difference. When Apple moved from PowerPC to x86, it nearly did. With this transition, Apple knocked it out of the park. Microsoft? Not so much.
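A back-of-envelope calculation shows why, using made-up but plausible numbers (the ~30% overhead figure and the relative chip speeds here are assumptions for illustration, not measurements):

```python
# If emulation retains ~70% of native speed, the outcome depends entirely on
# how the new chip compares with the x86 chip it replaces. All figures below
# are assumed for illustration.
emulation_retained = 0.70           # assume ~30% translation overhead

m1_vs_old_intel = 2.0               # assumed: new chip ~2x the old native chip
qualcomm_vs_old_intel = 0.8         # assumed: chip slower than the x86 it replaces

emulated_on_m1 = m1_vs_old_intel * emulation_retained        # 1.4x old native
emulated_on_qc = qualcomm_vs_old_intel * emulation_retained  # 0.56x old native

print(emulated_on_m1 > 1.0)  # True: emulated apps still beat the old machine
print(emulated_on_qc > 1.0)  # False: emulated apps feel much slower than before
```

Under these assumptions the M1 can absorb the emulation tax and still come out ahead, while a chip that starts out slower than the hardware it replaces cannot.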


Rosetta seems to have a similar performance hit to Microsoft's solution (10-30% overhead compared to native, depending on the workload).

The difference is that Microsoft is locked into using Qualcomm chips, which just aren't very close to Apple's performance.


The impression I got was that it was a lot slower than a 30% performance hit, but perhaps the Qualcomm CPU is that much slower? If so, it raises the question: why bother making a tablet-laptop hybrid that performs so poorly?


The tablet-laptop form factor is great, and a lot of people simply don't need high performance but do need to run some specific line of business application from 2003, so including x86 emulation makes sense.


It's a bad cycle.

Lackluster performance means low adoption. Low adoption means developers have no incentive to support it. Lack of developer support means little or no software will be written for the platform so performance never improves. Repeat.


> Lackluster performance means low adoption

And yet Electron rules cross-platform desktop development, not Qt, wxWidgets, or OpenStep.

Developers are fine with worse performance - even lackluster performance - if a platform or framework offers some other compelling advantage, especially _developer performance_ (i.e. it saves money).

Electron saves money because creating one-off custom UI widgets can be done easily and cheaply with just HTML+CSS+JS. There are things that are literally impossible with HTML+CSS+JS at present (such as "splitting" a rendered UI, as iOS 5 did with its notification banners and home-screen folders), but beyond certain flashy effects everything is doable quickly. Building a custom UI on any other platform would take days or weeks just to get a prototype done if you couldn't hack additions onto an existing UI component.

So your line of reasoning would be improved by saying:

"Lackluster performance with no other benefits means low adoption", the rest then follows.


Because those performance stats are at the usual level for emulation; Apple's own chips just exceed the general trend by a huge amount. The hardware is simply better than Qualcomm's or any other widely available ARM CPUs.


A basic example is how even Microsoft's very own Visual Studio is horribly slow and bloated (won't even mention how ugly) compared to, say, IntelliJ's editors on the same Windows machine, which are noticeably faster.


Have you tried running VS in Software-mode without hardware acceleration of the UI/graphics?


Huge difference in performance though.

The M1 seems to run some things under Rosetta emulation faster than they ran natively on previous Intel Macs, because of how much uplift the hardware has.

Microsoft's ARM devices have so much lower performance that any emulated code will run worse than it did natively, making them essentially unusable as a serious offering.



