That MS has a working x86 compatibility layer makes me think that Apple could well have something like this too. If the new chips have performance comparable with Intel's, even a 5x performance hit for "legacy" apps might not be catastrophic, since developers in the Apple ecosystem are usually fast to update their stuff and cater to the demands of the early adopters with lots of money.
Apple has gone the emulation route with the PPC/x86 transition before. With their tight grip on the ecosystem, including the development tool chain, I think most software will be updated very quickly.
This is quite different from the situation with Windows software, most of which feels like the devs hate it, the platform and themselves.
There were 68k/PPC fat binaries too, though this was as much about compatibility with older systems as it was for performance (the emulated 68k system on PPC was quicker than any 68k hardware).
A lot of the guts are shared with iOS, which runs natively on those chips already. I think it's safe to assume they have internal macOS builds running on their A series processors as well. They've probably been testing that for years now.
> The first Intel powered Macs shipped in early 2006 with 10.4.4, but Intel compatible builds of OS X date back to 2001/2002 internally.
NeXTSTEP/OPENSTEP ran on x86 already. The early Apple releases, called Rhapsody, were released for x86 and PowerPC.
It's possible they dropped support for x86 in early Mac OS X Server 1 releases (1999/2000), and readded it around the time of Mac OS X 10.0 to 10.2 (2001/2002), but I expect there was support in the codebase for the whole time.
It may have been practically unmaintained (and untested, and maybe even without ensuring it compiles), but I doubt they actually removed the x86 code that was there.
There was a big kernel change from Rhapsody and OS X Server 1 to Mac OS X Public Beta. Rhapsody/OSXS1 and NeXTSTEP had used Mach 2.5 with the BSD 4.3 personality (Rhapsody/OSXS1 was, essentially, just re-skinned NeXTSTEP without any compatibility with the classic Mac OS API ("blue box", later "carbon")). With OS X Public Beta, the kernel was replaced with a new one based on Mach 3 with a new Unix personality based on porting FreeBSD's upper layers onto the Mach microkernel.
It's not inconceivable that a lot of the previous x86 compatibility was lost or broken at that time. Certainly anecdotes from the Marklar x86 skunkworks team indicated that they spent about 2 years porting and fixing a lot of bugs, which had to be submitted to the normal kernel team via patches very carefully written to seem as though they were requesting changes related to niche PPC behaviours. For instance, the PPC was bi-endian, so you could plausibly submit fixes for various endian brokenness as if you had merely been running the PPC in little-endian mode.
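Roughly the kind of endian brokenness I mean (a made-up C sketch, obviously not from the actual Marklar patches): code that happens to be right on big-endian PPC and silently wrong on little-endian x86.

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical example: parsing a 32-bit big-endian length field from a
     * file or network header.  The naive cast happens to give the right
     * answer on big-endian PPC and byte-swapped garbage on little-endian x86
     * -- exactly the kind of latent bug a porting team has to hunt down. */
    static uint32_t read_len_naive(const unsigned char *buf) {
        return *(const uint32_t *)buf;   /* endian-dependent (and alignment-unsafe) */
    }

    static uint32_t read_len_portable(const unsigned char *buf) {
        return ((uint32_t)buf[0] << 24) | ((uint32_t)buf[1] << 16)
             | ((uint32_t)buf[2] << 8)  |  (uint32_t)buf[3];   /* same result everywhere */
    }

    int main(void) {
        const unsigned char hdr[4] = {0x00, 0x00, 0x01, 0x00};  /* length 256, big-endian */
        printf("naive:    %u\n", (unsigned)read_len_naive(hdr));    /* 256 on PPC, 65536 on x86 */
        printf("portable: %u\n", (unsigned)read_len_portable(hdr)); /* 256 on both */
        return 0;
    }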
And of course, other layers -- Quartz, Carbon, I/O Kit, and a bunch of others -- had never existed in NeXTSTEP and may have needed their own porting work from scratch. NeXTSTEP ran on x86, but a lot had changed since then.
But more important than a lot of the actual porting work was that the overall portability work had been done. Gone were most of the inline assembly / platform-specific hacks and optimizations that earlier iterations of Mac OS and 68k NeXT code likely had.
Right, if it was unmaintained (which I could easily believe it was!) I wouldn't be at all surprised by it being a multi-year effort to get it working again with all the big changes made to OS X in that time period.
And things that were rewritten from scratch for Aqua (in 10.0, like the entire graphics stack) will have never run on a little-endian system, and those alone would be a major porting effort.
I could see Apple working around the emulation speed issue by entering an agreement with AMD where Apple designs 95% of the chip, has AMD design an instruction decoder to translate x86(-64) to native uops for the most common instructions (only falling back to emulation for uncommon instructions), and has AMD "manufacture" the chip (so it technically falls under AMD's x86 license).
This would allow Apple to avoid much of the overhead of software emulation and I'm sure AMD would be happy to play along since it gets them a (thin) slice of Apple's margins which they would otherwise not have. After a few generations when x86+ARM fat binaries are the norm in the MacOS ecosystem they could drop the x86 decoder (falling back to software emulation only) and presto.
Apple has far more control over their platform than they did in previous platform migrations. They'll more likely announce a 'little checkbox' in the developer tools, put minimal effort into emulation performance, and mandate that applications going forward comply. Problem solved.
Mac OS is not Windows :) Apple's never shied away from migrating platforms when they deem it useful, and having things running in a secret lab for years.
Last time, however, they migrated from a niche instruction set to the dominant instruction set in the PC and server space.
That is not comparable to a migration from the dominant instruction set to a niche instruction set, which a migration from x86 to ARM would be in the current computing landscape.
Last time, the Mac platform basically existed in isolation, so the only problem was that apps for this platform had to be recompiled. This time, the Mac is no longer isolated - millions of developers write client- and server-side applications on Macs that are to be run on mostly x86-based servers, and their toolchain implicitly relies on the architecture being the same on dev and prod machines (a small illustration of that kind of implicit dependence follows below). That is not to say that it's impossible to change the architecture of the dev machines to something else - it's just a huge additional drawback that was simply not a factor back in the PowerPC->x86 transition.
These two facts tend to get downplayed or overlooked pretty frequently when it comes to the "ARM-based MacBooks" discussion, but I consider them fairly substantial and they dampen my enthusiasm for such a transition quite a lot.
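As one small, hedged illustration of how "the same C code" can quietly depend on the architecture underneath (exact widths vary by ABI, so treat the specifics as a sketch):

    #include <stdio.h>

    /* On x86-64, long double is typically the 80-bit x87 extended format; on
     * arm64 it is plain 64-bit double (Apple) or IEEE quad (Linux).  Anything
     * that serialises long doubles, hashes their bytes, or leans on the extra
     * precision can give different answers on the dev laptop and the prod box. */
    int main(void) {
        printf("sizeof(long double) = %zu bytes\n", sizeof(long double));
        long double third = 1.0L / 3.0L;
        printf("1/3 to 25 places    = %.25Lf\n", third);  /* precision differs per arch */
        return 0;
    }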
To be fair, how many developers with Macbooks are actually writing platform-specific code? I'm under the impression that most Macbook-wielding developers are web developers and work mostly with JavaScript, Python, Ruby etc., which all have ARM runtimes available. Even the "IDEs" (Atom, VS Code) are written in JavaScript nowadays, or at least in Java with minor C parts (JetBrains), which is also available for ARM platforms. Also, none of the web stuff is ever running on Mac OS; it's almost always Linux, maybe some Windows IIS.
There are also a lot of people only using their Macbook for presentations, text writing, or even only surfing the web. Apple's own office suite will be ported to ARM when they change their CPU architecture, Microsoft has Office for ARM available (or at least in the pipeline for 2019), and LibreOffice is available for ARM as well.
If Apple really wanted to do this, they would release their small Macbook (non-Pro) with ARM first and then describe a plan to change to ARM for the Macbook Pro line within a few years. No need for a transition period where emulation takes place; everything important is already ARM-ready. The iMac Pro is another matter - that might actually be harder, but I imagine it's manageable if Adobe etc. are willing to invest/to be paid to support ARM.
>To be fair, how many developers with Macbooks are actually writing platform-specific code? I'm under the impression that most Macbook-wielding developers are web developers
There are of course the millions of iOS developers.
And besides that, in any conference, from C to Rust to C++ to Java, you'll see tons of Macbook-wielding developers, often the majority.
And when it comes to keynote speakers at conferences (as opposed to audience) the PC laptop is the exception as opposed to the norm...
ARM wouldn't be a niche instruction set in the current computing landscape. It might even be the dominant one, if we consider how many people are carrying around multi-GHz ARM computers in their pocket every day.
But a MacBook is not a smartphone. A MacBook is a laptop, which is a portable version of a PC, which clearly has x86 as the dominant instruction set today.
I've done a fair amount of development work on both an ARM Chromebook and a Raspberry Pi, and I didn't run into any major issues.
It depends heavily on your tech stack, though. I found that developing on ARM and deploying to x86 was no big deal with Node, Python, and Go, but your mileage may vary with other languages and VMs.
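For lower-level code the classic ARM-vs-x86 gotcha is the memory model: x86 is strongly ordered, ARM is not, so a racy flag/data handshake that happens to work on one can misbehave on the other. A minimal hedged C sketch of the portable fix (explicit release/acquire ordering with C11 atomics; compile with -pthread):

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>

    /* Producer publishes data, consumer waits for it.  With plain non-atomic
     * variables this pattern is a data race that x86's strong ordering often
     * hides and ARM's weaker ordering can expose; release/acquire makes the
     * ordering explicit and correct on both. */
    static int payload;                       /* data handed from producer to consumer */
    static atomic_bool ready;                 /* handshake flag */

    static void *producer(void *arg) {
        (void)arg;
        payload = 42;                                               /* write data first */
        atomic_store_explicit(&ready, true, memory_order_release);  /* then publish */
        return NULL;
    }

    static void *consumer(void *arg) {
        (void)arg;
        while (!atomic_load_explicit(&ready, memory_order_acquire))
            ;                                  /* spin until the flag is visible */
        printf("payload = %d\n", payload);     /* guaranteed to print 42 */
        return NULL;
    }

    int main(void) {
        pthread_t p, c;
        pthread_create(&c, NULL, consumer, NULL);
        pthread_create(&p, NULL, producer, NULL);
        pthread_join(p, NULL);
        pthread_join(c, NULL);
        return 0;
    }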
1. Apple ARM computers have insanely fast CPUs/GPUs in them, kinda way more than they need.
2. https://www.theverge.com/2018/10/15/17969754/adobe-photoshop...
3. Apple rewrote all of their apps (or killed them), so they're bound to be cross platform. Why else start from scratch and release with fewer features? Final Cut Pro X, iWork, Logic Pro X.
90% of Apple sales are for ARM computers; bet they'd love to only make 1 OS, would save loads of money.
ARM is not yet dominant in the desktop computing landscape, but it might become so. Apple are notorious early adopters and, developing the chips themselves, have some great insight on the potential.
They are also in a position to isolate themselves again now.
But a MacBook is neither a smartphone nor a tablet, which is what the term 'mobile' refers to. A MacBook is a laptop, which is a portable version of a PC, and in that landscape, ARM is a niche instruction set.
I bet that Apple won't rely that much on emulation this time (like back when switching from PPC to x86), instead either require app-store apps to be uploaded as LLVM bitcode, or upload fat-binaries with ARM and x86 machine code (NextStep aka OSX did this already a quarter century ago), or maybe even statically translate x86 machine code to ARM on the app store "server side".
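For context, the fat-binary route would look roughly like this with clang and lipo (a minimal sketch; the exact invocations are assumptions, but the idea is the same as the old NeXT multi-architecture binaries): the same source is compiled once per architecture, the slices are glued into one file, and the loader picks the matching slice at launch.

    #include <stdio.h>

    /* Assumed build steps for a two-slice fat binary:
     *   clang -arch x86_64 -o hello_x86 hello.c
     *   clang -arch arm64  -o hello_arm hello.c
     *   lipo -create hello_x86 hello_arm -o hello
     *   lipo -info hello      # lists the slices contained in the fat file
     */
    int main(void) {
    #if defined(__x86_64__)
        puts("this is the x86-64 slice");
    #elif defined(__arm64__) || defined(__aarch64__)
        puts("this is the arm64 slice");
    #else
        puts("this is some other slice");
    #endif
        return 0;
    }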
Most command line code installed through homebrew is compiled on the user's machine anyway, which leaves the closed-source and legacy UI applications not distributed through the app store (but by the time Macs switch to ARM, OSX will probably forbid running those anyway).
> I bet that Apple won't rely that much on emulation this time (like back when switching from PPC to x86), instead either require app-store apps to be uploaded as LLVM bitcode
LLVM bitcode remains architecture-specific (if not platform-specific); you cannot just recompile x86 bitcode for ARM.
> or upload fat-binaries with ARM and x86 machine code (NextStep aka OSX did this already a quarter century ago)
That doesn't obviate the need for a transition compatibility layer; complex software can take years to port to a different architecture.
> I bet that Apple won't rely that much on emulation this time (like back when switching from PPC to x86), instead either require app-store apps to be uploaded as LLVM bitcode, or […]
LLVM bitcode is platform specific. It deliberately isn't designed to be portable.
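A rough sketch of why (file name and clang invocation are just for illustration): by the time IR/bitcode is emitted, the preprocessor has already picked an architecture-specific code path and the module carries a concrete target triple and data layout, so an ARM backend has nothing generic left to retarget.

    #include <stdio.h>

    /* Illustrative only: compiled for x86, the #if resolves to the SSE branch,
     * <xmmintrin.h> is pulled in, and the resulting bitcode contains only that
     * path plus an x86-64 target triple/data layout.  Try e.g.
     * `clang -O2 -S -emit-llvm simd_add.c` and inspect the output. */
    #if defined(__x86_64__) || defined(__i386__)
    #include <xmmintrin.h>                    /* SSE intrinsics, x86 only */
    static void add4(const float *a, const float *b, float *out) {
        _mm_storeu_ps(out, _mm_add_ps(_mm_loadu_ps(a), _mm_loadu_ps(b)));
    }
    #else
    static void add4(const float *a, const float *b, float *out) {
        for (int i = 0; i < 4; i++)
            out[i] = a[i] + b[i];             /* generic fallback */
    }
    #endif

    int main(void) {
        const float a[4] = {1, 2, 3, 4}, b[4] = {10, 20, 30, 40};
        float out[4];
        add4(a, b, out);
        printf("%g %g %g %g\n", out[0], out[1], out[2], out[3]);
        return 0;
    }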
A huge amount of x86 is no longer patented, and those instructions make up the bulk of common x86 code. That could drastically reduce the overhead.
Even if they went the full-blown emulation route, the A12 is almost an order of magnitude faster than the old ARM designs Windows was running on.