Ubuntu successfully virtualized on M1 (forums.macrumors.com)
230 points by inductive_magic on Nov 26, 2020 | hide | past | favorite | 166 comments



What I’m interested in, which will probably take years (or may never happen well), is running Linux natively on these Macs after macOS support from Apple ends. Typically Macs stop getting software support around 7-9 years after first release. Hardware does tend to last longer and can serve some purposes beyond that.


Me too. Interestingly, two MacBooks from the Intel era were, objectively, among the best machines ever for Linux.

The MacBook 2,1 is almost unique in being supported by Coreboot and Libreboot. Furthermore, because it had an Atheros wireless card, you didn't need any blobs at all. Only two old Thinkpads are comparable, among x86_64 machines.

The MacBook Air 11 Late 2012, which was used as a daily runner by Linus for a number of years, was a pure Intel machine. Except for a weak Broadcom card, it was flawless with a stock kernel. Plus, it was silent, small and fast. The only comparable machine in terms of silent operation, cost and Linux support was IMHO the Xiaomi Notebook 12.5, which was released quite recently.

The problem with ARM Macs is not just secure boot. The secure enclave chip already gave serious trouble when trying to run Linux on their last Intel machines, as e.g. the keyboard doesn't show up as a standard USB device. So I don't have high hopes.


Completely open is cool, but it’s a high bar to pass. A lower bar might be: can I just install Ubuntu and use it as a daily driver? At least up through the mid-2015 models, this is the case.


Oh yes, I'd be happy with that. The problem is that even that might be hard to achieve with the T2 chip.

It's sad because the new MacBook Air seems superior to all competitors, and the price is quite fair after the academic discount.

Also, Apple makes it easy to buy any keyboard layout from any location, which is really cool. With many brands, I need to hack the system and order from NL to get a US ANSI keyboard here in the EU.


The standard layout for Dutch Macs is ISO, although US English with ANSI is available.


I meant for other brands, where I have to order from NL to obtain a US-like keyboard. Otherwise, they ship me the local EU keyboard.


Yeah, in particular the Mac mini has a lot of curb appeal as a 10+ year micro server. Considering it's plugged in and the fan should only spin up occasionally, it should last a long long time.

I'm pretty sure we'll have Linux running on it by then. Perhaps CLI only.


My 2011 Mac Mini is running under the TV, serving files, running some small docker containers. It works fine for that workload.

I just VNC into it to manage anything. Another little machine that does what it needs to. It's quiet and doesn't need much.


Why not just use an Intel NUC?


Last I checked, a well-configured NUC was nearly as expensive as a Mac mini and almost certainly slower than this. It depends largely on what you're looking for, though. From the sounds of it, the NUC also has a loud fan? (For some reason I thought they were fanless.)


Yeah, both the Mac Mini and NUC have fans. The Intel NUC will definitely be slower and hotter (thus louder), but I still think it's a better choice if you specifically want to run Linux on it.


>What I’m interested in, which may probably take years (or never happen well)

Probably won't take that long. Apple provides instructions for disabling the strict boot check and booting any OS. The rest is drivers. For graphics, they can start by just using a framebuffer...


And 7-9 years is a fantastic support lifetime. And yes, I know folks (grandparents etc.) who still use their 9-year-old MacBooks without major issues.

You have new Android phones that ship with software 1.5 years old and NEVER get updated.


7 to 9 years is fantastic, but not really that out of the ordinary for this industry as far as I can tell?

My 2011 model Lenovo came from the factory with a 2012 copy of Windows 8.

Lenovo released BIOS and driver updates for the 2011 machine at the end of 2019 and continues to do so. Windows 8 has a free upgrade path to 8.1, which doesn't hit EOL until 2023, meaning that the OS is still supported by both companies as well.

Admittedly, however, your Android example is dead-on. Android seems to have always struggled with software updates; I blame version fragmentation along with very cheap (free, in most cases) licensing costs, which draw lower-quality vendors to the platform.


I agree, but I'll tell you (as an Apple fan in the ecosystem) that this kind of "let's abandon the old things" approach has some drawbacks. For example, changes in HTTPS behaviour and website support made an old Mac useless, because Safari didn't support the new TLS versions and there was no decent alternative for a system that old (we're talking about a 2004 Mac if I recall correctly, Intel 32-bit, so yeah, I know it's old in many ways).

The only solution was to install Linux, and it worked.

So, even though I'm fully committed to the ecosystem and love each and every Apple product, I still think that having to trash a machine after 10 years isn't great, considering that those machines are usually still capable of daily work.


I'm typing this on a 2011 MacBook Air. I don't use it as my daily driver, but I use it for lightweight day-to-day things.


These M1 Macs are more like iPads than Intel Macs. Jailbreaking them would be the first step, but after that one would need a ton of reverse engineering to write drivers for all the little custom hardware built into the M1 SoC.

I have never heard of an iPad running Linux (and iPads have been around for a long time), so the chances of the M1 running Linux natively are slim.


What would native support provide that virtualization does not?


Performance, a teensy bit of insulation from long-dead software rot, and the warm fuzzies you can only get from running a free software stack.


I’m guessing there won’t be a significant performance advantage, nor insulation from rot.

The likelihood that Apple’s drivers at EOL outclass the driver support the community can provide seems high.

On the other hand, I agree with the last point. There is an aesthetic dimension that can’t be ignored.


The community does a pretty decent job of supporting other Apple hardware; often vastly outperforming default MacOS (or Windows in many tests). Graphics is a particularly soft target, eg: https://www.phoronix.com/scan.php?page=article&item=macos101...

I know —and apologies for the originally unintended, but retrospectively way-overused pun— I've compared an Apple to an orange there, and I realise it's very different hardware... But it's also not. It mostly depends on access. If developers can shim in a loader, and it doesn't require soldering, it'll be popular and get attention. These chips look too great to ignore.

Not my downvote, btw. Hopefully they'll toss an opinion in too.


That’s very good to know - I have an iMac 5K 27” that is only a year or so away from losing support from Apple, which will become a Linux machine at that point.


Virtualization doesn't always play nice if you're doing performance-intensive or measurement work, although I wouldn't even try doing that on Apple hardware to start with.


I don't know why you believe it can never happen well, as Linux has already been running on ARM processors for some time now.

Obviously it would take another Canonical to do the grunt work of driver creation. But I suppose that if we don't see a quick response from Intel or AMD, this hardware will take off and people will write the drivers.


It's not just a question of driver support. Apple is doing a lot of custom stuff with their architecture that might not have an analogue on current ARM systems, meaning that it might require not just new or updated drivers, but effectively a completely new architecture variant added to Linux before you can even boot.

Also, with the T2 chip in Intel Macs, there was a lot of nonstandard behavior[1], like the keyboard not being just a standard USB device.

Linux can run on ARM processors, but the surrounding architecture can make a huge difference. Heck, even Apple's Lightning-to-HDMI adapter has an ARM chip in it[2], but that doesn't mean you can boot Linux on it very easily.

[1] https://news.ycombinator.com/item?id=25221804

[2] https://www.theverge.com/2013/3/1/4055758/why-does-apples-li...


> but effectively a completely new architecture variant added to Linux before you can even boot.

This sounds wrong. At most it may need a custom bootloader, which can then load a regular arm64 kernel plus the required drivers for Apple specific peripherals.


The laptop works without updates.


It might keep working for a while, but Macs depend on external services to function properly. What if Apple replaces OCSP in a future release of macOS 11 and turns off the existing OCSP service?


Is the OS still secure once it's abandoned and goes 2-3 years without updates for any newly discovered issues?


You expect a consumer device to be supported for more than 9 years? That’s not practical.


Older Macs could run Linux pretty well. I even had Gentoo running on an ancient PPC Mac yonks ago. A few years back, I got Linux working (sorta) on a MacBook Pro 14,3:

https://battlepenguin.com/tech/linux-on-a-macbook-pro-14-3/

With all the security on the newer Apple chips, I wonder if we'll ever see Linux boot on these things natively, much less get actual hardware driver support.


Apple started putting the T2 chip in the Macs 2 years ago. I suspect they plan on locking down the platform soon.


I don't really get this narrative about the T2. Apple removed the T2/integrated it into the M1 and didn't lock down things any further. If they were going to lock it down, this would have been the time for it, given there's no Bootcamp for Apple Silicon. Yet Apple specifically allows you to sign custom kernels offline on Apple Silicon Macs.

People also seem to be convinced that the T2 was a "locking down" measure, while all that's really missing is a Linux driver to interface with the T2. Sure, Apple doesn't care at all about actively supporting Linux (or publishing specs), but they also don't lock this down.

The only support I see for the idea that Apple will "lock down Macs" is that for some reason they're irrationally hell bent on turning Macs into iPads. I don't see how that idea holds up if you inspect it closely though.


I'm not trying to peddle a conspiracy theory, but the slow-progression "boil frogs" method is how this sort of thing is usually done. I mention the T2 because secure storage for secure boot is a pretty obvious way to implement it.

The percent of revenue coming from Macs continues to go down. I'm just assuming that at some point, the only (financial) reason the Mac exists is for Xcode.


Apple's Mac category revenue was $9 billion in the latest quarter, up 29% year-on-year—growing faster than any other category. And this was before the M1 launch.

At this revenue run rate, a standalone Mac company would be in the Fortune 100, next to companies like Nike and Coca-Cola.


That's a quarter. See:

https://cdn.statcdn.com/Infographic/images/normal/8817.jpeg

https://images.macrumors.com/t/hIKaZ3dlMYeHuw-3yqiPbLiHtnc=/...

And, yes, it's still significant, but it's also lower margin revenue than other Apple products.


Pretty sure Mac is going to do well this quarter, too. And surely Apple Silicon means that margins are going up.


It’s a conspiracy theory.

Any attempt to improve security is of course going to follow a progression. All software development is progressive.

The leap from there to a ‘boiling frog’ i.e. that this is being done so the user won’t notice a hidden agenda, is a conspiracy theory.


I suppose, if you consider any speculation to be a conspiracy theory.


It’s not ‘any speculation’.

It is a theory that a group of people with power have a hidden agenda that is contrary to what they have publicly stated, despite the facts being consistent with the public claims.

That is pretty much the definition of a conspiracy theory.


A group of people that already understand and take great advantage of locked down platforms and services. And we're talking about a product (Mac) that was specifically much more locked down than its predecessor when it was launched.


A group of people who have said that they see the Mac differently from iOS, and who have explicitly said they don’t plan to lock it down.

For you to be right, this must be a planned deception.

That is a conspiracy theory.

It seems like you actually know this and are just trying to argue that you’re right that there is a conspiracy.

For what it’s worth - I don’t think there is anything fundamentally wrong with that. Conspiracies are sometimes real.

I’d just rather be open about what we are saying.


>For you to be right, this must be a planned deception.

Sure, you're right, it's a conspiracy theory -- but one doesn't have to be a conspiracy theorist to predict further anti-consumer action from companies that have a long and sordid history of anti-consumer actions.

Apple, quite literally, wrote the playbook for establishing gilded cages and vendor lock-in within the (modern) computer industry.

I mean... most companies in the world don't generate enough ire that the public gets together and maintains a Wikipedia page about criticisms of the company, separate from the parent page.[0]

But absolutely, I agree -- it's a conspiracy theory. The problem I have personally with conspiracy theories regarding Apple is that over the past 30+ years I've seen a lot of them gradually turn into fact.

I have very few doubts that the trend will continue onward -- as such, I have little problem entertaining conspiracy theories regarding Apple, as long as they're not absolutely insane.

Apple seems to be masterful at the magicians' game of misdirection. That's the best way I can put it.

'Ignore the man behind the curtain.'.

The most modern example?

Let's talk about how fast and cool and battery-conserving their new processors are, without taking into account the vast amounts of software breakage that came along with the architecture change, and the coming reform of the software landscape that will establish Apple customers even more firmly in territory they can't ever hope to leave.

...and this is coming from someone who, historically, has loved ARM. My problem is that the architecture change, IMO of course, is undoubtedly going to be used to leverage and strong-arm customers into Apple-centric "app store" interfaces, pulling the market apart just like in the PPC days between Apple-approved software and the rest of humanity -- for the sake of a lot of quick bucks for Apple and a lot of additional developer/customer friction for the rest of the world.

We'll see, I guess. I like the new M1 on paper, I just trust Apple about as far as I can throw them.

[0]: https://en.wikipedia.org/wiki/Criticism_of_Apple_Inc.


The presence of a Wikipedia page critiquing the world’s most valuable corporation doesn’t seem like evidence of anything much.

It’s worth noting that many of the criticisms on that page, while valid, are true of all their competitors too, and of comparable capitalist businesses.

What that page leaves out is that in many areas, Apple has made more progress at mitigating the problems than anyone else.

I have spoken to activists about this discrepancy, and received the answer that they are aware of this, but it is politically more effective to critique Apple because they are the market leader, than to critique their competitors.

I don’t really agree with this tactic, but I understand it.

Nevertheless why even bring this page up?

It seems like you think it supports a generalized narrative of ‘Apple as bad guy’, which then makes the conspiracy theory more plausible.

This is unsurprising because it is the normal way conspiracy theories are supported: some real problems are woven together with exaggerated claims into a grand narrative with a villain at the center.

We are asked to believe in the conspiracy because some of the things that are part of the theory are based in fact, despite the fact that the rest is just innuendo and naked assertions.

That is what you are asking us to do here.

See examples:

[1] “Apple seems to be masterful at the magicians' game of misdirection. That's the best way I can put it.”

Take this for example. Here you just affirm the consequent. Of course, as I said earlier for you to be right there must be a deception, and here you say that Apple are masters of deception.

But really that’s a naked assertion. If there is a conspiracy this would be true, but there is actually no evidence of it.

[2] “Apple, quite literally, wrote the playbook for establishing gilded cages and vendor lock-in within the (modern) computer industry.”

Obviously this is fantasy. If Apple had literally written such a playbook, you would be able to present it as evidence.

What they have written and presented literally, is a large body of work on how to produce a secure computing environment that mitigates most forms of security threat and is therefore trusted by consumers as a platform where they can purchase software.

That is literally what they have written.

Again you are just asking us to accept a conspiracy as if it were fact.


>A group of people that already understand and take great advantage of locked down platforms and services.

They never pretended otherwise. It's a commercial company, with its own platform.

It's not a generic OEM manufacturer for generic devices to install various OSes on.

Their whole pride is their software/hardware integration.


Sure. We're pretty far down the rabbit hole, but I'm responding to people that think my suggestion that Apple might lock down Macs is unfounded. You seem to agree with me that there's some reason to believe it's plausible.


For what it's worth, these are all the same arguments people have made regarding the TPM chip over the last ~15 years:

https://en.wikipedia.org/wiki/Trusted_Platform_Module

It also reminds me a lot of the Palladium controversy of ages gone by:

https://www.zdnet.com/article/microsofts-palladium-what-the-...


Isn't the TPM a large part of why it's difficult to run anything other than iOS on an iPad/iPhone?


iPhones don't really contain any hardware that does not have more or less a conceptual equivalent in other smartphones. I suppose with "TPM" you mean the "Secure Element", which on iPhones and modern Android phones contains the disk encryption keys.

Like a lot of other smartphones, the main difficulty in running something else on an iDevice is the locked bootloader. It won't run anything not signed by Apple. If an exploit like checkra1n is used to defeat this, iPhones can run Linux in principle [1], of course practically restricted by a lack of a large amount of drivers required to run Linux well.

This iDevice OS lockdown completely reduces to 1) the locked bootloader which must be "defeated" - which M1 Macs contain a built in tool for - and 2) the only OS with drivers for the hardware being Apple's Darwin. So no, the TPM isn't a large part of why it's so difficult to run anything else on an iDevice.

[1]: https://blog.project-insanity.org/2020/04/16/running-postmar...


You probably mean the Secure Enclave. The Secure Element contains payment applications and keys for Apple Pay.


No, it was an Intel thing for secure-booting Windows on x86 PCs back in the day, when Microsoft was on top of the world and everyone thought Wintel were evil monopolists trying to lock Linux out of the PC ecosystem. I think there are still TPM-like capabilities in Intel chips now, but they rebranded it?


I had assumed TPM was generic enough to refer to the T2 as well.


Delayed reply, but no, TPM is a specific international standard:

https://en.wikipedia.org/wiki/Trusted_Platform_Module

iPhones, iPads, etc. do use a similar concept though.


>I'm not trying to peddle a conspiracy theory, but the slow progression "boil frogs" method

Yes, but another name for "they're using the boil-frogs method" is the "slippery slope fallacy".


How's that? Slippery slope depends on a "relatively small first step". Apple already has other hardware and software platforms that aren't open.


The boiled frog fable [1] is false.

[1] https://archive.vn/mu6T


As are many illustrative, and still useful, metaphors.


In other news, the cow did not jump over the moon


With their M1 based devices, they now explicitly added support for you to authorize alternative kernels to boot from (on your local machine).


Is this better than the previous T2 Intel Macs for Linux support?


Previously you weren't able to do that, but you were able to disable booting without signature checks. I'm mainly basing this info off this sole tweet: https://mobile.twitter.com/never_released/status/13263157410...


This is based on Apple’s new Virtualization.framework introduced in Big Sur, which lets you spin up a Linux VM with half a dozen lines of code. The technology is still in its infancy (minimal emulated devices; not sure if there’s even a framebuffer) and I’d wait for something like Docker running on Hypervisor.framework (lower-level APIs, but more mature).
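To give a sense of scale, a minimal sketch of booting a Linux kernel through Virtualization.framework looks roughly like this (Swift, macOS 11+). The kernel/initrd paths are placeholders, and error handling and the required virtualization entitlement are omitted:

```swift
import Foundation
import Virtualization

// Placeholder paths: point these at a real arm64 kernel image and initrd.
let bootLoader = VZLinuxBootLoader(kernelURL: URL(fileURLWithPath: "/tmp/vmlinuz"))
bootLoader.initialRamdiskURL = URL(fileURLWithPath: "/tmp/initrd")
bootLoader.commandLine = "console=hvc0"

let config = VZVirtualMachineConfiguration()
config.bootLoader = bootLoader
config.cpuCount = 2
config.memorySize = 2 * 1024 * 1024 * 1024  // 2 GiB

// A virtio serial console wired to stdin/stdout, so the kernel log is visible.
let console = VZVirtioConsoleDeviceSerialPortConfiguration()
console.attachment = VZFileHandleSerialPortAttachment(
    fileHandleForReading: FileHandle.standardInput,
    fileHandleForWriting: FileHandle.standardOutput)
config.serialPorts = [console]

try config.validate()
let vm = VZVirtualMachine(configuration: config)
vm.start { result in
    if case let .failure(error) = result {
        print("Failed to start: \(error)")
    }
}
RunLoop.main.run()
```

Attaching a root disk (VZVirtioBlockDeviceConfiguration) and networking is more of the same; this is essentially what the CLI wrappers people are building on top of the framework do.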


For Docker, Virtualization.framework is fine and provides everything we need. There is no framebuffer in Hypervisor.framework. There will be an ARM qemu port for Hypervisor.framework fairly soon. I have Docker running on M1 with Virtualization.framework; see https://twitter.com/justincormack/status/1331965304273047553


> The technology is still in its infancy

This is definitely in the experimental phase and not something for general use. The fact that there is a simple implementation is cool.


xhyve[0] doesn't look entirely production ready, but emulates a number of devices including networking, block storage, and a framebuffer while being based on Hypervisor.framework.

[0] https://github.com/machyve/xhyve


Docker For Mac is based on "Hyperkit" which is based on xhyve. So xhyve has a fair bit of mileage on it.


Does xhyve have any support for ARM though? I remember it as being pretty x86 specific re: emulated devices and targets. A very quick glance doesn't suggest this has changed.


It's funny how the way to get a Linux on ARM machine with decent performance might be getting a Mac Mini and running Linux as a VM. I doubt nested virtualization will work though.


Honest question, because I genuinely don't know: aren't there any Linux distros that run decently on ARM? If not, why?


Yes, many, but most ARM machines are currently "Single Board Computers" (SBC) and the performance is not M1 level. 2020 appears to very much be "year of ARM" with Graviton, and now the M1 chip looking very good.

EDIT: gravitation -> graviton


There are a few ARM server motherboards but most of them are either based on older hardware and/or the product lines have been discontinued.

I think you mean "Graviton" which is Amazon's custom processor using ARM Neoverse cores. You can't buy one of them as your own machine but you can use it through the Amazon cloud.

https://aws.amazon.com/ec2/graviton/


Most major Linux distros run on ARM just fine, that's not the problem.

Undocumented custom Apple hardware with proprietary MacOS only drivers & locked up bootloader is the problem.


> Most major Linux distros run on ARM just fine, that's not the problem.

To add some detail to this: you can spin up an AArch64 server VM on Amazon AWS. You're given the choice of which OS you'd like. The usual GNU/Linux suspects are there: Debian, Ubuntu, Red Hat, SUSE. You can also go with FreeBSD. [0]

The Raspberry Pi also runs various distros, as tinus_hn points out.

[0] https://lists.freebsd.org/pipermail/freebsd-cloud/2019-April...


> locked up bootloader is the problem.

Apple has documentation up on how to self-sign your own binaries for installing whatever you want on M1 bare metal.

> proprietary MacOS only drivers

Drivers are specific to an OS, calling them proprietary is redundant. It would be a great service to the community if Apple were to release Linux drivers or at least specs so someone else could write them. But if you want unix running on Mac hardware, the best experience is almost always going to be running MacOS.

(Until of course Apple stops shipping MacOS for the current hardware)


> Drivers are specific to an OS, calling them proprietary is redundant.

Not at all, proprietary means it's closed source; it has nothing to do with which OS it's for, e.g. Nvidia has proprietary drivers for more than just one OS.

This leads into the rest of your comment: Apple doesn't need to outright make Linux drivers as a service to the community (though that would be great); rather, they could simply share the needed information about their existing software or hardware, so the community could focus on writing a driver instead of reverse-engineering proprietary hardware and software.


> Not at all, proprietary means it's closed source

You are correct. For some reason I had a whole other idea of what you meant in your previous comment.


> Drivers are specific to an OS, calling them proprietary is redundant.

Software being proprietary is independent of whether it's specific to an OS. I can write FOSS apps specifically for macOS. Drivers can also be FOSS.

> But if you want unix running on Mac hardware, the best experience is almost always going to be running MacOS

Maybe. If the standards get completely opened, no proprietary components or drivers, it remains to be seen which one would be best.


> Software being proprietary is independent of whether it's specific to an OS.

I've never really associated "Proprietary" with Open versus Closed source.

> Maybe. If the standards get completely opened, no proprietary components or drivers, it remains to be seen which one would be best.

It's been a while since I've used desktop Linux regularly, so it's hard for me to say. Apple more or less designed this processor for running iOS and macOS though, so it's going to have an edge regardless of the quality of Linux drivers.


> I've never really associated "Proprietary" with Open versus Closed source.

But that's what it's about, more or less: https://en.wikipedia.org/wiki/Proprietary_software. While exact definitions of open, closed and proprietary software vary, no mainstream definition is related to "specific to an OS". Software can be single/multiplatform and it can be FOSS/proprietary, and those two axes are independent.

> Apple more or less designed this processor for running iOS & MacOS though so its going to have an edge regardless of quality of Linux drivers.

If Apple retains some "secret magic sauce", sure. However, on a level field, with full access to the hardware for everyone who asks, who can say?


> But that's what it's about, more or less

It makes sense. Just not really a term I would use that way so never occurred to me in that context.

> If Apple retains some "secret magic sauce", sure. However, on a level field, with full access to the hardware for everyone who asks, who can say?

That's not really what I was thinking.

If you build the CPU and the OS together, you can make optimizations which aren't available to more modular systems.


There’s no IBM PC/AT for ARM to fork from.

So there are duplicate and diverse attempts to design purpose built ARM computers.

Just as it was with PowerPC Macs, ThinkPads, Xbox 360, PS3, Wii, router appliances, etc., which were all built on PowerPC CPUs, "ARM platform PCs" are usually not designed to run kernel binaries compiled for any other machines.

There are recent attempts by SoC vendors to change that, too early to call widespread or matured just yet.

e: words


The Raspberry Pi is an ARM system, so anything that can run on that runs on ARM.


Thanks! I feel dumb now: I own an RPI2 running Linux and a C64 emulator, and I didn't make the connection when asking my question :P


There are even ARM laptops designed for GNU/Linux: https://en.wikipedia.org/wiki/Pine64#Notebook_computers.


Manjaro ARM is great. I use it daily with sway.

I haven’t tried Docker, but given that the Widevine hack used it, I presume all is OK on that front.


It seems like ARM created special instructions called NEVE for performant nested virtualization: https://www.cs.columbia.edu/~nieh/pubs/sosp2017_neve.pdf

It says it was added in ARMv8.4, and the M1 seems to be ARMv8.6, but I found no information on whether the M1 has these extensions.


Nesting is a key part of any system to make it truly general purpose. It seems crazy that for years we lived with non-nestable VM's.

It goes hand in hand with the fact that an emulated or virtualized system shouldn't be able to tell that it is emulated (if the emulator doesn't want it to know). It should run as if on real hardware. Inability to do nested virtualization reveals immediately that emulation is in play.


You just need access to precise timers to tell if you are virtualized. If you can measure how long a full TLB miss takes vs. just a cache miss, you should be able to tell how deeply your page tables are nested.


An emulator can emulate timers however it likes...


OK, but that means you can't have accurate wall-clock time inside your virtual machine. It would also make most games impossible, as all physics is ultimately tied to a clock. And of course a connection to the internet is impossible.


Nested virtualization, including the ARMv8.4 improvements, is present on the Apple M1.

However, Apple's virtualization solution doesn't currently support those.


https://github.com/evansm7/vftool provides a simple, CLI-based implementation for this.


Even though this is a simple VM based on macOS’s ARM virtualization framework, this combined with ssh and clever volume sharing is probably enough to make Docker on Mac work (but only with aarch64 Docker images) (export DOCKER_HOST=ssh://user@ip). I’m excited to give it a shot when I get my Air next week.


I remember someone running a data center with a bunch of Mac minis for their space and power efficiency when the Intel Mac mini came out. I guess someone might do it again once Linux can run natively and flawlessly.


Nice, though the fact that it was done so quickly raises the question of why Apple hadn't tried harder to get some (even hacky) version of this together before the release.


Perfect is the enemy of good.

Apple needed to get the platform to market, and what they've done so far is everything they needed to do. They haven't even updated the whole line yet, so just give it time and let them improve the basics first (and no, virtualization isn't "the basics").


I guess it depends on who you are. For me being able to run a Linux VM isn't "perfection", it's a basic requirement.


I don't think that an entry-level MacBook is the right computer for you. Chill.


Putting together something hacky isn't really Apple's way of doing things. (They occasionally release overly complex, buggy things though)

As with the x86 Mac, Apple is relying on third-party developers to release full VM solutions.


Apple showed Parallels working months ago; it just hasn't been released yet.


Running Linux on a Mac does nothing for Apple. Why would they care?


That's a bit like saying running Photoshop on a Mac does nothing for Apple. Why would they care?

They care because if Photoshop isn't available then people who use Photoshop won't buy Macs. If Linux doesn't run in a VM then developers won't buy Macs.


But that's really not accurate.

There are some developers who might care about Linux on a Mac. Most Linux zealots I know hate Apple and won't touch a Mac just because of ideology.

Photoshop is not at all in that category. You're comparing apples to dump trucks.


Clearly Linux is an important application to run for some Mac users since Apple took the time to show Debian running during their keynote (Photoshop only got a passing mention).


Apple going to ARM for the M1 is interesting; I wonder how it might shift the server market. As a developer who might want to target both, are there any decent ARM VPSes? Scaleway had ARM dedicated servers but seems to have quit offering them. AWS has some, but those seem to be larger servers. I wonder if there's anything smaller for someone to tinker with, without buying a new ARM MacBook. Maybe a Pi, but I was hoping for more cloud stuff to play with.


Good job. Linux (or Ubuntu) is the killer app for Apple.


That always makes me wonder how much they have contributed to Linux itself. I know they funded CUPS for a while, which I guess is something, but has Apple ever upstreamed any useful kernel code, device drivers, user-space libraries or apps?


They indirectly helped the space by lighting a fire under GCC's ass by funding clang development. Clang + LLVM is also useful in general for everyone, after all it's the backbone of code generation for languages like Rust.


They have upstreamed stuff to FreeBSD for a long time. I don't know how much or if it's used or not, but their kernel and a bunch of their other stuff is open source.

opensource.apple.com


They famously do open source abysmally. They just throw code over the wall, largely for pre-existing licensing reasons. The projects they "own" have mostly floundered once they reached a level Apple deemed good enough (CUPS).


Yeah, I know, you always show up to argue this.

It's not their obligation to make you happy. It's not their obligation to maintain everything for you.

They are under no licensing obligation to provide you anything. They don't use GPL code, they use mostly BSD licensed code, which they don't have to give you. But they do anyway.

They give you code and tools they've developed and if you want to use them you can. If you don't, you don't have to.


Virtualization is so dirty. I’ve never really come around to it. I’m waiting for the day when there’s a correction for how much we rely on it now and go back to a “pure” bare metal world.


[flagged]


The forum perhaps, but I read nothing of that sort in the 3 pages of the linked post.

And the post contains more information than the tweet. Forum users are sharing attachments, repo links and their experiences.


Agreed. Not sure what toxicity is being referred to.


This is really sad.


Care to elaborate?


Well, I'm not op, but I think it's sad that people are getting excited over hardware that can't run Linux natively running Linux virtualized. That's not a good thing. It's an okay, marginal thing in a bad situation. It's not exceptional or notable. The fact that it is a notable accomplishment to get Ubuntu running just means the virtualization is, amazingly, worse (more complex and troublesome) than normal virtualization.


I'm not sure what y'all are complaining about.

M1 computers literally just started being delivered last week; being able to do something on a brand-new platform is at the very least something to be pleased about. Given how good the system appears to be on all fronts, it's normal that people are excited.

It's not "exceptional" but it just shows that, even with a whole new CPU underneath, developers will soon be able to do everything they're used to.

Give it time and someone will run Linux without virtualization on it.


That is still not good. At least on Intel Macs you could boot Linux from USB natively. Now people cheer being able to virtualize OSes and wait for a hack that will allow them to boot other software on a platform where this is discouraged and actively prevented.


What's the point of these submissions? ARM has been around forever and yes, there's software for it.


This is an industry revolution happening before your eyes.

Excluding smartphones/tablets, x86 has been the de facto architecture of personal computers. Microsoft tried an ARM processor, but the tradeoffs were evident and it was considered half-baked, leading to poor reviews.

For quite some time the main limitation in our industry was clock speeds; nowadays the main limitation is temperature control.

Apple launched an ARM chipset that stays as cool as 50°C where Intel chips reach 95°C; this will forever change the industry of personal computing.

That's why people are raving about it.


Naive but genuine question: What are the major advantages of switching to an ARM processor? From Apple’s keynote, I gathered that there’s the benefit of instant-on, and allowing iPhone/iPad apps to just work in MacOS, but are there some other fundamental advantages to be had with ARM for Mac? What difference does it make for both power users and real consumers at the end of the day?


Another big one: not relying on 1 or 2 large companies for CPUs. Intel and especially AMD can’t afford to make (what they perceive to be) niche features on their CPUs. They have to design for the mass market, or if offering something more unusual or specialized they have to sell them at very high prices.

With ARM, any company is able to get an architecture license and go hog wild. The architecture (sticking to AArch64 since nobody cares about 32 bit ARM anymore for general purpose computing) is much cleaner and simpler with less cruft than x86. This makes it easier for a variety of companies to offer different solutions at different price, performance, and power points. Plus with the weight of Apple behind it there will surely be much more development in both the ARM software and hardware space as it applies to general purpose desktop computing.


Yeah, but Apple's stuff won't necessarily translate to generic ARM, they have their own extensions. Plus their own OSes.

I don't really see how this benefits Linux or Windows.


They have their own extensions, but having a mainstream desktop platform that runs on ARM will certainly be a push for compilers and software to have better support for ARM in general. For example, right now Docker ARM images are basically an afterthought and if something doesn't work in the ARM version a maintainer might just shrug their shoulders and view ARM as a niche use case. With a critical mass of ARM desktops that will no longer be the case.
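For what it's worth, the publishing side is already straightforward with buildx; the gap is maintainers actually testing the ARM variants. A hypothetical sketch, assuming a Docker install with buildx and push access to a registry ("example/app" is a placeholder image name):

```shell
# Create a builder that can emit multiple platforms, then build and push
# one manifest covering both x86-64 and ARM64 in a single step.
# "example/app" is a placeholder image name.
docker buildx create --use --name multiarch
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  -t example/app:latest \
  --push .

# A client on either architecture pulls the matching variant automatically.
docker pull example/app:latest
```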


The benefits don't really have that much to do with ARM specifically, more that Apple can tune their silicon designs for their very niche applications, while ARM Holdings designs and other silicon IP designers have to design much more generally as they're selling to a broader customer base integrating the cores into everything from smartphones to edge switches.

I'm not sure to what extent they've done so, but with their own silicon Apple has the freedom to simply not implement the unnecessary optional bits of ARM, like 32 bit support, and the optional extensions to the ISA not applicable to desktop class general purpose compute.

The saved space in transistor budget could allow them to save precious time and energy in performance critical spaces like the instruction decoder/fetcher.

No idea what they're actually doing (they're not exactly very open about their designs) but with their own processors they're free to optimize for a specific use case and a specific kernel. A specific example of this is how they've reduced the time spent in garbage collection by a factor of 3-5 IIRC, which has pretty dramatic ramifications for both performance and memory usage (as you can do GC quicker and more often)


Just a few clarifications:

- M1 is indeed 64 bit only

- Apple's software doesn't use GC - it uses reference counting which has indeed been speeded up dramatically.

The M1's (two) CPU designs are Apple's own and there is a lot of information out there on them e.g.

https://www.anandtech.com/show/16226/apple-silicon-m1-a14-de...


Thanks for the info :)

I counted ARC as a sort of GC, though I've heard it argued both that it is and that it isn't a type of GC, and I see both sides' points.

I love that article and the research behind it, but it's worth pointing out the details are sort of "reverse engineered" by probing XNU and running some diagnostic programs, not from Apple-supplied docs. I doubt Apple will ever directly document it the way IBM or Intel document their CPUs, though I hope I'm wrong!


It's a great piece and as you say it's a shame that we won't see more detailed info from Apple but I've seen a couple of interesting comments (leaks?) from Apple employees with more details (e.g. the tweets embedded in the article below). I suspect that we'll know an awful lot about the M1 by the time we've finished!

https://blog.metaobject.com/2020/11/m1-memory-and-performanc...


For Apple, deeper control of their vertically integrated stack. Which depending on who you ask, is a good or bad thing.

Other commenters here covered other benefits as well.


Longer battery life for individual consumers, cheaper electricity bills at scale


It was surely not a given that already compiled, non-trivial code would run as-is on Apple's M1. Here the whole Linux kernel and the CLI parts of the distribution are apparently running virtualized, using just a small number of API calls.

To my understanding, that's more than amazing, given what's already known about the M1 and Apple's newest OS, and it's surely newsworthy.


This. And now begins the decline of x86 and Intel.


AMD is making a bigger dent in Intel than Apple ever will. Apple doesn't even have 10% of laptop marketshare whereas AMD are currently at 20% for laptops and on a steady climb. Not to mention both the desktop and server markets where AMD is doing even better. So unless you think the whole PC world is going to switch to macOS or there exists another company capable of competing with AMD using ARM, we're not going to see a whole lot of change.


> or there exists another company capable of competing with AMD using ARM

There's a very long history in consumer computing where the industry as a whole tries something, executes badly, says "ah well, we tried" and kind of gives up on it, then Apple does it well, then the industry notices "oh, it's possible" and does it well.

Obvious examples: Smartphones: WinMob/Symbian, then years later iPhone, then almost immediately after Android (with WinMob and Symbian quietly dying)

Personal media players: Creative Nomad et al, then iPod, then lots of stuff.

Ultrabooks: A variety of unusable hideously expensive compact PC laptops going back to the 90s and mostly abandoned by the late noughties, then MacBook Air, then every PC manufacturer makes a MacBook Air clone.

ARM computers: Surface RT (ridiculously slow, no software support), modern ARM Surface (expensive, slow, poor software support), M1 (good), ???

Not to say it's inevitable, but once Apple shows it can be done, history suggests that it will be done. Possibly complicated by the chicken and egg problem here; Microsoft needs Qualcomm to make good laptop chips to invest much in ARM Windows, while Qualcomm needs Microsoft to make good ARM windows to invest much in good laptop chips.


This is a really interesting point. I wonder if there is another route though. If we assume that Intel (for process reasons) are out of it, then could AMD produce an 'x86 version' of the M1 on TSMC 5nm using Zen3 and successors which would help other firms compete with Apple.

I guess the extent to which M1 is great because its ARM isn't clear - there are some advantages in instruction decoding, having bigLITTLE etc - but the impact of all these together hasn't been quantified. Plus there is the deeper integration with Apple's software.


> then could AMD produce an 'x86 version' of the M1 on TSMC 5nm using Zen3 and successors

I mean, they could, but the A12Z in the devkits, produced on TSMC's 7nm process, also showed very impressive results. It's not clear how much gain the M1 is getting from 5nm, but I'd be _extremely_ skeptical that it's the whole story, especially given limited improvements between the directly comparable A13 (7nm) and A14 (5nm, some microarch changes).

> Plus there is the deeper integration with Apple's software.

This is relevant to some OS niceties (eg the disconcertingly instant wakeup and resolution shifts) but should have no bearing on, say, SPECInt2017.


Agree that they couldn't match - but if they got within say 15% on performance and battery life would that be good enough?

I think the software integration goes a bit further than wake-up etc e.g. they seem to have speeded up handling of reference counting through architectural changes (not sure I could explain how though and even Apple's software engineers seem a bit in the dark!)


> but if they got within say 15% on performance and battery life would that be good enough?

I mean, depends what you mean by 'good enough'. It would, obviously, be good for AMD; I don't see that it would make it competitive in perf per watt terms, though.

> e.g. they seem to have speeded up handling of reference counting through architectural changes

They have, but that wouldn't be relevant to (most) synthetic benchmarks, I wouldn't have thought. SpecInt2017 won't be doing much if any reference counting, for instance :) The M1's advantage in ObjC and Swift should be expected to be even greater than its advantage in 'normal' (ie C) code, but beyond refcounting microbenchmarks I don't think there are many tools to demonstrate that (and the vast bulk of performance-sensitive code running on MacOS is C/C++ anyway).


>Ultrabooks: A variety of unusable hideously expensive compact PC laptops going back to the 90s and mostly abandoned by the late noughties, then MacBook Air

PowerBook 100 before the Air. 12" PowerBook G4, too.


I don't think the implication was that Apple itself is killing Intel, but rather that ARM is - this is just the first step in what many see as a trend that will have Intel scrambling.


Broadly agree with a few caveats:

- If this gives Apple a major competitive advantage at the higher end (where presumably most of the margins are) then we may see some moves to try to compete with Arm based machines.

- A lot will depend on where MS takes ARM Windows - could MS do a competitive Rosetta style layer to give backwards compatibility?

- Much less lock-in in servers - eg AWS Gravitron and Ampere.


> If this gives Apple a major competitive advantage at the higher end

Whatever about the highest end (really have to wait for the M1X or whatever to evaluate that), it clearly gives Apple a _huge_ advantage for the $1000-ish ultrabook market; battery life. There's simply no thin-and-light to rival the M1 MacBook Air's battery life without very severe compromises. I think the market will have to respond to that, or just cede most of the ultrabook market (which is significant).

I doubt the industry really cares all that much about the 45W 15" laptop market; no-one buys them. Mid-range ultrabooks, though, are a good combination of margin and volume.


A tangential correction: it’s AWS Graviton (one “r”), not Gravitron (two “r”s). I used to misread it as Gravitron too initially.


Thanks - I think I've built muscle memory for Gravitron now which is very unhelpful!


We need an AMD+ARM combo. Intel fatality.


I ain't keeping up with hardware, but from what I can remember, the Surface Pro X that came out last year was received positively, packed enough power, and worked fine, but wasn't trending for some reason. Thus the real question is: given there have been ARM laptops since the mid-2010s, why is Apple once again taking credit for making a "revolution"?


Surface Pro X is a great hardware device but it is definitely not a great software experience. Same problems as the old Surface RT with the ARM chip: backward compat with existing software made the whole experience difficult for almost all folks. As for the SPX with Windows 10 on ARM, you can't run any x86 64-bit app, and existing x86 32-bit apps were incredibly slow and unreliable (look at Photoshop for an example). 64-bit emulation support is coming next year [1]. Many of their development tools are not native either; some folks are still waiting for MS to port Visual Studio to ARM [2], and .NET 5 was just shipped a week ago with some basic ARM support but no WPF for ARM until next year.

The difference here is Apple has a very fast x86 translation layer called Rosetta 2 that can run almost all Intel apps at 80% or so of native M1 performance. Most folks don't even notice the difference. Windows 10 on ARM (SPX) had to emulate at all times, with massive performance regressions.

What Apple has done is a complete transition that includes both hardware/software to ARM at the same time. All of their apps including development tools (Xcode) were native, huge number of macOS devs already shipped native support on day 1 or within the first week and Rosetta 2 made waiting for native support okay.

And that's just the software. The hardware? Super fast, long battery life, fanless in MBA, very quiet fan in MB and more. For a lot of people, it's a complete experience that worked out for them.

Note, not all developers are going to be ready for this; there's no Docker support, and some tooling will take a while to update, but they're going at a much faster pace than MS at this point.

[1]: https://www.theverge.com/2020/9/30/21495510/microsoft-window... [2]: https://docs.microsoft.com/en-us/visualstudio/install/visual....


I take it that it is a superior experience to what was available. My issue was with calling it a "revolution". From what you're saying, the most important improvements have been the emulation and that it is better supported. That said, are there any benchmarks on the performance hit during emulation for those and other devices? I searched but couldn't find anything.


Here's one from Surface Pro X with Microsoft's SQ2 chip that they customized with Qualcomm: https://www.windowscentral.com/surface-pro-x-sq2-review#benc...

Switch to Geekbench 4 for x86 vs. ARM for emulation on the same device.

One of the definitions of revolution is:

> a dramatic and wide-reaching change in the way something works or is organized or in people's ideas about it.

Apple didn't change from an x86 CPU to an ARM CPU; they went from a third-party Intel CPU to a complete Apple-designed/customized/built system on a chip (SoC) that includes the following: an Apple CPU (4 high-performance/4 high-efficiency cores), an Apple GPU using Apple's TBDR implementation, a Neural Engine (ML accelerators), ISP, media en/decoder, disk controller, Secure Enclave, a Unified Memory Architecture with memory dies on the same package that allows all of the cores to talk to the same memory space without copying back and forth, and so much more. They did all of this in a familiar package that runs faster, quieter and lasts much longer without major impact on the customer's experience.

A complete package that was executed well on the first try, and for some, this is considered to be a game-changer. Especially as it is changing many people's understanding of how a laptop should behave, specifically not having to worry about heat, battery life, or having too little memory, as Apple's UMA actually makes swapping to disk a non-issue; there's no difference in the overall experience.

SPX isn't a complete package either, it uses Microsoft customizations on top of Qualcomm's Snapdragon 8cx CPU but solely Qualcomm's GPU, Adreno. I don't think there's anything else that MS has customized for SPX.

> From what you're saying the most important improvements have been the emulation and that it is better supported.

That's just one part. The overall experience matters. You can invent a technology that changes the world, but if no one can use it well, it won't matter much until someone takes that technology and makes it useful. Sadly, it's the latter that matters the most.

It's the same thing with iPhone being called one of the most important technological revolutions. iPhone was definitely not the first nor the second touch screen mobile device but it was a complete package that sold it, just like these Macs.


Just to clarify, I don't see it as a revolution at the moment, but I do think my M1 MBA is a game-changer for me. The biggest change is that I haven't had to worry about battery and heat at all. My old rMBP wouldn't last me half a day (it heats up quickly within the hour) but this one can, and it has gone for two days before I had to charge it; even then, I only did it during work hours with the dock. I never had to use my charging cable outside of my desk with the dock. Plus, no fan either; God, my old rMBP would kick up the fan all the time. (Yes, I cleared out the fans every 6 months and had them cleaned by Apple as well.)


Thanks for the well-written replies. From the benchmark mentioned, the SPX has an emulated-to-native (x86/ARM) score ratio of ~60%, so the ~80% claimed by Apple is indeed a big step.


Because Apple have made the first one that has real world performance gains over mid-range x86 processors on native apps, built an x86 emulation layer that delivers close-to-native performance AND delivered a huge uplift in battery life over even lower powered x86 devices.

I think you're trying as hard as possible to convince yourself Apple haven't done anything special here, but that just isn't true.


>real world performance gains over mid-range x86 processors on native apps

Does it? As I said, I ain't keeping up with hardware news.

>I think you're trying as hard as possible to convince yourself Apple haven't done anything special here, but that just isn't true.

I didn't say that. I specifically referred to calling it a revolution. If what you mentioned above is true then it will be one.


In some benchmarks the M1 is faster than any other laptop CPU you can buy (and even when it loses it’s usually close and runs at significantly lower wattage). In many single-threaded benchmarks (web/Javascript stuff for example) it’s faster than any x86 CPU you can buy.


I haven't been keeping up either. How does the M1 compare price wise to all the other laptop CPUs you can buy? Aren't the newest processors always the fastest?


M1 isn't for sale except as part of an SOC in Apple hardware, so there's no way to compare prices directly.

And it isn't just that the Apple hardware is some of the fastest available, it's that it does so with far lower power requirements. That's the really significant piece to all this.


The newest passively cooled, super-thin notebook ARM chips are not usually faster than Intel's flagship chips, no.

But this is.


Three reasons:

- Quite slow relative to similarly priced Intel laptops.

- Quite expensive; started at $1500, so 1.5x the cost of the MacBook Air. And that $1500 didn't even get you a keyboard; if you wanted one of those it was an extra $150 or something.

- Very poor support for x86 software by Microsoft.

There was little convincing reason to buy one over an Intel equivalent. Realistic starting price was $1650 for something that was slower than Intel laptops for ARM software, was essentially unusable for x86 software, didn't have significant battery life advantages, and wasn't in a normal laptop form factor.

> Thus the real question is, though there are ARM laptops since mid-2010s why is Apple taking once again credit for making a "revolution"?

The usual. Apple took a thing that people had previously done, badly, and did it well. That is more or less Apple's thing; they rarely invent truly new markets, but they do take existing but irrelevant markets and make them relevant.


This is Hacker News. If getting Linux running for the first time on a completely new CPU/SoC, with just the help of the barebones API from Apple, isn't hacking, I don't know what is. When Parallels ships their ready-made product it will get much less exciting, but for now it is pretty interesting.


Anyone can do this easily; the API is super simple, but the resulting VM isn't all that useful. The title implies someone got it working without Apple's closed-source APIs. That is not the case.


It's the bread and butter of hacking new hardware to the point that it can boot Linux in some form. I think there's great prestige in knowing that everything can boot Linux eventually, as it means the hardware has a shelf life after the company owning the proprietary solution has thrown it away.

Given how Apple treats iPads and iPhones, I wouldn't be surprised if these M1 Macs have a much shorter usability period than Intel Macs.


iPhones are quite well supported in my experience. I have a 6 and an SE, both running the latest OS.

My wife uses an old MacBook Pro and an Air, pre-Retina but i5, running Debian stable, Firefox, Zoom and a VMware Horizon client for remote work. Stable, usable, just great. In my experience, Intel Macs weren't supported as long as iPads and iPhones, but they are for sure easier to upcycle to something else. They are just fancy PCs.


> I have a 6 and a se, both running the latest OS.

The iPhone 6 does not run the latest version of iOS (14) nor the previous version (13): https://osxdaily.com/2019/06/04/ios-13-compatible-iphone-ipa...


Do you mean the 6s? I don’t think the iPhone 6 can run iOS 14. Which is unfortunate but the support that phone had was still far better than almost every Android phone.


Indeed 6s plus. It was replaced under warranty because of the battery drain issue a while back.



