Docker for Mac M1 RC (docker.com)
445 points by mikkelam on March 19, 2021 | 339 comments



I wish Apple would give Docker some love like Microsoft does. Using Docker with WSL is a breeze, and it runs so much better than on macOS. As an added advantage you get access to Linux package managers: brew is good, but pacman and apt are so much better. Having a proper Linux distribution open in one window while I play 'Call of Duty' in another is one reason why I've moved to Windows again.


The only problem I've had with docker slowness (though it's been a very significant problem) is shared file system performance. It's made many use cases and tools that are slow to begin with (like webpack) basically unusable for a development workflow.

So the main thing Apple could do to show some love to docker is build out full apfs support in the Linux kernel. I have no clue how much work that entails but presumably it’s pretty massive, and it seems totally unlike them. Maybe one day they’ll have a come to Jesus moment like Microsoft and start caring about developers (non-iOS developers) but I don’t really see it happening.


If you found this unusable, you probably haven't had to work with the Virtualbox-backed version of Docker for Mac :-)

Have you considered storing the node_modules and running webpack on a tmpfs/volume (so that it doesn't have to go through the shared FS layers) and only copying the end product to a shared volume?
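For the tmpfs variant, a minimal sketch (image name and size are just placeholders; compose has an equivalent tmpfs: key on the service):

  # node_modules starts empty in the tmpfs, so npm install has to run inside the container
  docker run --tmpfs /usr/src/app/node_modules:size=1g my-web-image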


Or maybe we can realize that npm and webpack are both trash. And we should stop embracing them and move web dev away from this mess.

Don't get me wrong. They were a good workaround years ago. But I hope we can do better.


Creating and mounting a loop-volume within the HGFS and then running NPM in there would probably also be helpful. It certainly is when wrangling a directory with lots of tiny inodes over SMB. (It’s also what Apple themselves do for running Time Machine over SMB: they create a sparsebundle image on the remote and mount it locally, and write to that.)

(The efficient thing about this class of solutions, if you’re wondering, is that the client mounting the loop-image ends up owning + managing the loop-image’s internal filesystem’s metadata within its own local disk cache, such that it can coalesce filesystem metadata writes and only push a new copy of the disk-image blocks backing the filesystem metadata after a potentially-huge number of changes. With a network/host-guest filesystem, meanwhile, every filesystem metadata change must become its own synchronous message to the host, to be pushed to the host’s filesystem driver for linearization, so it can succeed or fail relative to other things going on within the host.)
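A rough sketch of the loop-volume idea from inside the guest, assuming the shared mount is at /shared and the app lives at /usr/src/app:

  truncate -s 4G /shared/node_modules.img
  mkfs.ext4 -F /shared/node_modules.img    # -F because it's a regular file, not a block device
  mkdir -p /usr/src/app/node_modules
  mount -o loop /shared/node_modules.img /usr/src/app/node_modules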


I used to run my own Xhyve based VM just to avoid the brutal VirtualBox Docker system on macOS. Man that’s some bad memories.

The newer Docker for Mac is better, but file system perf still could be massively improved. Big node_module directories can still cause pain even today


> Big node_module directories can still cause pain

is there any other kind?


Bigger node_modules directories


Yep, using Webpack in Docker on a Mac is essentially unusable. I see around a 5-10x slowdown in Webpack builds. Using docker-sync seems to help the CPU out a bit, so my MacBook Pro doesn't do the jet engine space heater routine quite as much, but the performance is still nearly unusable.

https://docker-sync.readthedocs.io/en/latest/


webpack can be made usable relatively easily if you do not mount node modules over the shared file system. I've been doing this for quite a while with a volumes declaration in docker-compose that looks like this (running nextjs, assuming /usr/src/app is where your dockerfile has your node stuff):

(on service definition):

  volumes:
    - .:/usr/src/app:cached
    - node_modules:/usr/src/app/node_modules/
    - next_artifacts:/usr/src/app/.next/

and then in the top level volumes key defining node_modules and next_artifacts as blank/default.
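In compose syntax that's just (a sketch; the names only have to match the mounts above):

  volumes:
    node_modules:
    next_artifacts: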

That means I mount everything except node modules and the build artifacts, so the shared filesystem does a LOT less work trying to sync stuff. The downside, of course, is that I need to run npm commands both inside the container and outside if I want them in my IDE. A fair trade for decent performance. That setup is still not as fast as native but definitely usable, and does not send my machine into space heater mode much more than normal usage.


Interesting. I have always had the same docker-compose setup with a separate node_modules volume inside Docker, but I've always still had the jet engine space heater issue.


I've been able to improve performance in docker-sync by adding high-churn folders that I don't care about to sync_excludes


Virtualbox-backed Docker on Windows was my first introduction to Docker - it literally put me off touching Docker for 1-2 years!

After eventually falling in love with it on Windows and Linux, I later tried it on MacOS... oh my. Not fun!


The fact you need to partition off RAM and CPUs for Docker on the Mac is the killer for me.


Don’t you have to do that under windows too given WSL2 also runs as a VM?


No, not in WSL2. (You can, but there is no need for that.) (BTW, disk storage can be too small, but that is the only problem that can occur: https://docs.microsoft.com/de-de/windows/wsl/compare-version...)


Actually that's done for you, whether you control it or not. WSL2 is essentially a VM with better host integration.


Actually, under Hyper-V, Windows is a VM as well; it's just not visible. It's the reason why the Hyper-V hypervisor platform was needed.


The shared filesystems are also quite buggy.

I was developing a database as a personal project that involves mmapping a file, and the Docker shared filesystem had some peculiar behavior that was different from how normal filesystems work.


That _used_ to be a problem for me, until I found Mutagen. It is _surprisingly_ easy to set up, and my webpack builds in Docker are only about 5-10% slower than running natively.

https://nodewood.com/blog/how-to-speed-up-docker-on-macos/


Sounds like you like Linux but run Windows ;) yeah, Outlook, Word


Apple (as you may know) did in fact have a spell as open-systemsy Unix-loving guys, starting from the NeXT acquisition but ending no later than the release of the iPhone. The usual pattern, basically: https://news.ycombinator.com/item?id=7525256 .


Ah, yes, the Unixness of the Lisa, the LCII, the Quadra, the System 7, Mac OS 8.1, Mac OS 9.2...

So much Unixness on the Mac OS boot ROM, the Pascal API, the lack of memory protection, and so on...


Please reread my comment.


> Brew is good, but Linux package managers like pacman and apt are so much better

I used starting fresh on the M1 as an excuse to give MacPorts a go, and I like it much better than Homebrew. There are some smaller packages that aren't on MP, but all the big stuff is there, and to me it feels much more like a Linux package manager.


I’ve always preferred MacPorts to brew. I think you’ve nailed it, it just feels much more like the Linux package managers I’m used to. Homebrew has definitely won the popularity contest though.


IIRC my issue with MacPorts was that it insisted on recompiling the world from its ecosystem as dependencies because it kept everything super-stable, whereas brew would attempt to use system ones if they were viable. It was more or less FreeBSD Ports vs. Debian apt-get as the model at that point, and MacPorts was (predictably) FreeBSD-like.

Over time I'm not sure brew is any better now, though. I have unlinked versions of gcc, python, etc., etc. under /usr/local that are there just to handle brew packages that listed them as pinned dependencies. It's nice that brew doesn't expose them to CLI unless I want it to do so, but it's not less complex.

Assuming MacPorts has a good "bottle" type concept of precompiled packages now too (haven't looked for years) it's probably about the same as brew now, just more stable. If they still compile from source every time a la BSD, that would be my main sticking point.


MacPorts has binary packages for most things these days.


I used MacPorts ages ago, and honestly thought it went away, which is why I switched to Brew, which I've never gotten comfortable with.


Could you expand on what is better about MacPorts over brew? I quickly got onto the MacPorts page and saw first off that sudo is needed for updating itself, and second that I have to select the right version for my OS. Brew is a single install that does not require sudo to update, so there are a few things that already make Brew more appealing for me. What does MacPorts do better?


This is the article that prompted me to give it a try, and summarises my feelings better than I could:

https://saagarjha.com/blog/2019/04/26/thoughts-on-macos-pack...


Also, the MacPorts manual is beautiful, as is the Portfile format.


I'd like to be done with Homebrew. (I asked you to install pv, not thrash the world.)

Anyone tried Nix? I'm trying it soon. If it doesn't stick, yep, back to MacPorts like it's 2007.


Yes, Nix is totally viable. Steep learning curve, but many benefits.


I switched to Nix a few years ago and have had no regrets.


> Using Docker with WSL is a breeze

Have there been some updates recently? About a year ago we were trying to use Docker on a windows host at work, and dealing with things like file system paths was a nightmare


Several months ago. A year is a long time for the microsoft bleeding edge these days. You can come back to something a year later and it is completely revised in many respects. Bugs gone, new ones created and discovered, whole subsystems rewritten etc...


Enable WSL2, install Docker Desktop in Windows, and it just works. You don't even need to install docker in WSL; it's done automatically and kept updated by Docker Desktop.
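On a stock Windows 10 build that boils down to roughly this (feature names are from Microsoft's WSL install docs; a reboot is typically needed before setting the default version):

  dism.exe /online /enable-feature /featurename:Microsoft-Windows-Subsystem-Linux /all /norestart
  dism.exe /online /enable-feature /featurename:VirtualMachinePlatform /all /norestart
  wsl --set-default-version 2

Then install a distro from the Store and Docker Desktop with the WSL 2 backend enabled.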


You might also need to update some settings in your BIOS to enable the hypervisor, but once you get it working it's a breeze to use.

WSL is really remarkable.


WSL is Linux. They run the unmodified Linux kernel in a pico-process and docker runs on top of that.

Technically there shouldn’t be file system issues right?


It's even more magical than that: the docker containers run inside a Linux kernel that is not the one running your WSL2 environment, but the two are in cahoots such that the filesystem is shared and hence container mounts are as fast as they would be on a bare metal Linux box (e.g. -v $PWD:/workdir).


WSL1 used a syscall translation layer in a "pico process", WSL2 simply uses Hyper-V


WSL2 is now just a regular VM


There are some differences in the software though.


True, but it uses a Linux kernel over hyper-v rather than a wrapper over the win32 api like WSL1 (from what I understand!)


Yes, but it is a slightly modified distro which causes some issues

And not all are minor. Not being able to run LXC is a deal breaker for me.


You could with a little work in June 2019, not sure what the picture is like now but it might be possible. I've used nix quite a bit on WSL2 which was surprisingly good.

https://blog.simos.info/how-to-run-lxd-containers-in-wsl2/


What makes you think WSL2 isn’t able to run LXC?


WSL2 is a custom Microsoft kernel


If you can't do what you want in it, please open a ticket on https://github.com/Microsoft/WSL - there is also a repo for the kernel, but AFAIK technical feedback is best sent through this one.

(disclaimer: not directly involved with WSL, just trying to help)


How does it work with local directories? Is there a separate file system for WSL, or does the linux kernel translate windows fs paths or something?


It's a separate filesystem, but from WSL you can access Windows files in mounted drives via the `/mnt/<drive letter>` directory.


And vice versa: linux home dir is at something like \\wsl$\ubuntu\home\username


Yes, there is a separate filesystem for WSL, but you can still traverse the Windows filesystem as well, including mounted external drives. You can also browse the Linux filesystem from Windows Explorer as expected.

https://docs.microsoft.com/en-us/windows/wsl/compare-version...

https://devblogs.microsoft.com/commandline/access-linux-file...

https://community.openbiox.org/d/72-windows-subsystem-for-li...


It's the same filesystem with some kind of translation/adapter. I love it because it makes supporting our multi-platform OSS projects easy.

You do have occasional issues regarding line endings; that's the only negative I've had so far.

EDIT: H12 is more accurate. The Linux OS is separate, but the Windows FS is mounted to a logical place and "just works".


Yes line endings for configuration files was one of the issues we ran into


I do have locking issues with some sqlite files. I guess it has to do with the fact that the filesystems are shared. I wish I had time to investigate more.


It’s not an unmodified kernel.


It is basically an unmodified kernel. Like in any purpose-built usage of Linux, it is compiled with specific options and patches for the environment it is made to run in, but you can replace it with your own kernel image if you want.


It's not an unmodified kernel.


Can you explain what modifications you believe it has that typical installations of Linux don't?


Don’t try to rephrase what I said to put words in my mouth. I stated a fact, which you can verify here if you please. https://github.com/microsoft/WSL2-Linux-Kernel Please let us know what you find.


I am not putting words in your mouth. I am trying to understand what your concern is. Every major vendor of Linux distributions maintains their own source tree with patches specific to their usage. That is the way Linux is normally used. So what?

I don't see how it's useful to say that it's a "modified kernel" due to it being packaged for a specific application in a way that is necessary to use it.


Huh. I've seriously used... I dunno, maybe five Linux package managers, on workstations and servers alike, plus poked at a couple others, and the only one I'd almost rather use day-to-day than Brew is Portage, but even that, probably not.

I like having the system strictly separate from my crap, and I think the UI is fairly good. The variety of packages available out-of-the-box is outstanding. I miss it when I'm on Linux, now, in a workstation-not-server context (yes, I know, there's LinuxBrew, but the package set is much smaller and less well-maintained). I started on MacPorts but got sick of it borking itself every few months such that it was faster to nuke the directory and reinstall everything than to figure out what it'd screwed up this time (granted, that was about a decade ago, maybe it's great now).

Brew gives off all kinds of signals of being something I'd hate (cutesy; a system tool written in Ruby; breaks with norms) but I like it a ton.


Brew consistently upgrades things that shouldn't need to be upgraded. I've spent more hours than I care to count fixing broken Postgres installations which regularly get major version bumps as a result of a completely unrelated install or upgrade.

I've never used linux as a dev machine, but in general apt seems much more reasonable in this regard. My frustration with Brew is pushing me to seriously consider a linux machine, so if anybody has counterpoints here I'm definitely interested in hearing about your experiences.

I don't care that Brew "breaks with norms" I just care that it breaks my shit.


I think I must use it differently from how other people do. I use it to install tools I'll use directly, so I practically always want those to be at latest (or otherwise don't really care what version they are). If I need something at a particular version I'd use a container, or install it manually in some isolated folder, or something like that, since odds are if I need something at a particular version I'm going to need it at multiple particular versions and to be able to re-create the installation on other environments.

So I end up with:

System -> Apple-managed

My tools, as in programs I personally use -> pretty much entirely Brew, in fact I think on my current workstation this category is 100% brew-installed

Dependencies of anything I'm working on -> some language-specific version manager (which itself may be brew-managed, actually) plus containers or VMs with scripted installs, probably.

On linux my experience is typically more like:

System -> package manager

My tools -> package manager, plus some sketchy extra repos that I hate to add but do anyway because I don't want to screw with manually updating things, plus several things installed manually, plus a bunch of things on older versions than I'd like but not worth the trouble/risk of finding some way to upgrade without it being a PITA.

Dependencies of anything I'm working on -> some language-specific version manager (almost certainly not available in the distro's official repos) plus containers or VMs with scripted installs, probably.

So for my use, Brew cleans up the "My Tools" workflow very nicely compared with Linux, excepting, kind of, my days back on Portage/Gentoo, which of course has its own problems.


I don't quite understand your argument. I use my distros package manager in a very similar way to how you describe brew for my tools, and also use containers in the contexts you describe.

Only difference I see is that my containers are running natively (eg no VM in the background) and that I've not had any random errors from my package manager in years. Not sure what brew is like these days but last I used it the experience felt like a half baked apt/yum to me (3+ years ago though)


GP and a sibling comment (quote: "Something as simple as "brew upgrade youtube-dl" could end upgrading dozens of _unrelated_ packages, such as postgres--which ends up breaking my local development environments.") seem to describe using Brew to manage dependencies of applications they are developing, which I do not do, and wouldn't do with a Linux workstation's package manager either, so I never have those problems. That's what I was pointing out.

For me, Brew is for managing my personal software I use that doesn't come from Apple. Project dependencies, including the version of the compiler or interpreter for the language you're writing, don't belong brew-managed in most cases, which seems to be what's tripping people up when they try to use it for that.

Yes, containers run better on Linux because they're native. No quibble there. I just find I'm much, much better able to cleanly manage my personal software (not project dependencies, which, again, I wouldn't try to manage with my workstation's Linux package manager, either) with Brew on macOS than in any Linux distro I've used. 99% of what I ever want to run (outside the base OS, and project dependencies) is on there, available at a single "brew install", after I do nothing more than install Brew itself, versus 50-95% on Linux (depending on the distro), where I find myself adding all kinds of extra repos and installing one-offs a variety of ways just to get to a baseline level of having all the stuff I need at new-enough versions. And the interface is above-average, in my opinion (but again, Portage/Emerge is my favorite package manager on Linux and maybe the only one aside from Void's that I've found pleasant to use, so I may just be weird)


I've had this upgrade problem recently as well, with specifically your example. Even though I like brew much more than Linux package managers, this seems like a massive issue. Why the hell is this a thing? Has nobody fixed it?


My example of what? I’m not sure what you mean. As far as I can tell my example is that I only use Brew to install software for which the question “which version do I want?” may always be answered with “the latest” or “I don’t care”, so I don’t have an upgrade problem.

Anything I need at a particular version gets installed some other way, unlike how I operate on Linux, where anything I need at a particular version plus a bunch of other stuff all gets installed outside the package manager (talking workstation usage specifically)


Indeed. I've run into this multiple times. Something as simple as "brew upgrade youtube-dl" could end up upgrading dozens of _unrelated_ packages, such as postgres--which ends up breaking my local development environments. Perhaps that's the wrong command to use, but either way, it's still frustrating when it happens.


Why the hell is this happening? Happens for me too.


Why don't the brew developers change this/add an option/make it opt in?


If having a specific version of a tool is important for your work, it’s probably best to install that tool separate from homebrew and manage that install separately. The Postgres docs seem to recommend the EDB installer.

https://www.postgresql.org/download/macosx/

Likewise Python and other programming tools where running a specific version is important are best managed outside homebrew.


Yes, Ubuntu/Debian are very conservative with their updates. Especially if you're on an LTS release, so you're unlikely to have something break from an upgrade like that. Brew tends to be very up-to-date.


this right here. When I instruct brew to install something, I didn't instruct it to upgrade everything else. And I didn't instruct it to do a 30 day clean up.

apt and yum don't go doing things you didn't tell them to do (usually)


Yeah, let's talk about how brew still can't just remove the leftover dependencies or uninstall a package with its otherwise unused dependencies without some extra command or hacks.


‘brew autoremove’ does this.


Is this new? I swear I searched as recently as last week and the only thing that came up was "rmtree" and the "brew bundle dump && brew bundle --force cleanup" workarounds.


It’s not “last month new” for sure. It was in the list of commands in the manpage last week.


brew might have a discoverability challenge. I've been installing an rmtree helper to do this for years and didn't even know it was an option to autoremove them.


As a casual user I like Brew. As a long time mac developer and private cask maintainer(no longer thankfully), I wish there was something close to what the Linux ecosystem has.


As a casual user (install something once every 6 months), I can't even keep brew's insane terminology straight. If I was a REGULAR casual user, I'm sure it would be fine.


Have you tried MacPorts?


I tried it, but everything required sudo and that wasn't an option at the time. I primarily develop in Windows (on a Mac) now because Unity runs much better on Windows than macOS.


You can install MacPorts into a local directory without admin privileges.


I'm 100% with you. Moved from MacOS to Windows because of WSL1.

Tangent warning:

When WSL2 came out I ended up moving away from WSL and moved to running my development environment inside a VMWare VM and connecting to it via VSCode remote ssh development feature (the same one used for WSL). This is essentially a manual version of WSL2, something people have been doing for ages.

Now, whether I develop from Windows or MacOS, I am always connecting to my Linux VM. If I need to run docker, it's running in that VM.

I got to the point now where I forward my SSH port from my public IP to my desktop PC at home. When I am away from home (e.g. working in the office with my laptop), I connect to my desktop (via ssh on my public IP) and due to the seamless integration VSCode has for remote development, It feels like everything is running on my local device.

The PC is of course handling compilation, intellisense and all that good stuff while my laptop is essentially a thin client. The laptop now runs faster, has a longer battery life, doesn't spin up the fans and also doesn't need to be upgraded.

My home PC, electricity and internet bills are also tax deductible because I use them for work purposes.


I'm definitely interested in WSL for developing, but last time I tried it a while back (WSL2 just came out I think), there was basic functionality missing e.g. I couldn't write and run my own SystemD services. Has the situation improved since then?

When you say "proper Linux distribution", I assume that's still CLI only? Do you develop with e.g. emacs, vim on WSL, or do you have some IDE with remote running and debugging into WSL? Or have a missed a trick and in fact X/Wayland applications can be run on WSL?


> still CLI only?

With openssh in windows, it's been quite easy to run an x server in windows, and use ssh with x forward for gui access.

But rdp feels more native to windows, and both x and Wayland have rdp server backends - so you can do things like: https://www.nextofwindows.com/how-to-enable-wsl2-ubuntu-gui-...

But apparently ms is working on a Wayland compositor allowing directly running gui apps - I don't think it's quite there yet:

https://www.phoronix.com/scan.php?page=news_item&px=Microsof...

https://github.com/Microsoft/WSL/issues/938#issuecomment-763...

Somewhat related: https://ltsp.org/ I'm not sure about the state of a non-x rdp server for thin clients though.


Thanks for the links, some interesting stuff there! I think my end goal is to have a contained development environment whilst not being forced to use specific tools (i.e. VSCode remote debugging, eugh) and not sacrificing compilation times by running in a VM that's too slow.

From the look of it, if I were to set up the WSL2 Ubuntu GUI with RDP, that'd tick all the boxes, right? And as a bonus I'd be able to access files in WSL2 from Windows.

Any idea how WSL2 interacts with VPNs? I've seen some mixed reports around the web but this is also a must-have for me, if I can't use a VPN in my dev environment it's game over.


I have it working with vpn. We have vpn at work and it is forced tunneling mode. It modifies routes on my laptop. Since wsl2 is a vm on a different network, the vpn client does not know about this network and doesn't know how to route traffic. If you follow these instructions it solves the problem: https://github.com/sakai135/wsl-vpnkit


YMMV but for me personally with Dell Sonicwall it basically does not work (even workarounds that mess with MTU sizes are unstable). In my case the behavior is that general networking speed drops massively as soon as both VPN and WSL2 are active. I had to revert back to virtualbox, which does not have such problems and as a bonus allows me to run proper systemd (another thing missing in wsl)


Interesting, that might be a dealbreaker then. Have you found significant slowdowns in working in a VM vs native, both in terms of compilation and day-to-day work, or has it been negligible/unnoticeable?

P.S. thanks for making an account just for this reply :)


Needed to create one for a long time ;) General cpu-bound tasks seem fine in a vm; i/o is definitely slower, but much to my surprise linux's ext4 filesystem is so much more efficient (?) than ntfs for small files that even inside a vm (vbox or wsl) git actions are noticeably quicker than on native Windows! I'd prefer wsl2 for convenience over vbox if I could, but my VPN is indeed the dealbreaker and corp. won't support any other VPN software.


> sacrificing compilation times by running in a VM that's too slow

Fwiw wsl2 runs in a vm now (hyper-v). AFAIK it's in order to increase fs performance "inside" Linux (so faster compiles if compiling in wsl).

It's also possible to use cifs/samba shares - but I don't know what you're compiling? Chrome/Firefox sized c++ projects?

Might be worth it to try with a tmpfs/ramdisk either way?


> there was basic functionality missing e.g. I couldn't write and run my own SystemD services. Has the situation improved since then?

No. WSL2 uses its own, minimal (proprietary) init, which basically launches the default shell for the configured user and that's it. No service management in sight, and no equivalent of systemd's user scope or user session either.


You can boot the WSL vm with systemd but it requires some fiddling.

That said, I've been playing around more and more with WSL2 on a secondary machine as a current primary Fedora user (for about 10 years now) and I really haven't found a good reason why I would need systemd in the WSL2 VM vs the custom init.


A major use case I have for WSL2 requires systemd, full stop (it's critical for the package manager I use.) It's pretty painful not having it available, and this is really something that needs improvement IMO. To be fair, this could be fixed in the software too (to some extent, not fully but working) which would also be a solution. But I suspect this isn't the only problem people encounter. There's a lot people use modern Linux for.

In practice that hasn't been a roadblock for me, because VMWare Workstation 15.5 finally supports running on top of Hyper-V, so I can have both working at once. Moving everything to a single hypervisor API has some nice benefits...


One possible solution to this is removing systemd from all distros. That way you won't need it for your package manager anymore.


Yeah, and let's continue with removing Win32 from Windows. That way other folks do not need to bother with Wine anymore.

/s, obviously.


A more realistic option is to abandon WSL as yet more crap from Microsoft.


Please take the crybaby shit to Slashdot or something, man. It's not a good look.


It is like Excel - most people use at most 20% of the functionality, but any competitor that implements only 20% is not usable.

Most WSL2 users use it as a shell to run the occasional ELF/x64 binary in a console; for services, they would use Docker Desktop for Windows. Anything beyond that and you will quickly find out that it is not really a standard linux distribution.


It's not full-on systemd, but you can run init scripts on WSL startup as of Insider Build 21286.

https://blogs.windows.com/windows-insider/2021/01/06/announc...


There's definitely a few quirks. My biggest annoyance right now is https://github.com/microsoft/WSL/issues/5762 .

Note this only impacts calling the Linux go toolchain from the Windows side (via the /$wsl/ path). It's not an issue inside WSL... but it comes up when you want to, say, configure Windows IntelliJ/GoLand to use the Go compiler hosted inside of the WSL VM.

> When you say "proper Linux distribution", I assume that's still CLI only? Do you develop with e.g. emacs, vim on WSL, or do you have some IDE with remote running and debugging into WSL? Or have a missed a trick and in fact X/Wayland applications can be run on WSL?

You can run X apps just fine in WSL2 as long as you have a display server running on the Windows side. I use X410. Important to note, Microsoft is working on native Wayland support long term. For what it is worth, I run IntelliJ/GoLand from inside of WSL and use X410 to render the GUI. This works great.


Your set-up sounds exactly what I'm after - from my comment to a different reply:

> I think my end goal is to have a contained development environment whilst not being forced to use specific tools (i.e. VSCode remote debugging, eugh) and not sacrificing compilation times by running in a VM that's too slow.

Are there any other gotchas to working like this? As it happens I'm a go dev using GoLand so your go-specific issues are of interest to me. I'm not too bothered about not being able to use Windows GoLand as I'd be just using it from WSL2 anyway, but I'd be interested to know if there are any other pain points. Any issues with VPNs, if you're using them?


I haven’t tried a VPN yet, but I have heard of some possible quirks due to how the networking is handled between Windows and Linux in HyperV

Edit: I'm switching from Fedora to a WSL setup for work in the next week. I'll report on issues I encounter in that migration. I suspect VPN will come up because one of the key reasons I'm switching off Fedora to Windows+WSL is due to corporate VPN requirements.


This is the fix for vpn and wsl2; I've been running this for 6 months with no issues. https://github.com/sakai135/wsl-vpnkit


That'd be super great if you could let me know how it goes, I'm tempted to make the same jump. Thanks!


I'll reply to this comment when I do know


I wish it was possible to use USB devices directly inside WSL e.g. for embedded development. I believe there are "tricks" like using USB over LAN, but I have not tried it yet. Once they make possible to use USB devices natively I'll be over the moon.


> I assume that's still CLI only?

If you install an X server on Windows you can run graphical WSL apps. It runs really well too. Years ago I used to run Sublime Text straight from within WSL.

I've been using WSL / WSL 2 for a few years now for full time web development. A while back I made a video going over all the tools I use and how I have Docker, WSL 2 and a bunch of other things configured at: https://nickjanetakis.com/blog/a-linux-dev-environment-on-wi...


I run VcXsrv X-Window on Windows (Multiple windows mode) and then from WSL2 Ubuntu I've run e.g. "terminator" and Jetbrains CLion C++/Rust GUI, Linux Version "~/bin/clion.sh &". It works very well.


Yep, that's what I cover using in the video linked above too. VcXsrv is so good. Nowadays I mainly use it to share my clipboard between WSL and Windows so I can use native Linux tools that copy to the clipboard without resorting to any hacks.


>When you say "proper Linux distribution", I assume that's still CLI only? Do you develop with e.g. emacs, vim on WSL, or do you have some IDE with remote running and debugging into WSL?

VScode directly connects to WSL. You can open folders/files with 'code <folder>' from the WSL terminal, save, run whatever from VScode etc.

As for Docker, you can also use the GUI from within Windows that automatically connects to WSL but admittedly I almost never use the GUI.


I did try VSCode remote running and debugging, but found it fiddly to work with and half-baked. Plus, I was sorely missing IntelliJ, so I quickly stopped trying it out - maybe I'd get used to it eventually, but it didn't seem worth the onboarding cost. The RDP and X410 solutions suggested in the replies to GP seem solid though.


Re: CLI, there has been ongoing work on graphics support for over a year now: https://lkml.org/lkml/2020/5/19/742


There's this for running systemd inside WSL2: https://github.com/arkane-systems/genie


Where do you see the limitations of Brew? Despite it being a little wonky on beta releases of a new macOS iteration it works fine for me.


As others mentioned (it is slow, has problems with multiple users, cannot pin versions), it is also missing features that Linux distributions have. For example, you cannot have Provides:-style alternatives as rpm/dpkg do; you must resolve dependencies with packages exactly as they are provided by upstream.

For example, when postgresql 12 was released, it took some months to appear in brew. Meanwhile you could not use alternate taps to resolve dependencies; if some package required postgres, it had to be the original one.


Why would you install postgres through Brew though?

Those times are way gone, that's the purpose of containers.

I've been using Brew for a while to just install "core" packages like python, curl, wget and such, and everything else like postgres, nginx, whatever goes into a container.


Because if you need to ingest some data, it is much slower with Docker Desktop for Mac :/

Also, I have some tools installed with brew, that have postgres as dependency (e.g. pgloader or mapnik).


> Those times are way gone, that's the purpose of containers.

Please elaborate on the claim that "running a SQL database" is the purpose of containers.


To be fair to GP, running sql database inside a container does have benefits for development, and in many situations, for deployment too.

However, if I do some exploratory experiments, where I don't care about repeatability and where I use other local tools (like the mentioned mapnik, or jupyter), having it in a container is a needless complication.


Why wouldn’t you want to run a database under namespaces and cgroups from a dependency-bundled live archive file tree?

By and large, there’s no such thing as a container, there’s just sprinkles of housekeeping magic. To wit, Docker implemented in around 100 lines of bash:

https://github.com/p8952/bocker

Problems come when we think that today’s containers manage to actually contain anything, bring any security guarantees, or do much else than just slightly-more-successfully jump start a configurable bundle of dependencies.


I think you're being unkind to containers. Yes it's easy to say that "containers aren't a thing" and then list all the little tools that are used to implement them. That doesn't make them not real any more than any other abstraction.

Why wouldn't you want to run a database under VT-x, with random emulated hardware and a dependency-bundled disk image? By and large there's no such thing as a VM, there's just sprinkles of housekeeping magic?

Containers as specced and implemented do come with security guarantees. And if they fail to meet them it's a bug.


Not going to wade into the "should" or "shouldn't" of this, but I have used postgres-via-docker for ... few years now, and it is a DREAM. And I never have to worry about versions or dependencies (at least I haven't yet).
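For anyone curious, the day-to-day invocation is roughly this (official postgres image; the password, volume and container names are placeholders):

  docker run -d --name pg \
    -e POSTGRES_PASSWORD=devpassword \
    -p 5432:5432 \
    -v pgdata:/var/lib/postgresql/data \
    postgres:13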


I've installed Postgres on my Mac with Homebrew, Docker, and https://postgresapp.com. There are arguments for each of them. On the pro side:

- Homebrew is a general purpose package manager, and Postgres is a package you might want managed.

- If you're using Docker/Docker Compose for a project anyway, that's the obvious way to do it.

- Postgres.app is a specialized tool just for managing Postgres installs, so it's hard to beat if that's what you need.

Some thoughts on the tradeoffs though:

- Homebrew really doesn't like the idea of "versions". It wants everything to be on the latest. That can be fine if you just need a tool locally, but if you want dev and prod to match, it is a pain in the ass.

- Docker isn't really very good at persistence. That's probably not a problem for local development, but you should be aware of it. Running it on a Mac introduces speed and memory issues you wouldn't otherwise have. And now obviously there's the M1 problem.

- Postgres.app is another thing to install. If you just need Postgres for one particular project you might not know about it or want to deal with installing something new.



On Apple Silicon it now installs to /opt/homebrew, a change they’ve been wanting to do for a while.


Does /opt/homebrew still end up in root’s PATH, I wonder? That has the same issue that /usr/local has I think. Letting users mess with root’s environment basically means there is no real distinction between root and non-root.


No, it doesn’t.


It’s not really insecure. See: https://security.stackexchange.com/q/187502


Last time I used a Mac was around, maybe 3 years ago, so this certainly may have improved.

I had a lot of issues with Brew, but the biggest one was how slow it was. Upgrading all packages on my Mac used to take hours.


Still quite slow. As far as I know, it's just a big heap of small ruby scripts that invoke each other, and ruby isn't known for blazing-fast performance. It also uses git internally, and quite heavily, so that probably adds some overhead as well.


It’s been a little while since I relied on it heavily, but you still can’t install/pin specific versions, right? That’s a huge limitation if you want to do any reliable development on macOS directly without using a vm or docker.

It’s also just so slow to update if it’s been more than like an hour since you last updated, the way it uses one big git repo under the hood is just chaos.


You can pin versions in brew (brew pin <package>) to prevent upgrades.

You can install specific versions, but it requires some gitfu - you need to uninstall, find the brew commit where the package is at the version you want, then install from that specific git blob.
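For the pinning side at least, it's just:

  brew pin postgresql      # hold this formula at its current version
  brew list --pinned       # see what's currently held
  brew unpin postgresql    # allow upgrades again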


Can you pin versions in apt? Last time I checked it involved a lot of work and swearing, but that might have improved lately.


Not really, it was pretty verbose, but wasn't hard. Pinning is for setting package priority between multiple repos.

What you're looking for in apt is holding:

apt-mark hold libxfont1

That's been around since 2013 IIRC. And there was a dpkg way of doing it before.


I second that. On my M1 MacBook Pro, I have both the M1 and Intel versions installed. I have found that if I use the Intel installation to install something like MIT-Scheme in a terminal running Rosetta, then it is available everywhere. It took me a little while to get that sorted out.


I think the main criticism of homebrew is that it's really slow.


Slow where? Searching could be faster, but it’s a few seconds. Installing is fast except where compilation happens. I have to admit I don’t know why it compiles when it does, but it isn’t all packages that get compiled.


It wants to update a few times a day by default when you call it, so running a simple brew install of anything normally takes about 30 seconds (I'm on a MB Pro), even if it's just to say the package you want wasn't found.

If you don't run it daily, it takes about a minute or two to update.

But even when it doesn't update, it is extremely slow compared to any other package managers. It is disruptively slow and it takes a lot of resources, even in a powerful machine (and I'm not talking about compilation here).
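The auto-update part at least can be skipped per invocation with an environment variable Homebrew supports:

  HOMEBREW_NO_AUTO_UPDATE=1 brew install wget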


I think I’m just not used to faster package managers and I don’t install packages often enough for it to feel disruptive to wait a few seconds.

I do like tools to be as fast as possible, though, and I’d forgotten about that update it does sometimes when I run it - that does seem to take a long time.

I’ll have a look at how it works and see what the things are that take time. I would expect network traffic for updates, perhaps whatever it’s getting updates from does some processing (I’m sure it said something about GitHub), perhaps there is some dependency resolution that needs CPU...

It would be interesting to compare its architecture to other package managers if they’re significantly faster.


The slowness is mostly due to Ruby and the git pull. I contributed in the past and reimplemented it in bash, and there isn't much going on, honestly.

99% of the time, installing a package consists of downloading a few zips from their CDN, decompressing and linking. For those cases Brew could just be checking an API instead of constantly cloning the git repository.

I'm quite surprised nobody has reimplemented it in Rust or Go. The architecture is quite simple compared to a normal package manager. Maybe it's just superstition: people see "package manager" and assume it's complicated instead of digging into the code and finding out how it works.


Well, a few seconds for a package search in a local index is slow. It's instantaneous with apt on my 3 year old Linux desktop, so where is the famous M1 advantage here? :)

Installing anything with brew involving multiple dependencies is also taking forever-ish, compared to mere seconds with apt.


It also has a fair share of problems. With multiple users on the same machine, for example, I had issues with permissions, etc.


Am I the only one who, after installing some brew tool, finds every bin in /opt/homebrew immediately kill -9'd at launch, and then has to brew list | xargs brew reinstall?

Seems to be some signing issue but it’s only happened to me twice and I didn’t have time to properly investigate at the time...


As a daily brew user, I've never heard of that problem.


Do you use any other virtualization software along side HyperV? We have a lot of legacy stuff that integrates with VirtualBox, but wanted to start edging towards Docker.

However, when I last spent a couple of sprints attempting to get them working side-by-side, it was a pretty big failure. I couldn't get VirtualBox 6.0 running without falling back to soft virtualization which was painfully slow (booting a Ubuntu box took the better part of an hour).


Why not use Hyper-V itself instead of VBox?


Because of a lot of existing internal tools that directly call and manage VBox to set up local testing environments and stacks.

It's a planned initiative for the future, but it means a fundamental change in a number of tools and would require a rather large set of rewrites.

On top of that, out of the large pool of users, only a small handful would actually need to run Docker side-by-side with these tools, meaning that it's a huge rewrite to allow a small number of people to use Docker on their desktops. That ultimately means it's getting very little traction and keeps getting pushed down the backlog.


> I wish Apple would give Docker some love like Microsoft does

It is my hope that Docker continues to accelerate down its current path toward replacement by other systems such as podman.

I suspect that Apple has little motivation to add a Linux kernel alongside the existing BSD-compatible kernel, even though macOS does have a built-in hypervisor framework/API.

BSD containers (e.g. jails) seem like a better fit than Linux containers (e.g. docker, lxc) for a BSD-based system.


It’s still a nightmare to run IntelliJ on the WSL mounts. VSCode support is much better, but its support for JVM languages is definitely a downgrade.


Have you tried JetBrains Projector? They have a special installer for WSL that sets everything up, and then you access the IDE from your local browser. It works very, very well.

https://github.com/JetBrains/projector-installer


Apple is not giving proper love to their breakthrough laptop chipset. They amazed the world with "neural net on chip" for ML, even mentioning Tensorflow explicitly in its launch. Here's the real deal: getting anything Tensorflow related to even compile on M1 is nothing short of a miracle. Yes, Apple has a binary version (yuck) of TF you can download, but that's not good enough as many projects need specific versions and not everyone (ie OSS devs) can spend that much effort on a fringe tech.

If I were Apple (ehem) I'd spend a small but meaningful budget supplying devs to projects like Docker or TF to help speed up M1 adoption. Given that the chip market is open for grabs, I'd say it could give Apple a much stronger head start with their in-house silicon strategy, even if that means helping improve products or the bottom line for well-established corps, some of them competitors.


Give me a break.

The M1 migration is less than a year old and it’s already the single best/ most successful CPU migration in my memory. Far better than Apple’s PPC -> Intel Migration and massively better than whatever half-steps Microsoft has done to port over to ARM.

Apple absolutely should invest in much of what you suggest. In 6-12 months, after the platform is complete and mature enough to identify where the big problems are.

Most of us understand that you exercise caution when migrating mission critical work onto a 6 month old platform.

In the mean time, Apple has been hiring top Docker talent like Michael Crosby.

https://www.protocol.com/apple-hires-cloud-open-source-engin...

It’s possible they are hiring top container developers just to improve their internal cloud infrastructure. But what kind of hardware do you expect these guys will be running?

EDIT: Trimmed some repetitive stuff.


Unfortunately, I think apple only invests in things apple. So the "target market" is not developers for other platforms.

Too bad because a native apple docker would be really really useful. imagine:

  FROM macos:10.13.3
  RUN xcodebuild
(I'm not talking about the current docker on mac which runs linux in a vm)


But where are you going to run the image? On a Mac Mini in your DC?

BTW macOS being a BSD derivative of sorts might benefit from Docker development on FreeBSD, which uses jails instead of cgroups, etc. At least, that could be easier to port, assuming that macOS kernel has the needed facilities.


I was thinking more of docker used for mac development, not docker used to deploy web apps.


It’s not too difficult to get a Linux VM running on top of MacOS on the M1. Then you can do whatever you want on your M1 Mac and have a full Linux distribution in another window.

Call of Duty is a whole other problem.


> Using Docker with WSL is a breeze

Not if your Windows users have to use Direct Access to connect to your container registry...


Are there any good iTerm like terminals for Windows available?

Last time I checked, the terminal was horrible to work with.


Not sure if it's what you're looking for, but Microsoft did recently release a new terminal: https://www.microsoft.com/en-us/p/windows-terminal/9n0dx20hk...


They’re investing in it but it’s no iTerm

https://github.com/microsoft/terminal


I don't think there is anything as good as iTerm. ConEmu is not bad.


Alacritty is cross platform and the terminal I use over iTerm on MacOS as well (it’s so freaking fast). It doesn’t have tab/windows management stuff but works great with Tmux.


Windows Terminal and Fluent Terminal are both good


WSL is probably easier to write for XNU than it was for Windows. Yeah, silly that Apple hasn't yet.



The problem is that Apple does not want to spend money where they don't absolutely have to. In their mind Docker is someone else's problem, and if that 3rd party will also pay salaries and taxes, it's perfect for Apple. I am sure once Docker commits resources and solves it, Apple will be boasting about how great Docker works on their platform.


There are plenty of cases where Apple went out of their way to help out 3rd parties with compatibility, including quite a few cases with the M1 Macs themselves. Is your comment based on anything or just typical HN "Apple = evil"?


That is because those 3rd parties matter to Apple's ecosystem, and hence it is in their interest: Adobe, AutoDesk, Panic, or any creative tools.

Docker and developer tools do not happen to be among them.


I don’t know, supposedly [0] Apple provided patches to help get Blender running on M1.

[0] https://www.reddit.com/r/blender/comments/jsc03l/blender_on_...


And many others, transcribed list here https://news.ycombinator.com/item?id=23643425


And many of those were ported or tested on ARM before M1.


Should I use WSL 1 or 2? Coming from Linux trying to make something workable.


What kind of machine are you running windows on?



I heard WSL2 was no good.


Without more detail it's hard to say much that's useful but in my experience WSL2 has been nothing short of excellent. Also switched from Mac to Windows with WSL and the development experience is much better.


People were frustrated mostly with 2 issues...

a) It doesn't play well with deep sleep mode, and crashes. Perhaps it's been fixed, I don't know, I just disabled deep sleep.

b) The networking is different than WSL1 and requires odd workarounds for things like X11 to work normally. I had to use a 192.168.1.x address for DISPLAY instead of localhost, which required some VB scripting to work reliably for me.
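A common shell-side workaround is to derive the Windows host's IP from resolv.conf inside WSL2 (the nameserver entry there points back at the host side of the virtual NIC), something like:

  export DISPLAY=$(awk '/nameserver/ {print $2; exit}' /etc/resolv.conf):0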


My main issue is the lack of support for serial (so, no USB support): https://github.com/microsoft/WSL/issues/2195.

Other that this I love Windows 10 + WSL2 as a dev environment.


It's a little clunky, but you could use a Windows RFC2217 remote serial server with socat on the Linux side to make a virtual serial port tied to a real one. I've done this (albeit not with WSL), and it did work.

https://gist.github.com/DraTeots/e0c669608466470baa6c
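On the Linux side, the socat half is roughly this (the host IP/port are whatever your Windows serial-over-TCP server exposes; this assumes a plain TCP stream rather than full RFC2217 negotiation):

  socat PTY,link=$HOME/ttyV0,raw,echo=0 TCP:192.168.1.50:4000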


Too late to edit, but there's also an unapplied pull request that seems to make usb passthrough work:

https://github.com/microsoft/WSL2-Linux-Kernel/pull/45


I have WSL1 alongside WSL2 for this. WSL1 works fine with my FTDI adapters to flash ESP8266 and all.


VS Code with remote container dev on top of WSL2 works really well for me:

https://code.visualstudio.com/blogs/2020/07/01/containers-ws...


It's been flawless and I've been using it pretty extensively as a replacement for vagrant development VMs.

I believe one of the issues was laptop-related, and I work exclusively on a desktop these days, so that was a non-issue for me.


I love Docker, but I gave up on using it on Mac. I put a lot of effort into fighting it, but ultimately, it seems some of the issues are just too fundamental. I like to call the Docker-for-Mac experience the MacBook Airplane. At some point, your fans will start spinning, your productivity will crash, and you'll spend the next three hours sifting through open GitHub issues from 2017 where everyone is still complaining about the same problems.

I'm all in on VSCode Remote SSH development now. It works extremely well, I barely even notice I'm not programming on my own computer, and my laptop no longer sounds like a passenger jet taking off. It was very easy to set up. Our stack is still very Docker heavy, but using the containers on a remote machine makes it much more tolerable to work with.


At work we used Docker for local development but not production, purely because we have some ageing internal systems and it made it easier to deal with different versions of PHP, MySQL, etc.

After numerous Docker woes on Mac I ended up just spending a tiny bit of time installing and configuring Nginx and various PHP-FPM and MySQL versions from MacPorts. It was easy, I learned a lot more about the platforms we use, and because they're all socket-based they can all be running at once. Just added a couple of bash functions to bring everything up and down.

Sure my dev environment isn't the same as prod, but it wasn't when I was using Docker either.


We did this at my previous job. We tried with docker-sync and that was already a good improvement (~5x faster web app load speed). But the real improvement was moving to running the app natively, another 10x improvement. I'm talking about going from ~5s load times (default docker setup) to <100ms.

I like the simplicity of setting up a docker project vs. having to figure all the bits and config for your machine. But the slow fs is unbearable.


Totally agree. Docker for Mac performance is just unbearable when dealing with a semi-large webapp. I recently moved to remote development – macOS with Ubuntu running in VM (VMware) via Vagrant. I edit code using VSCode & Vim (via SSH & tmux tabs in iTerm2).

Based on my benchmarks it's more than twice as fast as Docker for Mac – and only minimally slower than native Docker running on a Dell XPS.

I'm enjoying this setup so much that I'm considering moving all my dev-related tools to a VM (which will hopefully allow me to get rid of homebrew too).


Does Vagrant work on the M1 without issues?


No experience with M1, still on an Intel MacBook.


I did the same after constantly yelling at docker. It’s amazing what vscode has been able to pull off to make ssh feel local. I’ve tried other solutions before and they’ve all had noticeable lag. The only time I notice is if I’m remote and on a cell connection or airplane.

The one thing I wish they’d improve was re-establishing a connection after the computer sleeps. Really annoying to have to reload the entire window, sometimes.


Why is a reload annoying? All of your files are saved and reopened to exactly where you left them


Terminal windows get messed up. Oddly, sometimes they resume, but most times they don't. For example, docker-compose will still run in the background, but if I'm coming back over a weekend I might not remember, and running a different project will error since another docker is using the ports.

I know not a big issue and I could use tmux, but I'm lazy.


Tmux solves this problem for me. I don't use the VSC integrated terminal at all. I have its scrollback buffer set to 1.

The reload isn't bad for me -- it even keeps my text editors open with undo history. I'm not sure if I had to do something to enable that. There is probably a way to make it retain the integrated terminal, too (though you should really use tmux, it's awesome).


VSCode added the ability to restore “local” terminals after a reload in the Feb update: https://code.visualstudio.com/updates/v1_54#_retain-terminal...

I’m guessing that doesn’t work for remote terminals though, I haven’t tried it.

Seconding what the other guy said though, tmux is perfect for this. If you use the iTerm2 tmux integration you don’t need to remember all the commands to switch tabs, scroll back, etc, it just feels local just like VSCode.


> At some point, your fans will start spinning

This problem has been solved with the M1 MacBook Air.


M1 MBP here, and my lap is freezing. I have no idea what the fans sound like, and I do lots of Go benchmarks using all cores pretty often (I know, it's probably not that strenuous--but for a tmux+vim+Go dev this thing is impressive).


Original comment was not M1-specific - if you use Docker for MacOS on x64, then CPU usage is very high.


Some people swear by setting up headless Intel NUCs as VSCode Remote SSH dev boxes.


I'm in this camp (though AMD, not intel).

It's so much better than Docker for Mac.


Does anyone know if there’s a parallel workflow in PyCharm? That is, running on a remote docker container. I haven’t yet been able to get this working but it’d be a vastly superior workflow for my use cases in ML/DS.


There are basically two parts you need to solve. You need files that you edit to save on the remote, and you need to be able to run commands on the remote. So a barebones setup might be SSHFS for mounting files locally, and editing them locally while running commands in an SSH session in your terminal.
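
A minimal sketch of that barebones setup, assuming macFUSE + sshfs are installed and the remote is `devbox` with the project in `~/project` (names and the test command are placeholders):

    # mount the remote project locally and edit it with any local editor
    sshfs devbox:project ~/remote-project -o follow_symlinks
    # run commands on the remote in a separate SSH session
    ssh devbox 'cd ~/project && make test'
    # unmount when done (diskutil unmount also works on macOS)
    umount ~/remote-project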

Though honestly... you should really try VSCode. You'll get a lot more than simple editing and remote commands (e.g. integrated debugging, etc). VSCode actually installs and runs a headless instance of itself on the remote, and decouples the UI from extensions, language servers, etc. It's a lot more than just editing remote files.

Try downloading it, creating a $5/month VM, and setting it up as a Remote SSH machine. I know it feels like a cult but it's far and away the best editor experience I've had. I switched from Sublime and was up to speed in a day because I could import all my keybindings. You can probably do the same coming from PyCharm.
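
If it helps, the Remote-SSH extension just reads your normal ~/.ssh/config, so wiring up that $5 VM is roughly this (host name, IP and key are placeholders):

    # ~/.ssh/config
    Host devbox
        HostName 203.0.113.10
        User dev
        IdentityFile ~/.ssh/id_ed25519

Then pick "devbox" from the Remote-SSH: Connect to Host... command and VSCode installs its headless server there on first connect.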


I have been using Docker Desktop on my trash can Mac Pro for a long time. No issues.


You setup a virtual machine then?


I use an actual remote machine (Scaleway in my case). VSCode remote does work with any VM or container though.


On Windows Docker just crashes at startup, skipping the CPU/fan part.


You're joking right? At my previous job we had dozens of devs who worked with Windows WSL2 + Docker just fine.

From what I remember the only requirement was that the dev environment stayed inside WSL2. Performance was native-like. With VSCode remote extensions it just works.

I still prefer Linux as a general development platform.


> Some container disk I/O is much slower than expected. See docker/for-mac#5389. Disk flushes are particularly slow due to the need to guarantee data is written to stable storage on the host.

Huh. This could be problematic given that Docker disk performance on macOS was already dreadful on intel machines. I would love to see Apple give this some attention.


Ouch ... ~10x slower ????

https://github.com/docker/for-mac/issues/5389

from a github comment: "Such a surprise every time I import a database to see it run about 10x slower than amd64."


Since a lot of use of docker on a desktop machine has no requirement for data resiliency (web dev work where all containers can easily be wiped and rebuilt), it would be good to have a flag for "no flush" which just ignores all flush requests.


You still want e.g. incremental rebuilds and file watching for tests and all that. It's really sad to hear this hasn't been addressed in years. Development with Compose on Linux is such a pleasure.


You can still have all that - flushing behaviour only affects things if the host machine suddenly kernel panics or loses power. For a laptop with battery backup that's probably a once a year occasion. In that case, I'd be happy to wait a few minutes for some docker stuff to be rebuilt.


You can't, when you're on a Mac and your code is bind-mounted into the container. This is the most comfortable method of development for me, and works perfectly on Linux. https://github.com/docker/for-mac/issues/3677


Yea, I think this is the same issue: https://news.ycombinator.com/item?id=26332732

Still an issue in the latest RC today


At some point, Docker switched from being an open source, free software company to producing stuff like these Docker Desktop apps that are a) nonfree, b) not even source-available, and c) contain spyware that silently reports your in-app activity back to them without your consent. (On crashes, it even uploads some of your network traffic in the form of pcaps.) Most people didn't notice this shift, as Docker Desktop (the app in TFA) still has a GitHub repo, etc. It just doesn't have any source in it.

Not being open source I can't easily tell what sort of data it uploads during usage (but I did inspect the crashdump it uploads, and HOOOO BOY is it a fuckton of sensitive data about your running system), so being someone who usually works under NDA, even installing this on my machine is a liability risk, as it could transmit information about my customers.

You're better off using the actually open source docker command line client (installable from your favorite package manager) and setting DOCKER_HOST in your environment to something like "ssh://root@remotehost" (set up ssh key auth first, and install the docker daemon on remotehost) which will serve you a lot better, with the added benefit of running at full, non-emulated speed (and pulling images/packages/pushing/etc will happen from a datacenter pipe, not your puny leaf node on wi-fi).
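
For anyone who hasn't tried it, a minimal sketch (hostname is a placeholder; the CLI has supported ssh:// hosts since 18.09):

    # one-off
    export DOCKER_HOST=ssh://root@remotehost
    docker ps                      # now talks to the remote daemon
    # or keep it as a named context you can switch to
    docker context create remote --docker "host=ssh://root@remotehost"
    docker context use remote

Builds send the local build context over SSH; images and containers live on the remote.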


I recently left the Mac ecosystem and bought myself a System 76 laptop. I do a lot of server-side development and running Docker at native speed is a big productivity boost for me. I really do hope they get this sorted out, it's a great technology that has measurably improved the local development experience.

I wonder if services like https://garden.io/ will see more business as a result of these issues? That or more folks will move to Windows or Linux as their primary development machine and reach for cloud-based Mac environments when they need to develop for Apple?


How are you liking the System76?


We switched over to Garden a year ago from a local docker-compose setup for dev. Garden has definitely had its rough spots but it works most of the time and it’s pretty amazing when it does.

Run your dev environment remote and instantly rsync file changes and hot reload services. I’ve had an M1 Mac since launch day and not missed a beat since we don’t depend on local docker.


Can you please share a link? When I search for garden container I get gardening results lol


https://garden.io was first result for "garden docker"


For some reason I thought I shouldn’t include docker because it’s not running docker


How is Garden different than using docker-cli with some random container runtime other than docker?


(Garden co-founder here)

Garden supports in-cluster building, using buildkit or kaniko.

This way, you don't need to have Docker or k8s running on your dev machine as you're working.

It also automates the process of redeploying services and re-running tests as you're coding (since it leverages the build/deploy/test dependencies in your stack).

We also provide hot reloading of running services, which brings a similarly fast feedback loop as with local dev.

The idea is to have a dev environment that has the same capabilities as the CI environment, and to be able to run any/all of your tests without having to go through your CI system (which generally involves a lot more waiting).


We already have some home-grown kubernetes dev environment in which every developer/QA can spin up all of our services in a dedicated namespace, but it's a bit tedious and spaghetti-like, as it grew organically over time (from a 15-dev team to a 70+ one). Garden looks like a nice alternative solution, do you think Garden Core is enough to get started? (we like to get our hands dirty)


Sounds like Garden Core could be a great fit here.

The motivation behind Garden was that, like you, we had built our own home-grown kubernetes dev environments, but felt like there should be a polished, general-purpose framework + tool for this sort of thing.


Hi, another Gardener here. Garden Core should indeed be enough to get started. I'm trying to keep this as factual and non-pitchy as possible for the sake of providing context—the enterprise product gets you:

• RBAC and secrets management (also makes it possible to control which users have access to which types of environments)

• Direct integration with GitHub or GitLab, so you could trigger something to happen in Garden based on a VCS event

• Automated environment cleanup (coming soon)

• Support and all that


Your site seems to target specifically teams with messy docker compose setups. Is there a simplified/supported migration or onboarding path?


Replying to my own comment: I actually forgot that Garden has the ability to run locally.

We only use the remote context. So basically it’s hot reload with all your services running on a beefy cluster somewhere else.


Is the creation of volumes as easy and straight-forward as with docker/docker-compose?


(Garden engineer here) You can take a look at the container module type guide to get a feel for how we reason about volumes. We have a persistentVolumeClaim module type which can be used by container modules and which essentially creates a k8s PVC.

See more: https://docs.garden.io/guides/container-modules#mounting-vol...


Curious how Docker runs on M1; it's well known for being a horrendously slow piece of software on Apple computers, draining battery life like crazy. Any feedback on M1 Docker so far?


As I said in a comment below, I've been using the preview version with M1 support for a while now and I have it constantly running in the background. I literally use Docker all the time, mostly building images locally to test/debug something or running things like Redis locally or nginx or something else during development and I have not had any issues so far.

And the M1 still amazes me every day. I code all day long, watch YouTube, listen to music, do Zoom and Slack calls and so on, and don't charge my MBP even once during the day. I once forgot to plug in my laptop in the evening, and the next day I was surprised that by midday my battery was down to 10%, after working on it for 1.5 days without charging. That's when I realised how long it lasts and that I'd got used to not charging it, just like my iPads or phone.

Also never gets hot and no fan noise yet.


I realized that, similar to you it seems, I think of my laptop more like my phone now.

Until now, my laptop would be plugged in by default and every now and then I would run on battery. Whereas now, my laptop needs to charge now and then, but most of the time it runs on battery.


This has been my biggest psychological shift with my M1 mac too. I used to use my laptop plugged in whenever possible, only using battery power if plugged power wasn't an option.

Now I almost never use my M1 laptop on wall power. I use it on battery power all day long, even when sitting right next to a power outlet. I charge it every few days when I go to bed or won't be working on my computer for a while. This is similar to how you use tablets and phones: you usually charge them up and always use them on battery power, and only in an emergency do you use them while plugged in. The laptop now actually fits into that category and is used like a true go-anywhere laptop, not a portable computer.


Are you using an 8GB or 16GB M1? My 8GB air has been fine for everything I've done with it so far but I'm wondering if Docker will be the first thing that needs more than 8GB.


13 inch, 16GB, M1

I swear by the 13 inch, but I know I'm in a minority. I don't need a huge screen for my work. I like to look at code without bending my neck left and right all day long, and if I need to multitask I four-finger swipe left or right. I've felt extremely productive this way for many years now :)

EDIT:

I should say I had an 8GB Intel MBP before, and 8GB was just about enough for everything including Docker.


13 really is the perfect size. I had a 16 inch forever before it because that is the size you needed if you wanted a capable enough computer to do what I needed. Now I can get a $1,200 computer that works better than my old $3,200+ computer.

I have fallen in love with the 13" size. I still have my 16" computer and I pulled it out the other day and it looked comically large on my lap. It seriously felt ridiculous. I couldn't believe that was my standard for so long. The 13" is still good enough to do most anything, but small enough to really be portable. My mom has an 11" air and it feels like a kids toy in my lap, too small for my liking. But the 13" MBP is right in that Goldilocks zone.

I will admit I turn screen scaling down to the minimum to get more stuff on the screen. The default scaling makes things quite large out of the box.


I was using the 16" MBP before this, but having gotten used to the 13", that now feels ridiculously huge. It helps that I'm not using Xcode much though. That really wants a lot of screen real estate.


> bending my neck left and right

You can't see all of a 15" screen without bending your neck?

You think 15" is a "huge" screen?


I still wonder a lot about whether 16GB are worth the 200€ for a development workflow like this. Most sources I've seen say "no", does anyone have any personal insight?


If you are using a MacBook for work, get it with 16GB. 200 Euro is at most a few hours of salary for a developer in a western country and it will improve heavy workflows (JetBrains IDEs, Docker, etc.) a lot.

If it is a personal Mac, I would still go for the 16GB version, but purely for longevity. With 16GB you can probably use the MacBook longer. Also, less swapping means less SSD wear.

(16GB should really have been the default, at least on the MacBook Pro.)


Making anything other than the cheapest option the default is bad for marketing.


I meant that they just shouldn't sell the Pro with 8GB. I know that the meaning of 'Pro' is somewhat debated, but if they want to address professional developers and creatives, 8GB is just too little.

Also, 8GB additional memory is not expensive at cost price. They could use the different amounts of baseline memory as a differentiator between the Air and Pro, especially now that the delta between the Air and Pro is so small (same SoC plus one more GPU core, Touch Bar that a lot of people hate, better screen).


I had the 8GB MBP. I bought it the day they were released, so it arrived on launch day. Because of the holidays we had an extended return window, so we could use it until January 15th before returning it. I used it that whole time and fell in love with the computer, but returned it for the 16GB model. But not for the reasons you would expect.

I originally bought the 13" MBP on M1 because I was in desperate need of a new laptop. I previously had a maxed out 16" MBP that cost me around $3,500. I had used it for about 5 years and was looking to replace it. But Apple's systems were in flux and I didn't want to drop $3,500 again on an Intel MacBook right as they were going out of production. So when I saw the new M1 Macs released, I decided to buy the $1,200 13" MBP with 8GB of RAM and a 256GB SSD. Just the base model. The idea was that I would use this computer for a year, until Apple released the 16 inch "big boy" models. Then I would sell the 13 inch and get the real 16 inch that I was waiting for.

But when I got my new Mac in November, I started using it and was just so amazed by the performance that I realized it could do what I was asking it to do, plus I loved the size, the epic battery life, and the lack of fan noise. I essentially fell in love with that computer. When I went back to my 16" MacBook it felt so large and heavy, I wanted nothing to do with it. Even if Apple fixes the fan noise (which the 16" is horrible about) and the battery life, I still hate the size. I had truly fallen in love with the 13 inch computer I already had. And it was less than half the price I had planned on spending during my computer upgrade.

I originally bought the 13 inch MacBook Pro as a stop-gap until the 16 inch models were available on M1. But after using it, I decided that this was going to be my new long-term computer. So by the end of the return window I had spent about 6-8 weeks using the computer. I had never had any trouble with the 8GB of RAM. But my mind just kept telling me that 8GB wasn't enough.

Since I was already upgrading the computer for more SSD storage, I really went back and forth on whether to upgrade the RAM as well.

The reason it was such a hard decision is that I couldn't pinpoint a single time when I felt that the 8GB held me back.

But I kept going back to the idea that if I keep this computer for even 3 years, the $200 becomes insignificant (to me at least, I recognize I am very fortunate). So I couldn't really identify a good reason to upgrade to 16GB, but I decided to do it anyway because, for $200, it was worth future-proofing. So I ended up returning my base model 13 inch MacBook Pro and upgrading it to a 1TB SSD with 16GB of RAM. It is now my daily driver computer. Even fully loaded it is still a fraction of the cost of my old MacBook Pro. I couldn't be more happy with my computer right now.

The performance is just incredible. No fan noise ever. Epic length battery life. Perfect size. I just really love this thing.

So do you need 16GB of RAM? No. Not at all. I have not yet identified a time when the 16GB has helped me or made a noticeable difference. When I had the 8GB, I never felt like it was slowing me down. With all that being said, if your budget allows for $200 extra, I would get it just to future-proof your purchase. But if you are already penny-pinching, then don't worry about the 8GB. You probably won't ever notice it holding you back.


I have the 8GB MBA I bought for my wife and feel somewhat similarly. There are a few times when things lag a bit, but mostly it just hums along.

I'm holding out for the next gen to release for my personal machine and will go with 16GB and whatever the next CPU is. Mostly, as you suggest future proofing, but there are a few spots in my workflow where it does hang a bit which I think extra memory would help with.


M1 Docker works and it's rock solid. I was skeptical at first, but I upgraded to the M1 because my old Air was too slow and I couldn't open Chrome and Slack at the same time.

Surprisingly, I have no problems with Docker. I just downloaded the Docker preview and it works flawlessly. One minor issue is that if I build a Docker image that supports multi-arch and then push it to Docker Hub, that image is arm64 by default. I have to do `docker build --platform amd64`.

I document that process here: https://axcoto.com/notes/2021-03-13-docker-apple-m-nginx-and... I think that's the only gotcha I got so far.
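
For what it's worth, buildx can also publish both architectures in one go, so amd64 hosts don't accidentally pull the arm64 variant; a rough sketch (image name is a placeholder):

    docker buildx create --use
    docker buildx build --platform linux/amd64,linux/arm64 -t myuser/myapp:latest --push .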


I guess I'm in the minority here, but I'd say it's slow. Not SLOW, but slow compared to the Linux box I have right next to the Mini. It's tolerable, but I'd say starting up the app that I work on takes about twice as long on the Mini, and it runs maybe half as fast? It's noticeable, but not so slow that I can't use it. Maybe this new release will speed things up for me. Nothing else is slow about the Mini, but Docker is noticeably slower.


Since Docker only runs on Linux, it's always going to be faster/better on Linux. On the Mac you have to run Linux in a hypervisor, then run Docker on that instance. Things can get better, but it's always going to be a bit of an alien.


It's been pretty fast since osxfs was replaced with gRPC-FUSE. It was in the beta builds for quite some time and is now switched on by default in 3.x. Idle CPU has also been cut from about 100% down to about 20%.


I'm running a bunch of Drupal sites in Docker on an M1 MacBook Air and it's running much, much faster than Docker for Mac on the 2014 MacBook Pro I had before. The Docker images we're using (wodby-drupal) only recently started supporting arm64 properly though, so I had the new laptop a few weeks before it was actually usable, but now I'm very happy.

Haven't been able to run a php 5.3 container yet though for this one project that is pretty much on life support.


It's phenomenal. I use an M1 Air to develop a webapp with Rails + multiple independent large Webpack builds (make sure your `node_modules` and other speed-sensitive folders are all in named volumes, not bind mounts). It is insanely fast compared to my 2017 15", and the battery can last all day even when using Docker, something unheard of on my last laptop.
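
In case anyone wants the shape of that, it's roughly this in docker-compose.yml (service name and paths are made up):

    version: "3.8"
    services:
      web:
        build: .
        volumes:
          - .:/app                            # bind mount for source, edited on the host
          - node_modules:/app/node_modules    # named volume shadows the slow bind mount
    volumes:
      node_modules: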


Some rare crashes here and there (before the RC; I haven't used the RC much yet), some images don't work, and some node packages require extra packages where the same image on Linux worked without them. But after the initial hurdles of setting it up, it actually works pretty great for my use case. I'm using it for 5 different projects with 2-4 containers each. If you develop React and TypeScript with VS Code remote though, you should definitely raise the memory limit, because the initial 2 GB was hit all the time (every 10-30 minutes), which made Docker crash on me every time. Once I set it higher, my experience improved tremendously. I only have to charge once a day with 10-12 hours of average daily use.


The previews I've used work fine, but a few images haven't been updated to work yet. My Intel laptop died this month and forced me to use the M1 for all my development work (I was previously using it for iOS only), so I ended up setting up a remote Ubuntu environment on a Hetzner box to do all my Docker work. It works great and I might end up keeping this setup.


It works reasonably well and doesn’t seem to drain a lot of battery. My only complaint is that running x86 images is super slow... By super slow I mean that running tests on a medium size Django project takes 10 minutes, as opposed to a minute on my Ryzen Linux desktop.


I’ve only had one issue where it crashed, but otherwise it's been stable for my lightish use, running Compose with three Ubuntu images.

Battery life was still better than my i7 2015 MBP.


>Curious on how Docker runs on M1, it's well known for being an horrendously slow piece of software on Apple computers, draining battery life like crazy.

No, it's not. Perhaps that's an old wives' tale from the era (2-3 years ago) when it had bad fs performance and didn't use the native hypervisor directly?


Running Docker on a native Linux machine is vastly superior; the "old" TR 1900X I have under the table is much more performant (for Docker) than the M1 MBP on the table.

But then, the M1 MBP is a 25W part, not 180W, and it is also silent, unlike the TR. So pick your poison.


> Running Docker on native Linux machine is vastly superior

I don't know about "vastly", but it is better and will likely always be better because of the structure of Docker. Docker containers, by definition, run atop a Linux kernel. Since Linux is already Linux and the Mac has to run Linux in a hypervisor, bare-metal Linux is almost always going to be faster.

But, Docker is just a part of the workflow. Dealing with the limits of the Linux Desktop to get better Docker isn't worth it a lot of the time.


> Dealing with the limits of the Linux Desktop to get better Docker isn't worth it a lot of the time.

That's just your personal preference; I'm using all three (macOS/Linux/Windows 10) and all three are fine for desktop.


Absolutely. Should have been more clear it was preference.


Needing to use Docker is a great reason Linux on the developer laptop makes total sense.

> horrendously slow piece of software on Apple computers, draining battery life like crazy

Exactly.

Thankfully there is Docker on Mac: how would my colleagues with shiny laptops get work done otherwise?


>Thankfully there is Docker on Mac: how would my colleagues with shiny laptops get work done otherwise?

This "shiny/for the clueless" Linux-edgelord meme must die. Might as well write "MS" with a dollar sign in 2021.

Go to any programming conference and check the speakers: over 50% use an Apple laptop. Check the major developers people follow, from old Unix hands like Rob Pike, to every major JS cat, to admins, all the way to the creator of GNOME, Gnumeric and Mono, and you'll find they use a Mac laptop with macOS.

(And hardware-wise, even Linus Torvalds had an Apple G5 tower as his main driver, and later an Intel MacBook Air he praised as the best machine he had used, though he ran Linux on both.)

In any case, there are benefits and tradeoffs, but "lol, Mac is teh suck" is inane.

To correct you: no, Docker is not used because "you need to". It's used (also on Linux) for reproducibility, isolation, the ability to write code against different dependencies with the whole system at your disposal, and to avoid mixing your daily-driver machine with your development environment.

It is the same use case whether you run Linux distro X and deploy on another version of it, or on the same version with some tweaks/different libs, or on a whole other distro.

And no, it hasn't been a "horrendously slow piece of software on Apple computers, draining battery life like crazy" for ages, and when it was it wasn't because of some macOS limitation, but because the company had done a half-arsed job with the fs layer.

And as far as "needing to use Docker" goes, macOS is not any different from any Linux/FreeBSD distro on that front. If you prefer a local, mix-everything-in, undisciplined approach, unlike what Docker offers, you can install anything you like from Brew, MacPorts, Fink and so on. You can even have Nix for reproducible builds under that scheme.


> because the company had done a half-arsed job with the fs layer

Exactly.

And I know the Mac reigns in dev land. It's just that the moment Docker comes around (and it does quite a bit lately), all those shiny (and that's a compliment) pieces of hardware become heated vacuum cleaners.

I'm not saying "lol, Mac is teh suck" (what you apparently want to read). I'm saying: here's something that my Linux laptop wins at. Docker.

You come across rather angry, even saying that Linus uses Apple hardware without running the macOS... How does that work as an argument against my experience?


>You come across rather angry

Nah, just replying to the rather snarky "Thankfully there is Docker on Mac: how would my colleagues with shiny laptops get work done otherwise?"

>I'm not saying "lol, Mac is teh suck" (what you apparently want to read). I'm saying: here's something that my Linux laptop wins at. Docker.

Well, sure. There are other things a Linux laptop wins at. Tinkerability for example, part replacements, etc.

>even saying that Linus uses Apple hardware without running the macOS... How does that work as an argument against my experience?

It works as an argument that it's not something merely "shiny", but a good piece of hardware for a par excellence technical user.


> Thankfully there is Docker on Mac: how would my colleagues with shiny laptops get work done otherwise?

Nope. I replied to someone saying Docker on Mac is a joke, and I agreed, saying: this is one of the last areas where using Linux rather than a Mac is an advantage.

Saying Macs are shiny is a compliment. You took it as snarky, but it wasn't (before the new keyboards I preferred --and owned-- Macs).


> ... all those shiny (and that's a compliment) pieces of hardware become heated vacuum cleaners.

> I'm not saying "lol, Mac is teh suck"

Biggest eye-roll post of the day. Literally one sentence after the other you suggest Macs are worthless and that you aren't saying they suck.


And the "You come across rather angry" was such a good touch.


> I'm saying: here's something that my Linux laptop wins at. Docker.

No shit. It’s running the same kernel. Docker on MacOS requires full virtualisation, which is a lot more overhead.


>Go to any programming conference and check the speakers. Over 50% use an Apple laptop. Check

In the US, maybe. Not in Europe. The iPhone has barely a token presence there, while in the US it has a good chunk of the market, and thus more developers.

The old Unix hands mainly use Acme, and macOS as a dumb client against a 9front cpu(4) server with drawterm or plan9port.

They could use anything with a GUI to run drawterm on, as they did with Windows 2000 back in the day.

Also, macOS still ships a case-insensitive filesystem by default (APFS now, HFS+ before). For modern Unix environments, macOS is useless. Period.


> This "shiny/for the clueless" Linux-edgelord meme must die. Might as well write "MS" with a dollar sign in 2021.

It works both ways; it will live as long as "Linux is useless because I had a problem with wifi in 2002" does.


I mean, it's 2021 and you still cannot fractionally scale the resolution without slashing your battery life and losing 30% of your processing power (and getting some random stuttering in all applications).

Having a 15.6" 1080p screen basically means I need to choose between using my glasses all day or using Windows.


Use Wayland. Fractional scaling works fine there. If you are using X11 and xrandr scaling and seeing a performance impact, maybe that's the reason it is not supported in the GUI.

On the other hand, I don't have perfect eyesight, but on Linux, 1080p at 14" is as usable as Windows 10 at 125%. Here the design choices made by GNOME hit their strong points: it is a perfect resolution for 1x scale.


Also, any modern DE can scale fonts up to 14pt and far beyond.

And of course you can choose whatever theme, icons and fonts you like to match your resolution.

Windows 10 is useless without scaling.


The problem with Linux on the Desktop is that it's still a horrible, time-wasting experience.

I have to unplug/replug my mouse every time I boot into Ubuntu because it's not recognised otherwise. Another wired mouse I have is not recognised at all (was working just fine during install).

As somebody who works 8+ hours a day, I don't have time for this shit.


> The problem with Linux on the Desktop is that it's still a horrible, time-wasting experience.

In your experience. That's not uniformly true - I'm a counterexample: I installed Fedora on this machine when I built it a few years ago (and have upgraded through each release) and have had zero hardware issues, and that's running an RTX 2080 with the binary driver (historically a pain point on Linux).

As someone who also uses a recent-generation, work-issued Mac, the difference for me is stark.


I'm well aware my experience is not uniformly true.

My point is that an inconsistent experience is not something I have time to debug anymore. I ran Arch 10+ years ago when I had more time than sense, but those days are long gone. I'd rather spend my non-work time AFK.


I have a Ryzen 3700X desktop and have literally zero hardware issues with NixOS and Fedora. The Intel NUC8i5 I had before that also worked flawlessly.

I also purchased a ThinkPad T14 AMD. It works fine with Linux and all the hardware works out of the box (including the fingerprint reader, WiFi and webcam). Additional benefit: upgraded it from 16GB to 32GB for under 100 Euro.

I used Macs from 2007 until 2020. But in my daily work, I have experienced far more issues with macOS than with Linux in recent years (I was a very happy Mac user from 2007 to ~2015).


I hated Linux desktops; they were unusable for me. I gave it another try 3 years ago and, since then, no issues; it's been super smooth for me. I've used 5-6 laptops over three years and I'm still surprised it's that good. I use stock Ubuntu. Maybe I'm just lucky that I picked laptops/hardware which work smoothly on Linux/Ubuntu.


I installed Pop!_OS on my gaming PC. Didn't have to do any fiddling at all.

I was so impressed by it that it has replaced my Mac laptop as a development machine at home.


I'd argue that the experience is significantly worse on laptops than desktops where it has been more or less fine.


I have done and still do all my development on a Mac and deploy to Linux.

I run 3-5 Rails servers, Redis, PostgreSQL, MySQL, JetBrains IDEs, Slack, Meet, Spotify, VSCode, and probably two or three other apps I forget now. I do all this on either my 2014 15" MBP i7 or my 2019 13" MBP i5. The only thing I lack is drive space.

I have never had to reinstall the OS on my 2014, and I have a truckload of USB devices (mostly music production) attached.

You simply cannot match that with a Linux laptop. I love Linux, but not for desktop use.


> Needing to use Docker is a great reason Linux on the developer laptop makes total sense.

So do you always run the same server distribution with exactly the same package versions on your laptop? Otherwise, you miss the point of Docker.


That’s not the point. Docker is native to Linux, it is much faster and simpler on Linux where it does not have to use virtualisation.


>where it does not have to use virtualisation

You'd be surprised.


Go on..?


>where it does not have to use virtualisation

Container isolation is still OS-level virtualization. It just doesn't use a hypervisor.


It might use a hypervisor though, as the pendulum swings back

https://katacontainers.io/


They are saying that they need to run Docker precisely because they need different package versions. Given that Docker is required, Linux is attractive because then you can run Docker without running a Linux VM.


Windows has native containers, and I bet the Docker GUI will eventually become a thing of the past.

No, I don't use WSL, and still get work done.


I wish apple would support a couple extra kernel features (like bind mounts) so we can have native macOS 'containers' instead of this nonsense. Running MySQL by running qemu inside a Linux VM is just insane. Nix can fill some of the same roles, but it doesn't work on M1 yet


Wouldn't that also require full namespace support, not just "a couple extra kernel features"?

At that point if you want to bind mount in a container you're talking about macOS binaries running inside the container, which means a full macOS docker architecture port (like they did for Windows, which AFAICT didn't make them much if any money, but I think M$ paid for it anyway).


Namespaces are useful for security maybe, but for macOS the biggest reason to use containers is to have a controlled and reproducible way to run certain pieces of software. With chroot and bind mount you can already achieve that.


> native macOS 'containers' instead of this nonsense. Running MySQL by running qemu inside a Linux VM is just insane.

A ‘native’ container would be running the MacOS kernel. You could run MacOS software in it, but it would be incompatible with Linux docker images.


Wouldn't some kind of "syscall proxy" or wrapper be possible? There is gVisor [1] which, if I understand correctly, re-implements the Linux kernel in userspace for security, which is pretty interesting. Such a layer would have to re-implement the missing pieces on top of the Mach kernel though, so maybe it would not be as easy.

[1] https://github.com/google/gvisor


> Some container disk I/O is much slower than expected. See docker/for-mac#5389. Disk flushes are particularly slow due to the need to guarantee data is written to stable storage on the host. This is an artifact of the new virtualization.framework in Big Sur.

If I don’t care about persistence of my containers (e.g. I’m just running ephemeral tests), is there a way to disable Docker for Mac / Virtualization.framework’s cache-flushing behaviour entirely? I.e. to get the same behaviour as mounting Linux ext4 with -o nobarrier,data=writeback?

Does Virtualization.framework maybe have first-class support for swap volumes — i.e. inherently ephemeral volumes, that don’t need to be flushed to the host?
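
(One partial workaround for the purely ephemeral case, in the meantime, is a tmpfs mount inside the container, so writes never reach the host disk at all; a sketch, with the path and size as examples:

    # anything written under /scratch lives in RAM and disappears with the container
    docker run --rm -it --tmpfs /scratch:rw,size=1g alpine:3 sh

Pointing a database's data directory at a tmpfs mount works the same way, though you'll likely need to raise the size, since Docker's default tmpfs size is small.)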


I don't understand why Docker Desktop, i.e. Docker for Mac and Docker for Windows, is available for free. I think it's a value-added service, it's executed beautifully, and it would be a fair way for the company to generate revenue.


Because if you can't do `docker build` locally when learning or building software or studying IT then you will look for alternatives which allow you to achieve the final goal in a similar or "just good enough" way and then you will never start using Docker, not even in production.

For many developers Docker alone has a huge cost of entry in terms of learning. If you also ask them to pay for something which many dread to learn, then even fewer people will adopt a technology which is actually one of the best innovations in software delivery of the last decade.


A lot of products e.g. CAD Software are available for free for personal/educational use or for a limited time. Alternatively I like the approach JetBrains (IntelliJ) are taking by providing the software for free or a reduced price: https://www.jetbrains.com/de-de/idea/buy/#discounts?billing=...


That's not generally how developer infrastructure tools work these days.


Without it being free it would likely be replaced by a free alternative, which would further limit Docker Inc.'s ability to directly influence their users, now that the core Docker daemon has been thoroughly commoditized.


There is no reason for Docker once all OSes have native container support.

In the end it is just a bunch of APIs abstracting OS APIs, which end up being the lowest common denominator, as each OS offers different container capabilities.


> There is no reason for Docker once all OSes have native container support.

Indeed. On Fedora, Docker has already been replaced by rootless Podman containers, which are great for development. I wouldn't be surprised if Podman takes over on Linux workstations pretty quickly.
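
For anyone who hasn't tried it, the day-to-day feel is close enough that `alias docker=podman` mostly just works; a quick example (image and port are arbitrary):

    # rootless and daemonless
    podman run --rm -d -p 8080:80 docker.io/library/nginx:alpine
    podman ps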


Docker/containerisation is open source software with a free-to-use license. If this were restricted on Mac and Windows it would just add another reason to avoid these platforms for infrastructure use. Avoiding macOS for that is already a no-brainer, given the lack of cloud availability and the pricing otherwise, but at least if you're stuck with it, you can learn the tools for free. More or less the same for Windows, although there you at least have some cloud offerings.


Considering how badly Docker runs on Mac, I don't think anybody would pay for it. I had to set up a Linux machine I ssh into locally just to be able to work.


My experience was generally different. Yes, it gets hot and loud on Intel. But I can still do web development that involves PG, Rails/Django, and Redis.


With 2-3 containers there isn't an issue. When you need to run 8-10 at once, the machine becomes unusable. Same setup on Linux doesn't even register as load.


There must be something wrong with the macOS implementation then? Container images should be static and won't be duplicated across multiple instances. Maybe I'm wrong about how it should work. My impression was that if I had a 1 GiB image, I could spin up 50 of these and still use roughly 1 GiB of memory (assuming the running processes don't need to allocate much themselves).

Edit: From testing by spinning up 10 MySQL servers, that sure looks to be the case. Each runtime allocated approx. 215 MiB of memory, which is what the available system memory was reduced by for each. The container image itself was approx. 400 MiB.
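
If anyone wants to reproduce it, something along these lines is enough (tag and names are arbitrary):

    for i in $(seq 1 10); do
      docker run -d --name mysql-$i -e MYSQL_ALLOW_EMPTY_PASSWORD=yes mysql:8
    done
    docker stats --no-stream   # per-container memory usage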


Subjectively speaking, I would happily pay thirty bucks for a native alternative to Docker Desktop. I find the UI gross and disruptive.


How do people use Docker on a Mac? It's so slow for me that I've started to use my X220 (Linux), as it's faster than my 2018 MBP.

We use containers, including a MySQL container, and accessing it is incredibly slow, with a request taking 5-10 seconds that's instant on the production server.

I've heard that Docker Sync can improve this, is it worth a try?


I use Docker on a Mac, and... while I do feel it's "slow" in some sense, I've never seen 5-10 second access times to a MySQL container. MBP 2019 here, but I never saw that on an MBP 2015 either. Is there some other MySQL config that's trying to do a network lookup on the incoming connection? I vaguely recall that being an issue with MySQL on bare-metal servers years ago: if there was some specific network name as part of access control, MySQL would try to resolve the hostname, and that could sometimes be very slow, depending on external factors. (--skip-name-resolve and reverse DNS seemed to be the things I found along the way, but I'm doing this from memory and it was 10+ years ago; I haven't hit that issue in a long time.)
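
If you want to rule that out, it's a one-line config change (the official image also passes extra flags straight to mysqld; the password is a placeholder):

    # my.cnf / conf.d snippet
    [mysqld]
    skip-name-resolve

    # or, with the official image:
    docker run -d -e MYSQL_ROOT_PASSWORD=secret mysql:8 --skip-name-resolve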


So, i just got my Macbook Air M1 16GB a few days ago. What's the best way to run Linux on it in a VM? Is it Docker?


Depends on what out of box experience you want and if you want a GUI. For an easy out of box experience, open source, GUI based tool I suggest checking out https://github.com/utmapp/UTM. There's also an app store version that supports the author I believe.


Interesting. Can UTM run Windows also? I have a few critical domain-specific commercial apps that are Windows only, and need that in a VM...


Haven't tried it, but think so:

https://mac.getutm.app/gallery/


OK. I need to look into this. Thanks for the heads-up!


That's great news. I've been using the Preview with M1 support for a while now and have had no issues so far.


I ran into this bug just today, mentioned in the release notes: https://github.com/docker/for-mac/issues/5208

If you're using a VPN (in my case, the NextDNS client), you might want to de-select the option to start Docker on boot and start it manually instead, once your VPN client is loaded and connected. In my case, failing to do this would completely bork the ability to connect to the internet, whether over ethernet or WiFi. It took me a while to figure out what the cause was.


I've never found it reasonable to virtualize on Apple systems. Apple optimizes for security, specificity, and bubblegum usability.

Technical limitations aside, from a security perspective it is not a good idea to run servers on the same system that you write code on. I humbly suggest taking the time to push code to your dedicated Linux server; otherwise you might inadvertently be putting your company out of compliance by exposing your dev system on any given network.


Docker is for running Linux apps. I honestly don't see the appeal of abstracting away the Linux VM via Docker for Mac, especially if it has issues like filesystem performance. I have been running docker inside a Linux VM in VMWare Fusion on my Intel MacBook. Surely it would be possible to just have a plain Linux VM on an M1 Mac and run Docker inside it?


The big thing Docker solves is repeatability and isolation. If you create a Docker instance on your Mac, you are more or less guaranteed it will work the same in production.

If you were to create a Linux VM to replace Docker, you would need it to also recreate the Docker build tools for that VM so you could recreate that VM on the server. At that point, you’ve more or less come full circle and essentially recreated Docker.


I am not creating a Linux VM to replace Docker.

I have a Linux system (the VM). Docker is installed on that Linux system and I do all docker related work on the Linux system.


Ah, obv I misunderstood your above post.

Makes a ton of sense. It’s been a while since I used Docker, but I’ll have to try it out next time I’m using a container setup. Particularly since HyperKit seems to make running a VM so straight forward.


Yes, this is really the only way to tolerate Docker on a Mac. Basically it's the same thing WSL does on Windows. Linux in a VM then you run Docker in the VM. Not many people talk about doing it this way though.


I wish there was CLI-only version of the Docker Desktop. I can't even launch Docker Desktop unless there is a monitor connected and a user logged in.


Run containerd and BuildKit in a Vagrant VM. You'll have a lot more control: you can pick which kernel you want (and even upgrade it as necessary), you can expose host filesystems and devices to the VM, etc. It's the same thing Docker Desktop is doing behind the scenes, but now you control the full stack.
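
A rough sketch of that setup, assuming an Ubuntu box (the docker.io package pulls in containerd; the box name is just an example):

    # on the Mac
    vagrant init ubuntu/focal64 && vagrant up && vagrant ssh
    # inside the VM
    sudo apt-get update && sudo apt-get install -y docker.io
    # optionally point the Mac's docker CLI at the VM over SSH, as mentioned elsewhere in the thread

You can then bump CPUs/memory or swap kernels in the Vagrantfile without touching Docker Desktop at all.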


Very excited about this. I've been wrestling pretty hard with the previous release; it had lots of issues, to the point where I was SSHing into my old laptop to run builds.

Really hoping those days are behind me with this, it made me feel a bit foolish for springing for the mini as quickly as I did.

Edit: Nope, segfault yet again. God damn, well you get what you pay for!


File access on preview 7 was atrocious. It would take a seeded Postgres (Debian) container 3 minutes to start (with maybe 64 megs of data inside), even if you kept the Postgres volume on the local disk inside the container!

On Intel it was instant. Hope they fixed it!


Can this version run the official MySQL images under QEMU? I had everything working on the last preview version except for MySQL; they would immediately crash with a Go error.


I’ve been using the experimental version for a while now and haven’t run into any issues. Glad to see a release candidate.


Amazing, the discussion here seems to be more about running WSL2 than about the Mac.


I thought the M1 lacked hardware support for virtualization? Wouldn't that hamper dev workflows considerably?


No, it is supported in hardware and MacOS has frameworks to support virtual machine implementation.

https://developer.apple.com/documentation/virtualization


No, I think it was the difference between Intel-derived virtualization and ARM-derived virtualization. Virtualization on ARM Macs still exists. Docker just had to convert their Intel-specific code to ARM-specific code.


This was only on the dev kits, which were released without virtualization support.


Gotcha, thanks.



