Ask HN: How can I make local dev with containers hurt less?
96 points by mieubrisse on Jan 11, 2024 | 53 comments
Containers are great for shipping code to Prod, but my friends and I find them frustratingly painful for local dev: I have to wait on an image build to do anything, it's easy to accidentally invalidate the Docker layer build cache, I don't get my language's build cache unless I jump through extra hoops to mount it into the build image, I sometimes need to deal with file perm mismatches when mounting, attaching a debugger becomes a remote debugger incantation, and sometimes the language itself just seems to make containerization painful (looking at you, Rust).

Am I missing a tool or something? Shouldn't I be able to run my server in my IDE and proxy it into a Compose network or Kubernetes namespace, so I get my IDE tools for free? Or at least have my Docker container run in "watch" mode, where a change to one of the files the container is based on restarts the process with the new files?




You prepackage an image with everything you need to run your app, then you mount your local directory as a volume over the path where you COPY it into your container (the volume mount overrides that filesystem path). Now you can run your container, edit your code, and watch live-reload do its thing (if you have that). When it's time to deploy, simply don't mount your local directory and let the COPY do its thing.

Also, any 3rd-party things (a database, for example) can go in a docker-compose.local.yml that omits your app's prebuilt image and instead builds it from the local directory (build: .).
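
A minimal sketch of that setup, assuming a Node app (file names, paths, and ports here are illustrative, not from the comment):

    # Dockerfile: code is baked in for deploys
    FROM node:20-slim
    WORKDIR /app
    COPY . .
    CMD ["node", "server.js"]

    # docker-compose.local.yml: the bind mount shadows the COPY'd /app
    services:
      app:
        build: .
        volumes:
          - .:/app
        ports:
          - "3000:3000"

Run it with `docker compose -f docker-compose.local.yml up`; for deploys, build without the mount and the COPY'd code is what ships.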


This directly parallels non-containerized workflows: dev servers often incorporate hot-reload and debugging, while production servers are polished, with fewer entry points.

Honestly, unless you have a requirement to use Docker containers locally, I would simply use the IDE directly and containerize the other parts of the stack that you can. Then test with containers.


The question was “how can I dev with local containers”. I agree with you. What I described is that halfway point between “I need a debugger” and “Throw it all in a container and ship it to dev”. There are legit reasons to develop out of a container like this. Keeping the layers audited is one…


It's a lot to learn, but Nix solves your issues precisely.

With it you can build lean, layered Docker images with ease, and deploy them to container services like any other.

But you don't have to use those containers for development. You use Nix to set up your dev env (a lot will come for free after you have your code packaged for the container).

Nixpkgs has support for most mainstream languages nowadays; the more popular the language, the more polished its Nix integration tends to be.

Now, if you _do_ want to use the container locally, you can do that too. And it will benefit from non-fragile caching thanks to Nix.
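
As a rough illustration (not from the comment; package and attribute names are placeholders), a default.nix exposing both a dev shell and a layered image might look like:

    { pkgs ? import <nixpkgs> {} }:
    {
      # dev env: enter with `nix-shell -A shell`
      shell = pkgs.mkShell {
        packages = [ pkgs.go pkgs.gopls ];
      };
      # image: `nix-build -A image && docker load < result`
      image = pkgs.dockerTools.buildLayeredImage {
        name = "myapp";
        # your packaged app would go here
        contents = [ pkgs.bashInteractive ];
      };
    }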

But tbh, if you need to replicate prod precisely to do local dev, you should probably consider figuring out how to build and test your components with confidence in isolation. Local simulation of prod can be useful sometimes, but if it's your default, you can do better.


+1 for this - I've essentially replaced all of my local tooling with Nix, both for my own projects and at work. Consistent, stable environments that include every dependency have been a breath of fresh air for development across the board. Being able to build lightweight containers from these Nix derivations that match the local env perfectly is a huge bonus; you can even stream directly from the Nix store into remote Docker registries with something like nix2container!

https://github.com/nlewo/nix2container


Yup, I haven’t tried it but there is https://devenv.sh which is built on top of nix and makes it simple.


100% this. Only use containers for remote execution: CI, staging, production. Run native code locally. Manage dependencies and isolate project environments with Nix.


You don't necessarily need to run the application in a built container image every time. A container can be a REPL (shell) where you use it like a regular terminal shell and run your app. Building a container image every time you make a change in your editor doesn't sound optimal at all. Alternatively, you could just run containers for all the other things your app uses (db, caching server, etc.) while the app runs in a regular terminal, with the container stuff bound to local ports your app can talk to.
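
For example (images, ports, and the run command are illustrative):

    # dependencies in containers, bound to local ports
    docker run -d --name dev-db -p 5432:5432 -e POSTGRES_PASSWORD=dev postgres:16
    docker run -d --name dev-cache -p 6379:6379 redis:7

    # app in a regular terminal, pointed at localhost
    DATABASE_URL=postgres://postgres:dev@localhost:5432/postgres npm run dev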

Sounds like you need to reassess how you're using and thinking about containers when doing dev locally.

Look into running a container with the stuff your app needs installed, but running a shell instead of your app directly. Then look into mounting your source directory in the container using Docker's volume mounts, or whatever container tool you're using. Then things like auto-reloading (if your app supports it) should work using inotify-tools.
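
A sketch of that shape (image and commands are my assumptions, not prescribed by the comment):

    # shell container with your checkout mounted in
    docker run -it --name devshell -v "$PWD":/src -w /src python:3.12 bash

    # inside the container: install once, then edit outside, re-run inside
    pip install -r requirements.txt
    python -m myapp   # hypothetical entry point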

And by "app" I'm referring to whatever your developing, most likely some kind of backend server?


Some IDEs support development in a container. The IDE becomes a thin UI client and sends commands over a socket to the container where the files are and any builds/commands execute.

I've only used the VS Code version, but it appears the Jetbrains IDEs support the concept as well.

VS Code injects a binary into your normal development container definition to create the "bridge". Local development files can be mounted into the container environment as well if you want the container to remain ephemeral.

https://code.visualstudio.com/docs/devcontainers/containers

https://www.jetbrains.com/help/idea/connect-to-devcontainer....

https://www.jetbrains.com/help/idea/remote-development-overv...
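
For reference, a minimal .devcontainer/devcontainer.json is often enough for the IDE to build that bridge (the image, port, and command here are placeholder choices):

    {
      "name": "myapp-dev",
      "image": "mcr.microsoft.com/devcontainers/typescript-node:20",
      "forwardPorts": [3000],
      "postCreateCommand": "npm install"
    }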


Very good questions.

Here are a few pointers:

Podman runs rootless and avoids (some of) the problems with permissions this way.

It's possible to mount the source directory (when running a container) over the place you copy it to (when building it) so you can start a container once and rebuild and test inside it while you edit outside of it.
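
A sketch combining both pointers (image name assumed; the :Z suffix only matters on SELinux systems):

    # rootless Podman, keeping your UID inside so mounted files stay yours
    podman run -it --userns=keep-id -v "$PWD":/app:Z -w /app myapp-build bash
    # edit outside, rebuild and test inside, no image rebuild needed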

I think that containers are a good reason to make a technical distinction between unit tests and integration tests. The former should work outside the container to facilitate quick development whereas the latter can rely on the environment the container provides. That setup saves a lot of headache for configuring paths and dependencies.

Finally, I find it very important that building the software and executing the unit tests should be possible outside the container. This way you can always use your local setup, maybe after some tweaking. This tweaking is the (small) price everyone has to pay every now and then. That way the build environment doesn't go stale. Imagine developing software with a frozen tool stack packed into a container ten years ago. Because that's what happens when everyone just uses the image.


> I think that containers are a good reason to make a technical distinction between unit tests and integration tests. The former should work outside the container to facilitate quick development whereas the latter can rely on the environment the container provides. That setup saves a lot of headache for configuring paths and dependencies.

> Finally, I find it very important that building the software and executing the unit tests should be possible outside the container

Bingo. Containers let you brush a lot of issues under the rug and produce picky software with involved, specific requirements for its environment. I would recommend resisting the urge to rely on that too much; try to make the software flexible, with sane defaults, so that running it outside the container is not a huge pain. Yes, it might slow you down and you might miss out on some niceties, but IMHO the benefits are there in the long term.


VS Code takes an opinionated view of this with "dev containers". Other IDEs (including JetBrains) have support as well. It's probably worth looking into a little, whether you decide to use them or not, to understand why they made some of the trade-offs they chose. I wrote a little blog post as an intro a while back: https://www.mikekasberg.com/blog/2021/11/06/what-are-dev-con...


Thank you so much! I've been needing to set this up for a project of mine and this helps a ton.


On routing, make sure any endpoints used between containers are (1) configurable, and (2) using the docker internal network naming conventions when working locally.

For example I have a compose with 10+ containers in it. Each container that needs to talk to another has some kind of environment property to tell it the name of that other container. So the "api" container might have a property called DB_HOST="db", "db" being the name of the db container.

Now, when developing e.g. the "api" image locally, your local dev server should be configured in the same way, providing the DB_HOST property to your local dev server environment. By doing this, you can completely stop the "api" container, allowing the local dev server to take its place, configured to talk to your other containers running in the Docker network.

This way you're maintaining the local dev server setup we've been using for ages, not developing directly on a Docker image or depending on its build cycle, etc.
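
Concretely (service names and the dev command are illustrative), the compose file publishes the db port so a host-side dev server can take the api container's place:

    # docker-compose.yml
    services:
      db:
        image: postgres:16
        ports:
          - "5432:5432"
      api:
        build: .
        environment:
          DB_HOST: db

    # swap the container for your local dev server
    docker compose stop api
    DB_HOST=localhost npm run dev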


The idea would be to build a base image that has all the dependencies for your app and then treat it like a VM. Code would get mounted via a shared volume into that container. So as your code changes, it changes in the container, and does not require a rebuild.

I.e., instead of building a fresh container on every code change, you only build a fresh one when your Python version changes. You start a container and then, from within it, install your Python packages. Or take it a step further: bake the container to include dependencies and only rebuild when the dependencies change. The production container would inherit from or be downstream of this, so all the prod builds contain everything and are artifacts.

Replace python with rust, golang, etc. Doesn't matter.

The key is that you will need to abstract a base image, and then fork that into the dev image and the prod/stage/deployable images.
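
A hedged sketch of that fork using build stages (Python and the file names are just the example):

    # base: only dependencies; rebuilds when requirements.txt changes
    FROM python:3.12-slim AS base
    WORKDIR /app
    COPY requirements.txt .
    RUN pip install -r requirements.txt

    # prod: bake the source in on top of base
    FROM base AS prod
    COPY . .
    CMD ["python", "-m", "myapp"]

    # dev: build only the base stage, then mount your checkout over /app
    #   docker build --target base -t myapp-dev .
    #   docker run -it -v "$PWD":/app myapp-dev bash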


Nix shells are excellent for this. See simple [1], intermediate [2], and complex [3] examples.

[1] https://gitlab.com/engmark/engmark.gitlab.io

[2] https://gitlab.com/engmark/mypy-exercises

[3] https://github.com/linz/emergency-management-tools
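
The simple end of that spectrum is a few lines of shell.nix (package choices here are mine):

    # shell.nix; enter with `nix-shell`
    { pkgs ? import <nixpkgs> {} }:
    pkgs.mkShell {
      packages = [ pkgs.nodejs_20 pkgs.postgresql_16 ];
    }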


Containerless dev is just better, because it's a ton of extra work to get things working as well inside containers. Any chance you can not use containers for local dev?


It is, when you know what you are doing and your peers know what they are doing. If you are a Python/JS shop, even if you get your dependencies right, one dev has the wrong Python version (because they didn't update), or the wrong npm version because they just joined the company and downloaded the latest, obscurely incompatible version (because one of your deep dependencies breaks the build process). True stories from the battlefield. Of course other environments might have it easier. Containers can solve some of that pain (with other pain!).


It's really not that hard to teach even barely competent devs to get all the right requirements for working on stuff. I don't understand how people have this many issues with it. If it's that big of a deal, just write a basic shell script to set up the environment for them. Or better yet, just do a better job hiring.


I gave up mostly.

Nowadays I use containers for the services, like redis, postgres, etc. But the app runs locally for dev. Works fine for standard web stuff.


Same for me. Locally I try to use asdf-vm for handling different versions of stuff needed.


Would recommend checking out mise. It's a newer clone of asdf. I switched due to bugs I was running into with using asdf to manage versions of python.

https://github.com/jdx/mise
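
Typical usage looks like this (versions are illustrative):

    mise use python@3.12 node@20   # pins versions in the project config
    mise install                   # installs whatever is missing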


Like others have said you don't need to have the application itself be a container locally. As long as it builds properly to an image it's fine. The only local container I use is one for a DB.


>I don't get my language's build cache unless I jump through extra hoops to mount it into the build image

This is what you should be doing, and you should not be building your artifact with docker build during development. If you can help it, you don't want to compile your application inside of a container at all. Build it outside and COPY it when you're ready to ship, or use a volume during development (docker run -v).

If you cannot rebuild outside of the container, you should be able to build your build environment as an image once, then exec into the running container to rebuild there, but you should NOT be rebuilding your docker images for each compile loop. It sounds like that's where you're encountering the pain.

If you are rebuilding your docker image every time you recompile your application, you're doing it wrong.
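
That loop might look like the following (image and file names are assumptions, and Rust is just the example):

    # build the build-environment image once
    docker build -t myapp-buildenv -f Dockerfile.dev .

    # keep one container alive with the source mounted
    docker run -d --name buildbox -v "$PWD":/src -w /src myapp-buildenv sleep infinity

    # each iteration: recompile inside it, no image rebuild
    docker exec -it buildbox cargo build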


Why not just use FROM and multistage builds in your dockerfile?


Look into nix-shell. For example all I need to do to get a shell with access to node/npm is `nix-shell -p nodejs_21`


It would be a smoother transition for most, I imagine, to use Nix via https://devenv.sh/, even if only for its excellent documentation.


My pain ended after doing the following:

- Install my own Git server using Gitea

- Install my own image registry instead of using Docker Hub

- Install Portainer

- Configure Gitea to use workers + actions

- Write the needed YAML to build the image, upload to local registry

- Configure hook on Portainer to recreate stack if image was updated

Of course there's a slight delay while the image builds, but I don't have to touch anything at all: just code, commit, and a couple of minutes later the image is up and running.
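
The build-and-push workflow can stay short; a hedged sketch in the GitHub-Actions-compatible syntax Gitea Actions uses (registry URL and image name are placeholders, and the registry login step is omitted):

    # .gitea/workflows/build.yml
    name: build-image
    on: [push]
    jobs:
      build:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          - run: docker build -t registry.local/myapp:${{ github.sha }} .
          - run: docker push registry.local/myapp:${{ github.sha }}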


I’m not even sure they are so great for shipping code to production.

Slow build times, slower execution times, annoying keeping them updated, especially with k8s.


What platform do you work on? We had this issue on Mac until mutagen-compose made the containers feel like they weren't there.


Skaffold does much of what you are looking for.

https://skaffold.dev/docs/

K8s manifest autoloading works, and IDE support is somewhat there. Not sure about build caches, should be possible I think.

Only problem is the Kustomize overlay syntax is a bit hard to grok. You can also use Helm or raw kubectl deploy commands.
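
A minimal skaffold.yaml, for the flavor of it (image name and manifest path assumed; `skaffold dev` then watches, rebuilds, and redeploys):

    apiVersion: skaffold/v2beta29
    kind: Config
    build:
      artifacts:
        - image: myapp
    deploy:
      kubectl:
        manifests:
          - k8s/*.yaml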


He asked for simplicity and you threw k8s at him :)


If you need to use k8s, skaffold makes it a lot better to develop locally. Also the tool doesn’t just help with k8s.


My ideal is a starter that offers a nice blend of microservices and configures them just enough to get them working in an easy-to-manage, organized way. Most importantly, they're all optional and easily removable.

I do this with npm scripts for "compose", "start", "stop", and "reset" for every service and tie it all together with dotenv for environment vars. Currently, I have dockerized Traefik (partially), Webpack (dev server only so far), Pocketbase, PostgreSQL, PostgREST, Swagger UI, PgTyped, and MongoDB under this and will soon also dockerize the Express-based RESTish API feature.

https://github.com/dietrich-stein/typescript-pgtyped-starter
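
The scripts end up as thin wrappers over compose, roughly like this (flags and names are my guesses, not copied from the repo):

    {
      "scripts": {
        "compose": "docker compose --env-file .env up -d",
        "start": "docker compose start",
        "stop": "docker compose stop",
        "reset": "docker compose down -v"
      }
    }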


Tilt is pretty good for that: it syncs files into containers automatically (no rebuild) and can rebuild the image if some other files change (configured by you).

https://tilt.dev/ (no affiliation, just a happy user)
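
A Tiltfile for that sync-without-rebuild behavior might look like (image name, paths, and port are assumptions):

    # rebuild the image only when its inputs change; sync source live
    docker_build('myapp', '.',
        live_update = [
            sync('./src', '/app/src'),
        ])
    k8s_yaml('k8s/deployment.yaml')
    k8s_resource('myapp', port_forwards=8000)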


I am far from a containers expert.

I have noticed however that systemd containers (nspawn) don't have layered images but seem to simply run against a root file system hierarchy that you put on the disk.

This seems to me much simpler than dealing with diffed layers or whatever other container solutions do.


Layers are a mechanism for de-duplicated storage, as well as incremental distribution, builds, scanning, and more. This is related to OCI being a full ecosystem of integrated solutions: layers provide value across several concerns in OCI. In contrast, systemd-nspawn does not attempt to address any of these aspects, and doesn't even have the same concept of a container image (just a disk image). systemd-nspawn is a single tool that does one thing well, which is not a bad thing, but it also means it requires many other pieces to provide the same functionality as a typical set of OCI-compatible tools.

I will add that layers are an imperfect solution, but they are, IMO, simple and practical, and have provided value in a way that is relatively unobtrusive. In other words, many people have gained value from layers without having to think much about them.


I use Docker (Compose) a lot for my daily dev (on Linux) to create and maintain web applications. Mostly Go backend, Svelte frontend, MySQL or SQLite db, Traefik or Caddy proxy, ...

I avoided a lot of your troubles by coding/running/debugging the main program (app server) outside of a container and keeping "only" the infrastructure parts inside (db, mail, ...).

It's only at release time that I embed the server part in a container.


1. With Docker you can create derived containers by a kind of inheritance. You can add your own packages to the container. For instance, I add Vim and whatnot. Use the customized image for your local development; your CI builds will use the stock one.

2. You can step into Docker containers and work inside, iterating on builds and such. If you have a scripted workflow that launches a Docker image to do a build, crack it open and develop a more interactive alternative.
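
The derived image is a tiny Dockerfile (base image name is a placeholder):

    # Dockerfile.local: the stock image plus creature comforts
    FROM myorg/myapp:latest
    RUN apt-get update && apt-get install -y vim less \
        && rm -rf /var/lib/apt/lists/*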


Have a look at https://www.bunnyshell.com/#cde

It lets you use your local IDE to edit the code while the actual container runs in the cloud. Users can define and create thin or full environments (any number of services) running in the cloud, so there's no load on your local machine. Full support for debugging.

disclosure: I work at bunnyshell.


You should check out Devbox (https://jetpack.io/devbox) if you want local dev without the container overhead.

It provides a nice interface for creating native, local dev environments using the Nix package manager, which is especially helpful if you or your friends struggle with the Nix language. It also lets you use your local tools with your dev environment.
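
Getting started is a handful of commands (package spec illustrative):

    devbox init            # writes devbox.json in the project
    devbox add nodejs@20   # pin a dependency
    devbox shell           # enter the isolated environment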


Nix dev shells are far better than containers for dev work.


My gripe with Docker / Podman is that a container is unlike a VM: no init, no services, no SSH.

Incus (and LXD) make containers work in pretty much the same way as a VM, just without the emulation overhead. You get prebuilt images with a rich standard toolkit, systemd and services, and SSH, and networking is configurable from within in familiar ways.
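
For example (image alias from the default images: remote):

    incus launch images:ubuntu/24.04 dev
    incus exec dev -- bash   # a full system: systemd, services, SSH-able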


I haven't personally found any advantage to using containers in a local dev environment. I probably never worked out how to do it right, but my experience is that using them just adds complexity, inconvenience, and additional points of potential failure without giving any noticeable benefit.


I installed an OS similar to prod and did my tasks on it; that helped reduce research time. Only when something doesn't work on QA do I run a container to test what the difference is. The answer is to embrace your prod OS, not to put it in a dev container out of fear.


Don't use them.

Use systemd in prod to contain your apps automatically on launch: chroot the app and mount only the paths it needs, with nearly everything read-only.
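
A hedged sketch of such a unit (paths are illustrative; systemd.exec(5) has the full sandboxing catalogue):

    # /etc/systemd/system/myapp.service
    [Service]
    # chroot; the ExecStart path is resolved inside the new root
    RootDirectory=/srv/myapp
    ExecStart=/bin/server
    # mount nearly the whole filesystem read-only for this service
    ProtectSystem=strict
    ReadWritePaths=/var/lib/myapp
    PrivateTmp=yes
    NoNewPrivileges=yes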


do the tools you use to ship containers to prod and other stages not work locally?

IME a monorepo is nice here. All app code and infra code live side by side, and while running the containers locally is not an ideal dev experience, it's at least accessible and enables consistency across environments.


The most correct answer is that you need to build a base image, as many others have already told you.

Another thing that I would question is why would you be running containers locally so much it becomes a problem?

As you said, containers are great for shipping code; use them for it. Locally, run your code in the current environment.

You should only run a container locally if you need to debug an error in production that you suspect is related to the environment.


If it weren't painful, then not running in a container locally would just be adding pointless differences between dev and production environments, which is always asking for trouble.


I think you should update your question with the specs of your computer setup, how fast your internet is, how much bandwidth you get, etc.


Tilt.dev. You’re welcome.


Use devcontainers?


distrobox?



