
I get that docker makes this simpler, but it's not really a feat only docker could accomplish; a makefile or bash script could have done the same thing.
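Something like this rough sketch is what I mean (the package names and distro are just placeholders, not from any real project):

    #!/usr/bin/env bash
    # setup.sh -- install everything the project needs to build and run.
    set -euo pipefail

    # System packages (assumes a Debian/Ubuntu host; adjust for your distro).
    sudo apt-get update
    sudo apt-get install -y build-essential python3 python3-venv

    # Keep Python dependencies project-local so they don't fight the system ones.
    python3 -m venv .venv
    . .venv/bin/activate
    pip install -r requirements.txt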


A bash script would have to automatically install all the dependencies. Over time, there is a growing chance that some of the required versions will conflict with whatever is already installed on the machine, and someone will have to go in and fix them... then fix them in a way that works on everybody's machines.

With docker, you can just ignore all that. As long as there is a single person capable of updating the dependencies on a single machine with docker, it'll work the same everywhere, always.
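For example, a minimal sketch (the base image and packages here are just placeholders): whoever last touched this file controls the versions, and every machine that builds or pulls the image gets exactly the same environment.

    # Dockerfile -- the dependencies live inside the image, isolated from
    # whatever is already installed on the host.
    FROM fedora:40
    RUN dnf install -y python3 python3-pip gcc make && dnf clean all
    WORKDIR /app
    COPY . /app
    RUN pip3 install -r requirements.txt
    CMD ["python3", "app.py"]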


A bash script would be nowhere near as practical. First of all, it would be much more complicated to deal with the various environments, and in practice docker run/docker stop is much easier to upgrade.
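Upgrading on any box is roughly this (the image and container names are just examples):

    docker pull registry.example.com/myapp:2.0    # fetch the new image
    docker stop myapp && docker rm myapp          # drop the old container
    docker run -d --name myapp -p 8080:8080 registry.example.com/myapp:2.0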


If everybody on your team uses the same Linux distribution (say, the latest Fedora), then all dependencies can be packaged into RPMs or installed from the system repository. RPM and Docker are very similar in general idea: both are an image of the result of installing a program (RPM) or a system (Docker).

Every member of the team will need to run the same version of Fedora, either directly, in Vagrant, or in (ta-da) a Docker container.

In my experience, for small Dockerfiles, RPM packages are an unnecessary burden, so usually I start with just Docker. But later, when the Dockerfile grows, it becomes much harder to track dependencies and installation instructions when they are interleaved, so it's much easier to go back to individually packaged programs and replace almost the whole content of the Dockerfile with a single "dnf install meta-package" command.
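So the end state looks roughly like this (the meta-package name is made up for the example):

    FROM fedora:40
    # mycompany-app-deps is an RPM meta-package whose spec file pulls in
    # every dependency; versions and install steps live in the RPMs,
    # not scattered across the Dockerfile.
    RUN dnf install -y mycompany-app-deps && dnf clean all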


You could also move bits with a magnet over the circuits, but it's more error-prone /s.



