The sales pitch I usually give people is that any ops person can read a Dockerfile, but most devs can't figure out or help with Vagrant or Chef scripts.
But it's a hell of a lot easier to get and keep repeatable builds and integration tests working if the devs and the build system are using Docker images.
You are doing it wrong then. People run containers in production using orchestration platforms like ECS, Kubernetes, Mesos, etc. Docker for Mac and Docker for Windows are not designed to serve containers in production environments.
They help you build and run containers locally, but when it comes time to deploy, you send the container image to one of those platforms.
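Roughly, the workflow looks like this (the registry and image names are placeholders):

    # build and tag the image locally, then push it to a registry
    docker build -t registry.example.com/myteam/webapp:1.0 .
    docker push registry.example.com/myteam/webapp:1.0
    # the orchestrator (Kubernetes, ECS, Mesos, ...) pulls that image by name
    # and runs it on the cluster; Docker for Mac/Windows never serves
    # production traffic itself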
Using Docker like that is like running a production Rails app with "rails s".
Which security problems are you referring to? Our containers run web applications, we aren't giving users shell access and asking them to try and break out.
Overly large layers: don't run bloated images with all your build tools in them. Run lightweight base images like Alpine with only your deployment artifact. You also shouldn't be writing to the filesystem; containers are designed to be stateless.
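For example, a minimal multi-stage Dockerfile sketch (needs Docker 17.05+; the Go service and paths are just stand-ins for whatever you build) keeps the toolchain out of the shipped image:

    # build stage: full toolchain, never shipped
    FROM golang:1.8 AS build
    WORKDIR /src
    COPY . .
    RUN CGO_ENABLED=0 go build -o /bin/app .

    # runtime stage: only the artifact on a small base
    FROM alpine:3.6
    COPY --from=build /bin/app /usr/local/bin/app
    USER nobody
    ENTRYPOINT ["/usr/local/bin/app"]

The final image carries the binary and Alpine's userland, nothing else.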
Credentials captured in layers. Environment variables overshared between peer containers, depending on the tooling (the sketch below shows the legacy-link case).
And the fact that nobody involved in Docker is old enough to remember that half of the exploits against CGI involved exposing environment variables, not modifying them.
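The env-var leak is easy to demonstrate with legacy links on the default bridge network (this is from memory of the old --link behavior; the values are made up):

    # start a database with its password in its environment
    docker run -d --name db -e MYSQL_ROOT_PASSWORD=hunter2 mysql:5.6
    # link another container to it and dump the environment it receives
    docker run --rm --link db:db alpine env | grep DB_ENV
    # DB_ENV_MYSQL_ROOT_PASSWORD=hunter2   <- the password came along for free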
Yes, you can capture runtime secrets in your layers, but it's pretty obvious when you're doing that, and people usually clue in quickly that it isn't going to work.
Build-time secrets are a whole other kettle of fish and a big unsolved problem that the Docker team doesn't seem to want to own. If you have a proxy or a module repository (e.g., Artifactory) that requires authentication, you're basically screwed.
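A minimal sketch of the failure mode, assuming a private npm registry (the same applies to pip, Maven, or Go proxies): the credential survives in the image even after you delete it.

    FROM node:6
    # the registry token has to be on disk for npm to use it
    COPY .npmrc /root/.npmrc
    RUN npm install
    # deleting it in a later layer only hides it from the final filesystem;
    # the COPY layer above still contains the token
    RUN rm /root/.npmrc
    # passing it as a build ARG instead is no better: the value shows up in
    #   docker history --no-trunc <image>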
If you only had to deal with production, there are a few obvious ways to fix this, like reordering your builds so more work happens before the image build (e.g., in your project's build scripts), but then your build-compile-deploy-test cycle becomes terrible.
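Roughly: resolve dependencies on the host or in CI, where your normal credential handling applies, and let the Dockerfile copy in only the finished artifacts (names below are placeholders).

    # outside docker build, with whatever auth your proxy/Artifactory needs
    npm install                   # or mvn package, pip wheel, etc.
    docker build -t myapp .

    # the Dockerfile never sees a credential, only resolved artifacts
    FROM node:6
    WORKDIR /app
    COPY . /app                   # node_modules already populated on the host
    CMD ["node", "server.js"]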
That cycle problem would also be pretty easy to fix if Docker weren't so opinionated about symbolic links and volumes. So at the end of the day you have security-minded folks closing tickets that would fix these problems one way, and a different set who won't make security concessions in the name of repeatability (which might be understandable if one of their own hadn't so famously asserted the opposite: http://nathanleclaire.com/blog/2014/09/29/the-dockerfile-is-... ).
I like Docker, but I now understand why the CoreOS guys split off and started building their own tools, like rkt. It's too bad their stuff is such an ergonomics disaster. Feature bingo isn't why Docker is popular. It's because it's stupid simple to start using it.
Regarding secrets in builds, I think a long-term goal would be to grow the number of ways of building Docker images (beyond just docker build), and to make image builds more composable and more flexible.
One example is the work we've experimented with in OpenShift to implement Dockerfile builds outside of the Docker daemon with https://github.com/openshift/imagebuilder. It uses a single container and Docker API calls to execute an entire Dockerfile, and it also implements a secret-mount function. Eventually, we'd like to support runC execution directly, or other systems like rkt or chroot.
I think many solutions like this are percolating out there, but it has taken time for people to have a direct enough need to invest.