Since many people are still not too familiar with Vagrant (it doesn't seem to have the cachet of Docker for local dev environments), here are a few example Vagrant configurations that I use to build different servers for local testing/debugging: https://github.com/geerlingguy/ansible-vagrant-examples
There are plenty of other great examples of Vagrant usage around the web, too, from Laravel's Homestead to (disclosure, I maintain it) Drupal VM.
It seems a lot of older applications / communities migrate towards Vagrant as some of the things they do are harder (or at least not as straightforward yet) to implement in containers.
> Since many people are still not too familiar with Vagrant (it doesn't seem to have the cachet of Docker for local dev environments)
That's only a recent trend. Up until Docker became the standard a couple years ago (or whenever Docker for Mac was made stable), Vagrant was the standard for dev environments. The primary reason Docker succeeded Vagrant for dev environments is speed. Docker can have my dev environment up from scratch in seconds, but with Vagrant it took minutes.
Also the early reliance on virtualbox was a performance issue for some build environments even once they were up. I have at least one C++ project that (for windows) takes 8x as long to build on vbox vs. a real machine.
Wow, it doesn't rely on VirtualBox anymore? I remember using it before for the development environment of an old company, and that being a small nuisance.
Nowadays, Docker is really miles ahead in terms of usage in development environments. Vagrant was pretty easy to set up and use back then, but now Docker is much, much easier.
Vagrant calls the backends "providers." It ships with 4 providers IIRC (VirtualBox, Hyper-V, VMware, Docker), and there are plugins for dozens of others (it's so large a list because there's one provider each for pretty much every cloud service out there, even ones I've never heard of).
I think that VirtualBox and VMware are the only two providers that work everywhere, unless docker runs on windows now.
The problem with Vagrant was always that the other backends besides VirtualBox were second class citizens because of image and config file incompatibility between backends. And VB is pretty bad, the driver side was always crashy on Linux and it conflicts with the native virtualization drivers on Mac, Windows and Linux.
So there were good reasons to avoid it before Docker too.
I used Vagrant with libvirt/qemu at my last job and it worked just fine though...
The only problem I had was updating base boxes, but that was self-inflicted, because it was easier to make a function to delete them from the libvirt cache than to maintain proper versioning.
/edit: I just realized from your other comments that you're probably constraining yourself to prebuilt base boxes. You really shouldn't. It's trivial to build them with Packer [1] and there are lots of config files on GitHub [2] to do just that. This makes it possible to really tweak them for full integration right after 'vagrant up'.
I once worked on a project which used Vagrant as its officially recommended local dev environment.
Every time I had to set it up from scratch, it took me -- and I was not a novice, and knew the stack well -- a day or more and many, many failures to get running. Thankfully, I think the project has since abandoned Vagrant.
The point of vagrant is that you type 'vagrant up' and you have a working environment.
All of my projects use vagrant to ensure compatibility. You can 'git clone' and 'vagrant up' and have a working environment as soon as the provisioning task completes.
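For anyone who hasn't seen one, the whole workflow hangs off a Vagrantfile checked into the repo. A minimal sketch (box name and packages are just placeholders, not from any particular project):

```ruby
# Vagrantfile -- committed to the repo alongside the code
Vagrant.configure("2") do |config|
  # Any base box from the public catalog; ubuntu/xenial64 is an example
  config.vm.box = "ubuntu/xenial64"

  # Runs on first `vagrant up`, so every clone gets the same environment
  config.vm.provision "shell", inline: <<-SHELL
    apt-get update -y
    apt-get install -y git build-essential   # whatever the project needs
  SHELL
end
```

With that in place, `git clone` followed by `vagrant up` really is the entire onboarding story.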
Vagrant encounter 1: it would always exit immediately after barfing some garbage that messed up the line discipline. It wouldn't even print help menus. Reinstalling, 32 vs 64 bit, slightly different binary versions etc didn't seem to affect this behavior.
Vagrant encounter 2: on a nearly virgin Windows box, "vagrant up" on a bog standard centos image stalled out for an entire work day. No stdout, no stderr, no logs, no exit status, it just sat there.
Vagrant is very bare-bones, so you need plugins, but those are ... picky.
Also, handling Linux / Windows / OSX with the same Vagrantfile results in interesting things. (Let's say you want to use NFS for Linux, so you put an if there; and if you want to set up bridging for a local interface, you have to guess the interface name - or do shell and cmd.exe wizardry.)
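To make the host-OS juggling concrete, here's a sketch of the kind of branching involved (the detection calls are real Vagrant utilities, but the values are mine and untested):

```ruby
# Vagrantfile fragment: pick a synced-folder strategy per host OS
Vagrant.configure("2") do |config|
  config.vm.box = "centos/7"

  if Vagrant::Util::Platform.windows?
    # No NFS server on a stock Windows host; fall back to the default share
    config.vm.synced_folder ".", "/vagrant"
  else
    # NFS synced folders need a private network to mount over
    config.vm.network "private_network", type: "dhcp"
    config.vm.synced_folder ".", "/vagrant", type: "nfs"
  end
end
```

And that's the easy part; guessing bridge interface names per platform still means shelling out.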
I don't think I've ever encountered a Vagrant setup that was flaky and wasn't just down to the devs not actually understanding how to write a Vagrantfile. In my experience, those who hate Vagrant are folks who either didn't know how to use it, or were burned by others who didn't. It's a real shame as well; Vagrant's a really good tool.
When I started learning Ansible, I kept running into your stuff on the Ansible Galaxy [0]. Such a peculiar name, it stuck with me. Thanks very much for your contributions!
Vagrant is probably the best way to go about learning automated configuration management with ansible, and especially Puppet. And I've never tried it myself, but I hear people setting up local OpenStacks with Vagrant, too. Not a bad way to get your feet wet.
We use Vagrant with docker containers inside running our web apps. Vagrant functions as a disposable virtual dev environment (on Mac) to do our docker development in linux and know everyone's machines are identical when developing for production. A new dev can just download our latest vagrant repo, vagrant up and it'll provision their development environment to be exactly as it should be.
For Go users wanting a quick way to get a basic Vim setup with Fatih's awesome vim-go plugin [1], you might want to check out https://github.com/samuell/devbox-golang (just updated to Go 1.9).
It is a little different from some other Vagrant boxes in that it uses Ansible for provisioning. This means you can reasonably easily re-use the Ansible roles (perhaps with some minor modifications) elsewhere too, like locally, or on some cloud image.
I am the maintainer for a dev env for our team. We are on Mac OS X but we develop for RHEL so the vagrant image is centos. Building an image takes a few minutes but we love it.
Amazing, I was just using your templates for Packer PoC stuff at work today. Your Ansible, Packer, and Vagrant stuff is top-notch and a great CliffsNotes for quick experimentation. Keep up the good work!
Just from the version number I thought there'd be a few major changes, but really it just seems like another point release that happens to increment the major version number. (Which is totally fine by me. I rely on Vagrant a lot and I'm glad the new version doesn't introduce breaking changes). Change log is here: https://github.com/mitchellh/vagrant/blob/v2.0.0/CHANGELOG.m...
It's a focus on stability, and we don't want you to be thinking about Vagrant when you upgrade either. We just want it to work. :)
However, I want to be clear it's not a marketing tactic in any way (re: hosh below, with an excellent comment!). It ends up being that implicitly, but we waited and developed Vagrant 1.x for 5 years prior to calling it a 2.0 because we had a lot of goals we wanted to achieve: multi-provider, fantastic Windows support, stable installers, etc. We feel we've now achieved that in a very stable way, so it's time to call it 2.0.
This breakpoint for us allows us to begin planning and executing on larger changes. Of course, we'll do all of this thoughtfully since Vagrant is definitely a tool you want to "just work" today and not think about breaking your envs. I admit this does happen from time to time though and I'm sorry about that, but we're getting better.
The actual nature of the product itself is what ultimately matters, of course, and I hear yours is outstanding, so good work.
It's a less-important issue and just a convention, not a law, but normally v1.36.12 tells me "focused on stability and just working--boring but rock-solid", while 2.0.0 tells me "first release of great, new features--amazing but don't put too much weight on it yet". I wouldn't ordinarily think of 2.0.0 as the most-stable version of 1.x with 2.0.1 being the less-stable introduction of the great, new features.
I would think the opposite can often be the case. Usually a major version is where you get to remove a load of unused features since you can make breaking changes.
Removing features could itself cause issues, and when you put that together with adding new features and backwards-incompatible changes, 2.0.0 releases are almost never as "rock solid" as a 1.0.0 with a bunch of minor/patch versions, as long as you're using semver.
Yup, seems more like a marketing thing to me. I couldn't find any compelling features. On the other hand, the stability improvements are great to have -- my team doesn't really want to be thinking about vagrant when trying to work on our projects.
There are still times I look to Vagrant instead of, and alongside, Docker.
The reason being Docker for Mac uses a VM anyway (an xhyve machine) - it does try to hide/abstract this away, but inevitably this leaks. The xhyve VM has the usual parameters: memory, disk space, CPUs, and not least a kernel. There are limited options to fiddle with these parameters, though you can log into it and poke around there. I thus find it easier to just set up Vagrant machines with Docker - then I have better control over those things.
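Concretely, the kind of Vagrantfile I mean looks something like this (the resource numbers are arbitrary examples):

```ruby
# Vagrantfile: a Linux VM as an explicit, tunable Docker host
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/xenial64"

  config.vm.provider "virtualbox" do |vb|
    vb.memory = 4096   # RAM under my control, not hidden behind xhyve defaults
    vb.cpus   = 2
  end

  # Vagrant's built-in docker provisioner installs the engine in the guest
  config.vm.provision "docker"
end
```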
If I were on a Linux distro though, I'd probably use Vagrant a lot less.
The community is built on Vagrant-VB configs and images, so you don't really get the main benefit of Vagrant if you want to use it with something else than VirtualBox.
I disagree: I don't think you really get the main benefit of Vagrant if you only use it with other people's boxes.
Also, since the rise of virtio (thank goodness), you can easily make boxes that work on VB/VMware/KVM, and probably others too, and indeed, many boxes work like that.
I use linux, and while I use docker-compose for running many services together, I rarely bother with it for other work. It can misbehave pretty badly and making changes can mean long build times, so I still keep a surprising amount of stuff on vagrant setups.
Yeah agreed. I just wasted 2 hours this week compacting my qcow2 docker image for my mac (by filling it with zeros). It was totally unintuitive that deleting images didn't free disk space.
In the change log, I’m happy to see “improved the resilience of some Virtualbox commands.” At Airbnb, we used to use Vagrant with Virtualbox and the Chef provisioner to create our dev environments, but we migrated totally away from Vagrant after struggling with Virtualbox bugs, and strangeness with Vagrant’s internal state and locking.
Vagrant is a fantastic tool because of its flexibility, but that flexibility comes at a cost: there are sometimes bugs and performance issues where the different Vagrant components don't quite mesh perfectly.
Vagrant really falls short of its promise of "the exact same dev environment for everyone" in my experience, especially because of VirtualBox issues such as, for example, relative symlinks breaking if done in a shared folder on a mismatched guest/host OS.
It's been such a source of frustration that there is no better shared folder alternative. VirtualBox is the only usable cross-platform backend, and vbox shared folders are the only way to have two-way syncing between guest and host. I don't understand why it's so poorly supported :/
Shared filesystems of any sort are too slow for most application development purposes, especially if you've got a process like NPM on one of the sides trying to read and write 2.5GB of JavaScript files.
We saw a 30% speedup in one of our apps by switching from NFS to two-way syncing between "native" filesystems using Unison.
We ran into symlink problems with node/npm, but these were completely resolved by keeping the node_modules folder in the VM, not symlinked out to the host.
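One common way to do this (a sketch, not necessarily the exact setup described above) is a one-way rsync share that excludes node_modules, so npm reads and writes against the VM's native filesystem:

```ruby
# Vagrantfile fragment: keep node_modules out of the shared folder entirely
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/xenial64"

  # rsync is one-way (host -> guest); excluded paths live only in the VM
  config.vm.synced_folder ".", "/vagrant",
    type: "rsync",
    rsync__exclude: ["node_modules/"]
end
```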
Vbox shared folders aren't the only way. You can use any tech that you can with "real" devices too, if both guest and host have support.
For example, when I started with Vagrant, after a few days of getting bored with the slow throughput of vbox shared folders, I added NFS sharing. But CIFS works as well.
We re-implemented Vagrant's Chef provisioning feature on top of OpenSSH with ControlMaster. This cut about 3 seconds of mystery time from our provisioning process.
Instead of targeting local Virtualbox VMs, we use AWS boxes created by the internal tool we use to manage our production fleet.
In practice it's much easier to just trust well-known developers by whitelisting their code-signing certificates.
You could still get owned, of course, but the benefit here is that you're excluding everything not explicitly whitelisted, including drive-by downloads, crap on portable devices or random programs downloaded off the internet that someone thinks will solve their problem of the day.
When people do not code-sign their software, every software update is painful. At work, where we run https://github.com/google/santa, it frequently happens that companies with code-signed software forget to code-sign their auto-updater, or random binaries that run during installation. Most of the time the application crashes/hangs during the update (because some pieces weren't allowed to run), only to remind you to update the software again when you restart the application.
Personally I've managed to avoid using it so far. But yes, you can whitelist individual binaries or even directories. The lack of code-signing doesn't prevent whitelisting, it just makes your life harder than necessary.
"Vagrant 1.0 was released in 2013 as a stable release. Vagrant 1.0 only supported VirtualBox as a provider, only supported a handful of Linux operating systems as guests, and supported a simple up/destroy workflow. Since Vagrant 1.0, we've added support for multi-providers such as VMware and Docker, guests such as Windows, macOS, and complex workflows including snapshots. These major changes are followed by hundreds of improvements and bug fixes."
I think the person you replied to was asking about major changes from the last 1.x release, to v2.0. Of which, there don't appear to be any major ones.
Vagrant abstracts virtual machines. If you want to simulate multiple machines on the same computer you need VMs. You can have multiple docker containers running in those VMs that communicate across VMs. You can then deploy to real hardware having tested the communication across machines.
You can also run multiple docker-machine instances which can run on say VirtualBox. This is a how we have developers learn about and test Docker Swarm on a single machine. I've really run out of use cases for Vagrant at this point.
Vagrant lets you provision full-fledged VMs in a codified manner. This code can be pulled down by your developers to deploy a development environment on their machines. Inside of this you can choose to run containers of your apps with proper tooling built around it.
This is the best definition I saw posted. The best sales pitch I've seen for Vagrant is that it lets you create a config ("Vagrantfile") from which you can build a virtual machine. Now you, or someone else, can take that vagrant config and recreate the same vm (on multiple platforms btw - virtualbox, kvm, etc) for testing and development.
If you've recently migrated your stack to a fully docker containerised setup, docker-compose can be a great replacement for an old vagrant provisioning setup, as the startup time will usually be a fraction of that needed by running vagrant up. It won't make sense for every project, but it's definitely a viable option for some.
The equivalent tool is docker-machine, not docker compose.
Though they de-emphasised that tool in favour of Docker for Mac and Docker for Windows which interact directly with the platform hypervisor to create a Linux VM.
You can get close, but as someone else said, they are meant to be processes. You can get more control with Vagrant for a lot of things and have actual VMs.
Vagrant is useful if you want to simulate a whole box, or a cluster of boxes - for example, you can spin up a cluster of 3 VMs to run Mesosphere DC/OS [0]. If you build and deploy just containers, and don't have any full VMs in your stack, then you might not need Vagrant.
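A multi-machine Vagrantfile for that kind of cluster is short; this sketch (node names and IPs invented) defines three VMs on a private network:

```ruby
# Vagrantfile: a three-node local cluster
Vagrant.configure("2") do |config|
  config.vm.box = "centos/7"

  (1..3).each do |i|
    config.vm.define "node#{i}" do |node|
      node.vm.hostname = "node#{i}"
      node.vm.network "private_network", ip: "192.168.50.#{10 + i}"
    end
  end
end
```

`vagrant up` brings up all three at once, and `vagrant ssh node2` drops you into any one of them.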
Vagrant also claims to provide a "good workflow for writing Dockerfiles"; it can provide a nicer user abstraction over `docker [many, many args]` for running your biz [1].
This is exactly where I began getting the most out of Vagrant, that is, simulating a distributed cluster locally. It gets me as close as possible to how it would work on the provider infrastructure, e.g. AWS. In particular, spinning up an automated Consul cluster and then using it get a group of RabbitMQ nodes to converge[0]. The same is certainly achievable through container orchestration, but that adds more layers of abstraction, especially on the networking side.
This is, in my experience, not true. Anecdotally, xhyve has performed much better for Docker; especially when choosing the appropriate method for volume mount consistency[1], which has solved most FS performance issues I have run into.
Completeness of stack. If you're under the impression that your runtime ends where the Docker container does, that will eventually bite you--it's very rare for my workstation to be running the same kernel version as my servers and I've been bitten multiple times in the past by kernel bugs due to differences in version, so even if I use Docker (which is rare) I run it through a known-good virtual machine that mirrors production as closely as possible.
(Vagrant also has a Docker provider, but I can't think of a good reason to use it.)
Docker is a fragmented mess network wise if you need to support developers on both windows and macOS. Also, docker requires hyper-v and admin privs on Windows. And you can't run VirtualBox with hyper-v active so you need to choose either of the two options. Granted, you could run Docker in virtualbox but it wouldn't be very clean.
I use Minikube with the VirtualBox driver, and it is the only way that I have been able to deal with my slightly unusual networking requirements and do local dev with Kubernetes.
VirtualBox lets you forward ports from within the VM to your localhost directly, bypassing iptables and any default routes.
Our network uses a "no-split-tunneling" VPN, so most of the Docker networking solutions are completely unusable for me.
Kubernetes fortunately provides easy ways to enumerate the services that you intended to expose (via ingress, or similar) so it's absolutely trivial to script forwarding every exposed service or ingress to the localhost IP. I still am editing /etc/hosts file if I ever need to use a host-based route, and I have some interesting issues with SSL certificates that sometimes did not have the server name that I expected on them, but for the most part this works great for me.
I am a Mac user and showed my coworker who is a Windows user, we tried to do the same thing on his machine and it was even easier because there is no notion of privileged ports below 1024. So, it works the same way but with one less workaround.
The whole Hyper-V thing with the default Docker for Windows stuff has put me off - is there a workaround to go back to using a VirtualBox VM again? I do a fair amount of development on Windows machines along with Mac/Linux so I need something that works consistently among all those platforms.
Vagrant is filling the void for some of those projects since it just works with no fuss on Mac/Windows/Linux without forcing me to use Hyper-V.
Absolutely, you can boot your preferred distribution for running Docker on[1] (using Vagrant even! setting the provider to VirtualBox), forward the port that Docker is running on (2376?) to your physical host/localhost[2], and set a few variables on the client machine to get your docker client talking to that ("remote") docker daemon[3]
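The forwarding in step [2] is a one-liner in the Vagrantfile. A sketch (ports are assumptions: 2375 is conventionally the plaintext daemon port, 2376 the TLS one):

```ruby
# Vagrantfile: a VirtualBox VM acting as a "remote" Docker daemon for the host
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/xenial64"
  config.vm.provision "docker"   # install the engine in the guest

  # Expose the daemon's TCP port on the host
  config.vm.network "forwarded_port", guest: 2375, host: 2375

  # Then on the host (step [3]):  export DOCKER_HOST=tcp://127.0.0.1:2375
end
```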
Or, as tmzt mentioned, minikube and minishift will also let you set --vm-driver=virtualbox on start. Those are nice even if you don't want to use Kubernetes (but there are plenty of options.)
You can still run Docker Machine (part of the old Toolbox) for VirtualBox support. minikube/minishift can also be used as docker VMs and download up to date iso images automatically.
Vagrant is also used as an easy way to spin up development environments e.g. for embedded systems via virtualbox. I am not aware of a simple way to create a GUI with docker. With vagrant it is just one line: v.gui = true
Can you elaborate on your first use case - for what kind of embedded development do you use virtualbox and how?
Regarding your second point - you can either attach a shell for testing (e.g. I often build a Dockerfile first interactively via /bin/bash inside the container) or use the hammer (a web-UI) and connect your localhost to the docker network interface.
Data exchange between host and container is also simply done via bind mounts - which might be more elaborate in production however.
I'm a big advocate of Kubernetes and I don't know the answer to this question, for sure. If you want to simulate a machine that has a kernel of its own, and you need to be able to make that kernel version different from the kernel version on the host, you definitely need Vagrant instead of Docker for that.
My team has no such requirements and IMHO uses Vagrant solely because of inertia. We've always used Vagrant, it's what most people have installed on their machine, there is a Vagrant box with some of the moderately difficult to configure things already done, so we all can use the same configuration, like the Oracle Client libraries and the nginx frontend with a self-signed localhost certificate (required so your local development can talk to our auth server).
There's absolutely no reason we couldn't do the same thing with Docker. We just haven't.
I would argue that if you aren't using Packer, or aren't writing your own customizations into the Vagrantfile, you aren't really using Vagrant, and it's a somewhat harmful black box for us. Those steps are baked into the box file, not done in the Vagrantfile as provisioning steps, and not able to be inspected inside an Ansible playbook; so the knowledge of how to do these things could easily be lost, and it would be a headache to reproduce. Packer is roughly what we need to make it better. For my team, Vagrant is just a thin wrapper over VirtualBox, so the team does not need to know that they are using VirtualBox.
The second reason you might want to use Vagrant instead of Docker is if some leadership in your org has declared that you still may not use Docker for anything. This is the case here; you may use Docker but not without a good reason and not without having your usage reviewed by a panel of experts on various subjects (it's the Design Review Board.)
We got our usage of Docker approved so that we can manage Jenkins via Helm. The kubernetes-plugin for Jenkins creates pods as build slaves, and when they complete their jobs they go away. You want your builds to run in a clean environment, you want your slaves to be disposable; pods are ephemeral, may group containers together, and they go away when they complete the job. That's just exactly what problem this tech was meant to solve.
DRB thought that was a great justification and it was approved. I am still the only place that I'm aware of across the entire institution where Docker is used in an approved way though.
They're often complementary, for example using Vagrant to spin up multiple VMs to run a Docker swarm mode cluster.
Vagrant is also great for learning about clustering technology. In minutes you can have dozens of VMs running on a single machine.
That said if you don't have specific OS requirements, then http://labs.play-with-docker.com/ works well for simulating a multitude of machines.
Both Docker and Vagrant rock for the same reason: image distribution. You don't have to know anything about building a VM or container image to be able to benefit from the tools.
Of course they both make it a breeze to manage the respective runtime environment too.
So in my mind Vagrant is to VMs what Docker is to Containers. Of course the use cases overlap, nonetheless they both are indispensable tools.
It's great for complex apps with multiple dependencies, but if you plan on never needing full VMs likely not worth it.
I'm used to liberally using crontab, iptables rules, multiple languages, and not deploying separate containers/VMs if I need something like Redis. For some of them I'd end up with 6-8 containers if I went that route.
I like Vagrant as a concept but on Azure at least, it's pretty bad at clearing up after "vagrant destroy", and people I know report similar with VirtualBox. To the point at which you might as well just bake an image and clone it, which is very easy.
Not to pull down the emotions here, but if the whole blog post says nothing more than "Hashicorp Vagrant 2.0", I doubt that there will be some meaningful content in that version. Was Vagrant bought by Hashicorp or something? Why not announce that?
I see, then I don't understand this blog post at all. Is my expectation weird that an announcement blog post for a major version +1 should contain some features? Or have I just missed that part in the article?
I actually use it quite a lot for personal projects, at least those that I intend on deploying to a server. It's nice to work in the exact environment the application will end up in, not to mention that certain tools only exist/work properly in a particular environment. To me, it only enhances the development process.
You need to provision one or more dev environments (for me, two: on my Linux desktop and on my MacBook Air). The Ansible configs are also used to provision production servers. If you end up with a failed hard drive it can save a lot of time rebuilding.
Wrong way around. Hashicorp decided to release Vagrant as an omnibus installer that packages an embedded Ruby, which (apparently? I haven't checked, I'm going off the earlier post) isn't Ruby 2.4.
They also killed off the gem-distributed version--which is still, to this day, a huge pain in the rear, to the point where I build my own Vagrant so it doesn't use its own weird out-of-the-way Ruby.
Since I am using docker-compose locally, I have little use for Vagrant nowadays. For staging and production the Docker containers run inside a Kubernetes environment which I don't want to replicate locally. Am I missing something?
It seemed like a pretty legitimate question: is there a compelling reason to give Vagrant 2 a look if Docker/Kubernetes is your current workflow? It seems to me you took it as more of an us-vs-them comment.
You probably wouldn't look at Vagrant if you are using Docker/Kubernetes. Vagrant (virtual machines) and Docker (containers) fill different niches. If you can accomplish your work with containers then you don't need to manage virtual machines.
We have had a number of people want to replicate locally for offline use cases (train/plane), reduced bandwidth use cases (home/plane/train), and cost reasons (laptop VMs are always cheaper than cloud). If you like Vagrant and Kubernetes checkout the Tectonic Sandbox[1] or if you don't want to use vagrant checkout minikube[2]. We (CoreOS) invested time into both projects for these reasons.
I think you're correct. There's no reason to use Vagrant. It's self-titled as a way to do dev environments. You don't want dev environments; you just want config management, like Ansible or Salt. I don't see why you don't want a dev environment, though. It's nice being able to code while on a train or a flight.