My experience of using NixOps as an Ansible user (wearewizards.io)
60 points by Keats on May 25, 2015 | hide | past | favorite | 20 comments



Rolling back with Ansible (Puppet/Chef/Salt) is best done by provisioning a new server. Yum/Apt do not support "rolling back" packages in any reasonably sane manner.
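For comparison, Nix keeps old generations around, so rollback is built in. A sketch of the standard commands (the generation number is illustrative):

```shell
# NixOS: switch the whole system back to the previous generation
nixos-rebuild switch --rollback

# Per-user environments: step back one generation,
# or list generations and jump to a specific one
nix-env --rollback
nix-env --list-generations
nix-env --switch-generation 42
```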

I would love to get to the point where I could do immutable deploys in real life. We're getting closer by using Docker, but we're hitting a number of hard problems, one of which is simply keeping all of our images up to date with recent packages.

That said, convincing a company to use Nix (which is still relatively immature, in the grand scheme of things) is going to be an uphill battle. When you can stand up a fresh VM using Ansible in a few minutes, the value of being able to roll a server back easily is hard to justify.

It's also hard to justify when you have to wait for a third party to decide to update their core packages. It took Nix two days to build new packages for Heartbleed; Ubuntu and CentOS were updated that day. How much longer could you be waiting on a fix that wasn't so broadly publicized?


With Nix it seems like the technology is there. It works. It has a superior model compared with other systems. However, it lacks a corporate sponsor or large community to make it "professional-grade". Canonical and RedHat make their money from people depending on their packages being continuously up to date. Debian has an enormous community ensuring that stable is stable and secure for the duration of its life. Nix doesn't have either, so it is up to the users it does have to be reactive.

That said, I think the benefits of Nix are real. The existing configuration management systems do not have the ability to fully understand the system. Nix does, because everything about the system is contained in the configuration file. As a consequence, you can upgrade and downgrade servers without fear of breaking everything. You can check exactly what changes between different versions. There is much more control and more assurance that the nodes are in the proper state.
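For example, everything about a NixOS machine lives in one declaration. A minimal, hypothetical configuration.nix might read:

```nix
{ config, pkgs, ... }:
{
  # The full system -- packages, services, firewall -- in one file
  environment.systemPackages = [ pkgs.git pkgs.vim ];
  services.nginx.enable = true;
  networking.firewall.allowedTCPPorts = [ 80 ];
}
```

Because the whole system is derived from this file, diffing two versions of it tells you exactly what will change on the machine.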

One part of your comment which I do not understand is:

> When you can stand up a fresh VM using Ansible in a few minutes, the value of being able to roll a server back easily is hard to justify.

Isn't this also true for Nix? You just launch the image and push your desired configuration. If you mean the time to write a new Ansible playbook/role, then in my experience, although I am fairly productive with Ansible, I would not put it at "a few minutes" from nothing to a new service. Maybe you are talking about the learning curve? Nix definitely seems steeper, so that's a clear win for Ansible on that front.


> With Nix it seems like the technology is there. It works. It has a superior model compared with other systems. However, it lacks a corporate sponsor or large community to make it "professional-grade". Canonical and RedHat make their money from people depending on their packages being continuously up to date.

Check out BOSH: http://bosh.cloudfoundry.org/. It has a somewhat similar philosophy with respect to immutable instances and systems at large, and it has strong corporate backing.


Both Canonical and RedHat seem to be doing their own things in this space, Snappy and Atomic.


I haven't looked at Atomic yet, but the design of Nix is so much better than Snappy.


Yes, it's a pity they had to reinvent this.


I've been using Nix at a startup for about a year now. For us, the value is easy and exact replication of an application's execution environment. Setting up a development environment for a new hire is just a matter of cloning the repository and running nix-build. (Though for some apps, there's also a third step: run a script to set up the database.) Once an app works in the dev environment, it just works in staging or production.
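For anyone curious what that setup looks like, a hypothetical shell.nix along these lines pins the whole toolchain (the package names are illustrative):

```nix
# shell.nix -- sketch of a per-project development environment
with import <nixpkgs> {};

stdenv.mkDerivation {
  name = "myapp-dev-env";
  buildInputs = [ nodejs postgresql ];
}
```

A new hire runs nix-shell in the checkout and gets the same nodejs and postgresql as everyone else, built from the same Nix expressions.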

That's not to say that we never have configuration issues, but mostly they come from weirdness in third-party packages. For example, recent versions of `node-pg` (PostgreSQL bindings for Node) have an implicit dependency: if you want to use the native library instead of the pure-JavaScript implementation, you must also install `pg-native` alongside `pg`. That's only documented in the README, though; there's no peerDependency declared in pg's package.json file, so npm2nix can't generate the appropriate Nix expressions.
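One workaround is to make the implicit dependency explicit in your own package.json, so npm2nix has something to work from:

```shell
# Declare both packages explicitly; npm2nix can then generate
# Nix expressions for pg and pg-native alike
npm install --save pg pg-native
```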

Using nix definitely gives you the sense of swimming upstream, because the rest of the world doesn't understand this stuff and doesn't care. But for us, it's been worth it.


Glad to hear that you've managed to get Nix adopted at your workplace. Gives me hope for the future.


> Nix (which is still relatively immature, in the grand scheme of things)

Nix predates all the other technologies you listed, with the possible exception of Chef, for which I can't pinpoint a concrete date or even year of origin.


Why not simply adjust the package versions in Chef and let yum manage the dependencies? A staging environment and a proper continuous deployment process tend to eliminate these problems.
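For what it's worth, that pinning looks roughly like this in Chef (the package name and version string are illustrative):

```ruby
# Pin an exact version and let yum resolve the dependencies
package 'openssl' do
  version '1.0.1e-16.el6_5.7'
  action :install
end
```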


This was a good read. I'm currently designing the deployment tool for GNU Guix (a package manager and distro based on Nix), so I find stories like this very valuable. The author didn't seem crazy about the NixOps state file, which is something I'm trying to think hard about for Guix. They claim that the state file isn't easily shareable across machines. Couldn't they just version-control it and clone the repo on every workstation that does deploys, or are there complications?


There is Upcast[1], a different tool by Zalora that does this: it simply adds a file you can check into the repository for developers to use. It has some beta-ish limitations, but the idea is basically what you described.

The remaining build step, then, is copying the new closure of the environment to the machine on deploy, which can take some time depending on your network. Instead, I think what you'd probably do is use Hydra (a CI server) to build the packages/closure of your network on every commit to your repository, and then add that server to your deployment machines' binary caches. That way, by the time CI allows a deploy, the needed assets are already on a (faster) binary cache within your network.
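Concretely, that wiring is a one-line setting on the deploying machine (the Hydra URL here is hypothetical; `binary-caches` is the option name in Nix 1.x):

```ini
# /etc/nix/nix.conf
binary-caches = https://hydra.example.com/ https://cache.nixos.org/
```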

[1] https://github.com/zalora/upcast


It's an SQLite database, so you wouldn't be able to merge changes made in different clones.


Ah, I see. Thanks. My initial prototype will use a flat text file (a serialized s-exp) instead.
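Something like the following would diff and merge cleanly under version control (a hypothetical sketch, not an actual Guix format):

```scheme
;; deploy-state.scm -- hypothetical serialized deployment state
((machine (name "web1")
          (host "198.51.100.10")
          (system "x86_64-linux"))
 (machine (name "db1")
          (host "198.51.100.11")
          (system "x86_64-linux")))
```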


Another issue is that the state file itself encodes full paths to the Nix config files. The state file should not point to the config files used, IMO.


Totally agreed. The user should pass the config file to the program each time.


One of the things I don't understand about newer automation tools like Ansible is the lack of any ability to go back in time.

I used Radmind, and for all its old-school complexity and faults, you could roll back production changes really easily. That was hugely powerful.


How do you "go back in time" after upgrading a package in a sensible manner?


This is what makes Nix special. The "functional package manager" paradigm means you don't overwrite the old version when installing the new version.
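A toy sketch of the idea: every package version lives at its own immutable store path, keyed by a hash of its inputs, so installing a new version never clobbers the old one. (This is only an illustration; it is not Nix's real hashing scheme, and the paths are hypothetical.)

```python
import hashlib

# The "store": paths never get overwritten, only added to.
store = {}

def store_path(name, version):
    """Derive an immutable path keyed by the package's inputs
    (toy scheme, loosely mimicking /nix/store layout)."""
    digest = hashlib.sha256(f"{name}-{version}".encode()).hexdigest()[:8]
    return f"/nix/store/{digest}-{name}-{version}"

def install(name, version):
    path = store_path(name, version)
    store[path] = (name, version)  # never overwrites another version
    return path

old = install("openssl", "1.0.1f")  # the vulnerable version
new = install("openssl", "1.0.1g")  # the fixed version
# Both paths coexist in the store, so "rolling back" is just
# pointing the active profile back at the old path.
```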


To clarify, the question is about Radmind, not Nix.



