It seems like a lot of people are interested in fixing this, and would be keen to see a solution. I believe StratifiedJS is precisely that solution (for JS at least), and it has existed in working form for years: http://stratifiedjs.org/ (it's not just an experiment - it's remarkably stable).
StratifiedJS completely eliminates the sync/async distinction at the syntax level, with the compiler/runtime managing continuations automatically. A `map` function in SJS works just as you'd want, regardless of whether the mapping function is synchronous or not.
In addition, it provides _structured_ forms of concurrency at the language level, as well as cancellation (which most async APIs don't support).
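For a taste, here's a rough SJS sketch (the module names and URLs are illustrative, from memory - check the docs for the exact APIs):

```js
// StratifiedJS sketch - module names and URLs here are illustrative.
var http = require('sjs:http');
var seq  = require('sjs:sequence');

// `map` works the same whether the function suspends or not; each
// http.get *looks* blocking but suspends without blocking the VM:
var pages = ['http://a.example/', 'http://b.example/']
  .. seq.map(url -> http.get(url));

// Structured concurrency: both branches run concurrently, execution
// continues once both have finished, and if one throws the other is
// automatically cancelled (retracted):
waitfor {
  var a = http.get('http://a.example/');
} and {
  var b = http.get('http://b.example/');
}
```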
Disclosure: I work on StratifiedJS, and just wish more people knew about it.
It compiles down to JavaScript that you can then run on Node or the browser.
I once wrote a big long rant about the mess that JS and Node have made trying to cope with async code and got tons of comments proposing X or Y library that would "fix" the issue. Not a single person mentioned StratifiedJS. I wonder if there was some history to it that prevented it from getting momentum.[0]
1 - SJS effectively 'solves' the concurrency problem, but it is not a problem at the top of most people's minds when they write an application. To a first approximation, the concurrency problem in JS already looks "solved" (promises, generators, etc.), and it is only when you get down to the details that you see SJS is actually a substantially more complete solution to the problem.
2 - Many people see it as a 'cute' solution that doesn't scale to big applications. To counter that point we've developed a complete SJS client/server framework - https://conductance.io - and are writing big, complicated apps on it (such as http://magic-angle.com/ ). It's still rough around the edges, but we're pretty confident that the upcoming release (scheduled for the end of March) will show just how powerful the SJS paradigm is. There is a presentation on it here: http://www.infoq.com/presentations/real-time-app-stratified-...
Do you wish nix's syntax were more like jinja? Or do you actually want to use jinja with nix somehow? I don't really know what the latter would mean, since nix is a programming language and jinja is a (string-based) template system.
Discarding the old server and replacing it with a new one is a very brute force way of dealing with the problem. That's not to say it's bad (it will obviously work exactly as advertised), but I don't see it working well for me. In particular, I want something with a quick turnaround.
I very frequently deploy to a virtualbox VM during development. With NixOS this often takes <10 seconds, and that still feels slow. I cannot imagine that you can do immutable deployment anywhere near as quickly or conveniently (but I'd be excited if you can tell me I'm wrong).
That's a fair question. I'll let you know if I find out ;)
My hunch is that it could scale well, but would require some integration work in order to use it nicely with existing orchestration tools. I don't know exactly what that would look like, or if it would be worth the effort of going off the beaten track.
OP here. I've heard of terraform, although I've not investigated it much further than that.
It sounds like it's mostly a provisioning tool, and doesn't really help with configuration management once your machines exist. From the "terraform vs chef / puppet" page:
> Terraform enables any configuration management tool to be used to setup a resource once it has been created. Terraform focuses on the higher-level abstraction of the datacenter and associated services [...]
So it sounds like a very nice provisioning tool, but it doesn't really compete with NixOS itself. Perhaps you could even use it to provision NixOS machines?
You probably want to check out HashiCorp's full stack they recently released dubbed 'Atlas', in which Terraform is one small component: https://atlas.hashicorp.com/
I don't want a tool that does both provisioning and configuration management as I feel the latter should be a real-time concern rather than a deploy-time concern. Using Consul (https://consul.io) and Consul Template (https://github.com/hashicorp/consul-template), we're able to keep configuration centralized, secure and have it deployed automatically every time it changes. And it removes the distinction between configuration changes that are triggered by some event (machine failure, network partition, monitoring, auto-scaling, etc), changes that are triggered by a developer commit and changes that operations wants to make (maintenance, DDoS response, etc). Terraform provisions all of that and then configuration management happens on an ongoing basis.
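For concreteness, a minimal consul-template stanza (the paths and reload command are hypothetical) that re-renders a config file and reloads the service whenever the underlying Consul data changes:

```hcl
# consul-template configuration sketch - paths/commands are hypothetical.
# Whenever a watched key or service changes in Consul, the template is
# re-rendered to the destination and the command runs.
template {
  source      = "/etc/consul-templates/nginx.conf.ctmpl"
  destination = "/etc/nginx/nginx.conf"
  command     = "service nginx reload"
}
```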
I'm really not trying to sound like a shill for Hashicorp, but we use a bunch of their tools and find them to be, overall, very worthwhile and focused on accomplishing a single logical task which makes them easily composable with tools from other vendors. I also don't want it to sound like I'm criticizing Nix or NixOS...they sound like excellent tools. My only point was that there are other ways to solve the problems expressed in the posting and that each solution has tradeoffs that DevOps needs to consider when designing infrastructure. Your blog struck me as being a strawman criticism of somewhat dated tools without consideration for newer options, especially since your discussion of Docker was so narrowly focused on the actual Docker tool without any consideration given for Fleet, Swarm, ECS or any of the host of orchestration options in the Docker ecosystem.
If you'd written it more from a position of "here's how NixOS has made my life easier," you'd probably find that people would be more receptive to it. But, instead, it had a "here's why NixOS is better than the alternatives" feel to it which is going to rub people the wrong way when it's pretty clear that you're not aware of all the alternatives. NixOS is one good option, but it's by no means the only good option.
> Perhaps you could even use it [Terraform] to provision NixOS machines?
You definitely can, so long as your servers are virtualized; Terraform is significantly less useful in a bare-metal world. However, Terraform is really about provisioning specific machines. For example, you might write Terraform to provision 1 SMTP server, 3 web front-ends behind a load balancer and 2 database hosts. But it's pretty crude at provision-time tasks - it basically lets you run shell commands. The bulk of your provisioning would happen in a tool like Packer, Aminator or another tool that creates deployable VM images. That's where you'd start with a base NixOS image and then declare what's installed on an SMTP server, a web front-end and a database server. Terraform would just reference those images and size the machines.
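A rough sketch of what that split looks like in Terraform (the AMI IDs and instance sizes are hypothetical placeholders; the images themselves would come out of a tool like Packer):

```hcl
# Terraform sketch - AMI IDs are placeholders. Counts and sizes live
# here; what's *installed* on each image is baked in beforehand.
resource "aws_instance" "web" {
  count         = 3
  ami           = "ami-xxxxxxxx"  # NixOS web image built with Packer
  instance_type = "t2.medium"
}

resource "aws_instance" "db" {
  count         = 2
  ami           = "ami-yyyyyyyy"  # NixOS database image
  instance_type = "m3.large"
}
```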
Fair points. I started the post by stating that I (personally) wanted to use NixOS in the future, but admittedly didn't maintain that tone throughout the piece.
I definitely have humble requirements in terms of deployment size, so (for me) any orchestration tool is likely to be way more effort than it's worth. I compared NixOS (not a deployment tool) to those other (small-scale deployment and/or configuration management tools) because that's what I and plenty of developers I know have used, and I think the comparison helps illustrate the issues that NixOS can solve. Hopefully those who _do_ have experience and need for larger orchestration software can tell from reading whether the problems NixOS solves are relevant to them.
I am a contributor, and I use it all the time. I publish a lot of my own stuff (mostly small utilities / libraries) at http://gfxmonk.net/dist/0install/index/, as well as a bunch of third-party software. Making distro-specific packages for all of these would be a stupid amount of effort - with a single ZeroInstall feed I can run cross-platform code without having to package it N different times.
It is fairly easy to package / distribute portable code (e.g. Python, Ruby, Perl, JS, etc.). Stuff that requires compilation tends to be trickier, unfortunately (both to use and to distribute). Compiled software that is not relocatable (i.e. hard-codes paths) is sadly not possible to package in ZeroInstall.
It can use system packages (e.g. apt, yum, etc.) when available. So you can make a simple feed for your code that just depends on system versions of your dependencies, rather than having to "package the world". Of course, this only works on Linux (and maybe OS X with Fink / ports, but those tend to be less reliable than Linux packages).
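For a rough idea of what such a feed looks like (the names, URIs and digest below are hypothetical placeholders - see the 0install docs for the real schema):

```xml
<?xml version="1.0"?>
<!-- ZeroInstall feed sketch: names, URIs and digest are placeholders. -->
<interface xmlns="http://zero-install.sourceforge.net/2004/injector/interface">
  <name>my-tool</name>
  <summary>example feed depending on a system-provided Python</summary>
  <group>
    <!-- satisfied by the system package manager where possible -->
    <requires interface="http://example.com/feeds/python"/>
    <implementation id="sha1new=..." version="0.1">
      <command name="run" path="my-tool.py"/>
    </implementation>
  </group>
</interface>
```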
The big downside is of course that users don't generally know about ZeroInstall, and aren't likely to care why you think it's great. I generally recommend using ZeroInstall whenever I can in my own READMEs, but I have had little feedback on whether this has actually convinced anyone.
It has some definite upsides though, which is why I don't expect to stop using it myself regardless of general uptake:
- end users who aren't comfortable with terminals can use it just fine, yet it's as easy to publish as something like `pip` or `npm` packages.
- Since ZeroInstall feeds don't install anything globally, using ZeroInstall dependencies during development means you never have to bother with tools like rvm, virtualenv, etc. again.
- The savings I personally get from having my own software & tools immediately available anywhere I go are already worth the effort of packaging it for ZeroInstall. E.g. I have a "tim's custom vim" feed, with dependencies on all my vim plugins and configuration, and running it doesn't touch the host system's .vimrc or anything. That's pretty damn cool, even though it's only useful to me.
- I really like the notion that if you can _run_ some code, you can _modify_ that code. I despise the jump in most software between the "install & run" steps, and the "oh, you have to do these 10 manual and rather invasive steps to set up a development environment for this code". 0compile fixes this, for feeds that make use of it.
I've contributed to both the Python & OCaml versions (and I made the logo :)). The OCaml version hasn't existed for all that long and I've only sent a few patches - I believe the porting itself was entirely done by Thomas, who's done great work.
These days, if I'm writing something in bash (or batch) it's often because there is nothing better available - like kicking off an installer, or some other wrapper / bootstrap script.
For those kinds of simple tasks, the overhead of fully understanding the nuances of bash _and_ batch is way more trouble than the inconvenience of having fewer features available. This is not for bat/sh lovers, it's for those who have to use bat/sh even if they'd rather not.
Having said that, I wouldn't use batsh myself until it does something sane with errors (at least the equivalent of `set -e` in bash).
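To illustrate what I mean by that (a throwaway bash one-liner):

```shell
# Without `set -e`, bash carries on past a failed command; with it,
# the script aborts at the first non-zero exit status.
bash -c 'set -e; false; echo "never reached"' || echo "aborted"
```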
The front page is very blank on a large monitor - I got the impression it was waiting for a big chunk of content to load, until I returned some time later to see it still mostly white.
Mostly out of frustration with PyP not being lazy (on large inputs it reads the entire file up-front, or at least it used to).
But it was quite interesting to implement all the standard python idioms (like slicing) in a lazy way, and it's not that complicated a tool. I still use it a lot whenever I have a nontrivial pipeline to write.
Yea, fortunately pythonpy supports lazy iteration over sys.stdin when you really need it. Just like in python, the syntax won't be as nice as using a list. But it works:
However, the number of times you need this is surprisingly small. Most lazy operations don't require each row to be aware of the surrounding row context, and using the much simpler:
py -x 'new_row_from_old_row(x)'
will get the job done in a lazy fashion. Usually, when you need rows to be context aware, as in:
py -l 'sorted(l)'
or
py -l 'set(l)'
it's just not possible to accomplish your task without reading in all of stdin.
Cool :), glad it's supported, at least for the simple case of line-wise transforms.
Some things can't be done without reading everything. But there are still a number of operations on "all of stdin" that can safely be done lazily. I'm particularly fond of "divide stdin into chunks of lines separated by <predicate>" [0]. Which does need context, but only enough to determine where the current chunk ends (typically a few lines).
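A minimal Python sketch of that chunking operation (my own illustration, not piep's actual implementation - the names are mine):

```python
def chunks(lines, is_separator):
    """Lazily yield lists of consecutive lines, splitting wherever
    is_separator(line) is true.

    Only the current chunk is held in memory, so this works on
    arbitrarily large input streams.
    """
    chunk = []
    for line in lines:
        if is_separator(line):
            if chunk:
                yield chunk
            chunk = []
        else:
            chunk.append(line)
    if chunk:
        yield chunk

# e.g. split an input stream on blank lines:
#   chunks(sys.stdin, lambda line: line.strip() == "")
```

Each chunk is produced as soon as its closing separator is seen, so downstream stages of the pipeline can start consuming output immediately.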
`py` seems to be aimed at a single expression per invocation (nice and simple), while `piep` recreates pipelines internally (more complex but also means pipelines can produce arbitrary objects rather than single-line strings). So I'm not really sure how you'd do the above in `py` anyway.