Let me make sure I understand this. I'm going to pay you to help me develop a commercial software service that allows me to run scripts written in my favorite open source programming language, connecting over my open source SSH client, to talk to my open source web server, which makes connections to my open source database, all of which runs on my open source operating system?
I'll stick to shmux for these tasks, thanks. It doesn't require me to hand you my SSH keys, runs on the command line, already supports parallel SSH execution, and is free both as in speech and as in beer.
So, tell me - what would this tool offer over, say, RunDeck or mcollective? Why only use SSH keys? (What if I'm using LDAP across all of my machines?)
I'm not trying to knock your product - I really do hope you get Kickstarter funding! But I just wanted to say that, to me, as a systems engineer, I see only a pretty interface, not much added value over the other two products mentioned above.
I also feel that comparing Commando.io to Puppet, Chef, or even Capistrano is disingenuous - Puppet and Chef are configuration management systems meant to keep your systems 'in policy', and Capistrano is meant for repeated tasks over SSH (mainly for deploying software, though it can be used for other things too).
(Creator of Commando.io here) - Thanks for the feedback. Commando.io is very young, and while it currently only supports SSH, LDAP could easily be added. The alpha build is extremely minimal and serves as a proof of concept. However, future features include parallel execution, interactive sessions, scp transfers (maybe even Murder transfers [BitTorrent]), and provisioning.
You have to think back: when GitHub first launched, they probably had a minimal product. It takes a bit of iteration to add the "eureka" features.
I applaud what you're trying to do, but I think you hit the nail on the head in your last sentence. You need more iteration in order to find something that sets you apart. I wouldn't contribute to this Kickstarter because I don't see anything here that really differentiates you from your competition in a way that would make people want to pay for your service.
You can use LDAP logins and passwords with SSH by using pam_ldap - but since you only allow SSH keys, it'll be a non-starter for many orgs to use your product.
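For reference, wiring LDAP passwords into sshd via PAM looks roughly like this - a sketch only, since the exact module stack, options, and file paths vary by distro:

```
# /etc/pam.d/sshd -- illustrative fragment; exact stack varies by distro
auth    sufficient   pam_ldap.so
auth    required     pam_unix.so try_first_pass

# /etc/ssh/sshd_config must also allow it:
#   UsePAM yes
#   PasswordAuthentication yes
```

With that in place, SSH password logins are checked against LDAP first, falling back to local accounts.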
I can't help but think that a tool like this is aimed at a crowd of command-line jockeys. For that crowd, there is no better interface than the command line.
There are cases where I do prefer web interfaces to the command line. The AWS web console is an example, but for me this is a case of balancing control against convenience: installing the AWS libraries just to use the command line is more of a PITA than the command line is worth.
With Git, I prefer the power of the command line over a graphical client. However, in this case there is little or no convenience in the difference because I have to install Git either way.
The only use I would get out of this would be to give my non-technical managers, clients, co-developers, etc. a way to do deployments when I'm not available. One scenario would be contractors setting up their own dev servers. So I can see a situation where I could use it, but it's a bit of an edge case.
I work with clusters of hundreds of machines every day. When I watched the video I cringed, a lot. I can't imagine using a web app for this task. I thought: there is no way in hell I'm ever going to enter the hostnames of hundreds of hosts into a fucking web form. Lay off the bogus terminology too. I write scripts and programs, not 'recipes'. A script with a version number is still a damn script.
I watched most of the video, and I had a visceral negative reaction to the product and the presentation.
> The real power comes from creating more complicated recipes.
This is my sticking point: if the power comes from the recipes, how does having a web interface help any of that? I think this is a great piece of work, but I just can't see what the advantages are over any of its competitors given that all the actual work is still being done in the same way (e.g. shell scripts).
I would add that you bring up GitHub a number of times as a comparable service, but a key difference is that a number of core git features are actually implemented by GitHub (e.g. fork, log, pull). And although this all looks great, that isn't (IMO) why GitHub is successful; it's successful because it makes a common operation - git repository management - easier. I think you would need to make common dev-op "recipes" easy to generate before the GitHub comparison holds.
This would be a better project if you sold a license to companies rather than a SaaS service. I don't want to put SSH keys for my network on another machine I have no control over. If anyone ever got access to your service, they could have a field day.
I would give it a run through if you offered it as software though.
Yes, like I said, I would give it a try - if you did what GitHub does and offered a VM image, or just a tarball with instructions.
OK, one feature you should add that will make this stand out even more:
Two phase commit.
An admin enters commands, another admin has to enter the same commands on another console, and only then are they executed - or they are passed to a third admin to approve before actioning.
No more rm -f as root or other slip-ups. No more silly mistakes. Added accountability, and with that, added security.
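A minimal sketch of that two-person rule, in Python - all names here are mine and purely illustrative: a command queued by one admin only becomes executable after a *different* admin approves it.

```python
# Sketch of a "two-person rule" for command execution: a command queued
# by one admin runs only after a different admin approves it.

class ApprovalQueue:
    def __init__(self):
        self.pending = {}    # id -> (command, submitted_by)
        self.next_id = 0

    def submit(self, command, admin):
        self.next_id += 1
        self.pending[self.next_id] = (command, admin)
        return self.next_id

    def approve(self, cmd_id, admin):
        command, submitted_by = self.pending[cmd_id]
        if admin == submitted_by:
            raise PermissionError("approver must differ from submitter")
        del self.pending[cmd_id]
        return command   # the caller may now actually execute it

queue = ApprovalQueue()
cmd_id = queue.submit("rm -rf /tmp/build", admin="alice")
approved = queue.approve(cmd_id, admin="bob")   # alice approving would raise
```

The point is that the dangerous action is inert data until a second set of eyes signs off, which also yields an audit trail for free.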
You would also get 10x the backing from a lot of institutions for that feature alone. Even if you don't have it now, factor it in down the line: the quality of admins has gone downhill over the years, and with that, more mistakes get made. This will reduce both the mistakes and their impact, protecting jobs as well as machines from human shortcomings.
The logging aspect you have outlined is good. Make it robust by design, cater for remote logging servers, and with all that you could go far.
Neat idea. I'd definitely donate if the whole thing was going to be open sourced so I could run it on my own infrastructure.
Currently we use csshX for this sort of thing, and it does the job. It's an open source tool that opens a bunch of ssh sessions and sends each character you type to each session. You can launch it with command line args that specify the hosts you'd like to connect to and optionally the commands to run, so it's pretty easy to set up scripts that mimic the recipes in this video. If you're working with fewer than 20 servers at a time (about the limit of what you can read on screen at once), it's a great tool.
For simple multiplexing, if you need to work with more than 20 servers at a time, take a look at omnitty. It has a different layout that makes it much easier to work with larger groups in interactive sessions.
csshX works great for small tasks on a small number of servers, but it lacks a few key features. Commando.io stores the response from each server, so you can go back and see exactly what was run and what came back. Additionally, csshX has no concept of stored recipes that are fully versioned.
What if we had an "enterprise" version like GitHub which you could run on your infrastructure? Would you back then?
> What if we had an "enterprise" version like GitHub which you could run on your infrastructure?
You might want to look into how c9 ( http://c9.io/ ) does it. They have a similar stack to yours. The core of their product is open source. They accept contributions, which lowers their cost to develop. They keep a few choice features (integration with github, etc) for their hosted/paid version.
If you open sourced your core product and kept a few features for your paid solution (versioning, saved output, etc.), you may gain the momentum you need.
What is the difference from http://rundeck.org/ , which can be integrated with foreman or puppet?
It also looks like (not sure if I'm right) your worker/server has to have direct SSH access to the managed servers. That basically means root access, since most configuration management tools need root to administer a server.
Does RunDeck keep a log of every response? With Commando.io you can flip back through previous executions and see a snapshot of which servers got which recipe, and what the results were.
It definitely does: when I launch a job in RunDeck, I can see the live output (essentially a form of tail -f in the RunDeck web interface) and I can see the output of every previous job as well.
I don't want to seem rude, but I really am amazed that you've got to the point of asking people for money without bothering to research existing potential competitors.
I'm not saying your product doesn't have differences to things like Rundeck that might set it apart; but, as a potential backer, the lack of research is a bad smell IMO.
Feedback from someone that's previously (long before the devops scene exploded) looked at commercializing a system like this:
Your UI should integrate monitoring and control using an actually visual interface detached from any underlying model used for state collection or implementation of state changes (because this stuff changes constantly, regardless of how much myopic VC money is thrown at it). This means instead of talking about "recipes" and suchlike, you talk explicitly about the objects being managed: services, servers, racks, PDUs, switches, routers, coolers, links, and so on.
Make it visual, like actually beautifully visual. Let me specify a floor diagram that colours each rack according to the mean health of the machines it contains. Clicking the rack should show an exploded view of each machine coloured by the mean health of each service they contain, and those services' dependencies. Provide instantly selectable overlays showing different kinds of topological relations (application/network/power/trust/routing/OAM/TCP connection state(!)) existing within the view (complete with colouring).
I want a tool that lets me study a floor diagram and instantly notice a set of racks are down because they are in a row supplied by a single PDU. I want to correlate crashes visually due to cooling hotspots, preferably even by relative colouring due to temperature sensors in the machines. I want to batch shutdown a set of machines connected to the NFS share I just found 0day on.
I want a generic visualization of a service that displays a set of vital metrics (error count, load, cpu usage) and a set of generic actions (restart, stop). I want a simple editor that allows me to assemble widgets ("CPU load gauge", "requests/sec gauge", "dependency health indicator") into some meaningful representation of a service using nothing but a mouse.
Don't bake any topology into it: make it useful for systems as small as 2 cores, or as large as having presence in every country. Racks aren't first class objects, they aren't containers, they're relations with some useful attributes. Don't bake implementation or buzz technologies into it: your codebase could be less than 10,000 lines of JS plus a high-level-language backend.
Don't freeze out keyboard cowboys: there's no reason a highly visual UI need require a mouse. An idiom involving a handful of standard shortcuts ("object select", "object query", "object manipulate", "back", "bookmark", "assign shortcut") is all that's needed. I imagined a system where keys 0-9 are redefinable (think Command & Conquer, not Emacs), with a few letters preassigned (think Gmail/Google Reader).
I want a UI no less visually beautiful than Google's search globe ( http://data-arts.appspot.com/globe-search ) for monitoring the state of my service. I want to notice the bandwidth spike occurring on the dark side of the earth.
Prior to a system like this that can be assembled with a minimum of fuss, and can be integrated with existing data sources and systems (most medium sized companies already have an asset database, etc.), I'm not going to be impressed by some crappy Ruby on Rails jammed together with a bit of JS as a fancy editor for some DSL.
Give us something with wow factor, not just another bland excuse of crap offering zero benefits over a command line, that'll end up lying dead in a Github repo in a few years.
[Footnote: for anyone with cash interested in systems like these, feel free to get in touch. I can barely enunciate correctly when talking about this stuff, my vision for what devops should look like is so far removed from the current popular state that it pains me to pay it much attention]
Thanks for the feedback. Our goal is certainly to make a visually beautiful product. As far as floor diagrams, racks, and specific hardware go, we want Commando.io to be usable by everybody, including AWS, Rackspace, and Linode customers, so building those hardware-specific features is probably out of scope.
Monitoring is a market that already has some amazing companies (ServerDensity, NewRelic). The missing piece is management, provisioning, and automation of servers from a web interface.
Nodes deployed in a virtual environment are still assigned to a location. Linode for instance provides different datacenters, AWS provides availability zones. A grouping can take care of most of this problem, however relations between the nodes need to be visualized.
Management tools do exist - just not fancy-looking ones - and they get the job done. It should be a full package: monitoring, deployment, and so on go hand in hand.
I agree with most of these suggestions: the one thing that can land you all the toughest customers in this market is a user experience that is unquestionably better than Vim ;)
Founder of NodeSocket, creator of Commando.io here. The "steep learning curve" remark was aimed at Chef and Puppet, but we wanted to list some other popular options as well.
I guess what I mean is: simplifying server access is great, and it looks like you are doing that well. But I don't like the idea of marketing it on the basis that the other automated methods are too hard to learn; market it because it is convenient and provides a nice interface. Just my 2 cents.
Did the creators of this use any of the tools in this domain? How does this compare to mcollective/capistrano/vlad the deployer/fabric/ssh and a for loop? It's a web app, how is that superior to something CLI based?
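For context, the "ssh and a for loop" baseline, trivially parallelized, looks something like this. The hostnames are placeholders, and the ssh argv is parameterized only so the loop itself can be exercised without real servers:

```python
# A minimal sketch of "ssh and a for loop", run in parallel.
# Hostnames are placeholders; swap in your own inventory.
import subprocess
from concurrent.futures import ThreadPoolExecutor

HOSTS = ["web1.example.com", "web2.example.com", "db1.example.com"]

def run_remote(host, command, ssh=("ssh",)):
    # Runs `ssh host command` and captures the remote output.
    result = subprocess.run(
        [*ssh, host, command],
        capture_output=True, text=True, timeout=30,
    )
    return host, result.returncode, result.stdout.strip()

def run_everywhere(command, hosts=HOSTS, ssh=("ssh",)):
    # One thread per host, so total time is bounded by the slowest server.
    with ThreadPoolExecutor(max_workers=len(hosts)) as pool:
        return list(pool.map(lambda h: run_remote(h, command, ssh), hosts))
```

Twenty-odd lines gets you parallel execution and captured output per host, which is the bar any paid tool in this space has to clear.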
Thanks for the feedback. Right now, since Commando.io is so young, I'll admit it's not the most advanced tool. However, recipes are versioned, and every execution and response is logged and stored for compliance and accountability.
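To make that claim concrete, here is a toy model of "versioned recipes plus a logged execution history". All names are illustrative, not Commando.io's actual API:

```python
# Toy model of versioned recipes with an append-only execution log.
# All names here are illustrative, not Commando.io's actual API.
from datetime import datetime, timezone

class RecipeStore:
    def __init__(self):
        self.versions = {}   # recipe name -> list of script bodies (v1 first)
        self.log = []        # append-only execution history

    def save(self, name, script):
        self.versions.setdefault(name, []).append(script)
        return len(self.versions[name])      # version number just created

    def record_run(self, name, version, host, output):
        self.log.append({
            "recipe": name, "version": version, "host": host,
            "output": output,
            "at": datetime.now(timezone.utc).isoformat(),
        })

store = RecipeStore()
v1 = store.save("restart-nginx", "service nginx restart")
v2 = store.save("restart-nginx", "systemctl restart nginx")
store.record_run("restart-nginx", v2, "web1.example.com", "ok")
```

The compliance value is in the log being append-only and tied to a specific recipe version, so you can always answer "which servers got which script, and what did they say back?"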
When GitHub was first released, I am sure there was a skeptical group of developers asking the same sorts of questions.
Well, to repeat - how does this compare to tools in the domain? Why are you opting for a web interface driven approach over a command line interface?
Why are you opting to build the entire thing from scratch instead of using an existing orchestration tool and building an interface to that? Wouldn't that save you time and money?
I've most recently hammered on mcollective, and it can easily support multiple users, parallel requests, full auditing of all requests and responses, authorization, server discovery, etc. There's all your advanced features, already implemented.
Regarding the interface, our thought is that this is what's currently missing from other tools in the domain. Imagine if GitHub had built just their API first, without their interface. A fully RESTful API with shell wrappers (node.js, python, perl) will come with time.
I like the idea of this but some visuals like graphs or meters would make it a lot more interesting.
I wrote a little monitoring tool like this for our servers. Instead of the monitor using ssh command "recipes", it just hits API endpoints that return JSON. The monitor doesn't really care what the API is checking; it just expects the service to "neutralize" whatever metric it is checking into something that can be graphed. The same could be done if you had a specific data format that recipes were required to output. Recipes that returned the common data format could then be graphed. Of course it takes a minor amount of programming to parse the output of "uptime" (for example) into formatted data, but once one person did it, that could be available as a pre-built recipe for all of your customers.
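For example, neutralizing `uptime` output into graphable numbers is only a few lines. This assumes the common "load average(s): X Y Z" suffix; Linux and macOS differ slightly in punctuation, which the regex tolerates:

```python
# Parse the load averages out of `uptime` output into graphable numbers.
# Assumes the common "load average(s): X Y Z" suffix (Linux uses
# "load average:" with commas, macOS "load averages:" without).
import re

def parse_uptime(line):
    match = re.search(
        r"load averages?: ([\d.]+),? ([\d.]+),? ([\d.]+)", line)
    if match is None:
        return None
    one, five, fifteen = (float(x) for x in match.groups())
    return {"load_1m": one, "load_5m": five, "load_15m": fifteen}

# macOS-style sample line; a Linux-style suffix parses the same way.
sample = "16:02  up 3 days, 2:31, 4 users, load averages: 0.52 0.61 0.70"
point = parse_uptime(sample)
```

Returning a flat dict of floats is exactly the kind of common format a graphing layer could consume without knowing anything about `uptime` itself.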
Our tool is pretty much hard-coded to our system but I had always thought it would be a cool thing to have. I've looked at other monitoring systems that seem to do that already but many of them are so complicated and none of us have the motivation to learn them. We're a typical team of programmers who have to be admins by default.
It looks like it runs very similarly to our current server management systems, but the web UI is a huge plus.
I agree that open sourcing it to run on your own infrastructure would be a big benefit - people tend to be a bit protective of where they put their root ssh keys :)
Sigh. Please actually use the other tools in the orchestration space before building yet another wheel.
A nice UI over pluggable backends is a great idea. Building (another) sub-par orchestration tool seems like a waste of everyone's time. Mcollective has had a prototype REST bridge for what, 2.5 years?
Have you considered developing this on top of chef-server?
I think most of the features (and more) that you mentioned in the video are already covered by chef-server (recipes, roles, ohai, search, provisioning, etc.), and it has a large community and is more thoroughly tested.
However, their webui is too simple and, in particular, not extensible - there is no plugin mechanism, so it is difficult to add even minor features on top of it.
So I think there is a huge market in providing a better UI for chef-server, especially since they are one of the leaders in the field.
Our company uses a combination of Capistrano and Puppet. It is actually pretty easy to get going with Capistrano and Puppet if you are comfortable with Ruby. Our Capfile specifies the server recipe, which is propagated to Puppet via environment variables. The whole operation is easy to understand, with very little learning curve required.
We do the same, except we've started phasing out Capistrano because its model doesn't fit well with Tomcat. Once you have your keys distributed everywhere via Puppet, it's trivial to use your favorite scripting language to just SSH around and run commands.
This just doesn't seem like so hard a problem that I need a cloud-based solution.
dev-ops for me also includes provisioning - and based on " ... you can use shell, perl, ruby, node.js ..." I don't see a strict system for that in place.
(Creator of Commando.io here) Commando.io is extremely early, and while no such provisioning features are built yet, they are certainly in the pipeline. The beta users will dictate features and direction.
Yes, and I think it's great, that people seem to be already interested in it on kickstarter.
Just one thought, though I'm far from a dev-ops expert. I worked with Chef and Puppet in the past, and I'm looking at Salt now. There are two large benefits for me in dev-ops: 1) automation and 2) documentation - and still, Chef, Puppet, and Salt have quite a few rough edges. Those are mostly command-line tools, and I just can't see how a UI-oriented system could simplify things or come close to them. But then again, I'm mostly thinking of idempotent provisioning - for monitoring and the like, a web frontend is a great plus.
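To make "idempotent provisioning" concrete, here is a toy check-then-change step - illustrative only, but tools like Puppet and Salt generalize exactly this pattern:

```python
# Toy idempotent provisioning step: ensure a line exists in a file.
# Running it twice leaves the system unchanged the second time,
# which is the whole point of idempotence.
def ensure_line(path, line):
    try:
        with open(path) as f:
            if line in f.read().splitlines():
                return False       # already in desired state: no-op
    except FileNotFoundError:
        pass                       # missing file counts as "not in state"
    with open(path, "a") as f:
        f.write(line + "\n")
    return True                    # state was changed
```

Each step declares a desired state and only acts when reality differs, so a whole run can be safely repeated after a partial failure.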