
The main issue with virt-manager is that it's a desktop application and you can't really collaborate with others when managing infrastructure.

Cockpit solves this issue. Its feature set is slightly different, but mainly it is more limited in what you can manage.

When running different types of infrastructure at the same time, e.g. KVM + AWS + Azure + ... it won't help much. In such cases it would make sense to check out Mist (https://github.com/mistio/mist-ce), which does something similar to Cockpit but for ~20 infra techs.


Virt-manager can connect simultaneously from many different terminals to many different libvirt servers. What do you mean by "you can't collaborate with others"?


I mean that you can't really have multiple people, from multiple teams, accessing the same infra with different types of rights and with centrally managed authentication/authorization/logging.


That's exactly what libvirt lets you do. It uses policykit to handle the authorization, cockpit doesn't change that and still requires you to use policykit to control who has access to what.

https://libvirt.org/aclpolkit.html
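For anyone curious, a rules file for this looks roughly like the sketch below. The file path, group name and "dev-" naming convention are made up for illustration; the real action and attribute names are documented on the page above:

```javascript
/* /etc/polkit-1/rules.d/80-libvirt-acl.rules (illustrative path) */
polkit.addRule(function(action, subject) {
    /* libvirt's fine-grained actions are named org.libvirt.api.<object>.<permission> */
    if (action.id.indexOf("org.libvirt.api.domain.") !== 0) {
        return polkit.Result.NOT_HANDLED;
    }
    /* hypothetical scheme: the "vm-operators" group may only touch "dev-*" guests */
    var name = action.lookup("domain_name");
    if (subject.isInGroup("vm-operators") && name && name.indexOf("dev-") === 0) {
        return polkit.Result.YES;
    }
    return polkit.Result.NO;  /* sketch only; a real policy would be more careful */
});
```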




LXD is indeed nice, but it still isn't very widely adopted. I believe this comes down to 3 main issues:

1) It's strongly connected to Canonical and Ubuntu. This is mostly a matter of perception, since it actually is a community project. However, I can understand people not feeling comfortable with "snap install lxd".

2) It sits somewhere in between k8s and docker engine. Over time, it will probably get more k8s-like features but still it is a weird position to be in.

3) It lacks a rich ecosystem of tools supporting it, and a web UI. This makes it hard for newcomers to adopt. We're working on a web UI ourselves as part of our open source cloud management platform (https://github.com/mistio/mist-ce) and we'd love to hear your thoughts.


Indeed, we have major issues with snap (doesn't work with $HOME on nfs, and auto updates with very few controls are a terrible idea on servers) so avoid anything dependent on it. Otherwise, I really like the concept.


It's unfortunate that the effort to get LXD packaged natively in Debian [1] seems to have stalled, as this is definitely one of the last remaining drawbacks for me.

For example, when LXD was still available as a .deb from upstream, it was possible to run LXD inside a container and do nesting, but with the snap, that isn't possible anymore. (However, it is now possible with the --vm flag, though that's really a different mechanism.)
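For context, nesting is just a config key on the parent container. From memory (release names illustrative, so double-check against your LXD version):

```shell
# security.nesting lets a container run its own container runtime inside it
lxc launch ubuntu:18.04 outer -c security.nesting=true
# With the upstream .deb this inner install worked; under the snap it breaks
lxc exec outer -- apt-get install -y lxd
lxc exec outer -- lxd init --auto
# The --vm route mentioned above uses a real VM instead of a nested container:
lxc launch ubuntu:22.04 outer-vm --vm
```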

As I understand it the big remaining difficulty is dqlite.

[1] https://wiki.debian.org/LXD


I did write a bit about my experience packaging LXD for Arch Linux a while back.

I got a ping from a Debian dev telling me the Debian dev working on LXD enjoyed the article, and has been trying to pick up on the work again. The linked wiki page is the outdated one.

https://linderud.dev/blog/packaging-lxd-for-arch-linux/


It's packaged for Void, though I'm not sure what that entails. I will say it works well enough that I didn't find out about the snap mess until I tried to install it on another distro later.


Debian has stricter rules regarding packaging and vendoring dependencies. Void and Arch package LXD in a similar fashion, separating out all the C dependencies. However, neither of us actually separates out the Go dependencies; they are all vendored inside the LXD package.

Debian separates all of these out into their own packages.


I made a GUI too, but don't actively develop it anymore. https://github.com/dobin/lxd-webgui


As it requires Docker, it's probably difficult for us to try. Do you have any instructions on how to install it on bare metal? We can convert Docker images into LXD container images to run, and can reverse engineer the Dockerfile to create LXD containers, but we try to avoid environment variables for configuring and running services, as they are a kind of security vulnerability.


Unfortunately we currently don't. We do have the option to install it on k8s with a Helm chart though: https://github.com/mistio/mist-ce/tree/master/chart/mist


I wonder how Mist compares with OpenNebula, which seems to be in the same area? From a quick glance it looks more complex. (OpenNebula supports LXD directly.)


At a high level, Mist is a more "general purpose" platform which supports more providers (20) and more workflows. This increases the perceived complexity; however, it isn't more complicated to use than your average public cloud console. In fact, we strive to keep things as simple and as agnostic as possible. I'd be happy to arrange a demo if you'd like to see it in action. You can reach us on GitHub or from our website at https://mist.io.


OpenNebula definitely needs to revive the libcloud interface, or something similar; I was thinking of complexity in operation more than in using it. I ask mainly out of general interest, as I can't imagine a free software management system ending up expensive enough for an institution that is too broke to retain staff. I might have a play sometime; thanks for the response. I haven't found anyone with experience comparing these sorts of systems in operation, despite seeing the need a while back.


> 1) It's strongly connected to Canonical and Ubuntu. This is mostly a matter of perception and it is an actual community project. However, I can understand people not feeling comfortable with "snap install lxd".

Last time I tried it on Fedora it did not work (less than 6 months ago).

Also it offers nothing I want over podman with --rootfs


I've tried both podman and lxd with success but I'm curious, what do you use a tool like that for, mostly?

Not to seem like a hypeman for Kubernetes and similar tools, but I actually seem to only ever use containers combined with something like Kubernetes or Docker Swarm. What do you do that you want to do specifically on one machine? Hosting something? Automation à la CI/CD?

Again, I am actually asking for good use cases without orchestration platforms, I am just curious.


I use it as a replacement for situations where I used to use KVM for Linux VMs. For example, my tvheadend and Zoneminder servers are both running inside LXD containers so that I don't pollute my host machine's environment. It's also a nice way to try out another distro other than what the host machine runs with close-to-metal performance.


One thing podman is used for is bootstrapping environments for package building under mock, which at least affects people doing Fedora maintenance.


> I've tried both podman and lxd with success but I'm curious, what do you use a tool like that for, mostly?

I use podman for dev and test environments, CI and CD workers and testing OCI containers that I eventually deploy in K8S. No production use cases. Hoping to soon see K3S working inside podman though, and then I would use it for deploying K3S :)


That isn’t a good comparison. While podman can run systemd inside a container, it isn’t widely adopted in the images in docker hub and elsewhere. There is probably just fedora supporting this. Whereas in LXD it’s normal to run a full systemd inside a container.


> There is probably just fedora supporting this.

How do you mean? I've used podman on Void Linux, openSUSE, GitLab CI, and I think some others that I'm forgetting (I distro hop a lot) and it's worked great.


> While podman can run systemd inside a container, it isn’t widely adopted in the images in docker hub and elsewhere.

With podman and rootfs it's also normal to run a full systemd inside a container and you don't need special considerations from OCI images for rootfs to work just fine.
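To illustrate the point, roughly (paths and image names made up; check the flags against your podman version, and note the base image needs systemd installed in it):

```shell
# Unpack any distro's root filesystem to a plain directory...
mkdir -p ~/roots/debian
podman export $(podman create debian:stable) | tar -x -C ~/roots/debian
# (install systemd into the rootfs first if the base image lacks it)
# ...then boot systemd from it directly, no OCI image involved:
podman run -d --name deb-sysd --rootfs ~/roots/debian /sbin/init
podman exec deb-sysd systemctl is-system-running
```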

> There is probably just fedora supporting this.

RHEL is behind podman and I will take RHEL support over Canonical every day.

Podman is also available on most major distros and easy to port to new ones without requiring someone to use some proprietary crapware solution like snappy.


It is sad that so many servers are still vulnerable to an issue that has been reported by all major mainstream news networks. We witnessed several attacks many days after Shellshock was fixed, and while chasing down the botnet scripts we saw hundreds of compromised servers.

The story and scripts were published on a blog post in case anyone wants to check out a standard botnet attack:

http://blog.mist.io/post/100582053116/anatomy-of-a-shellshoc...


While it is true that most tests are different from production, having a solid test suite can go a long way toward moving with startup speed while breaking as little as possible. We (as in me and a couple of others from our startup) wrote a howto on using Docker and Ansible to build your test suite a few months ago. Strangely enough, it has almost the same title:

http://blog.mist.io/post/82383668190/move-fast-and-dont-brea...


It's about time we saw an alternative to Jenkins.

Does it provide the fine-grained workflows Jenkins does?


The workflow is pretty basic right now, however, we plan on adding matrix and parallel builds in the near future. Could you elaborate a bit more on your workflow? I definitely want to make sure Drone supports more than just simple use cases.


From my experience with Jenkins, as a build/deployment/release engineer the past 6 years, you probably want to:

- chain jobs - needed for larger projects; ideally this should even allow composing jobs to have nice, modular jobs which can be launched standalone or chained

- some kind of powerful templating system - needed for reducing configuration duplication; ideally this would keep track of all the "children" in case of updates

- you also probably need enterprisey features later on, like SSO using AD/LDAP, fine grained ACLs based on groups, etc

But job chaining and job templating should be higher priorities for the workflows since they affect the overall architecture. Jenkins has been struggling for a while to re-architect to allow this, not entirely successfully.

You also want a plugin system if you don't have one, especially one with dependencies (e.g. the Git plugin can serve as a dependency for the GitHub plugin).

My 2 € cents :)
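To make the chaining point concrete: at its core, job chaining is just topological ordering over a DAG of jobs. A rough Python sketch (job names made up):

```python
# Jobs mapped to the jobs they depend on; graphlib does the ordering for us.
from graphlib import TopologicalSorter

jobs = {
    "build": set(),
    "unit-tests": {"build"},
    "integration-tests": {"build"},
    "deploy": {"unit-tests", "integration-tests"},
}

order = list(TopologicalSorter(jobs).static_order())
# "build" always comes first and "deploy" last; the two test jobs are
# independent of each other, so a CI engine could run them in parallel.
print(order)
```

Templating then amounts to generating these job definitions from parameters, which is roughly what Jenkins has struggled to retrofit.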


Chaining jobs and parallel ones are both very important, especially the latter, since it saves you a lot of time waiting for tests to complete. Another big plus is being able to run a certain set of tests only when a specific event fires, e.g. run test A when somebody pushes to branch X.
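A sketch of what that could look like in a build config (generic pseudo-schema for illustration, not Drone's actual syntax):

```yaml
pipeline:
  unit-tests:
    commands:
      - make test-unit      # always runs
  test-a:
    commands:
      - make test-a
    when:                   # only when somebody pushes to branch X
      event: push
      branch: X
```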


If you liked Cloudkick I think you should also check out https://mist.io. In fact we started mist.io when Cloudkick was about to close down. We like to think of it as mobile friendly Cloudkick with a twist.


oops;) we're fixing it right now -- in the meantime try stripping the "pricing" part of the uri.

thanks for getting back to us!

edit: Should be OK now! Feel free to ping us at support@mist.io if you need any further assistance.


Great job fixing it so quickly, but sadly it seems to be down again.


Assuming an average valuation-to-yearly-revenue ratio of ~20, it means that at $3.5bn they should make $175m a year. If they have 350m photo shares a day (!!!) this number seems well within their reach. They can even afford to monetize less aggressively, and thus in a more user-friendly way. I think it's one of those cases where the numbers are so huge that they work in their favor whatever they do.
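A quick back-of-the-envelope check on those figures (the 20x multiple is the assumption here):

```python
# Sanity-check the valuation math: what revenue does a 20x multiple imply,
# and how little does each photo share need to earn to get there?
valuation = 3.5e9          # $3.5bn
multiple = 20              # assumed valuation / yearly-revenue ratio
required_revenue = valuation / multiple   # $175m/year

daily_shares = 350e6
revenue_per_share = required_revenue / (daily_shares * 365)

print(f"required revenue: ${required_revenue:,.0f}/year")
print(f"needed per photo share: ${revenue_per_share:.5f}")  # a fraction of a cent
```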


20X revenue would be a really high valuation even for a SaaS business. 6-10X would be more likely in the long run.


When Twitter had ~$140M in revenue their price was ~50x revenue.

Source: http://www.quora.com/What-are-revenue-multiples-for-technolo...


The company is very hot right now, basically selling expectations, so I think well above 10x. In any case, $350m with these kinds of numbers still seems very feasible. Wouldn't you agree?


Nice write up, do you know which receivers support GPX?


Sorry, I have no idea. I posted this to the community because I myself often lack good map sources. Hopefully this project will help people in a similar situation.


The biggest pain I've experienced with similar web interfaces was mobile. For the desktop there are a few that are decent, but on mobile they generally suck. So do the native email clients for Android and iOS.

Do you plan to stick with a desktop version? Will you always design 'desktop first'?

