
Well, yes and no. On modern enough hardware you can do nested virtualisation, so you can run KVM in ESXi on Hyper-V if you want. Not that it's very useful, but it is a fun weekend project nonetheless.


I use nested virtualization for Windows Desktop -> Linux VM -> minikube. Works great.

Obviously you could just run minikube under Windows, but then from the Linux VM you can't "minikube ssh" and whatnot, so nested virtualization makes everything a lot simpler.


You can even do nested virtualization without hardware support; it's just super duper slow, and not many hypervisors think it's worth the complexity increase.


Thanks for sharing, I've always thought cable lacing is one of the most beautiful ways of doing permanent installs.


We still use lacing cord for most of our aircraft harnesses. Nobody but the engineers and technicians ever get to see it, which is a shame because it really does look beautiful.


My favorite was the massive wire harnesses spiral wrapped in teflon, looked so good. It always reminded me of bonsai wire-wrapping for some reason.


pictures, please!


Here is an example: https://starkaerospace.com/wp-content/uploads/2016/11/Wire-H...

The black ties are the lacing cord. Sometimes it's just small pieces that bundle the wire every few inches; sometimes it's one long continuous piece that is looped at those intervals. The black nylon overbraid is for any wiring harness that is exposed or handled routinely. They typically don't use it on anything that gets tucked away. Also notice the heavy harness supports on those connectors. This is thick-gauge cable and a lot of it. Large aerospace companies have shops and techs dedicated to making these all day long.


Unfortunately the factory floor has a no camera policy, so I'm not able to share. I'd be hoarding the karma in /r/cableporn otherwise.


Color is not the only variable here; there are many factors in picking a tape for a job. Usually plain tape will do, of course, but the article gives a very good example:

“Imagine you are a stage technician and you need to tape a lamp that gets very hot to a pillar,” Ghouneim says. “If you go to [DIY store chain] Bauhaus, they wouldn’t know what to do. But I can tell you that you need a polyester film with a special type of silicon glue that can stand up to 300 degrees. I have sticky tapes in my store that mere mortals would never dare to dream of.”


As a user of feedity I'd just like to take a moment to say Thanks!


Hey thanks!


I may be old, but if you have a network loop inside your own infrastructure, wouldn't it be way more sane to handle that on layer 2? For instance, with properly configured spanning tree?


Spanning tree just prunes links from the network to eliminate loops. Sometimes the link that got pruned happened to be the fastest path from point A to point B. So, sometimes you can get a more efficient network if you leave those links in, but that requires more sophisticated routing.
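A toy sketch of the cost of that pruning, using a hypothetical four-switch square topology with equal-cost links: the tree keeps exactly one path back to the root, and the link it prunes can be the direct path between two switches.

```python
from collections import deque

# Hypothetical square topology: A-B, A-C, B-D, C-D form a loop.
links = {
    "A": {"B", "C"},
    "B": {"A", "D"},
    "C": {"A", "D"},
    "D": {"B", "C"},
}

def spanning_tree(root):
    """STP-style loop elimination: keep exactly one path to the root."""
    parent, seen, queue = {}, {root}, deque([root])
    while queue:
        node = queue.popleft()
        for nb in sorted(links[node]):
            if nb not in seen:
                seen.add(nb)
                parent[nb] = node
                queue.append(nb)
    return parent

tree = spanning_tree("A")
kept = {frozenset((c, p)) for c, p in tree.items()}
every = {frozenset((u, v)) for u in links for v in links[u]}
pruned = every - kept

# The direct C-D link gets pruned, so C-to-D traffic now takes the
# three-hop detour C -> A -> B -> D instead of one hop.
print(pruned)
```

With root A, the tree keeps A-B, A-C, and B-D, and the pruned C-D link was exactly the fastest path between C and D, which is the inefficiency the comment describes.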


A routing protocol will avoid loops more efficiently.


Routing is a Layer 3 issue. Loops are a Layer 2 issue.

Multiple paths on a Layer 3 network provide redundancy when used with a routing protocol such as BGP.

Multiple paths on a Layer 2 network provide redundancy when used with a protocol such as LACP. Without proper design, 2 network segments connected with multiple links will trigger Spanning Tree and shut down one of the links. And depending on the configuration, it may or may not unshut if the active link pair goes down.


Also, it allows loops to be used as multiple paths from different points in the network. Say you have a rack at the south and north, and there is a loop to a rack in the middle. For failover, you'll take any path that is up; but for normal operation, you'd prefer to take the shortest path.
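That preference (shortest path normally, any live path on failover) is what a shortest-path routing computation gives you. A minimal Dijkstra sketch with made-up racks and link costs:

```python
import heapq

def shortest_path(links, src, dst):
    """Dijkstra: prefer the cheapest live path; any live path still works."""
    dist, prev = {src: 0}, {}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, cost in links.get(u, {}).items():
            nd = d + cost
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    if dst != src and dst not in prev:
        return None  # no live path at all
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1]

# Made-up costs: hops through the middle rack are cheap, the direct
# south-north link is slow but still usable.
links = {
    "south": {"north": 10, "middle": 1},
    "north": {"south": 10, "middle": 1},
    "middle": {"south": 1, "north": 1},
}
assert shortest_path(links, "south", "north") == ["south", "middle", "north"]

# If the middle rack drops out, failover falls back to the slow direct link.
degraded = {"south": {"north": 10}, "north": {"south": 10}}
assert shortest_path(degraded, "south", "north") == ["south", "north"]
```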


Layer 2 technologies like MC-LAG do that as well (though I agree that it's better to route, if possible).


Looking at the graphs on GitHub you could say development has slowed down, but I'd argue that since there is no commercial incentive anymore, the project seems much more focused on stability. And when bugfixing, you write fewer lines while spending the same amount of time.

To me, focus on stability is the most important thing in a database, so this makes me happy as an ops person: get it stable first and then, if needed, add more features. Though I think RethinkDB is already quite feature complete.


In my personal opinion (based on heavily using RethinkDB for two years), many of the core issues of RethinkDB are related to performance, not bugs. I would have bet on where RethinkDB could have ended up performance-wise had development continued, with people who really understand how it works working on it full time.


I'm guessing you mean for a work machine, as an entertainment machine (like gaming) would easily use more than 256 GB. My dev machines have 120 GB SSDs and that's enough. The real number crunching happens on a machine that has two 960 GB SSDs in RAID 0. I know, that's not exactly safe, but all the data on there is ephemeral anyway.


I really like the core of GitLab. It's a really good git hosting system that is easy to maintain and work with. My self-hosted instance has broken only twice in the last 1.5 years, always because of an upgrade, and always easily fixed.

But now I have to say I'm afraid GitLab is getting bloated. Why not keep the core product separate from things like CI? A simple plugin-style system would be enough for it to not feel like bloat but feel like extra options.


I just updated the NixOS gitlab package from 8 to 9 [0]. It was a nightmarish experience. There are 5 microservices:

- gitlab (the core)

- gitlab-sidekiq (a work queue)

- gitlab-workhorse (provides the frontend)

- gitaly (a git wrapper that caches stuff)

- gitlab-shell (a shell spawned when doing your git clone)

Those are written in either Go or Ruby, sometimes mixing the two in the same repository. In the main gitlab repo, there are also some unvendored JS dependencies for the frontend, necessitating jumping through some more hoops.

GitLab has a bunch of hardcoded paths to log files and config files. Some config files are TOML, others are YAML. Different services need different configs, sometimes duplicating the config entries in a different format.

I love GitLab as a product. But as a sysadmin, it's one of the worst things I've ever had to deploy.

[0]: Well, it's not merged there yet, but it's at https://github.com/NixOS/nixpkgs/pull/27159 if you're interested.


Thanks for the feedback. I'm on the build team here at GitLab, and we try and keep installation as easy as we can, but GitLab is a complicated application.

One of the benefits of the omnibus package is that all configuration is done through a central gitlab.rb file, so the different configuration formats for the individual services are not noticeable to the administrator. Unfortunately, this doesn't necessarily translate to source installations.

We try and provide good documentation for building from source when our omnibus package or docker image isn't an option: https://docs.gitlab.com/ce/install/installation.html. Any feedback on issues you might have following this guide is welcome. I don't know that we can fix the different configuration formats, but we should be able to fix hardcoded paths. Any examples you could provide would be appreciated.

We do provide docker images for those on operating systems we do not support. https://docs.gitlab.com/omnibus/docker/. If you're just looking to run the latest version, this might be the easiest way for you to proceed.


We have a patch for gitlab in nixos that fixes a lot of our problems. I don't know why we never tried to upstream it, but I guess it's just that nobody bothered. I'll try cleaning it up and upstreaming it when I have some time.

For the omnibus, I'll try looking at how it works and see if it fits our needs better. But I suppose Gitaly, which is written in Go, still has its own configuration elsewhere, in which you have to replicate the storage list? That's the sort of thing I find annoying. It's not a huge issue, but having to manage a bunch of configs, making sure they're all synchronized and up to date, sucks as a user experience.

I do have to say, your upgrade guides are very nice and have been helpful when upgrading our instance :).

As far as docker and other things go, I considered it while fighting to get GitLab 9 to work on bare NixOS. But the thing is, I manage all my services through NixOS, having a central place for all my configuration. This [0] is my configuration (well, a very old one). Everything is in one place, and everything can cross-reference everything else. See my firewall, for instance [1], which cross-references various other services' configurations to get their ports.

With docker, this is much harder to achieve.

[0]: https://gist.github.com/roblabla/8d1555ceb202eddb1b77

[1]: https://gist.github.com/roblabla/8d1555ceb202eddb1b77#file-c...


The omnibus package is similar to the nixos way of things, except focused on one application rather than the whole system. Once the package is installed, it uses a configuration file, and a set of Chef cookbooks/recipes[0] to configure the individual components.

So yes, Gitaly does come with its own configuration file that needs to be managed. It does use the same paths as the main gitlab-rails application, so our cookbook has a method to convert the configuration of one to the other [1].

[0]: https://gitlab.com/gitlab-org/omnibus-gitlab/blob/master/fil...

[1]: https://gitlab.com/gitlab-org/omnibus-gitlab/blob/master/fil...
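That rails-to-Gitaly conversion could look something like the following sketch. I'm assuming storages end up as `[[storage]]` TOML blocks with `name` and `path` keys, as in Gitaly's documented config.toml layout; the storage names and paths here are made up, and the real cookbook is in Ruby rather than Python:

```python
# Hypothetical rails-side storage map (storage name -> repositories path).
rails_storages = {
    "default": "/var/opt/gitlab/git-data/repositories",
    "archive": "/mnt/archive/repositories",
}

def to_gitaly_toml(storages):
    """Render one [[storage]] TOML block per rails storage entry."""
    blocks = [
        f'[[storage]]\nname = "{name}"\npath = "{path}"'
        for name, path in sorted(storages.items())
    ]
    return "\n\n".join(blocks) + "\n"

print(to_gitaly_toml(rails_storages))
```

The point is just that one source of truth (the rails storage list) mechanically produces the other service's config, instead of an administrator keeping two files in sync by hand.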


This was my experience too. I spent three solid work days trying to install it from source, since it's on a server with other services, so the default nginx setup etc. couldn't work. All the different microservices made installing from source very difficult. I ended up giving up and using the black-box omnibus installation and fiddling with the configuration file for a while.

It works now, but I can't hack the source to make custom improvements, so it ends up not being as open source as I would have hoped. There's one particular issue when pushing that results from being on a URL subpath (http://.../gitlab). I can't fix it myself because of that.

Don't get me wrong: it's a good program and has worked for the team I support. But click-and-deploy is not useful for people who don't have bare-bones servers they can spin up and instead have to work with shared servers.


I gave up and ran a Docker container with a reverse nginx vhost proxy to the Docker port.


Please use the official Omnibus packages to ensure that upgrades are less likely to break.

I think we use TOML for the Runner since it is written in Go and should be deployed on another server. The rest is mostly in YAML files, although we're trying to move as much as possible to the UI to make it more user friendly.


> Please use the official Omnibus packages to ensure that upgrades are less likely to break.

That would be almost precisely against the whole idea of NixOS.


> I think we use toml for the Runner since it is written in Go and should be deployed on another server.

Couldn't you just parse YAML in Go, or TOML in Ruby? (Or why not give the option of multiple formats?)


This is the kind of software one expects from a team who has never seen how it's supposed to work in real life.


GitLab is envisioned as a platform for the full SDLC - from idea to production [1][2].

If you don't want to use the CI, it shouldn't get in your way. We'd love to hear any specific suggestions you might have, since we're always looking to improve. You can open an issue about them in [3] if you want.

[1] - https://about.gitlab.com/handbook/product/i2p-demo/

[2] - https://about.gitlab.com/direction/#scope

[3] - https://gitlab.com/gitlab-org/gitlab-ce/issues


As a GitLab user, I think the concern for me is split focus. For every tightly integrated built-in tool (issues, CI, container registry, etc.), there also needs to be a separate, parallel effort to create, maintain, and document a sufficiently rich plugin interface so that a third party can create a similarly tightly integrated experience when plugging in something external.

Basically, they build the hooks but aren't dogfooding any of it, since their own integrations don't have to.

And speaking from the perspective of someone using GitLab with Jira and Jenkins, it just isn't the same. Is that necessarily because of problems with the interface or is it just those specific plugins? I don't actually know.


GitLab CI started as a separate project, but the team saw more benefits from integrating into a single product, as it's much easier to provide tight integration without the overhead of communication and APIs. There are places in the workflow that just can't support a hook for an external integration.

While CI is integrated, it does not force you to use it, and there are ways to integrate with external solutions.

While CI is part of the same product, you don't run the tests on the same machine, so you still need to provide workers on additional machines, or use the autoscaling mechanism and have it bootstrap machines in the cloud as needed.

So what I mean here is that while it is part of the product, it's mostly the "frontend" and the "APIs"; the heavy-load part of running it is totally optional.


Sure, and I get all that: it definitely makes things easier for the creators and maintainers of the software, and it's easier to deploy too, so it's a win for small shops that aren't yet committed to a lot of other tools, or are fine with being committed to an "omakase" experience where everything is okay and nothing is best of breed.

But there are definitely some customers who are sidelined by this approach. Atlassian is pretty committed to tools that talk to each other with documented APIs (Jira, Confluence, Bamboo, Bitbucket/Stash), and for large organizations, that's often a better fit.


It serves different business purposes. Atlassian makes money by selling each individual product license. There is also a reason why they are not part of a single unified platform: some of them were acquisitions, etc.

There are many players making "unix-style" applications, and very few trying to build a suite. You need to pick your fights.


I would prefer different tools for different stages of the software lifecycle that can be changed independently of each other because they only talk via simple protocols. Everything-included tools converge toward IBM or SAP products that do nothing really well and trap you in their solution because the integration is too tight.


We hear your concern about GitLab getting bloated. We first made GitLab CI as a separate application. When Kamil (CI lead) proposed to Dmitriy (co-founder and CTO) to add CI to GitLab itself, he said that was a bad idea; we need sharp tools. After Dmitriy became convinced, and when they proposed it to me (CEO), I had the same response: http://redmonk.com/jgovernor/2017/06/21/how-gitlab-abandoned...

But when we integrated it, the sum was more than the parts. The benefits of deep integration and of not having to switch applications are becoming more apparent to us. It is hard to articulate why the same can't be done with plugins; the video in the article is maybe a good start.

Review Apps, change metrics in the merge request, the container registry being aware of your permissions: all this can be done with plugins. But having it in one application is so much easier to set up, upgrade, and use day to day.


I also really like GitLab, but the self-hosted instance I manage broke twice in the last month due to faulty updates :-/

First, the automatic notification e-mails stopped working until I did another upgrade. A later update broke merge requests: merging one merge request closed all of them (even the ones marked WIP).

If you also consider the sudden removal of the backlog from the issue board a while ago (they have since added it back), I'm now afraid to update our GitLab instance.

This is very unfortunate, because GitLab is a very nice and useful tool, but if QA doesn't improve, it will be very difficult to prevent a migration. These problems are showstoppers for us.


My GitLab instance broke 3 times in 5 months, and my questions/issues weren't answered. I had to back up my data, fully uninstall GitLab, and reinstall it. I then hosted it on a dedicated server using Docker, and it has been running much better since then.


We run GitLab in an LXD container with an Omnibus install. Everything works pretty well, but those issues were caused by regressions in updates. New updates fixed those regressions within a few days, but it still caused us quite a few headaches.


That's why many people add extra wood when using a Lack table as a rack :)


I want to build a set of tools for building a full private cloud on bare metal with zero single points of failure; essentially a full replacement of Fuel and OpenStack.

I've already started on a fully multi-master DHCP server to assign IP addresses to hosts and instances.
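One lock-free way to make a DHCP pool multi-master is the split-scope idea: each server statically owns a disjoint slice of the range, so duplicate leases are impossible without any coordination (real implementations such as ISC's failover protocol do much more, e.g. lease rebalancing and peer health checks). A sketch with a made-up subnet and server count:

```python
import ipaddress

def shard_pool(cidr, n_servers):
    """Split a subnet's usable host addresses into n disjoint slices."""
    hosts = list(ipaddress.ip_network(cidr).hosts())
    size = len(hosts) // n_servers
    # Any remainder addresses are simply left unallocated in this sketch.
    return [hosts[i * size:(i + 1) * size] for i in range(n_servers)]

# Two DHCP masters, each leasing only from its own slice of a made-up
# management subnet: no shared state, no duplicate leases.
shards = shard_pool("10.67.0.0/24", 2)
assert set(shards[0]).isdisjoint(shards[1])
```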



To be honest, it's been a while since I looked into Triton, but it should operate in the same "feed it hardware and run virtual machines" space. I might experiment with running the components on top of SmartOS because of all the niceness that brings (i.e. Crossbow, DTrace, and ZFS), but for now I'm building on top of Ubuntu.


Well, please make it so that it's truly easy to install. And I mean REALLY easy. I have tried at least 10 cloud solutions and I haven't managed to successfully install any of them. The best one yet was Tectonic, but its error messages were vague or non-existent, and the emphasis here is on non-existent.


Yes! I've been thinking of ways to do this, and the idea at the top of my list is a live USB image that asks for things like network parameters, then configures the live image as a 90%-functional node from which the first real host can be PXE-booted. After the first host(s) have been installed, the admin should be able to reboot the live host and add it to the cluster for real.

Right now I have a running OpenStack cluster with Fuel for deploying new nodes over PXE. It works OK-ish, but it has some strange glitches every now and then. Nothing production-critical goes wrong, but it still doesn't inspire confidence in me.


Yes, I think your idea sounds very convenient. I basically just want to type the IPs of my machines in somewhere, and the rest should be explained to me on screen. If SSH can't be established, tell me why. If I didn't set up PXE properly, or at all, tell me why (heck, even tell me how you got that information, à la 'we tried establishing an SSH connection with ssh@184.4882.1 -v but it resulted in this error log: '). Any other connection problems, tell me why.

Generate all the SSL certs automatically; I don't want to type in any commands myself. I don't care if it's a test SSL setup, but frankly I don't understand why these tutorials always give me test SSL certs. Just generate something that makes sense for production, or tell me what I need for production and why. If you think my nodes should have DNS names, go start an internal DNS server for me and set it up in the background. I don't get why I should have to do any of that stuff myself.

At the end there should be a screen showing all the configs that were saved, in textual form, so I can have a look at what was done: which processes were started, which ports are now open, what the firewall looks like. For example, CoreOS I believe has some cloud-config stuff, and I don't want to figure that file format out myself, but I still want to see it after it's been generated. I like magic, but I also like to see what it actually did / is doing.


Personally, I don't even want to care about the IPs of the physical machines: just put together a box, plug power and network in (given a properly configured switch), and boot it up from the rest of the cluster.

in the "boot up first box" scenario you should be able to enter all subnets you allocate to the cloud like "my wan range is 15.26.37.0/24, router at .1" And "for the host management network i want you to use 10.67.0.0/16 with router at 10.67.0.1" from there on you should be able to plug in and boot up machines. Just be careful to not plug in a laptop that boots from the network :P

Basically my philosophy is sane defaults, some magic where I know I wouldn't want to care, and introspection everywhere.
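The subnet-entry step above could lean on a standard CIDR parser to validate input up front. A sketch using the example ranges, with Python's ipaddress module (the "router at .1" shorthand is resolved by hand here):

```python
import ipaddress

# Parse the operator-supplied ranges from the example.
wan = ipaddress.ip_network("15.26.37.0/24")
wan_router = wan.network_address + 1          # "router at .1"

mgmt = ipaddress.ip_network("10.67.0.0/16")
mgmt_router = ipaddress.ip_address("10.67.0.1")

# Validate before doing anything destructive: routers must sit inside
# their networks, and the two ranges must not overlap.
assert wan_router in wan
assert mgmt_router in mgmt
assert not wan.overlaps(mgmt)
```

Rejecting bad input at this stage, with a message saying exactly which range failed, is cheap and fits the "tell me why" philosophy from upthread.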


Have you seen MAAS (Metal as a Service) from Canonical? Paired with Juju, it's fairly easy to deploy OpenStack on metal servers...


Yes I have, and I'm running MAAS on a colocated box as well, to install Ubuntu on virtual machines. But when you pair it with Juju, the licensing becomes a bit expensive as far as I remember, so that's the layer I'm working on replacing first.

