Gitlab was down (status.gitlab.com)
143 points by vickyonit on Nov 28, 2019 | 135 comments


It is up again.

> [Monitoring] GitLab.com is now recovering. We found 2 last DB nodes which had not reverted their change. Apologies for the disruption. [1]

[1] https://status.gitlab.com/


Back in the day, GitLab made it clear that it aspired to be a software company rather than an infrastructure company.

The reason I favor GitLab over GitHub is that I can install it on a dedicated server and have blazing fast performance, which helps development velocity on an everyday basis.

I've been maintaining a bunch of deployments of all sorts (k8s, docker-compose, bare metal) for customers of all sizes over the last few years. The upgrade process has always been pretty smooth; I've run into edge cases at times, but I've always found a solution.

My deployments are always up and kicking, unless I'm messing with the configuration and making mistakes.

I highly recommend hosting your own GitLab instance, even on a single server.


> have blazing fast performance

Sorry, but GitLab has never been blazing fast. The gitlab.com instance is notoriously slow — supposedly improved a lot over the past few years, but still feels pretty sluggish. My self-hosted instance isn’t much better.

In fact, a couple of open source maintainer friends looked into migrating to GitLab when GitHub was acquired by MS; they decided against it precisely because GitLab was too slow.


Let me clarify: git clone/fetch/push on gitlab.com or github.com is sluggish, but on my private instances it's blazing fast, because I get great, unmetered bandwidth from my dedicated servers, which might not be the case for everybody or every country.

Another thing: CI starts the very second I push, and since I configure dedicated runners carefully, GitLab makes it easy to keep every pipeline stage under 5 minutes, which is critical to me. I reckon I've acquired countless tricks for that.

Also, the ChatOps integration (GitLab to Mattermost or Slack) seems instant for me.

Those factors matter to me because I'm an extremely fast iterator and often iterate on several repositories at a time.

The cherry on top is that the mirroring between my GitLab instance and GitHub is also blazing fast.

As for the web interfaces, they all feel slow to me, so I don't spend my time waiting for them; that rules them out of the equation. I'm far more comfortable with the command line anyway.

But, pushing to gitlab prints out the URL to create a pull request from that branch, or to view the existing one, also a time saver that's not available on github, not sure why though.


> pushing to gitlab prints out the URL to create a pull request from that branch

You can use push options to automatically create a merge request and set it up when you push:

https://docs.gitlab.com/ee/user/project/push_options.html
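For example, a sketch based on those docs (the remote name "origin" and the branch names "my-feature" and "main" are placeholders):

```shell
# Sketch: a single push that also opens a merge request on GitLab,
# sets its target branch, and removes the source branch on merge.
# "origin", "my-feature" and "main" are placeholder names.
git push origin my-feature \
  -o merge_request.create \
  -o merge_request.target=main \
  -o merge_request.remove_source_branch
```

The `-o`/`--push-option` flags are passed through to the server, so they only do something on a remote that understands them (such as GitLab).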


Had no idea that existed, super cool, thanks!


> pushing to gitlab prints out the URL to create a pull request from that branch, or to view the existing one, also a time saver that's not available on github

IIRC I’ve seen that recently with GitHub too. Not very useful to me though since I use magit for almost all git-related operations.


Yeah, GitHub has had this option for at least a couple of months now.


You should look into git request-pull. I haven't gotten it to work but I'm probably doing it wrong. It should work.

https://git-scm.com/docs/git-request-pull


From my experience a custom GitLab instance is much faster than GitHub.com. But it is apples and oranges, and I suppose your friends weren't comparing the two.


My custom GitLab instance is still slower than github.com. (Yes, I’m giving it enough memory.)


If you are a customer you can file a support ticket and our Support Engineers can help troubleshoot. One of our mandates is "performance is a feature" and we want to better understand what's happening in the wild.

If not, and you are interested, we built a tool we use called "fast-stats" (https://gitlab.com/gitlab-com/support/toolbox/fast-stats); it might be able to pinpoint which controllers are slow, for your own curiosity.


I'm a community user, and the sluggishness seems to be a common sentiment so I'm not sure profiling my instance will add anything valuable that's not already known.

(Sorry if my comments sounded entitled.)


It’s slower than github.com, but I wouldn’t call it slow.

What’s slow is our Bitbucket Data Center edition.


Not to mention it consumes way too much memory. I prefer Gitea, or cgit if I need even fewer features.


Indeed, you can use Drone CI with Gitea, that's also an option. But for now I find GitLab CI still slightly better: graphical representation of the pipeline, editable environment variables, review-stop jobs to clean up review deploys, automatic review-stop on MR merge, etc. Anyway, I run both, because some people really want to keep their main repo activity on GitHub "for visibility" (for me that doesn't stand: I use the GitLab-to-GitHub mirroring feature, so my work is still published on GitHub).


> the upgrade process has always been pretty smooth

By contrast, I managed a self-hosted install a while back, and upgrades regularly broke as much as they fixed. For example, the time when LDAP logins all failed, or when gitlab-pages stopped working, and more.

That's on top of the constant UI churn with the side-menu, and similar changing from release to release with no rhyme or reason.

I wanted to like it, and did like the runner/CI support, though even that had as much pain as love. (Hard to script installation of agents, etc.)


For me, they've gotten the UX/UI pretty right since the last release; I enjoy it on both mobile and desktop devices.

Anyway, for CI runner setup, I have a one-liner that worked for me:

    bigsudo install lean_delivery.gitlab_runner @somehost gitlab_ci_token=yourcommand gitlab_host=yourlabs.io gitlab_runner_limit=4 gitlab_version=11.6

But a docker run also does the job... Well, there's still runner registration, which I did manually, but I just checked the Ansible documentation and it looks like I have a fair chance of automating that too, given all the new modules Ansible has gained since 2.8 for dealing with GitLab, including the gitlab_runner and gitlab_runners modules: https://docs.ansible.com/ansible/latest/modules/gitlab_runne...

So yes, GitLab strives to keep development and releases coming, which means the foundations sometimes move and there's a learning curve. I understand that some people may not like that, but as a dev myself I like it this way. At the end of the day my team iterates great with GitLab CI: we got automated review deploys in under 5 minutes with just docker-compose, and that really helps keep the master branch clean, since features are merged only after they have received product team approval. That's a big win. With GitHub you can also do it with Drone CI, which you might like better because it's easier to set up, even though it remains behind GitLab CI in the features that matter to me.


We use GitLab at work and I use it for personal projects as well, it has been very slow for several days now and they had downtime yesterday as well.

I don't mind the general sluggishness of the system that much (as I love the platform in general) but when you can't get your work (nor hobbies) done because of tools breaking, it gets really annoying really fast.


> when you can't get your work (...) done because of tools breaking, it gets really annoying really fast

I always assumed that the publicly hosted version of gitlab is basically a giant demo version of the enterprise edition you buy to host it yourself. Hell, you can even host the community edition for free.

If your work relies on it, why rely on a free online product?

EDIT: TIL gitlab.com also has paid options. I stand corrected.

We have a hosted Gitlab at 2 of my clients. Both are up.


The online version has non-free plans as well, so I don't see your point of not relying on it.


> If your work relies on it, why rely on a free online product?

We're not using the free product, we're paying customers (using the hosted saas version of the product).


> If your work relies on it, why rely on a free online product?

Some orgs are paying $99 per user per month for the SaaS version of gitlab.com, so they probably rely on it pretty heavily :)

https://web.archive.org/web/20191119174637/https://about.git...

Edit: others made the same point, but I'll leave this here for the link to the pricing page.


gitlab.com is not only for free clients, plenty of paid clients use it instead of hosting themselves.


To be fair, just be thankful you don't use something like Jazz RTC. We had downtime at least once every two weeks (from an hour to a whole day at a time). It got so bad we ended up setting up a local-network Raspberry Pi as a Git server and emailing patches to remote teams.


It's still essentially a git repository, you can work locally and push later.

Yes, the issues, MRs and reviews are not available, but honestly I wouldn't want to be in a job where I couldn't get by without them for several hours.
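Since git doesn't care which remote it talks to, a cheap hedge along these lines is a second remote (the host name "backup.example.com" here is a placeholder):

```shell
# Git is happy to push the same branches to more than one remote, so an
# outage on one host need not block sharing work. "backup.example.com"
# is a placeholder for any other host, or even a plain SSH server.
git remote add fallback git@backup.example.com:me/project.git
git push fallback main
```

When the primary host comes back, a normal `git push origin main` catches it up; nothing about the history depends on which remote received it first.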


In terms of energy and storage usage (so environment footprint), wouldn't their shared online hosting be more efficient than many self-hosted ones here and there?


It's not a tool, it's an online service.


There has been a theme of instability with GitLab.com over the last week or two. I'm not sure if it's growth related (they've seen a steady increase in users/traffic) and they've hit a scaling limit, or if it's technically related: they've been making a number of infrastructure changes over the last few weeks which materially affect the main layers of the service.

For me the real test here is how they respond to this. As a paying customer I want to understand the issue, the efforts to prevent this in future and how they communicate this.


I suffered the same instability issues and switched back to GitHub, as that's what a paying customer would do.


Notably, they don’t hire any real people with ops experience, opting instead for developers in the hope that they can do everything needed.

I like gitlab as a product but they don’t have a service mindset, and I think not hiring operations-centric people is a symptom of that which causes these kinds of issues.


A friend of mine interviewed for some kind of ops role there.

He was asked to clone a repo during the interview, he told me it was almost a GB.

When he commented it was slow compared to Github, the engineer interviewing him told him it was his fault for living in Australia and that they have bad internet and they're too far away from everything.

Go figure.


How do you know that they do not hire "any people with ops experience"? Do you have insight? Is there any public evidence for this, or anything you could publish yourself to add some substance to your words?

It would be great if you understood that just saying something is not enough on the internet.


What you say is completely fair.

I’ve been looking at job postings and watching the way they work too for a little time since it’s all open.

DBAs are “ruby devs who have used Postgres”

https://about.gitlab.com/jobs/apply/backend-engineer-databas...

SREs are “ruby devs who have used docker/kubernetes”

(No job listing currently)

The only open job labelled “ops” is telling.

https://about.gitlab.com/jobs/apply/frontend-engineer---conf...


GitLab engineer here.

The "Ops" section actually consists of product development teams for features in GitLab itself (the Configure/Monitor "stages"), while the SREs are in the "Infrastructure" department. See https://about.gitlab.com/handbook/engineering/#engineering-d.... We don't seem to be hiring SREs at the moment, but I assume we'll add more openings at the beginning of next year.

Regarding DBAs, that job you posted is more of a normal Backend Engineer role with a database specialty. We also have dedicated Database Engineer roles:

- https://about.gitlab.com/job-families/engineering/database-e...

- https://about.gitlab.com/job-families/engineering/database-r...

The job description for SREs is here:

- https://about.gitlab.com/job-families/engineering/site-relia...


Hi Toupeira,

I didn't find some of those jobs, so thanks for linking them.

As a side note I just went through those job descriptions and I -really- like the layout.

On topic: Unfortunately they really do prove my point. There is a very strong focus on "strong" programming skills which is rather undefined. It's literally mentioned in every single role description.

The overwhelming majority of staff who know how to run software reliably are, ironically, not software engineers, although there certainly are some software engineers who also possess this skill.

The people I'm speaking about typically understand concepts and solutions (like Paxos, filesystems or public cloud) more than they understand software development methodology or software product structure.

I guess you have a global reach and can be quite picky about who you hire, maybe you /do/ exclusively hire architecture and systems focused programmers, or maybe "strong" programming skills are a different definition to mine.


That's certainly a valid concern! From my perspective, the programming skills in these job descriptions are one requirement among many others, and I'm not sure how much weight it really has in the hiring process for these roles, especially if your other skills are a good enough match.

We do have this note on all job pages, which maybe should be more prominent:

> Avoid the confidence gap; you do not have to match all the listed requirements exactly to apply.

Some amount of general programming experience is definitely required though, since SREs and DBEs frequently have to dig into our codebase and things like Ansible runbooks. And especially regarding SQL, a lot of it is heavily abstracted not only through the Rails ORM but also our own code.

Found some other jobs that focus less on programming, though we don't have current openings for most of these either:

- https://about.gitlab.com/job-families/engineering/cloud-nati...

- https://about.gitlab.com/job-families/engineering/infrastruc...

- https://about.gitlab.com/job-families/engineering/monitoring...

- https://about.gitlab.com/job-families/engineering/security-e...

- https://about.gitlab.com/job-families/engineering/vulnerabil...

We don't seem to have a good overview of all roles, I found these through https://gitlab.com/gitlab-com/www-gitlab-com/tree/master/sou... :)


Why is significant Rails experience a requirement for a database engineer position? That will quite drastically limit your options. Yes, it is a useful skill but hardly necessary to do this job. I know a lot of really good database consultants and they can usually identify and fix bad queries in virtually any framework or ORM. The skills are really transferable.


As recently as October 2017, they had exactly one database person. So I can imagine quite a lot of work and hiring needed to scale up a solid ops practice.

"Until very recently I was the only database specialist which meant I had a lot of work on my plate."

https://about.gitlab.com/blog/2017/10/02/scaling-the-gitlab-...


Almost to prove my point Yorick self-identifies as a "Ruby/Rust Developer" https://railsisrael2016.events.co.il/people/2644-yorick-pete...

Obviously this doesn't preclude operations or systems knowledge but again, it's at least telling of the mindset.


I would not call myself an operations expert by any means, but I have done quite a bit of infrastructure work in the past; both with bare metal setups and cloud based solutions.

With that said, the old database specialist position was about 80% engineering, 20% infrastructure, with the infrastructure work being done in cooperation with the production engineers.

I left the database team a good year ago and quite a lot has changed since then. I think these days we have a handful of people focusing on the database side of things.


To be fair it depends a bit on the context. I present myself sometimes as a database expert and other times a Ruby/C Developer. And I would argue that I, as a minor PostgreSQL contributor who follows the mailing lists, am more knowledgeable than most about databases and especially PostgreSQL.

Edit: Admittedly while I have done a lot of DBA stuff and server operations on top of my software development, hardware and networking are not my strong points so if the company could afford it I would want a more traditional server/networking guy on my team (and at a previous job I did exactly that). And I agree their job posting seem to have a very heavy focus on development experience.


My anecdotal experience: I applied to a SRE position, and I have 10 years of experience in system engineering and 5 years in dev ops. I didn’t even get an interview. I’m not saying I would have or should have gotten the job, but they at least should have interviewed me.


> Website, API, Git (ssh and https), Pages, Registry, CI/CD, Background Processing, Support Services, packages.gitlab.com, customers.gitlab.com, version.gitlab.com, forum.gitlab.com

How come all of them are down all at once?


The status page has updated to indicate that they misconfigured their firewall. Apparently their entire set of services go through a single firewall (or at least, multiple firewalls with the same config). It's worrying that they don't have a staging setup for these kinds of things.

(NOTE: I am speculating here, if they do have a staging system and this wasn't reproduced there then the last sentence doesn't apply.)


Likely just means they have a Single Point of Failure.

Some guesses would be:

Automation/orchestration - They've been migrating to k8s (I don't believe they've actually finished yet), but it could be that their orchestration/automation tool automated a broken thing everywhere.

Database/Auth - Pretty much everything in GitLab touches the database as far as I'm aware. Otherwise, how do you check whether users are authorized to take an action? You wouldn't expect this to break the static website, i.e. the sales landing pages, but those could be based on an internal CMS, or could be checking for a "guest" role session.

DNS/Service Discovery - As a sibling posted, "it's always DNS". It's good practice to use names for services instead of IP addresses, but this means your DNS needs to generally work, or everything will go down. Service Discovery could rely on DNS, but it could also be an API call that finds out DNS addresses or IP addresses directly.

CDN - You wouldn't typically put this in front of auth'd usage, and typically a CDN might not be helpful in front of something like SSH, but a quick look at fastly suggests they might support this. The main downside is sharing all the user data / auth tokens.

Security Product / CA - All you need is a requirement to encrypt internal traffic and rotate secrets, and you end up with a secret store that sits in the middle of everything.

Storage Layer - I believe they were big on Ceph for a while. If everything is backed by Ceph, everything will go down if you fail with Ceph.

Obviously, whatever it is, you'd expect them to split up their fail over plan a bit more in the future if it is something like that, but usually there's a single point of failure somewhere.


Replying to myself, because it's now on their status page that a firewall change took down the database.

This points to there being:

- a lack of process and testing on key networking changes. Aren't they doing CI/CD, automated testing and peer review for this?

- A SPOF in the database; why couldn't things connect to a secondary for a read-only mode?

Quite a lot of the time, things break for stupid reasons. The main difference is when a normal company does something stupid, they can hide it, lie about it, or make it sound more complex.

The fact that GitLab publishes their fuck-ups is supposed to force them to do a better job, actually look at root causes, and apply proper fixes that we can all judge. I wouldn't hold any particular fuck-up against them.


Network devices are generally hostile to advanced automation, and if they had both primary and secondary as the same class of machine then the changes would apply to both.


I believe they're hosting in-cloud, which means it's probably not a device and can be automated. Obviously, public IP addresses will be specific to environments, but that's what PRs should double check.



Presumably a single point of failure; my guess would be something at the network level.


It's always DNS!


This time it might be Consul


Spot on. The status page confirms it's a bad firewall configuration


It's always the network.


At the bottom of the page they list availability of third party services used - Fastly has a warning symbol, and I imagine they put that CDN in front of everything.


> Fastly has a warning symbol, and I imagine they put that CDN in front of everything.

Check their status page, it's just a simple reroute since the 13th [1].

[1] https://status.fastly.com/


Latest tweet by @gitlabstatus said "We've identified an issue with database connectivity", which could explain why so many services are impacted.


Bad firewall change...they just updated the page.


I used to be a total GitLab fanboy. I was going to Sid's meetups back in Utrecht 6 years ago, and he's been a role model of mine ever since.

I would have done anything to stay on the platform, both at home and at the office, and I set up GitLab instances in two different offices.

Since sometime last year, I moved back to GitHub primarily, and I'm sad to say that my company is likely to make the same choice soon. The only reason is stability, and that's a little sad to me. I really want to love the product, but I need something that just works, not bells and whistles.


Whenever GitHub is down you see comments saying much the same thing only with the names reversed.

The thing that strikes me each time is how fragile everyone's setup is if GitHub/lab is a single point of failure for them...

At least GitLab lets you self-host, which would let you run backups on offsite hosting, meaning zero downtime.


GitHub's uptime is way better than GitLab's. I am not really sure gitlab.com even reaches 99% availability (not to mention 99.9%).

GitLab may well be focused on providing full-stack dev-services (VCS, build-server, CI and stuff), but in the end they are a hosting company - and for hosting, uptime is one of the most important metrics.

EDIT: 99% was an exaggeration, but I got so many 50Xs over the last few days that it was unusable from time to time.


I agree that GitHub's uptime is better, but I doubt that GitLab is below 99% availability - that'd mean over 14.4 minutes of downtime each day.


We have some public Pingdom stats at http://stats.gitlab.com/4932705/history, looks like this was the first time we dipped below 99.9% this year (partial outages excluded).

But yeah, our response times have been steadily increasing and could definitely be a lot better ;-)


You can see GitLab.com's uptime here via pingdom:

http://stats.gitlab.com/4932705/2019/11 (99.75%)

http://stats.gitlab.com/4932705/2019/10 (99.98%)

http://stats.gitlab.com/4932705/2019/09 (100%)

http://stats.gitlab.com/4932705/2019/08 (100%)

http://stats.gitlab.com/4932705/2019/07 (100%)

http://stats.gitlab.com/4932705/2019/06 (99.98%)

EDIT: I see one of my colleagues also posted here, I wasn't asked to and I'm doing it of my own accord. I assume they are as well, but I can only speak for myself.


Ok, I admit that 99% was an exaggeration, but you really should make availability of gitlab.com your main focus; you've been promising that to users for years (linking to issues, etc.).

Btw: your links are HTTP, and the HTTPS version has `SSL_ERROR_BAD_CERT_DOMAIN`; maybe you should check with pingdom.com?


Agreed, nobody's perfect.

My experience with GitHub is much better though, on average. I haven't had a single bad experience personally in the past year for my personal stuff. Not once. GitHub does only one thing, and the CI is usually somewhere else, so there are fewer reasons to fail as well... It might be comparing apples and oranges.

We use Gitlab at the office and it is constantly slow. Might be our setup I agree, but in the end it also tarnishes the brand to me.

All in all, my general experience is that speed and stability have degraded over the past years, in favor of the "all-in-one DevOps pipeline". GitHub stayed focused on its core market.


>GitHub does only one thing, and the CI is usually somewhere else, so there are fewer reasons to fail as well... It might be comparing apples and oranges.

This is changing with Github Actions.


> This is changing with Github Actions.

Well, GitLab does artifact/Docker registry management, security, value stream management, . . . So still.


GitHub now has package registries and security scans too, and adding more features to complete the lifecycle.


Wasn't aware. Will look into it. Thanks!


To be fair, over the last 5 years I've been hit by several GitHub outages and never been hit by a GitLab outage. The reason? I don't use GitLab during my working day. Similarly with this one: I wasn't doing anything on GitLab when it went down.


Interesting. My issues have been in the evening :). Are you in the US? I am EU based


I'm in Japan :-)


Zero downtime is an ideal that you'd need a lot of investment to achieve. I mean, for 99.99999% uptime you'd already need a multi-server, multi-AZ, multi-region and multi-cloud setup and would have to orchestrate data parity between those environments. That's not easy to set up and maintain.


Yes.

Tell that to all the companies that have to stop working when either of these sites goes down because they've built their entire workflow on GitHub/lab being up.

Since zero downtime is so costly, maybe it's a better idea not to rely on the uptime of a third party?


You're wrongly assuming that your own service will have a better uptime.


Given git's distributed nature, I can't imagine either service going down for a day could have that enormous of an impact. I'd be curious to know how people manage that. I would think other services which point to git.* would largely have some option for a manual work around, or would not be utilized in mission critical roles.


But git is not truly distributed. Each 'node' in the network does not automatically get access to new trusted nodes (remotes). Management of the distributed nature is still manual, meaning that your CI/CD system cannot suddenly pull changes from a local developer machine instead of GitHub/GitLab/Gerrit/...

In an ideal world, each commit is cryptographically signed and automatically distributed to a large number of nodes. Only correctly signed commits would be picked up by CI/CD and the build artifacts would use the digital signature of the code to further deploy the resources in a trustworthy manner.


It's mostly about the project management, issue tracking, pull requests, CI/CD, releases and other features being unavailable, not the source code itself.


Continuous integration is typically not distributed, and it blocks releases.


Github also supports self-hosting.



Sure, it is unfortunately above the monthly budget that my wife allows me to spend :)


Yes but not for free.


>> At least GitLab lets you self-host, which would let you run backups on offsite hosting, meaning zero downtime.

Zero downtime, but not zero maintenance cost. I think the administration of a full CI/CD environment (not only the GitLab machine but also the runners, the network, etc.) is the main factor here.


GitHub also lets you self-host.


Totally off topic: I also went to those meetups in Utrecht! They were small fish then. Crazy what can happen in ~6 years.


I love gitlab, I really do, but their uptime is atrocious for a multi-billion dollar company.


It seems like GitLab should indeed focus on stability now. The feature set is great. But it is all not worth it if it keeps disrupting work on a constant basis by being unstable.


I recommend hosting it yourself; it works great and is fast on a local network. You can run updates when it's convenient for you, not for others.

GitLab is a rare software that enables you to be in control.


I've always been a little cautious about running it myself. When I look at the components involved there are a lot of moving parts, and I don't ever want to be in a position where I can't get things started again. If I'm running on-premise I'll switch to Gitea with Drone CI instead.


GitLab has a virtual machine image, so it's really unobtrusive for your system.


They also have a Debian package which sets up the whole environment. It has worked well for me so far. The only thing I had to add was a backup script.


The omnibus package makes it real easy to install, especially with Ansible.

FYI, if you are uploading to AWS or another cloud provider, you don’t even need a backup script. You can configure it in the gitlab.rb file:

https://docs.gitlab.com/ee/raketasks/backup_restore.html#upl...


Updated just now...bad firewall change.

"[Identified] We have identified firewall misconfiguration was applied that is preventing applications from connecting to the database."


It was already a bit flaky since last Friday (although the status was still green then). Perhaps having to roll out two security updates yesterday pushed it over the edge...


One starts to wonder if they will ever get stability under control.


It was an iptables problem

https://gitlab.com/gitlab-com/gl-infra/production/issues/142...

The rollout of new iptables rules blocked database server connections


> [Identified] We have identified firewall misconfiguration was applied that is preventing applications from connecting to the database. We've rolling back that change and expect to be operational again shortly.

Heh, happens to the best of us! Seems to be coming back online now.


Any service on the clearnet can go down. A p2p alternative like Git Center (git over ZeroNet) is more resilient to service provider misconfiguration and censorship.

git center: http://127.0.0.1:43110/1GitLiXB6t5r8vuU2zC6a8GYj9ME6HMQ4t/

client: https://zeronet.io/

proxy: https://zero.acelewis.com/


Not sure if I have anything to add on top of what people are already saying, but the performance of GL has been extremely frustrating for the past couple of months. Viewing the diff of a PR can sometimes take around 20 seconds just to load the tab.

I think it's about time they dedicated some resources to it, or addressed the public about the efforts they're making to fix these issues. It's becoming excruciating.

I've used Github every day for years and years and I feel like every page load has been pretty much instant forever.


I originally joined GitLab for their free private repos, but with the recent downtime/sluggishness I have jumped over to GitHub (now that they offer free private repos too).


GitLab is still the only viable choice for non-commercial groups who want private repos though; the 3-collaborator limit on private repos and the inability to mix public and private repos in organizations on GitHub are very limiting.


Sorry, but this isn't true at all. Bitbucket works fine, and GitHub offers plans for non-profit groups for free: https://github.com/nonprofit

Self-hosting has multiple different options as well.


Registered non-profit <> non-commercial.

- a private website for my local sailing club
- a mod for a game
- an open source project that requires a private repository for a few things
- any project relating to a private community

None of these are registered non-profits.

Bitbucket is capped at 5 users as far as I can see, and self-hosting is just a recipe for lost data. I don't know many amateur groups that could safely host a server.


Bitbucket? (Even though their availability is terrible too.)


It is worth mentioning their status page is not down.


It looks like it's externally hosted at status.io. It makes sense to put your status page on separate hosting; otherwise it can go down at the same time as your main infrastructure, which defeats the point of having a status page.


LOL!


Hmmm, just today I wanted to install a new virtual machine with the latest GitLab release to check if it makes sense to run it in-house... does anybody know if there is a VM, a Vagrant machine, an ISO or a repository online that can still be used? Thanks!

Or does a mirror exist on github.com? ;)


If you just want to try it out, the easiest option is often the GitLab Development Kit (GDK).

    gem install gitlab-development-kit
    gdk init
    cd gitlab-development-kit
    gdk run
I doubt it will work today though; it's probably pulling everything from Gitlab.

For Docker, check out https://hub.docker.com/r/gitlab/gitlab-ce (I haven't tried it out myself, but it seems popular :)


The GDK installation has changed a bit (I just set it up today); for example, you now need to run `gdk install`, and `gdk start` instead of `gdk run`. Please also note that it involves installing dependencies on your local machine, e.g. Ruby, Postgres, etc. Here is the link to the GDK: https://gitlab.com/gitlab-org/gitlab-development-kit
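For anyone following along, a minimal sketch of the bootstrap sequence described above (assuming Ruby is already installed; see the GDK repo for the authoritative steps):

```shell
# Install the GDK gem and bootstrap a working tree
gem install gitlab-development-kit
gdk init                   # creates the gitlab-development-kit directory
cd gitlab-development-kit
gdk install                # fetches GitLab and installs its dependencies
gdk start                  # replaces the old `gdk run`
```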

Otherwise I would try the Docker container, or just install it with Omnibus in a VM: https://about.gitlab.com/install/
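If you go the Docker route, a minimal sketch along the lines of GitLab's install docs (the hostname, host ports, and /srv/gitlab volume paths are placeholders to adapt):

```shell
docker run --detach \
  --hostname gitlab.example.com \
  --publish 443:443 --publish 80:80 --publish 2222:22 \
  --name gitlab \
  --restart always \
  --volume /srv/gitlab/config:/etc/gitlab \
  --volume /srv/gitlab/logs:/var/log/gitlab \
  --volume /srv/gitlab/data:/var/opt/gitlab \
  gitlab/gitlab-ce:latest
```

Mapping the container's SSH port to 2222 on the host avoids clashing with the host's own sshd.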


Oh, thanks. I got that from some old notes since my bookmark didn't work for some reason ;)

For me, running Gitlab locally is extremely heavy (webpack especially). How beefy is your computer, as a Gitlab developer?


I have got a 16 GB MacBook from 2018.

Funny that you mention Webpack, I am part of the webpack working group and we are trying to improve developer experience:

https://about.gitlab.com/company/team/structure/working-grou...

We were able to reduce memory consumption quite a bit, and we are working on various other improvements.


Great! I see you were actually involved in one of the problems that bit me (8 GB...). https://gitlab.com/gitlab-org/gitlab-development-kit/issues/...

Small world :)


> Hmmm, just today I wanted to install a new virtual machine

> with the latest Gitlab release

Just wait till later/tomorrow. Setting up your own instance is relatively easy. Still, you can get most of the way without their repos...

https://www.techrepublic.com/article/how-to-set-up-a-gitlab-...

> Or does exist a mirror on github.com? ;)

It seems so: https://github.com/gitlabhq/gitlabhq


Get the docker image from dockerhub


oof, literally all of it. don't see that too often. on Thanksgiving Day no less.

I root for GitLab when I can, but there's a reason I mirror my repos to GitHub.


Ironically, status.gitlab.com takes upwards of a minute to load on my end. I thought it was going to time out.


I hope this incident is not a #HugOps moment again; waiting for interesting new things from the postmortem.


We’re doing this now? For my entire tenure on HN, this site has been a status page for GitHub. Looks like it’s GitLab’s turn.


You guys should consider switching to GitHub. Much faster, no 500-errors, less downtime.


Microsoft must have migrated it over to run on Windows Server.


Maybe you mixed something up. This is about Gitlab, not Github.


I don't want to hijack this thread, but what was the end result of GitLab's decision to stop hiring _in_ certain countries? There was a lot of media coverage when the incident happened, but I have no idea what happened afterwards and can't easily find out.


IIRC, they are not hiring 'in' certain countries. This is different from 'from' certain countries.

AFAIK, nothing material happened after the announcement. There are lots of companies which do not hire in certain countries.


The discussion is still ongoing, the latest official communication I'm aware of was the blog post at https://about.gitlab.com/blog/2019/11/12/update-on-hiring/

Note that this was always only about excluding certain job roles with administrative access to production servers, not all jobs in general.


Usually it's legal/HR issues with the way you pay your employees.

As others have said, their policy is about not hiring people "in" certain countries rather than "from" them, as tax rules change from country to country.


[Speculation] Perhaps a DDoS or a disgruntled employee...

In all seriousness, hopefully these things are not related.


This strikes me as odd, because if there's one thing GitLab engineers are known for, it's their unmatched competence.


Like when they dropped their database and took a few days to recover?


It took about 24 hours to recover, not "a few days".


Hmm, how did you come to this conclusion?


Good one



