Although it's nice that Git is Git and we can all mostly still work, it still seems foolish to rely on a single point of failure like GitHub. I've been toying with the idea of creating a tool that would map the Git CLI onto two or more hosting services at the same time. The effect would be something like: run `git push` and it pushes to GitHub, Bitbucket, and GitLab. I can't imagine something like this would be too difficult, and it would eliminate having to twiddle your thumbs while you wait for things to come back up.
I will be that one who reminds you that a Git server does not have to be gitea/gogs/gitlab/onedev/pagure/git-remote-{keybase,s3,codecommit,...}: you can also provide a path to a WebDAV server[0][1][2] if you need a very simple Git server (e.g. served internally over 192.168/16 on an office network, say a Raspberry Pi with an external USB drive, or a temporary repository shared from your own laptop with your colleagues).
Or you can just point at any host with an SSH server and Git installed. You only need `git init --bare /some/path` on the server and `git remote add origin myserver:/some/path` on the clients. If the repo is used by multiple users, you'll also want the `--shared` flag.
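The full flow can be sketched like this. A local path stands in for the server side here so the commands are runnable as-is; over SSH the remote URL would instead look like `myserver:/some/path`:

```shell
# "Server" side: a bare repo; --shared makes it group-writable for multiple users.
server=$(mktemp -d)/project.git
git init --bare --shared "$server"

# "Client" side: add the remote and push. Over SSH you'd use
# e.g. myserver:/srv/git/project.git instead of the local path.
client=$(mktemp -d)
cd "$client"
git init -q .
git -c user.name=Demo -c user.email=demo@example.com commit --allow-empty -m "first commit"
git remote add origin "$server"
git push -q origin HEAD:master
```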
Yes, but you can also push/pull from a filesystem location. To be able to push to it, it is simpler to init the repo with `git init --bare`.
I've used this on NFS drives, but also on SMB shares from Windows, and on just about anything that can be mounted as a folder. An external hard drive or USB stick also works.
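A minimal runnable sketch of that setup, with a temp directory standing in for the mount point (in practice it would be something like `/mnt/usb` or an NFS/SMB mount):

```shell
# A bare repo on any mounted path (NFS, SMB share, USB stick...) acts as a remote.
mount=$(mktemp -d)            # stand-in for e.g. /mnt/usb
git init --bare -q "$mount/project.git"

work=$(mktemp -d)
cd "$work"
git init -q .
git -c user.name=Demo -c user.email=demo@example.com commit --allow-empty -m "work"
git remote add usb "$mount/project.git"
git push -q usb HEAD:master
git pull -q usb master        # another machine mounting the same drive pulls the same way
```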
And lastly, Git also comes with a daemon mode which makes it easy to temporarily host a server for a repo. Just connect multiple laptops through Wi-Fi and work together (with a pull workflow rather than a push workflow). That's quite useful [1]
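A sketch of that daemon mode: `git daemon` serves repos over the `git://` protocol, read-only by default, which is exactly the pull workflow described. Loopback and a non-default port are used here so it runs self-contained; on a real LAN a colleague would replace `127.0.0.1` with the host's Wi-Fi address:

```shell
# Host side: export every repo under a directory over git://.
repos=$(mktemp -d)                      # stand-in for e.g. ~/repos
git init -q "$repos/project"
git -C "$repos/project" -c user.name=Demo -c user.email=demo@example.com \
    commit --allow-empty -m "demo"
git daemon --base-path="$repos" --export-all --reuseaddr \
    --listen=127.0.0.1 --port=9419 --detach --pid-file="$repos/daemon.pid"
sleep 1

# Colleague side: clone and pull over the network (read-only, so pull-based).
git clone -q git://127.0.0.1:9419/project "$repos/clone"
kill "$(cat "$repos/daemon.pid")"
```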
The number of alternatives to the popular Git repository hosting services is indeed awesome.
> Yes, but you can also push/pull from a filesystem location. To be able to push it, it is simpler to init the repo with `git init --bare`.
Personally, I have built my own simple solution to sync encrypted files over the Internet using Git with a git remote at a filesystem location. The implementation evolved over time, but the initial idea[0] was to combine restic, pass, and git with simple scripts that pull/push the git remote repo (located in /tmp/repos) to an S3 bucket via restic, which takes care of upload, deduplication, and encryption. Thanks to restic I also don't care much if I commit to a stale (outdated) master branch, because it uses snapshots and it's quite easy to navigate between them.
Now that you've solved your problem, let me guess your next question:
I have two git repositories which somehow got into an inconsistent state: How can I reconcile changes in both repositories and resolve conflicts between mutable-metadata (branches, tags) in a sane way?
Wasn't the original point to be able to push to the second copy when the first is down? What's the point (other than backup) of a second "working" copy that you can't use?
A repo admin or script could enable push permissions on the mirror while the primary is down. Then, when the primary is back up, fast-forward it and switch back. Or just allow pulling from the mirror and wait to merge until the primary is fixed.
Branches: Let each branch owner deal with that. They likely have the most information about it. Create a new temporary branch, merge both sides into it and see what happens.
Tags: Don't have a process which can result in tags pushed into different places. It's a path to madness. Same applies to master/release branches.
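The temporary-branch approach for a diverged branch can be sketched as follows. The remote names `primary` and `mirror` are hypothetical, and the setup portion just fabricates two diverged copies of a `feature` branch so the reconcile steps have something to work on:

```shell
tmp=$(mktemp -d)
gitc() { git -c user.name=Demo -c user.email=demo@example.com "$@"; }

# Fabricate two copies of a repo that diverged on 'feature' during an outage:
git init -q "$tmp/seed"
(cd "$tmp/seed" && gitc commit --allow-empty -m base && git branch feature)
git clone -q "$tmp/seed" "$tmp/primary"
git clone -q "$tmp/seed" "$tmp/mirror"
(cd "$tmp/primary" && git checkout -q feature && gitc commit --allow-empty -m "landed on primary")
(cd "$tmp/mirror"  && git checkout -q feature && gitc commit --allow-empty -m "landed on mirror")

# The branch owner reconciles on a throwaway branch first:
git init -q "$tmp/work" && cd "$tmp/work"
git remote add primary "$tmp/primary"
git remote add mirror  "$tmp/mirror"
git fetch -q primary && git fetch -q mirror
git checkout -q -b reconcile-feature primary/feature
gitc merge -q -m "reconcile feature" mirror/feature   # resolve conflicts here, not on the real branch
```

Once the merge looks right, the real `feature` branch can be fast-forwarded to it and pushed to both remotes.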
You can just push and pull directly to/from your colleagues' computers. The main advantage (for an established team) of github/gitlab/bitbucket is pull requests, issue management, CI etc., and that's not easily synchronized across multiple providers.
Me too. That's awesome: I've just suggested it to our team. Thanks for sharing, GP!
One serious question though: how do you deal with PRs when you do this? That's one area where it feels like things could be quite messy, especially if you have quite a few PRs going in throughout the day.
There have been various proposals over the years for how to integrate issues and reviews in the distributed git tree itself (http://dist-bugs.branchable.com/software/), but I don't think any of them have really gone anywhere, certainly not in terms of support by the hosted git vendors.
There is git-appraise to fill that gap [1]. I am personally waiting for a federated "forge" that can share PRs across platforms, such as the one developed in [2]. Maybe via e-mail? [3].
Merge the PR on GitHub, pull to your local copy (now you're ahead of one of origin's URLs), then push (it will just update the origin that's behind).
If there are any discrepancies between them, you'll need to merge locally, of course.
Yes, exactly. Here's an example for a repository hosted on my server and in Keybase Git. Pulls / Fetches will use the repository on my server. Pushes go to both.
    [timwolla@/s/xxx (master)]g remote show origin
    * remote origin
      Fetch URL: git@git.example.com:xxx.git
      Push  URL: git@git.example.com:xxx.git
      Push  URL: keybase://private/timwolla/xxx
      HEAD branch: master
      Remote branch:
        master tracked
      Local branch configured for 'git pull':
        master merges with remote master
      Local ref configured for 'git push':
        master pushes to master (up to date)
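That configuration can be recreated with `git remote set-url --add --push` (the URLs below are the ones from the example output above):

```shell
demo=$(mktemp -d)/demo
git init -q "$demo" && cd "$demo"
git remote add origin git@git.example.com:xxx.git
# Once any explicit push URL exists, git pushes ONLY to push URLs, so the
# fetch URL must be re-added as a push URL if you still want to push there:
git remote set-url --add --push origin git@git.example.com:xxx.git
git remote set-url --add --push origin keybase://private/timwolla/xxx
git remote show -n origin    # -n: show the config without contacting the servers
```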
Is this safe? The Git docs explicitly say not to do this:
> Note that the push URL and the fetch URL, even though they can be set differently, must still refer to the same place. What you pushed to the push URL should be what you would see if you immediately fetched from the fetch URL. If you are trying to fetch from one place (e.g. your upstream) and push to another (e.g. your publishing repository), use two separate remotes.
which seems to imply that weirdness might happen if the two happen to get out of sync, or if one (specifically, the one pointing to the repository you're fetching from) fails.
For something that may be a bit safer, I believe it's possible (but I haven't tested it) to have multiple values for `branch.whatever.pushRemote`; that should do the same thing, with the added bonus of making the secondary remote easily fetchable.
I don't see how the parent comment does anything different from what's advised there? It's setting two push URLs for the same remote, not a push and a fetch URL. Presumably for fetch you would have a separate remote. I think the idea is that every time you push you push to both.
But why not just use remote names other than the default of "origin"? Somebody else in the thread mentioned that it might be a bit complicated to clean things up after an outage on such a "multiplexed" remote.
There's not much more to it than a quick wrapper around remotes. The Git flow here is 2+1 steps: add another remote, then push to that remote (the +1 being the one-time remote creation). It would be cool to see it built into a Git plugin or wrapper!
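A hypothetical version of such a wrapper is just a loop over `git remote`. The demo below stands up two throwaway bare repos in place of real hosting services:

```shell
# Hypothetical wrapper: push the given refs to every configured remote.
git_push_all() {
    for remote in $(git remote); do
        git push "$remote" "$@"
    done
}

# Demo with two local bare repos standing in for GitHub/GitLab/Bitbucket:
tmp=$(mktemp -d)
git init --bare -q "$tmp/hub.git"
git init --bare -q "$tmp/lab.git"
git init -q "$tmp/work" && cd "$tmp/work"
git -c user.name=Demo -c user.email=demo@example.com commit --allow-empty -m demo
git branch -M master
git remote add hub "$tmp/hub.git"
git remote add lab "$tmp/lab.git"
git_push_all master
```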
At my last company, when our internet connection went down, a bunch of team members said they couldn't work because they couldn't get to GitHub. They were shocked to learn they could still collaborate by pushing their changes back and forth with other colleagues.
Perhaps what they really meant is that they couldn't get to stack overflow :-(