Before Microsoft bought them, they were basically at a standstill and no new features were being added to the product. At least, that's my recollection of it; perhaps someone can correct me if I'm wrong.
Before the MS acquisition, Github didn't even have emoji reactions on comments in issue threads, so people just spammed +1s if they had the same issue. It was terrible. (You can still see evidence of this on older repos.) They also didn't have like 5 billion other features that we now take for granted.
EDIT: Sorry, this is incorrect. GH did ship them prior to the MS acquisition. I do however remember them taking an EXTREMELY long time to ship emoji reactions, which struck me as a fairly trivial feature. I stand by the point that GH ships features faster post-acquisition, though.
I remembered that reactions irked me for some reason, but I had incorrectly tied it to the MS acquisition. The real issue was this: it took Github an extremely long time to add a feature as simple as reactions. Github was founded in '08 and didn't add reactions for another eight years! But I distinctly remember that once MS acquired them, they started shipping features left and right, and features much larger than simple emoji reactions.
Before this gets downvoted, are there any notable features that could only have been added thanks to the Microsoft acquisition?
I think GitHub Actions is pretty successful. It may not have been developed by Microsoft, since it launched right after the acquisition, but I'd guess it's easier to keep free now that it runs on their own hardware.
I don't immediately see copilot as a GitHub feature, but maybe that'll change for better or worse.
I think you're asking a slightly incorrect question. Very few features could "only" be added thanks to the MS acquisition. What you really want to know is how many more features were added, thanks to MS. Or, how much longer would those features have taken to be built if GitHub was not acquired by MS. My gut feeling, seeing GH pre- and post-acquisition, says that a lot of the stuff they shipped post-MS would simply never have been shipped before.
Under MS, they shipped - just off the top of my head:
* dev containers
* vscode-github-in-the-browser
* github actions
* that extremely useful fuzzy-find that repos have (press t in any repo)
* copilot
I seriously doubt they could have shipped a single one of those things pre-MS.
GitHub Actions seems to use Azure a lot under the hood; given that older features (attachments, releases, etc.) appeared to run on AWS, it seems likely that it really did need Microsoft.
My best guess, with no knowledge of what actually happened, is that it was derived from Azure Pipelines.
I'm sure making private repos free also dramatically increased the number of total repos to manage. I know personally that I went from having a couple of public repos to at least a dozen private repos for notes, configs, etc.
Am I the only one who remembers frequent outages before the acquisition? I didn't keep track in a spreadsheet or anything, so I don't have data to back it up, but I always assumed it was similar to the Twitter of that era: hard to make a stable service in Rails.
If anyone can actually put out the data, that would be great. I suspect there was a period of relative stability post-acquisition that dulls our memory of the frequently broken PRs we probably just got used to at the time.
I’ve also noticed a lot of consistency issues on their site recently.
Open a PR and someone comments: it doesn't show up no matter how many hard refreshes, browser restarts, etc. Push a commit to your branch: it doesn't show up in the PR, but it shows up in the commit history. If you have auto-merge ticked, it might never merge even when it meets the conditions, and if the branch does merge you won't know, because, again, the PR never updates and still looks open.
I have these issues, in varying degrees of duration and severity, about once a week.
I gave up trying to be a polite API consumer with GitHub events, etc. Polling the basic resources every minute is way more reliable and your code will survive a junior developer's shenanigans.
I had the REST API ETag polling working well, but then I discovered my org event stream didn't include label changes. That is a separate thing I needed to poll, and at that point I lost my mind. I refuse to keep track of 6+ pieces of state in order to pull essential data from an API.
My current pattern is to list all open issues and compare their updated_at with persisted copies. If any have changed, I refresh the comments for that issue as well as the top-level items (title/body/labels).
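In shell, a minimal sketch of that sweep (assuming curl and jq are available, with OWNER/REPO and GITHUB_TOKEN as placeholders you'd fill in):
```
# List open issues and record number + updated_at. Note this endpoint also
# returns pull requests, which you may want to filter out.
touch issues.last
curl -s \
  -H "Authorization: Bearer $GITHUB_TOKEN" \
  -H "Accept: application/vnd.github+json" \
  "https://api.github.com/repos/OWNER/REPO/issues?state=open&per_page=100" \
  | jq -r '.[] | "\(.number) \(.updated_at)"' | sort > issues.now

# Numbers of issues that are new or whose updated_at changed since the last
# sweep; only these get their comments and title/body/labels re-fetched.
comm -13 issues.last issues.now | cut -d' ' -f1

mv issues.now issues.last
```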
It's explicitly part of Shopify's and BigCommerce's design pattern. Like: yes, we offer webhooks, but you also need to come back every X (I typically do hourly) and sweep for missed data.
Not to mention that with Shopify, webhooks are not guaranteed to be ordered at all. You may receive an order.updated event prior to an order.create event. Delivery mileage and timing may vary.
So, after all that back and forth we asked ourselves, why do we even bother with webhooks?
Well, for one, being real time helps our clients improve their shipping speed, keep real-time inventory, and so forth. Secondly, it keeps our processing flatter. We might run an hourly cron to pick up stragglers, but we don't do huge data dumps every hour. In some of our busier systems it can be a real challenge when clients do things like monthly invoicing of all their customers. Using webhooks and processing records as they come in keeps the queue as small as reasonable.
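A rough sketch of what such an hourly sweep can look like, assuming the Shopify REST Admin API and GNU date (SHOP, API_VERSION, and the token are placeholders):
```
# Grab anything touched in roughly the last hour (a little overlap doesn't hurt)
# and push it back through the same handler the webhooks feed.
SINCE=$(date -u -d '1 hour ago' +%Y-%m-%dT%H:%M:%SZ)
curl -s \
  -H "X-Shopify-Access-Token: $SHOPIFY_TOKEN" \
  "https://$SHOP.myshopify.com/admin/api/$API_VERSION/orders.json?status=any&updated_at_min=$SINCE" \
  | jq -r '.orders[].id'
# As long as the order handler is idempotent, missed or out-of-order webhook
# deliveries get reconciled here without double-processing anything.
```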
Yeah, GitHub doesn't like it when you poll frequently instead of using hooks, but you pretty much need something polling (infrequently) to catch undelivered hooks.
That it is, and it applies to me. And to countless others. Oh, the drudgery.
I also know countless people who work outside of CRUD world who rely on CI. The only exception I can think of is scientists who, well, tend not to use version control beyond folder naming. Not that I blame them; on the contrary.
FWIW, pull request heads can be accessed locally via
```
${remote}/pull/${ID}/head
```
where remote is the git remote for the repo (probably `origin`) and ID is the pull request number. You may need to fetch to get the up-to-date head, and if that still doesn't work, try fetching the ref directly from the remote.
You can then diff against main/master, try merging into main, etc., which should give you everything you need to do a code review.
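For example (1234 is a placeholder PR number; adjust the remote and base branch names to taste):
```
git fetch origin pull/1234/head:pr-1234   # materialize the PR head as a local branch
git diff main...pr-1234                   # changes relative to the merge base with main
git merge --no-commit --no-ff pr-1234     # optional trial merge; undo with git merge --abort
```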
If you want to diff the current branch against main/master at the branch point, you can do:
```
git diff $(git merge-base --fork-point master)
```
That will diff against the point in history where the branch diverged.
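One caveat: --fork-point relies on the reflog of master, so it can come up empty in a fresh clone or CI checkout. If that happens, the plain merge-base (or the three-dot shorthand) only needs the commit graph:
```
git diff $(git merge-base master HEAD)   # working tree vs. the branch point
git diff master...HEAD                   # committed changes only, same branch point
```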
Move where exactly? Bitbucket and GitLab aren't that great either.
I'm affected by GitHub outages maybe once every 2 months for a couple of hours. Sure, various parts go down but it's never something super impactful to my workflow.
I am not particularly keen on GitHub and Microsoft scraping my code and re-selling it via Copilot. If you own mission-critical IP, you can always spin up a VM and set up reliable backups using GitLab, or simply via SSH. It really is that easy to move away.
That tagline isn't for you. It's for aging C-suite execs who like fancy-sounding words.
As for not going down. Meh, at least you don't have to do on-call for it. That's kind of half the point of paying for SaaS. At the end of the day, it's just a human system.
Outages, forced 2FA bullshit, half-wit AI feature because buzzword... all this makes long term customers want to self host. Can someone drop a hint to the CEO please: WE PAY YOU TO BE BORING AND STABLE.
I don't trust security people to do sane things. - Linus Torvalds (2017)
Specifically, if you have a happy client paying you monthly who has established a functional workflow and no problems, how is forcibly breaking that based on a series of bullshit assumptions (eg. customer has a static phone number in one country that they have continuous SMS access to, customer wants to install your extra app, whatever) for the nominal benefit of some theoretical attack category a smart idea?
It's fine to offer the migration, even to encourage it. But to insist is both wrong and bad for business.
> eg. customer has a static phone number in one country that they have continuous SMS access to, customer wants to install your extra app, whatever
GitHub uses TOTP, which does not require SMS, can be used with dozens of apps (because it's a protocol), and can be easily transferred using QR codes or seeds.
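To underline that it's just a protocol: any RFC 6238 implementation fed the same seed produces the same codes, for example with oathtool (the base32 seed below is a throwaway placeholder):
```
oathtool --totp -b JBSWY3DPEHPK3PXP   # prints the current 6-digit code
```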