IndieAuth is super cool and a vital component for giving users back control of the internet, but I can't shake off the security concerns.
Also, near the end of the article: using a security nightmare such as WordPress as your identity provider, what could go wrong? It only takes one single rogue plugin.
Someone could break into a WordPress install via a plugin's 0-day, for example, and then log into all of the accounts managed by that install's OpenID server.
We have 7 racks and 3 people, and the actual hardware work is a minuscule part of that. A few hundred VMs, anything from "just software running on a server" to k8s stacks (the biggest one is 30 nodes), 2 Ceph clusters (ours and a client's), and a bunch of other shit.
The stuff you mentioned is, amortized, around 20% of the work (automation ftw). The rest of it is stuff that we would do in the cloud anyway, and the cloud is in general harder to debug too (we have a few smaller projects managed in the cloud for customers).
We did the calculation to move to the cloud a few times now; it never came even close to profitable, and we wouldn't save on manpower anyway, as 24/7 on-call is still required.
So I call bullshit on that.
If you are a startup, by all means go cloud.
If you are small, go cloud too; running your own hardware is not worth it.
If you have spiky load, cloud or hybrid will most likely be cheaper.
But if you have a constant load (by which I mean the difference between peak and lowest traffic is "only" around 50-60%) and need a bunch of servers to run it (say 3+ racks), it might actually be cheaper on-site.
Or a bunch of dedicated servers. Then you don't need to bother managing hardware, and in case of a boom you can even scale relatively quickly.
Every one of your examples in the second list is relevant to both on-prem and cloud. Also, the cloud still has on-call, just not for hardware issues (you'll still likely get paged for reduced availability of your software).
The problem here is “cloud” can mean different things.
If you’re talking about virtual machines running in a classical networking configuration then you’re not really leveraging “the cloud”; all you’ve done is shift the location of your CPUs.
However, if you’re using things like serverless, managed databases, and SaaS, then most of the problems in the second list are either solved or much easier to solve in the cloud.
The problem with “the cloud” is you either need highly variable on-demand compute requirements or a complete re-architecture of your applications for cloud computing to make sense. And this is something that so many organisations miss.
I’ve lost count of the number of people who have tried to replicate their on-prem experience in cloud deployments and then come to the same conclusions as yourself. But that’s a little like trying to row a boat on land and then saying roads are a rubbish way to travel. You just have to approach roads and rivers (or cloud and on-prem deployments) with a different mindset, because they solve different problems.
This is simply not true unless you build in the cloud the same way you build on prem and just have a bunch of VMs. PaaS services get you away from server / network / driver maintenance and handle disaster recovery and replication out of the box. If you're primarily using IaaS, you likely shouldn't be in the cloud unless you're really leveraging the bursting capabilities.
“Just not for the hardware issues” is a huge deal though. That’s an entire skillset you can eliminate from your requirements if you’re only in the cloud. Depending on the scale of your team this might be a massive amount of savings.
At my last job, I would have happily gone into the office at 3am to swap a hard drive if it meant I didn't have to pay my AWS bill anymore. Computers are cheap. Backups are annoying, but you have to do them in the cloud too. (Accidentally deleting your Cloud SQL instance also deletes all of its automatic backups, so you have to roll your own if you care at all. Things like that; cloud providers remove some annoyances, and then add their own. If you operate software in production, you have to tolerate annoyance!)
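For illustration, here's a minimal sketch of what "rolling your own" Cloud SQL backups can look like: periodically export the database to a GCS bucket you control, so the dump survives instance deletion. The instance, database, and bucket names are made up, and this assumes the gcloud CLI is installed and authenticated.

```python
# Minimal sketch of "rolling your own" Cloud SQL backups by shelling out to
# the gcloud CLI. All names here (instance, database, bucket) are made up;
# assumes gcloud is installed and authenticated, and that the instance's
# service account can write to the bucket.
import subprocess
from datetime import datetime, timezone

INSTANCE = "prod-db"                    # hypothetical Cloud SQL instance
DATABASE = "app"                        # hypothetical database name
BUCKET = "gs://my-independent-backups"  # a bucket NOT tied to the instance

def export_backup() -> None:
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    dest = f"{BUCKET}/{INSTANCE}/{DATABASE}-{stamp}.sql.gz"
    # `gcloud sql export sql` writes a SQL dump to the given GCS URI;
    # a .gz suffix makes it compress the dump.
    subprocess.run(
        ["gcloud", "sql", "export", "sql", INSTANCE, dest,
         f"--database={DATABASE}"],
        check=True,
    )

if __name__ == "__main__":
    export_backup()  # run from cron or Cloud Scheduler on your own cadence
```

The point is that the dumps land in a bucket whose lifecycle you control, independent of the instance.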
Self-managed Kubernetes is no picnic, but nothing operational is ever a picnic. If it's not debugging a weird networking issue with tcpdump while sitting on the datacenter floor, it's begging your account rep for an update on your ticket twice a day for 3 weeks. Pick your poison.
The flip side is there is an entirely new skillset required to successfully leverage the cloud.
I suspect those cloud skills are also in higher demand, and therefore more expensive, than hiring people to handle hardware issues.
Personally, I appreciate the contrarian view because I think many businesses have been naive in their decision to move some of their workloads into the cloud. I'd like to see a broader industry study that shows what benefits are actually realized in the cloud.
Right. The skillset of pulling the right drive from the server and putting in a replacement.
That says you know nothing at all about actually running hardware, as the bigger problem is by far "the DC might be a 1-5 hour drive away" or "we have no spare parts at hand", not "fiddling with servers is super hard".
So you're suggesting GitHub should determine the total "average" license of a private repo and determine if your fork is indeed valid or not, before revoking access.
> So you're suggesting GitHub should determine the total "average" license of a private repo and determine if your fork is indeed valid or not, before revoking access.
No, I'm not suggesting that. In fact, I didn't say anything about what GitHub should or shouldn't do. My comment related to how licenses work, not how GitHub works or should work.
In particular, I was responding to this comment:
> It's not your code or data, it's your previous employers IP and they determine who has access and who doesn't and what licenses do or don't apply.
The claim in the above comment is incorrect. I stated that the claim is incorrect. That is unrelated to the issue of how GitHub should deal with private repo forks.
You mentioned in another post that the company only released a cleaned-up version publicly. That cleaned-up code which they published is clearly and unambiguously under the MIT license. Any other modifications that were made and not published (including any you made in your fork as an employee) are not automatically licensed as MIT. Your employer holds the copyright to those changes. They might be fine with those internal changes being released as MIT or they might not, but it is up to them.
If the private repo had the MIT license in it, then it was licensed with the MIT license, regardless of how widely or publicly the repo was distributed.
It isn't that clear-cut. A license is a legal grant of rights from the copyright holder to the licensee; a license file is just documentation. A repo can have different parts that are covered by different licenses, and there are different ways to mark the licenses, whether in the text of each file itself or in other top-level files that document the status. It is also fine for internal working copies not to have all their licensing documentation perfectly applied the instant a file is created. In particular, in many companies, individual software developers don't have authority to license software on behalf of the company that owns the copyright, so any markings they place in the repo are just tentative drafts pending legal review. So in the context of a private working copy, the existence of a license file doesn't carry a ton of legal weight. What matters is when the copyright holder chooses to grant a license, via whomever the company gives authority to do so.
Once the organization publishes the software to others, whatever license documentation they include with it is binding, unless other issues trump that (like them not holding the copyright to begin with).
Ok, it's not true in the general case, but it's true in this specific case. The project was intended to be an open source project from the beginning; we were intending to open source everything.
I won't say why this plugin may be better, as I'm naturally biased, but you can check the features of both plugins (mine are listed at https://graphql-api.com/features/) and see how they compare.
I haven't tried it, but I believe you can actually install both plugins side by side (using WPGraphQL's single endpoint, and my plugin's Custom Endpoint under a different route), run the same GraphQL queries against each, and compare their speed, usability, security mechanisms, configuration, extensibility, and anything else that could be relevant for your project; see the sketch below.
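As a rough sketch of such a side-by-side comparison (the site URL, endpoint routes, and query below are placeholders; the two plugins expose different schemas, so the query may need adjusting for each):

```python
# Rough sketch: send the same GraphQL query to both plugins' endpoints and
# compare status and latency. URLs and the query are placeholders; adjust the
# query per plugin, since their schemas differ.
import time
import requests

SITE = "https://example.com"
ENDPOINTS = {
    "WPGraphQL": f"{SITE}/graphql",        # WPGraphQL's default route
    "GraphQL API": f"{SITE}/graphql-api",  # hypothetical custom endpoint route
}
QUERY = "{ posts { nodes { title } } }"    # placeholder query

for name, url in ENDPOINTS.items():
    start = time.perf_counter()
    resp = requests.post(url, json={"query": QUERY}, timeout=30)
    elapsed = (time.perf_counter() - start) * 1000
    print(f"{name}: HTTP {resp.status_code} in {elapsed:.0f} ms")
```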
The above post was also linked from the obligator project's GH readme.