The solution to the "interface/tooling to translate" problem, at least for open source applications, is https://translatewiki.net/ , with the additional benefit that it comes with a team of experts that can help you understand how to deal with stuff you might be unfamiliar with, such as RTL languages and plural forms.
I find the repeated deprecations on GitHub Actions frustrating to work with. One of the key goals of a build system is to be able to come back to a project after several years and just have the build work out of the box.
Yet with GHA I need to update actions/checkout@v2 to actions/checkout@vwhatever (or, what I'm doing now, actions/checkout@main because the actual API hasn't changed) because... some Node version is "out of maintenance"?!
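For the record, the usual pinning options look something like this (a sketch; the SHA below is a placeholder, not a real commit):

```yaml
- uses: actions/checkout@v5                 # floating major tag; you still bump it when a new major lands
- uses: actions/checkout@main               # the @main approach above: always latest, no warnings, not reproducible
- uses: actions/checkout@<full-commit-sha>  # exact commit: reproducible and auditable, but deprecation warnings return eventually
```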
GHA is literally code execution as a service. Why would I care whether the Node runner has a security vulnerability?
NPM package security is a far bigger problem than some ephemeral invocation that probably isn't under PCI-DSS or HIPAA and doesn't serve content to the wild interwebs. The amount of caring should be nuanced to the use case rather than projected as a blanket, absolutist declaration.
I believe the guidance holds true regardless: only maintained code should run in these execution contexts. Otherwise, you are assuming more risk (needlessly, imho). How much more risk? I cannot say. Everyone’s risk appetite is different, but hosted providers clearly have an incentive to reduce their risk, as do most businesses.
If you want to run builds with old containers running old code on your personal equipment, sure, that’s fine, the impact is likely limited to you. A person has little financial, liability, or reputational risk.
Why is Github enforcing that decision rather than leaving it to the developer to enforce it as part of their workflow? At the bare minimum, why not give the developer the option to override the decision? Something along the lines of "Do you really want to run node 20?"
The article shows that such an option does exist. But it will be phased out in 3 stages, making it impossible to run node 20 eventually. This really does disrupt the standard software development practice of reproducible builds. Safety determinations must be implemented orthogonally to reproducible builds.
Ultimately, this is the next stage in the war on computing freedom and general purpose computing. They're moving from "We decide what you're allowed to run." to "We decide what you're allowed to develop." I know that many will object to this, calling it an overreaction, and claim that nobody is forced to use their CI/CD system. But as history has shown time and again, lock-in and restrictions come in numerous small doses that are individually too small to trigger the alarm bells among the general population.
> If you want full control, install and use your own runners, which flips the responsibility to you.
It's a bit tedious to have to explain that service providers have certain responsibilities and obligations too. Corporate culture has given bigtech a blank cheque to behave the way they want. That aside, based on the way you framed your reply, let me assure you that I don't trust them with even my projects, much less the CI system or their runner. And time and again, they vindicate that decision. I have posted the full argument as a reply to your sibling comment.
I'm still paying for these github runners. There is also some line between fully dictating which versions of languages and packages you are allowed to use and a managed runner that doesn't get in your way like a control freak.
because your "yes" might mean you are putting other projects at risk depending on the vulnerability.
Computing freedom applies only on machines YOU control. You can't expect to be able to do everything you want on hardware others control.
go buy some servers, put any github lookalike service in there and you are completely free to run with Node v1 if you really want.
You can absolutely install Node 20 yourself on GitHub's runners if you want and GitHub is fine with that. GitHub itself, and other projects, are protected by isolation from your workloads. Their security does not depend on the software you're running in your workflow. TFA is just talking about the version that comes preinstalled and is used by JavaScript-based actions.
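For example, something like this keeps your own steps on Node 20 regardless of which runtime version the actions themselves use (the versions here are illustrative):

```yaml
steps:
  - uses: actions/checkout@v4
  - uses: actions/setup-node@v4   # installs Node for your own steps, separate from the actions runtime
    with:
      node-version: '20'
  - run: node --version           # the build's own Node, independent of what actions/* run on
```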
> because your "yes" might mean you are putting other projects at risk depending on the vulnerability.
By ruining build reproducibility through such short-sighted decisions, you are actually compromising a security measure. And I already proposed a way to overcome this problem if you insist on disabling node 20 by default: provide a way to explicitly override it when needed.
Besides, the security model you suggest isn't even the correct way to deal with supply chain vulnerabilities. They have to be version-flagged at the artifact distribution point, not at the CI stage by unilaterally imposing backwards-incompatible restrictions.
> Computing freedom applies only on machines YOU control. You can't expect to be able to do everything you want on hardware others control.
There are two fundamental issues with that argument here. The first is that any user depends on services like these only after entering an agreement with a service provider (Github). Even free tier users agree to the ToS. This is significant because the developer(s) are making an investment in the provider's service. GAs are not seamlessly transferable to any competing service, even ones that try to replicate GA. The users are basically soft-locking themselves into the platform. It takes nontrivial effort from the user if they want to migrate away. In such situations, it's only ethically and morally correct for the service provider to never blindly pull the rug from under the users like this. This is especially true with paying customers.
The second problem with that argument is that it's not fair for the service provider to keep shifting the goalposts once the restrictions have been agreed upon by both parties. In the case of GA, the developers are not doing whatever they please on Github's servers. Their actions are executed within a restricted context with predefined restrictions. Any damage is confined to that context and is ephemeral. Arbitrary modification of those restrictions later on only creates headaches for the developers without any meaningful benefits.
> go buy some servers, put any github lookalike service in there and you are completely free to run with Node v1 if you really want.
I stay away from GH as much as possible precisely because of this uncaring attitude of theirs. As I explained earlier, it's not trivial to migrate even to GA lookalikes. I would rather target a different platform that wouldn't randomly pull the rug from under me like this.
You're spreading FUD, assuming unprofessionalism, and making a strawman argument based on nebulous hypotheticals, assuming the worst intentions and stupidity. There are plenty of vendors who $upport "unsupported" versions of many languages, libraries, and frameworks with backported security and functionality patches, for things that matter but are too expensive to update and fall outside of FOSS support. For other things that don't matter, such as prototyping or one-off runs, the ability to execute at all is far more important than any or all CVEs that aren't exposed in any material manner. This is the definition of nuance.
I am sharing my professional opinion based on real world experience and subject matter expertise, so others can take what is of value and disregard what is not. You are free to disregard it in its entirety if you wish.
My day job org is a Github customer with a few hundred thousand dollars of annual spend with them, so while we get their ear, we don't move the needle with regards to product changes (their customer success team is very helpful when they can be though). I imagine the situation is not as great if you are a free user, or someone with immaterial spend with them; you're simply along for the ride.
As always, do what is best for your risk and threat model.
I look at it as “The risk of running unmaintained code on an old interpreter version is difficult to quantify and therefore it is low cost and effort to require it run on a maintained, recent version.” Developers will argue their time is too valuable to require such code be updated to run on recent interpreter versions, and I’ll argue it’s cheaper than chasing successful exploits and any footholds established. Dev Vs Ops, a tale as old as time.
Perhaps having had to run down potential exposure across a large enterprise from the recent npm supply chain attack has made me a bit more paranoid lately around supply chain and cicd security. But, I get paid to be paranoid, so it is what it is. Run your own runners I suppose? Hard to complain when someone else is running the infrastructure for you (and you’re not paying enterprise prices). Supply chain and hosted/multi tenant execution code security is just fundamentally hard and fraught with peril. Ongoing deprecations to keep up with current state are therefore unavoidable.
I think GitHub Actions is missing a distinction between builds and automation.
When I build my software I care less about eliminating security vulnerabilities (after all, I need to be able to build while updating to fix security issues), but I also don't need, and ideally don't want, any external access. A vulnerability in the build toolchain could end up encoded into the artifacts, but it shouldn't necessarily prevent artifacts from being generated.
However when I automate processes, like categorising bugs etc, I care a lot about eliminating security vulnerabilities because there is necessary external access, often write access, to my sensitive data.
GitHub considers these two things the same, but they're distinct use-cases with distinct approaches to maintenance, updating, and security.
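The distinction shows up concretely in how you would scope the workflow token for each case. A rough sketch (the file names are illustrative, and the exact permission keys depend on what the automation touches):

```yaml
# build.yml - produce artifacts only; the token can't write anything
permissions:
  contents: read
---
# triage.yml - automation that genuinely needs write access to sensitive data
permissions:
  issues: write
  pull-requests: write
```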
Why both the pipe into sh and eval? The latter could handle everything.
Couple more thoughts and unsolicited feedback from a quick eyeball:
- Use https://
- Wrap everything in a shell function to ensure that partial content is not executed.
- Wrap variable substitutions in quotes, like "${OUTPUT_DIR}", to prevent word splitting if they contain whitespace. Line 124, `rm -rf $OUT_DIR_INSTALL`, is pretty scary if invoked with whitespace in OUT_DIR.
- Download the nodejs tarball to a temp directory and extract it there first, to prevent a partial download from being extracted into the destination.
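Roughly what I mean, as a sketch (the URL, version, and variable names are placeholders, not the script's actual ones):

```bash
#!/usr/bin/env bash
set -euo pipefail

main() {
  # Everything lives inside a function, so a partially downloaded script is a no-op.
  local output_dir="${1:?usage: install.sh OUTPUT_DIR}"
  local tmp_dir
  tmp_dir="$(mktemp -d)"
  trap 'rm -rf "${tmp_dir}"' EXIT

  # Placeholder URL; use https:// and download into the temp dir first.
  curl -fsSL "https://nodejs.org/dist/v22.0.0/node-v22.0.0-linux-x64.tar.xz" \
    -o "${tmp_dir}/node.tar.xz"

  # Extract inside the temp dir, then move into place, so a failed download
  # never leaves a half-extracted tree in the destination.
  tar -xJf "${tmp_dir}/node.tar.xz" -C "${tmp_dir}"
  rm -rf "${output_dir}"                      # always quoted: safe even with spaces
  mkdir -p "$(dirname "${output_dir}")"
  mv "${tmp_dir}/node-v22.0.0-linux-x64" "${output_dir}"
}

main "$@"
```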
So my script writes variables to stdout and redirects everything else to stderr. I use this to update a `.bashrc` while also updating the current shell
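So the call site ends up looking roughly like this (a sketch of the pattern, not the actual script; the URL is a placeholder):

```bash
# stdout carries only shell-evaluable lines; progress and errors go to stderr,
# so the same output can be appended to .bashrc and applied to the current shell.
env_exports="$(curl -fsSL https://example.com/install.sh | sh)"   # sh runs the installer
printf '%s\n' "${env_exports}" >> ~/.bashrc                        # future shells
eval "${env_exports}"                                              # this shell, right now
```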
> Why would I care whether the Node runner has a security vulnerability?
I’m guessing they know you don’t care, but the big company customers can’t have a CVE anywhere and won’t accept an EOL node version, so they can check a box on something.
(I guess there’s also people with self hosted runners, who might be running them inside networks that aren’t segmented.)
> Why would I care whether the Node runner has a security vulnerability?
Because that "build" process has free access to your repo and potentially your organization. If your repo is also where you deploy from, then you're potentially deploying a vulnerable version of your software, live to your users.
If that's something you care about, then don't define your CI build in terms of GitHub Actions steps. Instead, call your own build script that takes care of using whichever version of Node you want in CI.
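A minimal sketch of that shape (`ci/build.sh` is a hypothetical script; it would install and run whatever Node version you want):

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # The toolchain choice lives in a script versioned alongside the code,
      # so the workflow itself has almost nothing GitHub can deprecate out from under you.
      - run: ./ci/build.sh
```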
Real question here - why is `actions/checkout` dependent on Node at all? Seems like we wouldn't need to be on `actions/checkout@v5` at all if it was just written in, say, shell.
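For what it's worth, a bare-bones checkout can be approximated in plain shell with the default environment variables (a sketch that assumes a public repo; the real action also handles auth tokens, submodules, LFS, retries, and so on, which is presumably why it's JavaScript):

```yaml
- name: checkout (plain git, no Node)
  run: |
    git init .
    git remote add origin "https://github.com/${GITHUB_REPOSITORY}.git"
    git fetch --depth=1 origin "${GITHUB_SHA}"
    git checkout --detach FETCH_HEAD
```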
I don't expect to come back after x years and have a build system just work. You're very much at the mercy of multiple components in your stack and environment. For example, you could be on a Mac and 2 years ago you were using x64, but now you are on ARM64. A whole load of stuff breaks from that alone.
In the end, this is the age-old "I built my thing on top of a 3rd party platform, it doesn't quite match my use case (anymore) and now I'm stuck".
Would GitLab have been better? Maybe. But chances are that there is another edge case that is not handled well there. You're in a PaaS world, don't expect the platform to adjust to your workflow; adjust your workflow to the platform.
You could of course choose to "step down" (PaaS to IaaS) by just having a "ci" script in your repo that is called by GA/other CI tooling. That gives you immense flexibility, but you also lose specific features (e.g. pipeline display).
The problem is that your "ci" script often needs some information from the host system, like what is the target git commit? Is this triggered by a pull request, or a push to a branch? Is it triggered by a release? And if so, what is the version of the release?
IME, much of the complexity in using Github Actions (or Gitlab CI, or Travis) is around communicating that information to scripts or build tools.
That and running different tasks in parallel, and making sure everything you want passes.
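For the information-passing part, it usually boils down to a thin layer that forwards the default context into your script (`ci/run.sh` is a placeholder; the `github.*` expressions are the standard GitHub Actions contexts):

```yaml
- run: ./ci/run.sh
  env:
    GIT_SHA: ${{ github.sha }}         # the target commit
    EVENT: ${{ github.event_name }}    # push, pull_request, release, ...
    REF_NAME: ${{ github.ref_name }}   # branch or tag name, e.g. the release version
```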
I'm not sure if there's a monorepo vs polyrepo difference; just that anything complex is pretty painful in gitlab. YAML "programming" just doesn't scale.
Doesn't everything in GitLab go into a single pipeline? GitHub at least makes splitting massive CI/CD setups easier by allowing you to write them as separate workflows that are separate files.
> GitHub at least makes splitting massive CI/CD setups easier by allowing you to write them as separate workflows that are separate files.
this makes me feel like you’re really asking “can i split up my gitlab CICD yaml file or does everything need to be in one file”.
if that’s the case:
yes it does eventually all end up in a single pipeline (ignoring child pipelines).
but you can split everything up and then use the `include` statement to pull it all together in one main pipeline file which makes dealing with massive amounts of yaml much easier.
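roughly this shape (a sketch; file and job names are illustrative):

```yaml
# ci/templates.yml - hidden jobs (leading dot) never run on their own
.job_a_from_template:
  script:
    - ./scripts/do_a.sh

.job_b_from_template:
  script:
    - ./scripts/do_b.sh

# .gitlab-ci.yml - include the templates and only instantiate what you want
include:
  - local: ci/templates.yml

job_a:
  extends: .job_a_from_template
  variables:
    SOME_VAR: "overridden here"
```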
this doesn’t run anything for `job_b_from_template` … you just end up defining the things you want to run for each case, plus any variables you need to provide / override.
you can also override stuff like rules on when it should run if you want to. which is handy.
gitlab CICD can be really modular when you get into it.
if that wasn’t the case: on me.
edit: switched to some yaml instead of text which may or may not be wrong. dunno. i have yet to drink coffee.
addendum you can also do something like this, which means you don’t have to redefine every job in your main ci file, just define the ones you don’t want to run
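something along these lines (again just a sketch; job names are made up):

```yaml
# ci/jobs.yml defines concrete jobs (job_a, job_b, job_c) that would all run;
# the main file includes them and disables only the ones it doesn't want.
include:
  - local: ci/jobs.yml

job_c:
  rules:
    - when: never
```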
You can have pipelines trigger child pipelines in gitlab, but the usability of them is pretty bad; viewing logs/results of those always needs extra clicking.
I would argue that the second rule is even optional here. There is enough literature (McConnell's Software Estimation, Boehm's Software Engineering Economics) that suggests that, given a well-scoped problem and other projects to base the estimation on, a good (+/-50% or so) estimate is possible. But if you don't know what you're building it's all wasted effort because you're estimating the wrong thing!
The subsidy could be independent from the carbon emissions (e.g. by subsidies on the produced goods) while the carbon tax isn't, effectively creating an incentive to produce in a less carbon intensive manner.
As with any http website, a malicious actor (e.g. someone in a coffee shop or an airport) could set up a plausible looking wifi service and then MITM the website and insert adverts or malware into the page.
However, that has been discussed on many other topics that are directly to do with TLS/certificates etc., so I don't think it's worth bringing up (aimed at the OP) every time an HTTP link is posted.
With HTTPS, the site author could still do all of that, no? So I’m not convinced this is really that big of a concern on an unknown website that I’m not entering any credentials or personal information on.
That's more of an issue with trusting any website, whereas TLS mitigates the risk of trusting a wifi provider or ISP. I also don't think it's much of a concern for old, infrequently used sites, but I wouldn't trust the competence of a modern website that didn't have a current SSL cert.
the SITE can do that when HTTPS is used, yes, but an unauthorized third party can inject stuff much more easily when it's plain HTTP. A little ARP poisoning and some mitmproxy and before you know it you're injecting malware or whatever
Whether or not that matters when viewing this particular site is up for debate
Yes – into the sandbox of this particular site (and limited to non-HTTPS-mandatory browser APIs at that).
If that's a big threat vector, I feel like the much bigger risk would be visiting malicious sites, not a local or ISP located attacker injecting stuff into benevolent-but-HTTP-only ones.
> limited to non-HTTPS-mandatory browser APIs at that
Another trick that could easily be pulled by a malicious ISP/wifi provider is to insert a redirect into the HTTP page to go to an HTTPS site controlled by the attacker (presumably with some semi-related name so as to not seem suspicious to the user) and to then bypass non-HTTPS restrictions in the browser.
Alternatively in the same vein, I wonder if it's possible to make a web server only listen on 443. I feel like maybe modern browsers try that first so you can skip 80 and it works?
The alternative would be "two applications talking to the same microservice", where you run into the same issues with backwards compatibility, except the API is now "application to microservice" instead of "application to database schema". Either way, when someone changes the interface they either need to do it in a backwards compatible fashion or all dependent applications need to be updated.
I think the usual approach is to have each microservice store its own local version of the data - but you can do that with a database, just use a different table. The value is in scaling - if one service needs a big database but another needs fast access, etc. Overall, nobody other than the top 50 (let's be generous) tech companies needs this.
I feel the same (and ran into this trying to wrap my head around why Maven didn't work... I don't want a tutorial explaining how to get started, I need to understand the fundamentals to understand what's happening!).
I think, however, that starting from the examples might help with good API design: if you design your API to be "core concept first", this will likely lead to an API that _can only be used after you understand the core concepts_, which is not great when people are only occasional users.
Judging by the graph on the linked page, the UK's rail network is mostly safer due to a lower number of workplace accidents. A cynic might suggest that that's correlated with a lack of maintenance :-). However, I do also believe that the UK takes workplace accidents more seriously than some other European countries.