
Disappointed to see so many knee-jerk reactions to this. Vendoring dependencies is a simple way to ensure consistent build inputs, and has the bonus effect of decreasing build times.

To respond to the two major criticisms:

1) “It takes a lot of space”

Don’t be so sure. Text diffs and compresses well. I have a 9-year-old Node repo that I’ve been vendoring from the beginning, and it’s only grown 200MB over that time. (Granted, I’m fairly restrained in my use of dependencies. But I do update them regularly.)

But even if it does take a lot of space… so what? If your dependencies are genuinely so huge that this is a problem, then vendoring may not be right for you. But you could also use one of the many techniques for managing the size of your repo. Or just acknowledge that practices are contextual, and there’s no such thing as “best practice”—just a bunch of trade-offs.

2) “It doesn’t work well with platform-specific code”

This can cause some pain if you’re in a multi-platform environment. The way I deal with it (in Node) is by installing modules with --ignore-scripts, committing the files, running “npm rebuild”, and then adding whatever shows up to .gitignore. I have a little shell script that makes this easier.
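
A rough sketch of what that script does (illustrative only; the exact paths that show up depend on your modules):

    #!/bin/sh
    set -e
    # install without running platform-specific build scripts,
    # so only portable files get committed
    npm install --ignore-scripts
    git add package.json package-lock.json node_modules
    # now run the native builds locally...
    npm rebuild
    # ...and list whatever rebuild produced, so those paths
    # can be added to .gitignore
    git status --porcelain -- node_modules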

This is only an issue for modules that have a platform-specific build, which I try to avoid anyway. But when it comes up, it can be a pain in the butt. I find this pain to be less frequent and more predictable than the pain that comes from not vendoring modules, though, so I put up with it.

Bonus) “It’s not best practice”

Sez who? Dogma is for juniors. “Best practices” are all situational, and the only way to know if a practice is a good idea is to examine its trade-offs in the context of your situation.




> Vendoring dependencies is a simple way to ensure consistent build inputs

If consistent build inputs are your concern, may I ask why lock files weren’t enough? That’s the problem they were designed to solve.


npm has long had a problem respecting lock files. The concept is easy: have a fixed lock file, get a reproducible build. But no: npm will change your lock file (I believe it's framed as "optimizing") without notice.

(Perhaps they've solved this in the last couple of years. I've been staying away from that ecosystem... too much growing in it...)


I think the trick is that you should use `npm ci` instead of `npm install` in most cases.
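
For the curious, the difference in a nutshell (npm ci has been around since npm 5.7):

    # installs exactly what package-lock.json specifies and
    # fails if the lock file is out of sync with package.json
    npm ci

    # may resolve versions anew and rewrite package-lock.json
    npm install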


It says so in the post, in case another left-pad removal happens.


Not that I'm a fan, but didn't npm resolve that problem right after? You can't yank entire packages anymore once they've been published for more than 24 hours.


Yes, you are right. But there are people who trust npm (now and especially in the future) and other free infrastructure, and there are those who prefer to be a bit more self-reliant after getting burned once.


I use vendoring in Go because my team's builds happen within a huge, complicated corporate network that has been known to break arbitrarily in new and interesting ways (or rather, something gets changed unexpectedly and then it takes days/weeks to navigate outsourced IT and change it back). Vendoring deps doesn't save me all the time, but I've generally found it helps. Plus, builds are a bit quicker because I can download everything in one go (via git clone) rather than pulling everything in at build time. It also helps when the linter decides it wants all the dependencies downloaded before doing anything, and we then find it has a relatively short timeout when the network gains a lot of latency without notice.
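
The mechanics on the Go side are pleasantly small; this is all standard Go tooling:

    # copy every module dependency into ./vendor and commit it
    go mod vendor
    git add vendor
    git commit -m "vendor dependencies"

    # builds pick up ./vendor automatically (Go 1.14+);
    # to be explicit:
    go build -mod=vendor ./...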

On reflection, it seems more like I'm papering over network issues. Perks of working in an enterprise company, I guess.


Onus confusion strikes again <https://news.ycombinator.com/item?id=29276656>. The (mediocre) tooling for lockfiles isn't bedrock.

In a discussion about Skub, no one need to explain why the Skub-powered approach isn't good enough. It is the duty of anyone pushing Skub to explain exactly what makes Skub so special to the point that we need to have Skub in our lives.


There are only two problems I see with the existing solution in npm:

- "npm add package" records a "^ver" range by default, which is bad practice

- there is no good infrastructure for pulling hash-based blobs out of the ether in case npmjs is offline

npm-shrinkwrap has solved repeatability forever; people just didn't always use it. Auto-upgrading dependencies is the big problem, which should never have existed because it is not principled. I'd go further and say that dependencies and devDependencies should only support exact versions, and peerDependencies should be the only thing that supports non-exact versions.
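
On the first point, npm can at least be told to stop writing ranges; save-exact is a standard npm config option:

    # .npmrc: make installs record exact versions
    save-exact=true

    # or per install ("lodash" is just an example package):
    npm install --save-exact lodash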


Came here to say this but you said it better.

I don't check in my dependencies on my current project because I don't need to; but on earlier projects, I (or we) did, for various good reasons, and it worked perfectly well and was extremely convenient for new developers.


> Vendoring dependencies is a simple way to ensure consistent build inputs

It wouldn't be necessary if the dependency tree were a pure function of the package manifest.

https://developer.okta.com/blog/2019/12/16/semantic-versioni...


Dumb question: what does it mean to “vendor” your dependencies?

Best guess is something like “ship required source or binaries along with your end product.” Like static linking but extended to dynamic languages and source control.


To "vendor" a dependency is to check it into your project's source control, whether as source or binary.


It's to have it come from some place you control. That can be your source control, but a very common way to vendor dependencies in other languages is to save them on a server somewhere and pull them from there when you install.
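
With npm, for instance, that can be as simple as pointing installs at a mirror you run (the URL below is hypothetical):

    # .npmrc: resolve all packages from an internal mirror
    registry=https://npm.internal.example.com/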



