biggusdickus69's comments

In the amount of bloat, yes.


It is also important to note that this is not specific to Zed. As someone else has mentioned, it is a cultural problem. I picked Zed as an example because that is what I compiled most recently, but it is definitely not limited to Zed. There are many Rust projects that pull in over 1000 dependencies and do much less than Zed.


Yeah tbh one time I had a Rust job and their back-end had like 700-800 dependencies.



Imagine a hobbyist developer with a ~ $0 budget trying to publish their first package. How many thousands of km/miles are you expecting them to travel so they can get enough vouches for their package to be useful for even a single person?

Now imagine you're another developer who needs to install a specific NPM package published by someone overseas who has zero vouches by anyone in your web of trust. What exactly are you going to do?

In reality, forcing package publishers to sign packages would achieve absolutely nothing. 99.99% of package consumers would not even bother to begin building a web of trust; they would just blindly trust any signature.

The remaining 0.01 % who actually try are either going to fail to gain any meaningful access to a WoT, or they're going to learn that most identities of package publishers are completely unreachable via any WoT whatsoever.
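The reachability problem above can be made concrete with a toy sketch (all names and the graph are hypothetical, not any real trust network): model the web of trust as a directed graph of "A vouches for B" edges, and check whether a consumer can reach a publisher within a few hops. A first-time publisher with zero vouches is simply unreachable.

```python
from collections import deque

def trust_path_exists(vouches, consumer, publisher, max_hops=5):
    """BFS over 'who vouches for whom' edges; returns True if the
    consumer's web of trust reaches the publisher within max_hops."""
    frontier = deque([(consumer, 0)])
    seen = {consumer}
    while frontier:
        person, hops = frontier.popleft()
        if person == publisher:
            return True
        if hops == max_hops:
            continue
        for vouched in vouches.get(person, ()):
            if vouched not in seen:
                seen.add(vouched)
                frontier.append((vouched, hops + 1))
    return False

# Hypothetical graph: alice vouches for bob, bob for carol; nobody
# vouches for the overseas first-time publisher "dana".
vouches = {"alice": ["bob"], "bob": ["carol"]}
print(trust_path_exists(vouches, "alice", "carol"))  # True
print(trust_path_exists(vouches, "alice", "dana"))   # False
```

No tuning of `max_hops` helps here: if the publisher has no inbound vouch edge at all, no path can ever exist.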


I didn’t downvote, but...

Depending on a commercial service is out of the question for most open source projects.


Renovate is not commercial; it's an open-source Dependabot alternative, and quite a bit more capable at that.


AGPL is a no-go for many companies (even when it's just a tool that touches your code and not a dependency you link to).


good. that's the point.

agpl is a no-go for companies not intending to ever contribute anything back. good riddance.


Not necessarily: some supply-chain compromises are detected within a day by the maintainers themselves, for example when their own account is taken over. It would be good to mitigate those at least.


In that specific scenario, sure; but I don't think that's a meaningful guardrail for a business.


The time spent upgrading is not linear, it's exponential. If it hurts, do it more often! https://martinfowler.com/bliki/FrequencyReducesDifficulty.ht...
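A toy model of the "do it more often" argument (the quadratic cost function is an assumption for illustration, not a measurement): if upgrade pain grows superlinearly with the number of accumulated breaking changes, because changes interact with each other, then many small upgrades beat one big one.

```python
def upgrade_cost(changes):
    """Hypothetical cost model: pain grows quadratically with the
    number of breaking changes batched into a single upgrade."""
    return changes ** 2

yearly_changes = 12  # assume one breaking change per month

# One big yearly upgrade vs. twelve small monthly ones.
big_bang = upgrade_cost(yearly_changes)  # 12**2 = 144 units of pain
frequent = 12 * upgrade_cost(1)          # 12 * 1 = 12 units of pain
print(big_bang, frequent)
```

Under any cost function that grows faster than linearly, splitting the same total number of changes into smaller batches reduces the total.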


The memory usage is interesting: the different kinds of shared memory are genuinely hard to visualize, and just two values per process doesn't say enough.

Most users actually want a list of "what can I kill to make the computer faster", i.e. they want an oracle (no pun intended) that knows how fast the computer will be if different processes are killed.
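A rough sketch of why "two values per process" misleads: on Linux, /proc/&lt;pid&gt;/smaps_rollup reports Pss (proportional set size), which splits each shared page's cost evenly across the processes mapping it, so it is closer than Rss to the memory you would actually get back by killing a process. The parser below runs on a hard-coded sample (the numbers are made up) so it stays self-contained.

```python
SAMPLE_SMAPS_ROLLUP = """\
Rss:              120000 kB
Pss:               45000 kB
Shared_Clean:      60000 kB
Private_Dirty:     30000 kB
"""

def parse_kb(text, field):
    """Pull a single 'Field:  N kB' value out of smaps_rollup-style text."""
    for line in text.splitlines():
        if line.startswith(field + ":"):
            return int(line.split()[1])
    raise KeyError(field)

rss = parse_kb(SAMPLE_SMAPS_ROLLUP, "Rss")
pss = parse_kb(SAMPLE_SMAPS_ROLLUP, "Pss")
# Rss counts every shared page in full; Pss divides shared pages among
# the processes mapping them. Killing this process would reclaim
# something nearer the Pss figure than the Rss one.
print(rss, pss)
```

Even Pss is only an estimate of the oracle people want: it ignores swap, page-cache effects, and whether the freed memory actually relieves pressure.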


Git is an SCM, not a VCS. By design.

