This. I find open source projects written in Go or Rust are usually more pleasant to work with than ones built on Java, Django, Rails, etc. They have fewer clunky dependencies, are less resource-hungry, and can ship as single executables, which makes people's lives much easier.
Not sure why you include Java in that, as you mostly get a standalone file. No such thing as a JRE in modern Java deployment.
As for Python, at least getting a Dockerfile helps a lot. Otherwise it's a huge mess to get running, yes.
Python is still a hassle anyway, since the lack of true multithreading means you often need multiple deployments, as the Celery usage here shows, for instance.
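To make that concrete, here's a rough sketch of what such a deployment tends to look like (all names are placeholders, not from the project being discussed): the GIL keeps a single process from using multiple cores for CPU-bound work, so one "app" fans out into several processes:

```
# Illustrative only -- "myapp" is a placeholder.
gunicorn myapp.wsgi --workers 4 &           # pre-forked web processes to use the cores
celery -A myapp worker --concurrency=4 &    # a second deployment just for background jobs
redis-server &                              # plus a broker for the two to talk through
```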
> Not sure why you include Java in that, as you mostly get a standalone file. No such thing as a JRE in modern Java deployment.
Maybe I'm behind the times, but I can't figure out what you mean here. As far as I know, 'java -jar' or servlets are still the most common ways of running a Java app. Are you talking about GraalVM and native images?
For deploying your own stuff, most people do as before, yes. But even then, it's at least still only a single jar file containing all the dependencies. Not like a typical Python project, where they ask you to run some command to fetch dependencies and you have to pray it works on your system.
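Roughly, the day-to-day difference looks like this (artifact and file names are made up for the example):

```
# Java: one self-contained artifact, dependencies already inside it
java -jar app.jar

# Typical Python project: dependencies resolved on *your* machine at install time
pip install -r requirements.txt
python app.py
```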
But with jlink, you can package everything into a smaller runtime distributed together with the application, so in the end it's not much different from a Go executable.
> The generated JRE with your sample application does not have any other dependencies...
> You can distribute your application bundled with the custom runtime in custom-runtime. It includes your application.
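For reference, a minimal jlink invocation looks something like this (the module and main class names are invented for the example):

```
# Build a trimmed runtime containing only the modules the app needs.
# com.example.app / com.example.Main are placeholders for your own module.
jlink --module-path $JAVA_HOME/jmods:mods \
      --add-modules com.example.app \
      --launcher app=com.example.app/com.example.Main \
      --output custom-runtime

# The result runs self-contained, much like a Go binary:
./custom-runtime/bin/app
```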
Python application deployments are all fun and games until the documentation suddenly starts unironically suggesting that you "write your configuration as a Python script" to be mounted at some random specific directory inside the app, as if that could ever be a sane and rational idea.
No, Go modules implement a global TOFU checksum database. Obviously a compromised upstream at the initial pull would not be caught, but distros (other than the well-scoped commercial ones) don't do anything close to a security audit of every module they package either. Real-world untargeted supply-chain attacks come from compromised upstreams, not long-term bad-faith actors. Go modules protect against that (as well as against other forms of upstream incompetence that break immutable artifacts / deterministic builds).
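Concretely (module name and hashes below are placeholders): every dependency's hash is pinned in go.sum and cross-checked against the public checksum database, so tampering with an already-published version fails loudly:

```
# The first fetch of any module version gets logged in the public sum DB (TOFU);
# every later fetch, by anyone, must match that entry.
GOSUMDB=sum.golang.org go mod download

# go.sum pins two hashes per dependency, e.g. (placeholder values):
#   example.com/x v1.4.0 h1:<hash of the module tree>
#   example.com/x v1.4.0/go.mod h1:<hash of its go.mod>
# If upstream re-tags or force-pushes v1.4.0, verification fails.
```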
MVS (minimal version selection) also prevents unexpected upgrades just because someone deleted a lockfile.
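A sketch of why (module names and versions invented for the example): under MVS the selected version is fully determined by the require directives, so there is no "latest matching" re-resolution to drift when a lockfile disappears:

```
# go.mod (illustrative):
#   require example.com/a v1.0.0   // a itself requires example.com/x v1.2.0
#   require example.com/b v1.0.0   // b itself requires example.com/x v1.4.0
#
# MVS selects x v1.4.0: the *minimum* version satisfying every requirement,
# never "whatever is newest today". Even after deleting go.sum:
go mod tidy                 # regenerates the same selections from the require graph
go list -m example.com/x    # still v1.4.0, not the latest upstream release
```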
Generally I prefer humans in the loop, someone to actually test things. That's why some distros are stable compared to others that are more bleeding edge.
For supply-chain security, the fewer points of attack between me and the source, the better.
For other kinds of quality, I have my own tests which are much more relevant to my use cases than whatever the distro maintainers are doing.
I've been a Debian Developer, and while distros do work to integrate disparate upstreams as well as possible, they rarely reject packages for being fundamentally low quality or make significant quality judgements qua their role as maintainer (only when they're a maintainer because they're also a direct user). Other distributions do even less than Debian.
Fedora currently packages 10,646 crates. It's implausible that they're manually auditing each one at each upgrade for anything beyond "the test suite passes", let alone for something like obfuscated security vulnerabilities.
In the end, most distros will be saved by the fact that they don't upgrade quickly, which MVS also accomplishes without putting another attack vector in the pipeline.
I think I don't want "more than a hundred" additional points of trust, especially if they're each trying to audit 50+ projects with varying levels of familiarity. And no, I don't believe one person can give 50 packages a real audit each release, even if it were their actual job.
To paraphrase, all "more than a hundred" of those people need to be lucky every time.
Just think about Gitea (a single Go binary) versus GitLab (a sprawling Rails deployment).