I’m so thankful that Rust is helping popularize the solo exe that “just works”.
I don’t care if a program uses DLLs or not. But my rule is “ship your fucking dependencies”. Python is the worst offender at making it god damned impossible to build and run a fucking program. I swear Docker and friends only exist because merely executing a modern program is so complicated and fragile it requires a full system image.
> I’m so thankful that Rust is helping popularize the solo exe that “just works”.
Wasn't it Go that did that? I mean, not only was Go doing that before Rust, but even currently there are maybe 100 Go-employed developers churning out code for every 1 Rust-employed developer.
Either way “Rust is helping” is true. And given that Go is a managed language it never really factored into the shared library debate to begin with, whereas Rust forces the issue.
Maybe, but it's misleading. Asserting that "$FOO made $BAR popular" when $FOO contributed 1% of the effort and $BAZ contributed the other 99% is enough to make most people consider the original statement inaccurate.
> And given that Go is a managed language it never really factored into the shared library debate to begin with, whereas Rust forces the issue.
How so? Rust allows both shared and static compilation, so it's actually the opposite - Rust specifically doesn't force the use of single-binaries.
I'm struggling to interpret what you're saying: Go specifically forces static linkage, whereas in Rust it's optional, is it not?
I'm under the impression that in Rust you can opt out of static linkage, while I know that in Go you cannot.
Are you saying that Rust doesn't allow opting out of static linkage?
> Using the assertion that "$FOO made $BAR popular"
Thankfully that’s not what I said! This sub-thread is very silly.
FWIW Rust is exceptionally bad at dynamic/shared libraries. There’s a kajillion Rust CLI tools and approximately all of them are single file executables. It’s great.
I have lots of experience with Rust, the Rust community, and a smorgasbord of “rewrite it in Rust” tools. I personally have zero experience with Go, its community, and afaik Go tools. I’m sure I’ve used something written in Go without realizing it. YMMV.
Ehhh. You can compile a single exe with C or C++. I’ve personally come across far more Rust tools than Go. But I don’t really touch anything web related. YMMV.
The choice is actually between dealing with complexity and shifting responsibility for it to someone else. The tools themselves (e.g. virtual environments) can be used for both. Either the people responsible for packaging (authors, distribution maintainers, etc.) have some vague or precise understanding of how their code is used, on which systems, what its dependencies are (not mere names and versions, but functional blocks and their relative importance), when those might not be available, and which releases break which compatibility options, or they say “it builds for me with default settings, everything else is not my problem”.
> Either people responsible for packaging have some vague or precise understanding of how their code is used, on which systems, what are its dependencies
But with Python it’s a total mess. I’ve been using automatic1111 lately to generate stable diffusion images. The tool maintains multiple multi-hundred-line script files for each OS which try to guess the correct version of every dependency to download and install. What a mess! And why is figuring out the right version of PyTorch the job of an end-user program? I don’t know if PyTorch is uniquely bad at this, but all this work is the job of a package manager with well designed packages.
It should be as easy as “cargo run” to run the program, no matter how many or how few dependencies there are. No matter what operating system I’m using. Even npm does a better job of this than Python.
A lot of the problems with Python packaging come from the fact that a lot of Python programs are not just Python. You have a significant amount of C++, Cython, and binaries (like Intel MKL) when it comes to scientific Python and machine learning. All of these tools have different build processes than pip, so if you want to ship with them you end up bringing the whole barn with you. A lot of these problems were fixed with Python wheels, which pack the binary in the package.
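For what it’s worth, this is roughly what that looks like from the packaging side. A minimal setup.py sketch, assuming a hypothetical extension module "fastops" with a made-up C source file; building the wheel on the maintainer’s machine bakes the compiled artifact into the package so the end user never needs a compiler:

    # setup.py -- minimal sketch; "fastops" and its C source are made-up names
    from setuptools import setup, Extension

    setup(
        name="fastops",
        version="0.1.0",
        ext_modules=[
            # setuptools invokes the platform C compiler at build time;
            # the resulting .so/.pyd lands inside the wheel
            Extension("fastops._native", sources=["src/fastops.c"]),
        ],
    )

Running "python -m build" (or "pip wheel .") then produces a platform-specific wheel; pip on the user’s machine just unpacks it. The pain starts when no prebuilt wheel matches the user’s platform and pip falls back to building from source.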
Personally, I haven't run into a problem with Python packaging recently. I was running https://github.com/zyddnys/manga-image-translator (very cool project btw) and I didn't run into any issues getting it to work locally on a Windows machine with an Nvidia GPU.
Then the author of that script is the one who deals with said complexity in that specific manner, either because upstream can't provide releases for every combination of operating system and hardware, or because some people are strictly focused on hard problems in their part of the implementation, or something else.
A package manager with “well designed” packages still can't define what those packages do, or invent program logic and behavior for them. Someone has to make choices just the same, and can make good or bad decisions. For example, nothing prohibits a calculator application from depending, at run time, on a full compiler and build system for some language, or on the Electron framework. In fact, it's totally possible to have such example programs. However, we can't automatically deduce whether packaging that for a different system is going to be problematic, or what the better alternatives are.
> A package manager with “well designed” packages still can't define what they do, invent program logic and behavior.
The solution to this is easy and widespread. Just ship scripts with the package which allow it to compile and configure itself for the host system. Apt, npm, homebrew and cargo all allow packages to do this when necessary.
A well designed PyTorch package (in a well designed package manager) could contain a stub that, when installed, looks at the host system and selects and locally installs the correct version of the PyTorch binary based on the environment and configuration.
This should be the job of the PyTorch package. Not the job of every single downstream consumer of PyTorch to handle independently.
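Something like this hypothetical stub is what I mean. Pure sketch: none of this is how PyTorch actually ships, the capability probe is deliberately crude, and the index URLs are illustrative:

    # select_torch.py -- hypothetical install stub, not a real PyTorch installer
    import shutil
    import subprocess
    import sys

    def pick_index_url():
        # Crude probe: if nvidia-smi is on PATH, assume a CUDA build is wanted;
        # otherwise fall back to a CPU-only build.
        if shutil.which("nvidia-smi"):
            return "https://download.pytorch.org/whl/cu121"  # illustrative URL
        return "https://download.pytorch.org/whl/cpu"        # illustrative URL

    def main():
        subprocess.check_call([
            sys.executable, "-m", "pip", "install",
            "torch", "--index-url", pick_index_url(),
        ])

    if __name__ == "__main__":
        main()

The point is that this probe-and-select logic would live once, inside the PyTorch package (or a meta-package in the package manager), instead of being copy-pasted into every downstream launch script.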
> Just ship scripts with the package which allow it to compile and configure itself for the host system.
Eek. That sounds awful to me. It is exceptionally complex, fragile, and error prone. The easy solution is to SHIP YOUR FUCKING DEPENDENCIES.
I’m a Windows man. Which means I don’t really use an OS-level package manager. What I expect is a zip file that I can extract and double-click an exe. To be clear, I’m talking about running a program as an end user.
Compiling and packaging a program is a different and intrinsically more complex story. That said, I 1000% believe that build systems should exclusively use toolchains that are part of the monorepo. Build systems should never use any system-installed tools. This is more complex to set up, but quite delightful and reliable once you have it.
I remember having to modify one of those dependency scripts to get it running at all on my laptop.
In the end I had more luck with Easy Diffusion. Not sure why, but it also generated better images with the same models out of the box.
The only way I know to manage Python dependencies is Bazel as the build system, plus a custom set of rules that download and build all Python dependencies. The downloads are checked into a git repo. Every magically missing lib must be added to the repo and to Bazel. And finally you might have a way to... tar the output into a docker container... sigh
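Stripped of the Bazel rule boilerplate, the vendoring half of that is basically the following sketch (the directory layout and requirements.txt path are assumptions):

    # vendor_deps.py -- sketch of vendoring wheels into the repo, minus the Bazel rules
    import subprocess
    import sys

    WHEEL_DIR = "third_party/wheels"  # assumed layout, checked into git

    # Download pinned wheels for every dependency into the repo...
    subprocess.check_call([
        sys.executable, "-m", "pip", "download",
        "-r", "requirements.txt", "-d", WHEEL_DIR,
    ])

    # ...and later install strictly from that directory, never from the network.
    subprocess.check_call([
        sys.executable, "-m", "pip", "install",
        "--no-index", "--find-links", WHEEL_DIR,
        "-r", "requirements.txt",
    ])

The custom Bazel rules wrap those steps, and at the end you tar the output into a docker container, as described above.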