Having had a need for nano positioning in the past I really try to avoid PI like the plague. Bad and slow sales people, opaque pricing, no way to test their product before they expect you to lay down 50k. Bad integration with other tools.
I mean it's pretty wild to take s-expressions and not call them extremely terrible to read. The nix language sucks really badly, but I gladly take it over writing S-expressions.
It reads almost exactly the same as any functional C-style language. Not to mention that specifically for Guix, you're going to be writing the (name value) form for 99% of it.
> you're going to be writing the (name value) form for 99% of it.
That's exactly the part that is wrong with Guix, and Scheme in general. Scheme has association lists, written as '((name . value) ...), but since that's too ugly everybody writes macro wrappers around them to get them down to just (name value). But that means you aren't dealing with an obvious data type anymore, but with whatever the macro produces, and if you want to manipulate that you need special tools yet again. And then you have record types and named arguments, which are different things yet again, but all serve the same name->value purpose as an association list. Names themselves are sometimes symbols, sometimes keywords, and sometimes actual values. Same with lambda: sometimes you need to supply a function, other times there is a macro that lets you supply a block of code.
It's like the opposite of the Zen of Python: there are always three different ways to do a thing and none of them has any real advantage over the others. They are just different for no good reason and intermixed in the same code base.
I have never seen anything else use the (name value) syntax. You do deal with obvious data types, the REPL tells you exactly what those data types are (records, in the case of Guix). Schemes outside of Guile don't even have keywords, much less named arguments.
Are you complaining that a language has both associative containers and structs? Which one do you advocate for removing in Python to keep up the precious "Zen"?
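For what it's worth, Python carries the same plurality of name->value mechanisms the complaint describes; a minimal sketch (all names here are made up for illustration):

```python
from dataclasses import dataclass

ages = {"alice": 30, "bob": 25}      # associative container (dict)

@dataclass
class Person:                        # struct-like record
    name: str
    age: int

def greet(*, name, age):             # keyword-only (named) arguments
    return f"{name} is {age}"

p = Person("alice", ages["alice"])
print(greet(name=p.name, age=p.age))  # -> alice is 30
```

Three coexisting mechanisms, and nobody argues the "Zen" forbids having both dicts and dataclasses.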
Lisp programmers have used editors that count the parens for them for decades. Many use something like paredit that simply automatically adds the final paren. I've written significant amounts of Lisp and you simply don't see the parens. You might as well complain about French having all those accents. It's just a different language. Learn it and you'll see why.
I can write lisp. That a lot of lisp programmers require special editors to handle it should tell you enough. It's not that the language is unworkable. You can definitely write stuff in it. The point is that it is quite far from something that should be written by people, in my opinion.
Are you really going to argue that a good programming language is one where you can construct it character by character, by hand? Emacs has existed for decades and it runs basically anywhere. Nobody is programming in ed (well, apart from Dave Beazley[0]). With LLMs the world is finally catching up to the fact that programming isn't typing characters one by one. Lisp programmers have been at this for decades.
I consider it essential for a programming language for people that it is easy to understand things by looking at things locally. Requiring/strongly encouraging extremely deep nesting is not conducive to that.
This is not some weird opinion I have. There is a reason "flat is better than nested" is part of the pretty popular zen of Python.
Most code is read much more than it is written; at least, I read much more code than I write. So for me code should be optimized to be read as plain text, since that is basically how every tool presents it. Requiring a separate tool for basic readability is not really acceptable: I can't do that on GitHub, I can't do that with the various diff tools, and I can't just quickly cat a file or use the countless other tools designed to present plain text.
If I can then choose between Guix and a language that doesn't require these hoops and this extreme readability trade-off, the choice is not hard.
Anyway, if you think Guix is better than Nix, then nothing stops you from using it.
I just have a hard time taking such a comment seriously, because I have made it myself. Many times even. My second comment on Slashdot in 1999 was a comment just like yours. I try to tell myself it is ok because I was still a teenager.
7 years later I had to write Common Lisp, and I think the parens were a problem for about a week. Since then I have written many thousands of lines of Lisp code. I usually go back and forth on what I like the most: ML (OCaml mostly) or (Guile) Scheme. In just about every other language (except maybe Factor) I miss the macros (even syntax-rules ones) way more than I like the extra expressiveness of, say, Haskell. [0]
Wisp is a part of guile. So you can write your own config using it. It is not completely straightforward, but if you really hate parentheses it is a way. Or you continue the wonderful string gluing of Nix. Whatever floats your boat.
[0]: I do miss typing sometimes in scheme though. I have thought about vibe coding a prototype of an s-expr language that compiles to f# AST.
This isn’t about whether someone can get used to parentheses. Obviously they can. I don’t doubt your extensive experience there. The question is what the language optimizes for by default.
My argument is that S-expressions optimize for structural uniformity and macro power, but they do so at the expense of plain-text readability without tooling. And that trade-off matters in contexts where code is frequently read outside of a fully configured editor: code reviews, diffs, quick inspection, etc.
Saying “editors solve this” doesn’t really address that; it just shifts the burden to tooling. In contrast, many other languages aim to be reasonably legible even in minimal environments.
So I’m not arguing that Lisp is unusable. I’m saying it makes a different set of trade-offs, and for my use case, where I spend much more time reading other people's code in some web portal and with basic terminal tools, those trade-offs are a net negative. I would expect this trade-off to hold for most code produced.
> This is not a language that is optimizing for being written by humans
I've taken a look at the code - having never written a line of Guix in my life - and it seems very readable to me. It's cleanly structured and makes good use of indentation.
The string "))))))))))", which you claim you're seeing 'regularly', appears exactly twice in 4,580 lines of code. It's the longest parens string that appears in the file. Seems to me like you deliberately searched for the most atypical example, that you're now misrepresenting as 'regular', when it is highly atypical.
And honestly, what would that look like in some 'more normal' language?
);
}
);
];
};
)()();
Better?
I will never understand this fear response some people have to seeing `(fn a b)` instead of `fn(a, b)`.
I indeed searched for the longest chain. Something that happens twice in 4.5k lines is hardly rare. And if you shorten the search by one paren, it occurs even more frequently.
And yes, your example is better, but still terrible. The point is not the formatting. The point is that 10-deep nested code is just not easy to understand. I would also call a line of C/Python that does 10 nested function calls unreadable. But those languages do not encourage this, whereas in Lisp it's the modus operandi to write such incantations.
> Something that happens in 4.5k lines twice is hardly rare.
Provided you don't consider the context, sure. One of them is software with buggy tests, the other is one that provides a custom test suite that basically has to be reimplemented in the package definition. How often do you think either of those things happen?
Looking at a lot of Nix package expressions: quite a bit. Besides, just taking away a single brace gives 7 hits. Still a ridiculous level of nesting. So I don't buy your reasoning that these are some kind of super special cases. If something happens this often in 4,500 lines of code, you forfeit the right to claim it is special.
It happens 1/2290 (0.04%) of the time. You are roughly six times more likely to guess a stranger's birthday totally at random (1/365 ≈ 0.27%). If you don't consider that exceedingly rare, then you and I need to hit up Vegas immediately.
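The rates quoted above are quick to sanity-check (assuming a 365-day year and ignoring leap days):

```python
# Rate at which the long paren chain appears, per line of the file
chain_rate = 2 / 4580          # -> 0.0437%

# Chance of guessing one stranger's birthday at random
birthday = 1 / 365             # -> 0.2740%

print(f"{chain_rate:.4%}")     # 0.0437%
print(f"{birthday:.4%}")       # 0.2740%
print(birthday > chain_rate)   # True: the birthday guess is likelier
```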
You are projecting a programming style onto Lisps that's quite alien to them. Lisps tend towards small, discrete functions that are composed together. It is the ordinary languages that tend towards deep branching and nesting; Lisps generally favour recursion and function composition instead.
There are 101 `(package` definitions, and 58 of them nest more than 6 levels deep, which I would consider more than excessive. That's an incidence of over 50%.
Besides, I don't think I'm alien to the functional way of writing things. I write mostly Haskell professionally. But Haskell doesn't casually make insane expression nesting the default.
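A nesting-depth claim like the one above can be checked mechanically. A rough sketch (hypothetical helper; it ignores parens inside strings and comments, so it's only a heuristic):

```python
def max_paren_depth(text: str) -> int:
    """Return the deepest parenthesis nesting level in `text`."""
    depth = best = 0
    for ch in text:
        if ch == "(":
            depth += 1
            best = max(best, depth)
        elif ch == ")":
            depth = max(depth - 1, 0)
    return best

print(max_paren_depth("(a (b (c)))"))  # -> 3
```

Running something like this over each `(package ...)` form would settle how typical deep nesting actually is.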
You may not be a stranger to the functional style, but you do seem a stranger to the Lisp style, which is closely allied. A lot of FP originated in Schemes and Lisps - Haskell is greatly influenced by the ML family, which is itself greatly influenced by Lisp. Modern FP is a style that would be recognisable to a Lisp programmer fifty years ago, when everyone else was writing imperative soup.
I don't think there's much anyone can say that's going to change your mind. You're strongly coming off as though you've formed a view, based on little experience, and will now Ctrl-F cherry-picked examples to sustain it rather than listen to any contrary information. I respectfully suggest greater open-mindedness and a willingness to reserve conclusions in the absence of data.
I personally don't use Lisp too much, so I'm not particularly invested in this exchange, but I know from experience it's not even remotely what you're describing it as. Everything about Lisps tends towards minimal nesting, from the use of paredit to edit expressions through REPL-based workflows.
The only thing this exchange has done, as someone who programs in FP exclusively, is make me reminisce about and yearn for Lisp. It's a wonderful language for FP.
That link isn't working for me (something about AI detection), but as a point of accuracy, those aren't derivations, they're simple source files. Derivations are generated out of them.
As for the closing braces, would it be better if you had a newline between each?
That's a pretty big claim. I don't doubt that a lot of uv's benefits are algo. But everything? Considering that running non IO-bound native code should be an order of magnitude faster than python.
It's a pretty well-supported claim. uv skips doing a number of things that generate file I/O. File I/O is far more costly than the difference in raw computation. pip can't drop those for compatibility reasons.
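A quick way to feel out the I/O-vs-computation gap, assuming a Unix-like system (the exact numbers are machine-dependent; this is a hypothetical microbenchmark, not a measurement of pip or uv):

```python
import os
import tempfile
import time

n = 20_000

# Repeated metadata syscalls: the kind of file I/O a resolver can skip.
with tempfile.NamedTemporaryFile() as f:
    start = time.perf_counter()
    for _ in range(n):
        os.stat(f.name)              # one syscall per iteration
    io_time = time.perf_counter() - start

# The same number of pure in-process operations, no syscalls.
start = time.perf_counter()
acc = 0
for i in range(n):
    acc += i * i
cpu_time = time.perf_counter() - start

print(f"stat x{n}: {io_time:.4f}s   compute x{n}: {cpu_time:.4f}s")
```

On a typical machine the syscall loop is several times slower than the compute loop, which is the sense in which avoided I/O dwarfs language-level speed differences.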
I don't think the article you linked supports the claim that none of uv's performance improvements are related to using Rust over Python. In fact, it directly states the opposite: it has an entire section dedicated to why using Rust has direct performance advantages for uv.
> uv is fast because of what it doesn’t do, not because of what language it’s written in. The standards work of PEP 518, 517, 621, and 658 made fast package management possible. Dropping eggs, pip.conf, and permissive parsing made it achievable. Rust makes it a bit faster still.
This is either an overly pedantic take or a disingenuous one. The very first line that the parent quoted is
> uv is fast because of what it doesn’t do, not because of what language it’s written in.
The fact that the language had a small effect ("a bit") does not invalidate the statement that algorithmic improvements are the reason for the relative speed. In fact, there's no reason to believe that Rust without the algorithmic improvements would be notably faster at all. Sure, "all" is an exaggeration, but the point still stands in the form that most readers would understand it: algorithmic improvements are the important difference between the systems.
I think we might be talking past each other a bit.
The specific claim I was responding to was that all of uv’s performance improvements come from algorithms rather than the language. My point was just that this is a stronger claim than what the article supports: the article itself says Rust contributes “a bit” to the speed, so it’s not purely algorithmic.
I do agree with the broader point that algorithmic and architectural choices are the main reason uv is fast, and I tried to acknowledge that, apparently unsuccessfully, in my very first comment (“I don't doubt that a lot of uv's benefits are algo. But everything?”).
I don't think the article has substantive numbers. You'd have to re-implement uv in Python to get those, and I don't think anyone has. It would at least be interesting to see how much time uv spends in syscalls vs pip and make a relative estimate from that.
Vague. What's pretty close? I mean, even for I/O-bound tasks you can pretty quickly verify that the performance between languages is not close at all: a 10 to 100x difference.
I'm saying that the Rust might execute in 50ms and the Python in 150ms. You are the one not making sense, we are talking about application performance, why are you not measuring that in milliseconds.
That is assuming Rust is 100x faster than Python btw, 49ms of I/O, 1ms of Rust, 100ms of Python.
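The split described above, spelled out (illustrative numbers from the comment, not measurements):

```python
# A shared I/O floor both languages pay, plus language-dependent
# compute, assuming Rust is 100x faster at the compute part.
io_ms = 49
rust_ms = 1
python_ms = rust_ms * 100

print(io_ms + rust_ms)     # 50  (the hypothetical Rust total)
print(io_ms + python_ms)   # 149 (the hypothetical Python total)
```

Under this model the 100x compute gap collapses to roughly a 3x wall-clock gap, because the I/O floor dominates both.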
> I'm saying that the Rust might execute in 50ms and the Python in 150ms.
Okay, so the Rust code would be 3x as fast. Feels arbitrary, but sure.
> You are the one not making sense, we are talking about application performance, why are you not measuring that in milliseconds.
I explained why your post made no sense already...
> That is assuming Rust is 100x faster than Python btw, 49ms of I/O, 1ms of Rust, 100ms of Python.
That's not how anything works. Different languages will perform differently on IO work, different runtimes will degrade under IO differently, etc. That's why even basic echo HTTP servers perform radically differently in Python vs Rust.
This isn't how computers work and it's not even how math works.
This conversation has become nonsensical. The thing we can agree with is this - no, uv would not be as fast if it were written in Python.
> That's not how anything works. Different languages will perform differently on IO work, different runtimes will degrade under IO differently, etc. That's why even basic echo HTTP servers perform radically differently in Python vs Rust.
> This isn't how computers work and it's not even how math works.
What are you disagreeing with? There's some baseline amount of I/O that the kernel does for you, that's what I'm assuming is 50ms, and everything else like runtime degrading is overhead due to the language/platform choice. I'm saying Rust is upwards of 100x faster in that regard thanks to its zero cost abstraction philosophy. You can't just include the I/O baseline in a claim about Rust's performance advantage. You'll be really disappointed when Rust doesn't download your files 100x as fast as the Python file downloader.
Anyway, I'm sorry I provoked your antagonism with my terse messages; I wasn't trying to be blasé. I believe uv is the sort of tool that wouldn't suffer much from the downsides of Python, and that in most situations the reduced runtime overhead of Rust would have a negligible impact on the user experience. I'm not arguing that they shouldn't build uv in Rust. Most situations is not all situations, and when a tool is used so widely you'll hit all edge cases, from the point where the tens of milliseconds of startup time matter to the point where Python's I/O overhead matters at scale.
I think a missing piece here is that you think that Rust won't download a file faster than Python but it absolutely can. This seems to just be a misconception people have about IO, like "download a file" is a thing that exists wholly outside of your process.
I know it can, but it can't download it faster than the network card can write it into its buffers. That's the part I would count as the 50ms that both can't improve upon.
Of course. But why would that matter if Python can't get there to begin with? You're not going to hit NIC bottlenecks with Python, not without a ton of work and tradeoffs at least.
> Different languages will perform differently on IO work,
IO is executed by the kernel, file system, or network drivers. IO performance does not depend at all on which language makes the syscalls.
> The thing we can agree with is this - no, uv would not be as fast if it were written in Python.
In this thread, we are talking about the speed of uv in terms of user experience: how long a person waits for command line operations to complete. Things that pip takes multiple seconds to do, uv will do in dozens of milliseconds. If uv were written in Python, it would take dozens of ms + a few dozen more, which means absolutely fuck all in the context of the thousands of milliseconds saved over pip.
It's possible a user might perceive a slight difference in larger projects, but if pip had been uv-but-in-Python, the uv-in-Rust project would never have been started in the first place, because no one would have bothered switching.
> This conversation has become nonsensical.
Agreed. No one in this thread is disputing that Rust code is faster than Python, only that in this case it is completely insignificant in the face of all the useless file and network I/O that pip is doing, and uv is not.
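A back-of-the-envelope version of the argument above, with made-up but plausible magnitudes (these are hypothetical figures, not benchmarks of pip or uv):

```python
pip_ms = 5000            # hypothetical pip wall time ("multiple seconds")
uv_rust_ms = 50          # hypothetical uv wall time ("dozens of ms")
uv_python_ms = 100       # hypothetical "uv rewritten in Python"

architectural_gain = pip_ms - uv_python_ms   # won by skipping useless I/O
language_gain = uv_python_ms - uv_rust_ms    # won by choosing Rust

print(architectural_gain)                    # 4900
print(language_gain)                         # 50
print(architectural_gain / language_gain)    # 98.0
```

Under these assumptions, roughly 98% of the user-visible speedup comes from what uv doesn't do, not from what it's written in.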
> IO is executed by kernel, file system or network drivers. IO performance is not dependent at all on which language makes the syscalls.
I think your posts on this topic can not possibly be worth responding to if you're coming to the conversation with this level of not understanding things.
Your post is a combination of not understanding computers and then hand waving about fake numbers and user expectations. IO is not magic, it is not some distinct process that you have no control over from userland, it is exactly the sort of thing that Python does very poorly at, in fact.
I'll just reference TechEmpower again, or you can look up those system calls you referenced, like how epoll works, and then look into what is involved for Python to use epoll effectively.
I disagree to some degree. Tests have value even beyond whether they test the right thing. At the very least they show that something worked and now doesn't, or vice versa. That has value in itself.
I disagree. You simply increase the supply of labour by double-digit percentage points. Thinking this will not affect the price, all else being equal, is magical thinking.
You're ignoring the other side of the ledger. If the supply of labor increases, but then those people get paid money, then they spend it and create additional demand for labor.
How do you suppose a country with 100 million people can have the same standard of living, if not higher, than a country with 10 million people despite having ten times the supply of labor? Or for that matter that large populous cities can have higher paying jobs than small towns?
Pure hardware product is great though I admit.