Why is stdout faster than stderr? (orhun.dev)
41 points by mfrw on Jan 10, 2024 | 32 comments



I thought this was a general Unix thing: stdout is buffered so output to other apps/the terminal is efficient, whilst stderr isn't buffered because it usually goes to the console carrying (smaller) error messages that should be seen immediately in case the app crashes, etc.
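
A minimal Rust sketch of that difference (assuming Rust's line-buffered stdout, as the article describes): the partial stdout line sits in the buffer until a newline or flush, while the stderr write is visible immediately.

    use std::io::{self, Write};
    use std::thread;
    use std::time::Duration;

    fn main() -> io::Result<()> {
        // Held in stdout's line buffer: no newline yet, so nothing shows up.
        write!(io::stdout(), "stdout: partial line...")?;
        // Unbuffered: reaches the terminal right away.
        writeln!(io::stderr(), "stderr: shown immediately")?;
        thread::sleep(Duration::from_secs(2));
        // The newline flushes the line buffer, so the stdout text appears now.
        writeln!(io::stdout(), " flushed by this newline")?;
        Ok(())
    }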


POSIX only specifies that stderr in C programs isn't block ("fully") buffered by default. I believe it allows for line buffering. https://pubs.opengroup.org/onlinepubs/9699919799/functions/s...

POSIX doesn't govern Rust at all.


It would have been more honest to add (in Rust) to the title.


Does it say somewhere why in Rust one is buffered and the other is not?


Both (line-buffered stdout, raw stderr) date to the initial implementation 10 years ago: https://github.com/rust-lang/rust/commit/94d71f8836b3bfac337... . The commit message mentions that stderr is no longer buffered (probably as a result of the RFC 899 discussion, which I cannot seem to locate).


Thanks for digging this up! It is good and interesting info, but unfortunately still doesn't satisfactorily answer why they are treated differently.


Because stderr is conventionally used for error reporting, it's important to output that information immediately so it isn't lost if the program then crashes or exits without flushing.

Stdout is generally the "normal output", so loss of information on a crash tends to be less relevant, and throughput more so, as programs commonly send huge amounts of data to stdout.
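
A hedged sketch of that failure mode: if the process aborts, whatever still sits in stdout's line buffer is lost, while the unbuffered stderr write has already reached the kernel.

    use std::io::{self, Write};

    fn main() {
        // Buffered: no newline, so this stays in userspace memory for now.
        write!(io::stdout(), "partial result without a newline").unwrap();
        // Unbuffered: this write happens immediately.
        writeln!(io::stderr(), "error: giving up").unwrap();
        // abort() skips the normal exit path, so stdout is never flushed;
        // only the stderr line shows up.
        std::process::abort();
    }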


I did some more digging. By RFC 899, I believe Alex Crichton meant PR 899 in this repo:

https://github.com/rust-lang/rfcs/pull/899

Still, no real discussion of why stderr is unbuffered.


https://linux.die.net/man/3/stderr

Notes

The stream stderr is unbuffered. The stream stdout is line-buffered when it points to a terminal. Partial lines will not appear until fflush(3) or exit(3) is called, or a newline is printed. This can produce unexpected results, especially with debugging output. The buffering mode of the standard streams (or any other stream) can be changed using the setbuf(3) or setvbuf(3) call. Note that in case stdin is associated with a terminal, there may also be input buffering in the terminal driver, entirely unrelated to stdio buffering. (Indeed, normally terminal input is line buffered in the kernel.) This kernel input handling can be modified using calls like tcsetattr(3); see also stty(1), and termios(3).


Is this a Linux implementation detail, or is it a feature of POSIX systems generally?


POSIX says that stderr shouldn't be "fully buffered" https://pubs.opengroup.org/onlinepubs/9699919799/functions/s... but I believe that allows for line buffering. Its terminology can be found here: https://pubs.opengroup.org/onlinepubs/9699919799/basedefs/st... (look for _IOFBF).


OpenBSD's stderr seems to be buffered, so it may just be a Linux thing.

https://man.openbsd.org/stdio.3


That page says the opposite:

> Initially, the standard error stream is unbuffered.


I have no idea how I read that so wrong. Mea culpa.


A lot of words. Is it because of line buffering?

coreutils has a very hairy program called stdbuf which can change the buffering of an existing program's standard streams. It uses LD_PRELOAD(!) to preload a library which overrides the program's defaults.


The short version is yeah. Rust line-buffers stdout but not stderr, for no particularly good reason.
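
If you do want buffered stderr for throughput, nothing stops you from wrapping the raw handle yourself; a minimal sketch:

    use std::io::{self, BufWriter, Write};

    fn main() -> io::Result<()> {
        let stderr = io::stderr();
        // Block-buffer stderr (8 KiB by default) instead of issuing a
        // separate write(2) for every line.
        let mut err = BufWriter::new(stderr.lock());
        for i in 0..100_000 {
            writeln!(err, "line {}", i)?;
        }
        // BufWriter also flushes on drop, but flushing explicitly surfaces errors.
        err.flush()?;
        Ok(())
    }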


> In Unix, character special files are files that access to I/O devices such as a NULL file (/dev/null) and file descriptors. In our case, stdout is a file descriptor (1) so it is a character special file. (The name "character special" actually comes from the fact that each character is handled individually.)

Ummm no. This is completely backwards. Stdout isn't character special if it is pointing to a disk file. And they are hung up on a Linuxism here to boot (the proc symlink). The whole world isn't Linux, and I have never heard of this definition of character special, so I wonder where they found it. Of note, FreeBSD realized the whole block/character distinction was dumb; even block device nodes are "character special".

The author is very confused about the ontology of file descriptors, files, and device nodes. Sure, it's confusing to a newcomer, but there are plenty of well-written guides on the topic.

Too verbose, too Linux centric and too inaccurate.

And scrolling through, there's nothing here beyond well-trodden ground: https://stackoverflow.com/questions/37991116/why-is-stdout-b...

Tl fucking dr, stderr is not buffered by default in Rust.
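
On the character-special point: what fd 1 actually is depends entirely on what it's connected to, which a few lines of Rust can show on a Unix-like system (a sketch, assuming fd 1 is open):

    use std::fs::File;
    use std::mem::ManuallyDrop;
    use std::os::unix::fs::FileTypeExt;
    use std::os::unix::io::FromRawFd;

    fn main() -> std::io::Result<()> {
        // Borrow fd 1 without owning it (dropping the File would close stdout).
        let stdout = ManuallyDrop::new(unsafe { File::from_raw_fd(1) });
        let ft = stdout.metadata()?.file_type();
        // Character device on a terminal, regular file when redirected to disk,
        // FIFO when piped: stdout itself has no fixed file type.
        eprintln!(
            "fd 1: char device={} regular file={} fifo={}",
            ft.is_char_device(),
            ft.is_file(),
            ft.is_fifo()
        );
        Ok(())
    }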


The criticisms about file descriptors, files, device nodes, and Linux-centrism are all totally reasonable.

The stackoverflow link is irrelevant -- it is a question about C. Rust's stdlib and implementation of stdout/stderr buffering is wholly unrelated to the one in C.


> Rust's stdlib and implementation of stdout/stderr buffering is wholly unrelated to the one in C.

They're not wholly unrelated; they are related. The rationale for the implementation in both cases is the same (whether that rationale is good is another question). There are Stack Overflow questions related to the Rust implementation; I selected a non-Rust one to illustrate that it's more than some Rust-specific implementation detail that occurred in a vacuum.


This comment seems extremely aggressive and I’d add that most of the world is, in fact, Linux.


> that most of the world is, in fact, Linux.

Yeah, other than the most popular Unix-like desktop OS being macOS, and iOS being a dominant second in mobile. Other than those two that no one uses, most of the world is Linux. And for the topic at hand, the macOS design is basically the same as FreeBSD's as mentioned; macOS is not derived from Linux.


Most computers are not desktops. Most phones and servers are Linux based, thus the overwhelming majority of computers are running Linux.


Oh fucking please. You were wrong; get over it and stop making irrelevant snipes that have nothing to do with the original topic. The question, from the context, was: for someone programming Unix-like systems, are there systems other than Linux that matter? The answer is yes.

No one started off with a claim that Linux isn't the majority of Unix-like systems; how you get that from "The whole world isn't Linux" is beyond me.

macOS and iOS are heavily used Unix-like systems with significant market share. They're not rounding errors.

> Most computers are not desktops.

Doesn't fucking matter; macOS software is still a multi-billion-dollar industry.

> Most phones

iOS is about 30% of the market. Yes, 70% is a majority, but not an "overwhelming" majority (whatever that is), especially when that 30% pays the bills more than the other 70%.

If you're going to pull the irrelevant pedantic shit, it helps to be right too: the overwhelming majority of computers are not running Linux, since for every crappy Android phone out there, there are dozens of embedded devices running some variety of RTOS. And is a Nintendo Switch a phone or a server?


Too bad TFA started from Rust instead of C; this behavior is all explained (in fewer words) in K&R.


This behavior could be explained in a lot fewer words in a Rust tutorial, too. The investigation itself is kind of interesting and learning to use profiling tools is valuable. It's still more verbose than my preference, and I don't love the style. Still. This and K&R are very different kinds of material.


This is my exact concern when a new crop of developers tries to learn operating systems using languages further away from the building blocks.

Unpopular opinion: It is still good to learn enough C + system programming & their gotchas before starting with a more fancy higher level language.


I don't think this is a fair comparison. If you want to teach that you can write(2) to raw FDs in Rust, you can, just like you can use write(2) or fprintf(3) in C.

C has a standard library which students should understand even though it's making system calls deep down. Rust has a standard library which students should understand even though it's making system calls deep down (in fact, sometimes through the host C library).

I certainly see the value in knowing C and Unix and that was my education over two decades ago as well. But I also watched many people quit computer science altogether because they struggled with heisenbugs with C pointers. If they could have been kept on track by Rust compiler errors and higher level abstractions, maybe they would still be in the industry today, learning whatever else they needed instead of quitting in their first semester.
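
For the write(2)-to-raw-FDs point above, a minimal sketch (assuming the libc crate on a Unix-like system); this bypasses std's Stdout/Stderr wrappers, and their buffering, entirely:

    // Cargo.toml: libc = "0.2"
    fn raw_write(fd: i32, msg: &str) {
        let bytes = msg.as_bytes();
        // write(2) may write fewer bytes than asked; a real program would
        // loop on the return value and check for errors.
        unsafe {
            libc::write(fd, bytes.as_ptr() as *const libc::c_void, bytes.len());
        }
    }

    fn main() {
        raw_write(1, "straight to fd 1 (stdout)\n");
        raw_write(2, "straight to fd 2 (stderr)\n");
    }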


Is going from high level to low level somehow worse?

I went from very high level (C# web and even WebAssembly) to C, and while I believe I learned a lot and my understanding of computers improved, I think the biggest lesson is that one of the most important programming ecosystems (C) is very messy and painful.

Not because it must be painful, but because of decisions made decades ago, maybe some inertia, maybe backward compatibility, maybe culture, who knows?

Low-quality compiler messages, ecosystem fragmentation, a terrible standard library (where are my basic data structures?), memory management being a minefield, etc.


C gets a bad rap because there are now alternatives built by finding solutions to problems we only know about because of C's existence. Compiler messages, the standard library, and memory management are all things we can agree are terrible nowadays, but when C came out it was a huge improvement over the norms before. Also, it's important to remember that even "big" things like Unix were at one point just a few thousand lines of code.


> Unpopular opinion: It is still good to learn enough C + system programming & their gotchas before starting with a more fancy higher level language.

And easier


Rust full-timer with a background in C/C++, lately using neither at all. That opinion isn't as unpopular as you'd think.


After being a web developer for 10+ years, I'm getting into C for the first time. I'd had a bit of experience with Objective-C years ago when I did some iOS work, but that was the "lowest" I'd gone down the stack.

There's a lot of unfamiliar territory, but I'm really enjoying it. When it's complex, it feels like it's just inherently complex. Which is a breath of fresh air for a web developer. I'd gotten so sick of the bullshit complexity that comes along with the high-level work; programming feels fun again.



