I thought this was a general Unix thing: stdout is buffered so output to other apps/the terminal is efficient, whilst stderr isn't buffered because it usually goes to the console with (smaller) error messages that should be seen immediately in case the app crashes, etc.
Both (line-buffered stdout, raw stderr) date to the initial implementation 10 years ago: https://github.com/rust-lang/rust/commit/94d71f8836b3bfac337... . The commit message mentions that stderr is no longer buffered (probably as a result of the RFC 899 discussion, which I cannot seem to locate).
Because stderr is conventionally used for error reporting, it's important to output that information immediately, to avoid losing it in case the program then crashes or exits without flushing.
Stdout is generally the "normal output", so loss of information on a crash tends to be less relevant and throughput more so, as programs commonly send huge amounts of data to stdout.
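A minimal C sketch of the effect (assuming stdout is attached to a terminal, so stdio line-buffers it; on glibc, abort() does not flush stdio buffers):

```c
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    fprintf(stderr, "stderr: written immediately, no buffering\n");
    printf("stdout: partial line with no newline");  /* sits in the line buffer */
    abort();  /* crash without flushing: the buffered stdout text is typically lost */
}
```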
The stream stderr is unbuffered. The stream stdout is line-buffered when it points to a terminal. Partial lines will not appear until fflush(3) or exit(3) is called, or a newline is printed. This can produce unexpected results, especially with debugging output. The buffering mode of the standard streams (or any other stream) can be changed using the setbuf(3) or setvbuf(3) call. Note that in case stdin is associated with a terminal, there may also be input buffering in the terminal driver, entirely unrelated to stdio buffering. (Indeed, normally terminal input is line buffered in the kernel.) This kernel input handling can be modified using calls like tcsetattr(3); see also stty(1), and termios(3).
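As a sketch of what that looks like in practice, a program can change its own stdout buffering with setvbuf(3), as long as the call happens before any other operation on the stream:

```c
#include <stdio.h>

int main(void) {
    /* Fully buffer stdout even on a terminal; with a NULL buf, stdio
     * allocates its own buffer of (roughly) the requested size. */
    setvbuf(stdout, NULL, _IOFBF, 1 << 16);

    /* Or drop buffering entirely, like stderr:
     *   setvbuf(stdout, NULL, _IONBF, 0); */

    printf("buffered until the buffer fills, fflush(), or normal exit\n");
    return 0;
}
```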
coreutils has a very hairy program called stdbuf which can change the buffering of an existing program. It uses LD_PRELOAD(!) to preload a library which overrides the program's defaults.
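Roughly, the preloaded library just calls setvbuf(3) on the standard streams from an ELF constructor before the target's main() runs. A stripped-down sketch of the trick (not the real libstdbuf, which reads its settings from environment variables set by stdbuf):

```c
/* forceline.c (hypothetical name): build as a shared object, e.g.
 *   cc -shared -fPIC forceline.c -o forceline.so
 * then run a program with LD_PRELOAD=./forceline.so ./some-program */
#include <stdio.h>

__attribute__((constructor))
static void force_line_buffered_stdout(void) {
    /* Runs before main(), while stdout is still untouched, so setvbuf
     * is still allowed to change its buffering mode. */
    setvbuf(stdout, NULL, _IOLBF, 0);
}
```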
> In Unix, character special files are files that access to I/O devices such as a NULL file (/dev/null) and file descriptors. In our case, stdout is a file descriptor (1) so it is a character special file. (The name "character special" actually comes from the fact that each character is handled individually.)
Ummm no. This is completely backwards. Stdout isn't character special if it is pointing to a disk file. And they are hung up on a Linuxism here to boot (the /proc symlink). The whole world isn't Linux, and I have never heard of this definition of "character special", so I wonder where they found it. Of note, FreeBSD realized the whole block/character distinction was dumb; even block devices there show up as "character special".
Author is very confused about the ontology of file descriptors, files, and device nodes. Sure, it's confusing to a newcomer, but there are plenty of well-written guides on the topic.
Too verbose, too Linux-centric, and too inaccurate.
The criticisms about file descriptors, files, device nodes, and Linux-centrism are all totally reasonable.
The Stack Overflow link is irrelevant -- it is a question about C. Rust's stdlib and implementation of stdout/stderr buffering is wholly unrelated to the one in C.
> Rust's stdlib and implementation of stdout/stderr buffering is wholly unrelated to the one in C.
They're not wholly unrelated, they are related. The rationale for the implementation is the same in both cases (whether it's a good one is another matter). There are Stack Overflow questions related to the Rust implementation; I selected a non-Rust one to illustrate that it's more than some Rust-specific implementation detail that occurred in a vacuum.
Yeah, other than the most popular Unix-like desktop OS being macOS, and iOS being a dominant second in mobile. Other than those two that "no one uses", most of the world is Linux. And for the topic at hand, the macOS design is basically the same as FreeBSD's, as mentioned; macOS is not derived from Linux.
Oh fucking please. You were wrong, get over it and stop making irrelevant snipes that have nothing to do with the original topic. The question from the context was: for someone programming Unix-like systems, are there systems other than Linux that matter? The answer is yes.
No one started off with a claim that Linux isn't the majority of Unix-like systems; how you get that from "The whole world isn't Linux" is beyond me.
macOS and iOS are heavily used Unix-like systems with significant market share. They're not rounding errors.
> Most computers are not desktops.
Doesn't fucking matter; macOS software is still a multi-billion-dollar industry.
> Most phones
iOS is about 30% of the market. Yes, 70% is a majority, but not an overwhelming majority (whatever that is), especially when that 30% pays the bills more than the other 70% does.
If you're going to pull the irrelevant pedantic shit, it helps to be right too: the overwhelming majority of computers are not running Linux, since for every crappy Android phone out there, there are dozens of embedded devices running some variety of RTOS. And is a Nintendo Switch a phone or a server?
This behavior could be explained in a lot fewer words in a Rust tutorial, too. The investigation itself is kind of interesting and learning to use profiling tools is valuable. It's still more verbose than my preference, and I don't love the style. Still. This and K&R are very different kinds of material.
I don't think this is a fair comparison. If you want to teach that you can write(2) to raw FDs in Rust, you can, just like you can use write(2) or fprintf(3) in C.
C has a standard library which students should understand even though it's making system calls deep down. Rust has a standard library which students should understand even though it's making system calls deep down (in fact, sometimes through the host C library).
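For instance, both languages let you pick the layer: in C you can go through stdio's buffer or hit the file descriptor directly, and Rust offers the same choice through its own standard library on top of the same system calls. A quick C sketch of the two layers:

```c
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void) {
    const char *raw = "straight to fd 1 via write(2), no user-space buffering\n";
    if (write(STDOUT_FILENO, raw, strlen(raw)) < 0)
        return 1;

    /* Through stdio: buffered in user space, flushed on newline (terminal),
     * buffer full, fflush(), or normal exit. */
    fprintf(stdout, "through stdio via fprintf(3)\n");
    return 0;
}
```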
I certainly see the value in knowing C and Unix, and that was my education over two decades ago as well. But I also watched many people quit computer science altogether because they struggled with heisenbugs from C pointers. If they could have been kept on track by Rust compiler errors and higher-level abstractions, maybe they would still be in the industry today, learning whatever else they needed instead of quitting in their first semester.
Is going from high level to low level somehow worse?
I went from very high level (C# web, and even WebAssembly) to C, and while I believe I learned a lot and my understanding of computers improved, I think the biggest lesson is that one of the most important programming ecosystems (C) is very messy and painful.
Not because it must be painful, but because of decisions made decades ago, maybe some inertia, maybe backward compatibility, maybe culture, who knows?
Low-quality compiler messages, ecosystem fragmentation, a terrible standard library (where are my basic data structures?), memory management being a minefield, etc.
C gets a bad rap because there are now alternatives built by finding solutions to problems we only know about because of C's existence. Compiler messages, the standard library, and memory management are all things we can agree are terrible nowadays, but when C came out it was a huge improvement over the norms before. Also, it's important to remember that even "big" things like Unix were at one point just a few thousand lines of code.
After being a web developer for 10+ years, I'm getting into C for the first time. I'd had a bit of experience with Objective-C years ago when I did some iOS work, but that was the "lowest" I'd gone down the stack.
There's a lot of unfamiliar territory, but I'm really enjoying it. When it's complex, it feels like it's just inherently complex. Which is a breath of fresh air for a web developer. I'd gotten so sick of the bullshit complexity that comes along with the high-level work; programming feels fun again.