finger was conceived in a time and environment where you would reasonably assume a lot of things which stopped being true a long time ago.
many users per machine; users actually logged in to that machine; users within walking distance, or at least in the same building; no compartmentalization, i.e. your daemon has access to every user's home directory
and that's just off the top of my head.
if you see finger as "everyone has a place to store a message and people can read it" then yes, you might say it wasn't worse than HTTP - but I think the .plan feature wasn't even the original intention; it was more "is person X at their desk right now?".
So all features aside, it has so many assumptions baked in that I'd have to think hard about how to replicate it in a modern way for a company and still fit the protocol.
I'm not sure I 100% agree with the protocol being silly (merely not great - but it's been years since I read the RFC; it's short, you should read it). It's basically just plain text with some wonky hostname shenanigans, but the whole concept hasn't aged well. And that's only if you completely ignore anything about security (see what I wrote above) and privacy.
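To give an idea of how little there is to it, here's a rough sketch of a client in Python (user and host are made up for the example): one TCP connection to port 79, one CRLF-terminated query line, then read whatever text comes back until the server closes the connection.

    import socket

    def finger(user, host, port=79):
        """Send one finger query (RFC 1288): open a TCP connection,
        write a single CRLF-terminated line, read until the server
        closes the connection."""
        with socket.create_connection((host, port), timeout=10) as sock:
            sock.sendall(user.encode("ascii") + b"\r\n")
            chunks = []
            while True:
                data = sock.recv(4096)
                if not data:
                    break
                chunks.append(data)
        return b"".join(chunks).decode("utf-8", errors="replace")

    # "user@host" queries are the hostname shenanigans: they ask the
    # server to forward the request to another host on your behalf.
    print(finger("alice", "example.org"))  # hypothetical user and host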
The way I read it: the finger protocol is a small rusty bike, whereas HTTP by now is a fancy sports car. (Heck, if we are talking HTTP/2 or 3, it's a damn flying car at that.)
We now all go about our daily bakery shopping in the fancy sports car instead of a small rusty bike.
And people have differing opinions on whether this is the best timeline to have...
I don't think that analogy holds. Let's assume you can still publish a perfectly fine (for 2021) web page with strict HTTP/1.1 - I think you can. It's very flexible and unchanged since its publication.
Finger, on the other hand, would be a very narrow API for a certain service, without ANY of the flexibility of HTTP. No custom headers, no Basic Auth, not even the difference between GET and POST.
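To make that concrete, here's roughly what the two look like on the wire (hostname, credentials and body are made up). An HTTP/1.1 request can carry a method, headers, auth and a body:

    POST /status HTTP/1.1
    Host: example.org
    Authorization: Basic YWxpY2U6c2VjcmV0
    Content-Type: text/plain
    Content-Length: 13

    Out to lunch.

The equivalent finger request is just the query line, terminated by CRLF - the server answers with plain text and closes the connection:

    alice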
So yeah, maybe the original comparison between finger and HTTP is already flawed, but unless HTTP/2 gives you something that HTTP/1.1 can't do, HTTP/1.1 is still perfectly valid, and probably will be in 10 years, at least for low-traffic situations (finger should be reaaally low-traffic in comparison).
> Finger, on the other hand, would be a very narrow API for a certain service, without ANY of the flexibility of HTTP.
That is the point. The flexibility is not free. Every conditional doubles the number of possible execution flows. This brings complexity. To some extent it is mitigated by economies of scale, because now everyone uses HTTP for something, so collectively we get that more complex code more polished. But there is no such thing as bug-free code - so every participant will have to deal with the patch cycle and generally with preventing bitrot.
For a small, well-bounded custom protocol which solves a well-defined, specific use case, one can hope to write a dependency-free implementation that can be tested, works well enough, and can be left alone.
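As a sketch of what I mean (the port and the in-memory "plans" are invented for the example), a toy finger-style responder needs nothing beyond Python's standard library:

    import socketserver

    # Hypothetical in-memory "plans"; a real daemon would read per-user files.
    PLANS = {"alice": "Out to lunch, back around 2pm."}

    class FingerHandler(socketserver.StreamRequestHandler):
        def handle(self):
            # One CRLF-terminated query line per connection (RFC 1288).
            query = self.rfile.readline(512).decode("ascii", "replace").strip()
            reply = PLANS.get(query, "finger: no such user: " + query)
            self.wfile.write((reply + "\r\n").encode("ascii", "replace"))
            # The connection is closed after the reply; that's the whole protocol.

    if __name__ == "__main__":
        # Port 79 needs privileges, so the sketch binds a high port instead.
        with socketserver.ThreadingTCPServer(("0.0.0.0", 7979), FingerHandler) as srv:
            srv.serve_forever()

Something that small can be read end to end and, once tested, mostly left alone.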
I recently was at an event with a few thousand wifi devices.
About a third of the internet traffic was updates.
No disagreement here, but this was about flexibility, not an absolute judgement of how far-reaching a protocol must be.
I think I like the idea of a "spec" inside the same "protocol" more. For example, if you understand HTTP, you can quickly reason about any spec of a REST API that's done with JSON payloads without caring about the HTTP wrapper layer, just as you don't care about the TCP around it.
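To illustrate (endpoint and payload shape are invented here), the entire "spec" of such an API is basically the path, the method and the JSON shape; the HTTP plumbing underneath stays generic:

    import json
    import urllib.request

    # Hypothetical endpoint and payload - this dict plus the path and method
    # is the whole API-level spec; the rest is generic HTTP.
    req = urllib.request.Request(
        "https://api.example.org/v1/status",
        data=json.dumps({"user": "alice", "message": "out to lunch"}).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp))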