Why have we stopped making cool protocols like this? It seems the Internet had really cool protocols back in the day, and we had so many possibilities. Now it seems we are stuck with HTTP.
Not saying HTTP is bad. It just seems like we have given up on possibilities. I remember, almost a decade ago, Nokia had a mobile web server for Symbian devices which basically hosted an HTTP server on the phone[0]. You could message the owner of the phone directly through a URL. The request would be handled by the server on the phone!
No one makes anything like that anymore. Everyone is just building on top of APIs and services provided by MANGA, who would obviously not put any effort into such projects.
Finger is a silly protocol. It doesn't exist anymore because it is worse in every possible way than HTTP (or, at least, the tiny subset of compatible HTTP that replaces it).
I remember when I was earlier in my career and more specialized on implementing weirdo protocols hearing that HTTP was going to replace all the existing protocols. I was appalled; it seemed absurd, like suggesting Word .DOC was going to replace all text files.
But for the most part, the people saying that were right, and we are better off for it. The thing about a lot of those purpose-built protocols, even the "important" ones like DNS and most especially infrastructure stuff like SNMP, is that they are pretty dumb, the product of their time and thus, by construction, deprived of several decades of systems learning.
Something interesting about them though is that they were mostly simple, plaintext protocols, and you could learn a lot by popping open a telnet connection and playing around. At least I learned a lot that way when I was a kid. I figured out how to make HTTP GETs, use IRC, etc. playing around with telnet.
Counterexamples: DNS and its silly, idiosyncratic compression; FTP and its absolutely batshit control/data channel connections; the ASN.1/BER nightmare of SNMP. Also consider the "simple, plaintext protocols" that should not have been: the r-utils, for instance, had to be supplanted by encrypted binary protocols; NNTP failed, and ultimately ended up centralizing Internet discussion on sites like this, because it was doggedly optimized for text and inevitably abused for binary sharing.
The moral of my story is: those old protocols were bad, practically all of them.
> NNTP failed, and ultimately ended up centralizing Internet discussion on sites like this, because it was doggedly optimized for text and inevitably abused for binary sharing.
That's not really why it failed though, since those binaries are in separate newsgroups that most servers simply don't carry.
The issue is more about spam and lack of moderation, which makes it somewhat unfriendly to newcomers, since one has to do the spam/troll filtering locally.
Over time this led to most newsgroups slowly dying out.
It's 100% of why they failed. NNTP providers that didn't provide full feeds were loudly boycotted. Customers and users left those providers. Ask me how I know! Doing competitive full-feed NNTP was one of the most annoying and least useful things I've worked on.
I have been running a (non-binary) usenet server for quite some time too.. Don't underestimate the annoyance of spam and trolls.
They kill pretty much anything once active moderation disappears.
Once they exceed a certain fraction, people tend to get very annoyed and jump ship for anything where they have to interact with them less.
Over time only trolls are left in the newsgroup (in many newsgroups they haven't really left to this day).
I consider binary newsgroups and the rest to be two almost unrelated things. Plenty of non-binary usenet servers were doing just fine, but of course no one would really pay for them.
Fair warning: even by my standards, this is an area where I have strong opinions (I loved Usenet, and ran a Freenix-competitive full-feed site in the 1990s). I'll just leave this here:
I'm not saying I can refute what you're saying, just that I think my claim about "binary vs. plaintext" being problematic with respect to NNTP is well-founded.
(I've implemented production-grade HTTP libraries many times. HTTP being a one-size-fits-all protocol means the existing libraries are ridiculously specialized and overspecced.)
HTTP is the universal protocol because the mental model of "verb-metadata-payload" fits almost everything you can imagine, and the socket handshake/framing parts are obvious and very performant. This is why HTTP is used everywhere from pushing stock quotes to industrial automation.
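To make that mental model concrete, here is roughly what one exchange looks like on the wire (hypothetical host and payload): the request line is the verb, the headers are the metadata, the body is the payload, and the response mirrors the same shape.

    POST /quotes HTTP/1.1
    Host: api.example.com
    Content-Type: application/json
    Content-Length: 27

    {"symbol": "ACME", "px": 3}

    HTTP/1.1 200 OK
    Content-Type: application/json
    Content-Length: 18

    {"accepted": true}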
HTTP 2 and 3 aren't universal protocols, they solve a very specific problem of serving static content to browsers for Google-scale websites.
In effect, HTTP 2 and 3 are exactly the kind of niche one-problem protocol that HTTP was supposed to replace.
This is a pretty facile analysis. HTTP2 is faster, more flexible, and more reliable than HTTP1. HTTP2 also eliminates the need for several silly performance hacks in HTTP1. The differences might not matter for your static site, but they matter a lot for larger applications, and, more importantly, they open the door for using HTTP2 in situations where today people would be building their own clunky ad-hoc protocols; that's the opposite of the phenomenon you describe with respect to the two protocols.
HTTP3, more the same! The performance wins of HTTP3 won't matter at all for most typical web applications. But typical web applications aren't the point of HTTP3.
My concerns have nothing to do with HTML/CSS/JS, but rather the suitability of HTTP as a transport for arbitrary client server protocols --- the subject of this thread. For general transports, multiplexing alone is an obvious win over HTTP.
You didn't make an argument for why it's a silly protocol. You didn't make an argument for why it is worse in every possible way than HTTP. You didn't make an argument for why we're better off having replaced everything with HTTP. You didn't make any arguments about why they are "pretty dumb."
Your entire post amounts to "I like HTTP" and "old stuff bad."
FTP is an absolutely bonkers protocol. I’ve written lengthy posts in here in the past detailing why that protocol needs to die.
Email is another clusterfuck of a protocol (well, several protocols) that barely functions despite dozens of modern pseudo-standards plastered over it. I’ve written extensively about that too.
DNS is frequently the source of amplification DDoS attacks. It’s another protocol that made sense once upon a time but has struggled to keep pace with modern advancements in technology.
IRC is probably the best of the bunch here but even that has struggled to keep pace and can be subject to undocumented behaviours (like line length).
…and these are the protocols still in use. The ones replaced by HTTP were either crazier or so simplistic that they offered nothing over HTTP.
I’ve written my own clients for every one of those examples and had to deal with the pains of their protocols. I’ve also written my own web browser. And while HTTP has some warts too, I’d take that over FTP and SMTP any day of the week.
> Email is another clusterfuck of a protocol (well, several protocols) that barely functions despite dozens of modern pseudo-standards plastered over it. I’ve written extensively about that too.
SMTP is one of the last decentralized, open communication protocols that is still widely used by business. It evolved over time, gained some additions, and stayed alive. The biggest issue I have with e-mail nowadays is companies like Microsoft and Google acting like they go out of their way to break the protocols and deliver fewer and fewer messages from perfectly well-working but decentralized sources.
Microsoft is the worse of the two, with a years-long tradition of acting against standards (Outlook Express connecting to recipients' MX, the cloud offering accepting messages for delivery and never delivering them [1], etc). Google, I believe, as soon as they find a better way to get hold of users' invoices and receipts, will teach their users that they should use something else instead.
Stating that the standard barely functions just because anti-privacy corporations only pretend to use standards the way they were intended, while concentrating on breaking them, is not how I would describe the current state of e-mail-related stuff.
The "decentralization" of SMTP comes from the high level architecture of store-and-forward. It has very little to do with the protocol, which could be expressed more effectively and cleanly on HTTP2 or HTTP3 (it won't be, but should) without risking any of its "openness".
Decentralization was not a quest when those protocols were created. It pretty much became "a thing" with blockchains. It previously... just "was".
It started "not being anymore" with corporations - and once again I bow before Microsoft and Google - using less and less lube over time when telling their own clients what their role is in the ecosystem.
I will absolutely fight any attempt at calling e-mail protocols broken just because corporations can't figure out their revenue around them.
You’re conflating a number of issues. We are strictly talking about technical specifications. Not about who owns what, nor even arguing that everything should be centralised. In fact it is technically possible to create a better alternative to SMTP while still satisfying all of the non-technical requirements you’ve outlined too. You could even drip-feed that new protocol into existence the same way we’ve seen IPv6 creep in parallel to IPv4, albeit it would probably take 20 years to do so.
And this isn’t even touching on the problems with IMAP and the insanity that POP3 is even still a thing.
You are right - I may have gone too far into blame-assigning while explaining too little of my viewpoint. In my opinion the e-mail protocols aren't broken, but the e-mail ecosystem is getting increasingly broken. Sure, SMTPv2 is _technically_ possible, but I don't think it would be allowed to grow, mature and exist as a standard.
Last time I attempted setting up messaging accounts with the aforementioned companies, it wasn't possible to use Mutt or bare Thunderbird - one had to use client software allowing some kind of RCE to set up access to those services. Add Google's bubbling[0] and Microsoft's repeated mail losing, and we no longer really have globally functional e-mail based on standards.
When some of the biggest actors don't follow rules describing delivery without proposing changes - yes - e-mail is being broken but not because protocols underneath are broken. It's because people trust these companies and possibly don't know that they may be victims of careful information filtering.
I have done some e-mail-related work for hosting companies in the past. For some years now, POP3 has not really been a thing. It exists, it gets set up by mistake from time to time, but the number of POP3 users compared to IMAP users was barely noticeable and I don't think it has grown. I'm afraid to ask what your issues with IMAP are...
[0] I suspect that Google bubbles its e-mail customers just like its search users. Most non-technical people I know treat the "spam" folder like it would literally burn their fingers upon touching. They act as if trained to only look in there when an awaited message doesn't show up in the inbox. Google delivering perfectly fine messages straight into the "spam" folder has comparable results to Microsoft losing/destroying their customers' mail.
Email protocols are broken because they are bad 1990s protocols that haven't benefited from 20 years of systems learning about building protocols. I don't much care who does or doesn't have a hard time monetizing them.
I've done POP3 and SMTP and they didn't seem TOO bad but those were just toy implementations which didn't have to work in the real world and I gather things have gotten pretty complex and ugly since then. What would you suggest to replace e-mail while retaining its flexibility?
SMTP doesn’t really have a universally agreed way of handling authentication, error handling, or even encryption. There are several standards floating about, many of which are little more than pseudo-standards.
> What would you suggest to replace e-mail while retaining its flexibility?
There’s no reason why we cannot redesign the email paradigm around a totally new protocol. The problem isn’t that it’s technically difficult, it’s that SMTP is too prevalent now. It would take someone like Google abusing their market share to bring in a successor.
Also any replacement would need to be at a protocol level. A lot of the attempts I’ve seen have also tried to modernise the experience as well (like Google Wave) but the reason email is successful is because it is familiar.
As far as I can see, JMAP only provides a means to upload email to your email provider; it doesn't actually specify how that email gets from one email provider to another, which is what SMTP does.
Granted it's already nicer for clients not to need to configure SMTP to begin with.
Yes, one confusing aspect of SMTP is that there is a server-server part (listening on port 25) and a client-server part (465 or so, usually authenticated). I haven't dug in-depth, so maybe they are exactly the same protocol though.
Accessing port 25 of a server is usually blocked by ISPs as a way to prevent spam.
The difference in ports is due to SSL/TLS being expected to apply automatically on 465, IIRC, like 80/443 for HTTP(S) (you can also encrypt 25 by issuing STARTTLS after setting up a connection, but it's not the default and might fail, I think).
Authentication with mail is separate, usually to allow for relaying, whereas anyone can usually drop off emails iff your server is the destination.
Confusing and needlessly complex? Yep. Natural result of uncontrolled evolution? Yep.
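For the client-to-server (submission) side, the usual modern arrangement is STARTTLS on port 587 (or implicit TLS on 465) plus AUTH before relaying. A minimal sketch with Python's stdlib smtplib, using a hypothetical mail host and credentials:

    import smtplib
    from email.message import EmailMessage

    msg = EmailMessage()
    msg["From"] = "me@example.org"
    msg["To"] = "you@example.org"
    msg["Subject"] = "hello"
    msg.set_content("sent via the submission port")

    # Port 587: plaintext connect, upgrade with STARTTLS, then authenticate.
    # Port 465 would use implicit TLS via smtplib.SMTP_SSL instead.
    with smtplib.SMTP("mail.example.org", 587) as smtp:
        smtp.starttls()
        smtp.login("me@example.org", "app-password")  # hypothetical credentials
        smtp.send_message(msg)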
One of the big mistakes with the design of IMAP was that there wasn't a standard Out-Box where a client could rely on the IMAP server for sending the email. I'd have loved to see a world where mail clients only had to implement IMAP and nothing else.
finger was conceived in a time and environment where you would reasonably assume a lot of things which stopped being true a long time ago.
many users per machine, users actually logged in to that machine, users within walking distance or in the same building, no compartmentalization (i.e. your daemon has access to every user's home directory)
and that's just off the top of my head.
if you see finger as "everyone has a place to store a message and people can read it" then yes, you might say it wasn't worse than HTTP - but I think the plan feature wasn't even the original intention, it was more "is person X at their desk right now?".
So all features aside, it has so many assumptions baked in that I'd have to think hard about how to replicate it in a modern way for a company and still fit the protocol.
I'm not sure I 100% agree with the protocol being silly (merely not great, BUT it's been years since I read the RFC - it's short, you should) - it's kinda simply plain text with some wonky hostname shenanigans - but the whole concept hasn't aged well. And that's if you completely ignore anything about security (see what I wrote above) and privacy.
The way I read it - finger protocol is a small rusty bike; whereas http by now is a fancy sports car. (Heck if we are talking http/2 or 3 it’s a damn flying car at that)
We now all go about our daily bakery shopping in the fancy sports car instead of a small rusty bike.
And people have differing opinions whether this is the best timeline to have...
I don't think that analogy holds. Let's assume you can still publish a perfectly fine (for 2021) web page with strict HTTP/1.1 - I think you can. It's very flexible and unchanged since its publication.
Finger, on the other hand, would be a very narrow API for a certain service without ANY of the flexibility of HTTP. No custom headers, no Basic Auth, not even the difference between GET and POST.
So yeah, maybe the original comparison between finger and HTTP is already flawed, but unless HTTP/2 gives you something that HTTP/1.1 can't do, HTTP/1.1 is still perfectly valid, and probably will be in 10 years, at least for low-traffic situations. (finger should be reaaally low-traffic in comparison).
> Finger, on the other hand, would be a very narrow API for a certain service without ANY of the flexibility of HTTP.
That is the point. The flexibility is not free. Every conditional doubles the number of possible execution flows. This brings complexity. To some extent it is mitigated by economies of scale, because now everyone uses HTTP for something, so collectively we get that more complex code more polished. But there is no such thing as bug-free code - so every participant will have to deal with the patch cycle and with generally preventing bitrot.
For a small, well-bounded custom protocol which solves a well-defined, specific use case, one can hope to write a dependency-free implementation that can be tested, work well enough, and be left alone.
I recently was at an event with a few thousand wifi devices.
About a third of the internet traffic was updates.
No disagreement here, but this was about flexibility, not an absolute judgement of how far-reaching a protocol must be.
I think I like the idea of a "spec" inside the same "protocol" more. For example if you understand HTTP you can quickly reason about any spec of a REST API that's done with JSON payloads without caring for the HTTP wrapper layer, just as you don't care for TCP around it.
Imagine if we were stuck in a world with a large bloc of users running MSIE6 regularly accessed web servers running 40-bit SSL2. Neither clients nor servers ever bother updating, because it still works for them.
That's essentially where every other protocol is. HTTP gets all the energy, active development, updates regularly rolled out. Meanwhile if you tried to make, say, FTP more rational by transmitting data over the control connection, nobody would be able to use it since all the servers are still running 25 year old wuftpd with the minimal patches to not get pwned and they have no interest in updating (if they even remember the servers exist).
Personally I think we've lost something valuable when only one protocol exists, but I'm one of those Luddites who still reads email with Thunderbird so what do I know.
The burden of proof is always on the person making a claim. I've made no claim on this subject. I haven't even entirely formed my opinion on this and am not even playing Devil's advocate. The point of my post was to try to improve the quality of the discussion here. Unsupported random opinions aren't generally relevant to strangers.
Unsupported random opinions are the lifeblood of internet forums, not that the comment was really that much of an unsupported random opinion. Strangers do not owe you presentation of their views to your exacting specifications. It's perfectly fine to be confused by things they say and it's perfectly fine to ask them what they meant.
What's lame and actually damages discussion is haranguing people over things they've said you can't make good sense of. That's not curious conversation, it's not really much of a conversation at all.
Completely agree. Also dismissing a post as 'stupid' does not quite sound good to me. I mean, probably we have better arguments to dismiss a particular post!
This has become a meta-debate, but I'd argue that the burden of proof is on anyone who wants something. If the thing you want is to get to the bottom of a question, then it's your own desire which has imposed that burden on you.
If you want something, it would be a shame if you didn't get it because an internet proponent failed to detail their arguments.
If you want me to believe something, it's up to you to convince me, otherwise I have no obligation to believe it based on what you've said. It doesn't mean I have or don't have an obligation to believe it in general, only that I do not based on what you said. This is all I meant. That someone should look into something they're interested in is orthogonal to burden of proof. One can still look into something while being unconvinced by something because someone didn't give an argument. It's only a judgement that their contribution is not useful in the investigation and is no indication that the investigation does or doesn't continue despite that.
Nobody has any obligations to prove anything here; that's not the premise of this space. You'd have more luck extracting clarifications from me if you either (a) wrote more civilly or (b) directly challenged what I wrote with some kind of rebuttal. You've instead chosen the least productive rhetorical path.
Nobody has any obligations to prove anything, only if they want to convince anyone of anything.
What do you think is uncivil about accurately characterizing your post?
I have no interest in challenging what you said. As I said, I don't necessarily disagree. I just think throwing out unjustified opinions isn't useful or interesting and it would be better if you had said why you think what you do.
I agree the novelty of implementing these protocols wears off as soon as one looks into the security nightmare a poorly implemented protocol can cause.
But I still would like to see efforts happening in applications like Nokia's mobile web server. Another such effort from a decade ago was Opera Unite https://linkdekho.in/1e2vAy
And in my eyes, that's pretty terrible - we're stuck with a number of proprietary formats for commercial software, the adoption and more widespread support for which has been an uphill battle every single step of the way, since MS isn't incentivized to do that properly.
Honestly, i avoid .doc and .docx as well as most of MS Office formats whenever possible, at this point i just have LibreOffice installed and use all of those native formats: https://en.wikipedia.org/wiki/OpenDocument
Not only that, but most of these formats are problematic on a technical level - when compared with something like simpler Markdown files or any other text based format, looking things up is needlessly hard, so you can forget about easily searching for some text within a directory of 100s of such files on a server without some niche tool.
It's the Windows vs Linux points of view. HTTP is a kitchen-sink, and extremely complex. I know, you're saying 'Wait, HTTP is simple.' Understanding basic HTTP use of GET and POST is simple, the implementation details of the spec needed to be HTTP compliant are complex. Think of features like proper MIME handling, compression handling, redirects, all the authentication options to handle, proxy support, caching, content-negotiation, headers to selectively GET, web sockets, etc. The spec for HTTP/1.1 weighs in at 176 pages.
Finger on the other hand does one thing, and does it well.
I would argue that we need more small protocols and fewer kitchen-sink protocols.
The drivers for kitchen-sink protocols are not necessarily technical; they could be financial. As a bigger protocol gets more name recognition, it becomes a less risky pitch in the eyes of management to adopt the protocol instead of going with a smaller one, or inventing your own. This adoption sometimes leads to the bigger protocol getting extensions and growing even larger. The other reason HTTP is often used is security policies - the port is open, so no additional ports are needed, and security is already set up to scan HTTP traffic.
I don't see it. When is/was HTTP ever really extended? HTML/CSS/JS I could accept the analogy, and honestly I think HTTP only really won because of those. They had all the elegance and practicality of the PC[0], and HTTP was dragged along for the ride. The other thing they had in common was being at the right place at the right time, not to mention 'politically' acceptable.
> I don't see it. When is/was HTTP ever really extended?
Most recently, HTTP/2 has required substantial work for HTTP servers and clients to implement, but it is easy to say this is a separate protocol from HTTP, because even though it shares the same ports, a client that doesn't speak HTTP/2 won't have to deal with it when speaking to an HTTP server.
HTTP/1.1 wasn't like that, and neither was HTTP/1.0.
The original HTTP "0.9" was really a lot like finger: You would open a port, send a single line identifying the resource you wanted, and then the content would come back, and the connection would close. HTTP/1.0 added headers and some (text-based) framing to this, and fortunately there weren't many clients to upgrade.
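The whole 0.9 exchange fits in a few lines of socket code - a sketch (many modern servers now reject bare 0.9 requests, so this may just get the connection closed on you):

    import socket

    # HTTP/0.9-style request: one line, no version, no headers;
    # the server sends the body back and closes the connection.
    with socket.create_connection(("example.com", 80), timeout=10) as sock:
        sock.sendall(b"GET /\r\n")
        response = b""
        while chunk := sock.recv(4096):
            response += chunk
    print(response.decode("latin-1", errors="replace")[:200])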
Sometime in HTTP/1.0, people started talking about "pipelining" and the need to change the protocol to support this. The "Connection" header was introduced to identify this change - no other header had ever before meant anything to the web server (except when acting in some capacity as an application header), and misunderstanding the Connection header led to hung clients and slow responses. This was made more annoying when the defaults changed for HTTP/1.1 - now the "new" protocol was the default, and thus hung even more clients. I personally find this very funny because there is absolutely no need for a "pipelining" protocol - sockets are actually quite cheap - but most of the HTTP server implementations and most of the HTTP client implementations were badly written, and it may have been difficult to do better (assuming they knew how to do better) - and so regardless, what was once an HTTP-compliant implementation was suddenly not.
HTTP/1.1 also introduced an "Upgrade" header, which was a kind of "trap door" to add extensions - hopefully to avoid this kind of problem in the future - but it is complex, and many HTTP implementations simply added support for the "Connection" header and were fine for a couple of decades, and today we are still shaking out clients that don't support Upgrade properly (and never noticed, because servers vary on when they use it).
These "extensions" are the sort that everyone had to cope with- and because the protocol was carelessly defined, it was easy for implementations to get it wrong in a subtle way. Most of the other extensions (e.g. DAV, CONNECT, etc) are much easier to ignore simply because they're more "obviously" an extension.
> I think HTTP only really won because of those (HTML/CSS/JS).
HTTP won for a lot of reasons, and being easy to implement "mostly (or sufficiently) right" is a huge factor that I don't think should be ignored: Yes, many clients got it wrong and noticed years later, but "fixing" those broken clients was pretty easy, and the fact that people don't have to start over to gain increased compatibility or features is attractive in a way that should be studied by protocol designers trying to invent the next amazing thing.
>> Most recently, HTTP/2 has required substantial work for HTTP servers and clients to implement, but it is easy to say this is a separate protocol from HTTP, because even though it shares the same ports, a client that doesn't speak HTTP/2 won't have to deal with it when speaking to an HTTP server.
This can't be stressed enough. Even many well-known sites have a setup where their internet-facing server talks HTTP/2 but the backend is HTTP/1.1.
This protocol downgrade into the backend opens you up to a world of pain like cache poisoning and request smuggling, which are also really hard to detect unless you know what you're looking for. And seeing how common it is, I wonder if it wouldn't have been safer to not call it HTTP/2 but a totally different name, just so people understand the danger they are in by thinking that there is any kind of safe interoperability between them.
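For a feel of the desync class of bugs (this is the classic HTTP/1.1 CL.TE illustration, not the HTTP/2-downgrade variant specifically): if the front end honours Content-Length and the back end honours Transfer-Encoding, a single hypothetical request like

    POST / HTTP/1.1
    Host: internal.example.net
    Content-Length: 13
    Transfer-Encoding: chunked

    0

    SMUGGLED

is seen by the front end as one complete request, while the back end stops at the empty chunk and treats "SMUGGLED" as the start of the next request on the reused connection - which is how someone else's request ends up with an attacker-chosen prefix glued onto it.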
Only a few of those existed in the 1.0 spec. The rest evolved in practice. You can add your own without asking anyone else. Libraries don't have to be changed. Your payloads are your business. Encodings are flexible and negotiable.
> i think HTTP only really won because of those.
It's proven to be a predictable, stable, extensible, compatible, generic information exchange protocol. If it's won, it's because of that.
I'd go out on a limb here and wager that the majority of HTTP servers and clients aren't transferring HTML/CSS/JS. Moreover, the majority are probably not even talking to a web browser.
it is a protocol for a specific purpose, and because it has that specific purpose, it was used for that, and desired for that. http can do it, but without the constraints, of course nobody will use it for reinventing finger.
I was messing around with servers that had a finger server running a few days ago. I think the university of Wisconsin has a finger server running that exposes the name/address for all their faculty and students
For a long time, the university I worked for had multiple big Solaris boxes that were used as shell boxes for faculty and students. After I left, I would still use the finger protocol to see what my old coworkers were up to!
On the ARPANET (the original one running the NCP protocol), not only didn't you need a special protocol to have an online chat with somebody at the other end of the country, you didn't even need a host or a talk daemon running on a server!
I dialed it enough times that I still remember it. Much thanks to Bruce of "Bruce's NorthStar" BBS in Virginia for that phone number. [1]
MIT-MC: @L 236
MIT-AI: @L 134
MIT-DM: @L 70
MIT-ML: @L 198
Anyone remember how to do a TIP-to-TIP link, as documented on page 5-4 of the "Users Guide to the Terminal IMP" [2], by connecting an input and output socket of one TIP to an input and output socket of another TIP, through an unsuspecting host, so you could chat back and forth directly between two TIP dial-ups, without actually logging into the host?
It went something like @HOST #, @SEND TO SOCKET #, @RECEIVE FROM SOCKET #, @PROTOCOL BOTH, making sure the sockets were different parity so as not to violate the Anita Bryant clause with homosocketuality. [3]
You could also add the octal device port number of any other TIP user on your same TIP after the @ and before the command, to execute those commands on their session. (See page 5-7, "Setting Another Terminal's Parameters".) BBN wrote such great documentation and would mail copies of it for free to anyone who asked (that's how I got mine), you couldn't even call it security by obscurity!
The "ARPANET" episode of "The Americans" really missed the boat about how easy it was to break into the ARPANET. I didn't even have to kill anyone! [3] [4] Makes me wonder about the part about squeezing your anus... [5]
Finger is much better than HTTP from a security point of view, because it’s incomparably less complex (does one thing only), which eg removes all the risks coming from scripting, and makes sandboxing trivial - on both ends of the connection.
True, most finger implementations are trivial. But there is nothing stopping you from creating a finger daemon which does something dynamic based on the query the user sends; the same set of vulnerabilities is possible. It is also possible to write a simple web server for a single purpose, without scripting, which is similarly secure to a finger daemon. Finger clients are also relatively secure as they don't do any interpretation of the data (which might mean they don't do any validation either, which would allow console injection via escape sequences...), but there isn't anything stopping a server from sending the response in HTML. And you can write an equally secure HTTP client for that tight use case.
In addition, we have tons of tools to work with HTTP - for debugging, proxying, caching, filtering, ... - and none of those for finger (which of course is a response to nobody using finger and everybody using HTTP), which allows better handling.
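For what it's worth, the client side of finger really is trivial - the whole protocol is "connect to port 79, send a name plus CRLF, read until close". A sketch (hypothetical user and host):

    import socket

    def finger(query: str, host: str, port: int = 79) -> str:
        """Send a finger query and return whatever the daemon prints back."""
        with socket.create_connection((host, port), timeout=10) as sock:
            sock.sendall(query.encode("ascii") + b"\r\n")
            chunks = []
            while chunk := sock.recv(4096):
                chunks.append(chunk)
        return b"".join(chunks).decode("utf-8", errors="replace")

    print(finger("someuser", "finger.example.edu"))  # hypothetical target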
HTTP does a lot of things right, but it’s almost completely oblivious to the contents of requests/responses. Things like DNS also have to specify what a query looks like, and the format for the response.
TLS was hard, the implementations kinda sucked, and it turned out that it was important.
So we went through a dark age where "Just open a socket and have at it" couldn't fly over WAN, which means there wasn't much point doing it at all.
QUIC will fix this. You can treat it like a bunch of TCP streams and UDP datagrams that are Just Encrypted.
I'm thinking about doing a toy IRC knock-off with QUIC. Having TLS standardized in the transport layer means less work for the app, and having multiple streams and datagrams means that odd stuff like file transfers or even voice chat could be tacked on without opening new ports or new TCP streams. Matrix is cool and all, but I want something you can just throw down for a few friends and some bots with a shared password. Matrix homeservers are too much work for one-off.
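A client-side sketch of that idea, assuming the aioquic package and its asyncio stream helpers (create_stream), with a hypothetical server and made-up ALPN tag - treat it as pseudocode-with-imports rather than a vetted implementation:

    import asyncio
    import ssl

    from aioquic.asyncio import connect
    from aioquic.quic.configuration import QuicConfiguration

    async def main() -> None:
        config = QuicConfiguration(is_client=True, alpn_protocols=["toy-irc"])
        config.verify_mode = ssl.CERT_NONE  # hobby server with a self-signed cert

        # One QUIC connection, TLS included; each chat room or file transfer
        # could get its own stream instead of its own TCP connection.
        async with connect("chat.example.net", 4433, configuration=config) as client:
            reader, writer = await client.create_stream()
            writer.write(b"JOIN #friends shared-password\r\n")
            print(await reader.readline())  # assumes the toy server replies line-wise

    asyncio.run(main())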
My old New Year's resolution was always "I'm finally gonna get into web dev". But I don't like web browsers. My new resolution will be "I'm gonna do web stuff, without web browsers."
I'm building a webrtc video based dungeons and dragons app so my sister can DM for my kids and I instead of using Skype/hangouts/discord etc.. which all suck because she's in . So all that to just say I did it in Electron and it was a cinch and I'm doing mobile versions with Ionic. Both are web dev without exactly using browsers, but still using browser engines. Could be fun.
I see at least 2 here which have commits from 2021.
To be blunt though, I don't like C. It does low-level better than many languages, but C's idea of high-level is too low.
If I had to use QUIC from C, I would pick a Rust or C++ library, write a wrapper that makes it basically into a single-threaded epoll knock-off, and then call that. If I had to do it in pure C, I'd give up.
For my pet projects, I want to use the tools that make me most comfortable, where I can slide between low and high exactly when I want. Web browsers struggle to go low, and C struggles to go high.
I hate C as a language, but I think it makes for a great API. Any language under the sun can bind to C. To bind to Rust or C++, you'd need to basically be a Rust/C++ compiler. From that perspective, I don't have a problem with having a Rust library with a C interface, except it might be harder to maintain for distros. gcc/g++ are everywhere at least.
msquic, the protocol implementation from Microsoft, would be a good start. I used it for a project of mine which used QUIC for something not HTTP/3-related.
I tried to bring some nuance to this. 10 years ago I would have said something more like "Fuck web browsers". I've toned back on the hate, but I still think they have serious shortcomings. (Not that native doesn't - Notice I said nothing about permissions and untrusted code)
tl;dr: I think web browsers have a Pareto problem. Building inside a web browser is pretty all-or-nothing. Their interfaces make the most common 90% of cases easy but the other 10% of interesting niche stuff totally impossible, or too slow to be useful. Just like how old PC games would play music by triggering the CD drive's "Just play this track" feature, browsers are fine for doing super-high-level stuff exactly the way most people want to do it. But if you want to do anything with that audio other than stream it unmodified straight to the speakers, suddenly the APIs let you down. And the whole time, you're taking on some of the biggest dependencies in history. There are two companies that make full-sized web browsers. One is a non-profit constantly struggling for funding while making awful PR gaffes and being hypocrites about privacy. The other is an openly evil advertising company.
Full:
I've done really basic stuff, like I learned how HTTP works, I wrote a few web apps with Rust, I made a game with TypeScript and WebGL. But it just never clicked for me.
My comment is missing a little context. There's basically two different niches:
1. If I want more than 2 or 3 people to use it, it has to run in a web browser. I don't mind doing WebGL and putting it on a static site. I can always do a native port if I feel like it. All the games I've made can be modelled as "Read keyboard input and run OpenGL commands", and browsers are enough for that.
2. If I want to really have fun with something, just for myself, web browsers are too big of a dependency and the restrictions are too tight. Sure, they'll get QUIC as WebTransport soon (IIRC), but I'm always gonna be limited by the dependencies.
I don't like Electron out of principle. It's just so big. Native development can be awful - Big C projects are not fun to build. But Rust is striving for "Just clone and `cargo build`". What's the `cargo build` for web stuff? I have to set up TypeScript or something else to shield me from JavaScript... If I were using Electron I'd have to learn all that...
I actually like local web UIs. I think because that offers flexibility. If I want to send a video stream from a browser, sure I "just" have to use WebRTC. But how do the WebRTC servers work? I haven't found satisfying documentation. What if I want to start with a webcam stream and then compose graphics into it before encoding? I know browsers have Skia, but is that exposed to me? Or is it like so many bad "Play an audio file" APIs where it breaks down as soon as I want to play a _remote_ audio file or _stream_ an audio file or _transcode_ an audio file.
So (sorry for the meandering) back to my toy IRC idea.
I can do that with HTTP and long-polling and it would kinda work. But it would just be a crappy Matrix clone. What I really want is to show off "Look, I think QUIC is going to bring back custom protocols; QUIC has not come to abolish the word of TCP but to fulfill it, and here's how it looks."
And I could do that with Electron, but like the "Let me do everything for you and make the 10% of niche cases impossible" API that can only play audio or send a webcam stream without any compositing, Electron presumes I'm going to have a GUI, and I'm also going to run it on the same computer.
Whereas if I make the first prototype UI with curses or a local web UI, I can forward it over SSH easily or run it when a GUI isn't available.
It sounds a lot like Gemini, but I think Gemini is a little misguided. It sounds like most of its proponents think that you can control a protocol by just having very noble goals. And it sounds like they are opposed to HTTP and QUIC not because the protocols are bad or even hard to implement (in the case of HTTP; QUIC actually is hard to implement), but just because bad entities use them. I think it's dangerous to believe that powerful tools are only for bad purposes. It will leave good people de-powered.
I like the way you're thinking here, I think the limitations you mentioned with gemini may stand... for me it's kind of like the limitations generally speaking with markdown. Doesn't leave much room for doing stuff like parsing the raw data when they aren't in a hierarchical structure with xpaths you can target and stuff like that, it just throws out so much baby with the bathwater that I'm ready to scream infanticide.
Any thoughts on fast experimental protocols like warp data transfer [1] or fast and secure protocol [2]? I know they're not exactly the most open things or well-supported in terms of what you're looking for, but I've been really wondering when we're going to start seeing pressure to relieve network congestion using stuff like this. I get that part of the idea of QUIC is generally to shift the optimization of network traffic from kernel space (for example fq-codel or CAKE) into user space, but does it offer wider improvements on bandwidth usage outside of that?
Stupid security people set up stupid firewall rules that blocked everything else (they are mostly going away nowadays, finally). And then NAT and ISP port blocking happened too.
And the phone thing was that on modern processors listening to the network is a serious battery sink.
Chris Torek had hacked our version of fingerd (running on mimsy.umd.edu and its other Vax friends brillig, tove, and gyre) to implement logging, and while he was doing that, he noticed the fixed size buffer, and thoughtfully increased the size of the buffer a bit. Still a fixed size buffer using gets, but at least it was a big enough buffer to mitigate the attack, although the worm got in via sendmail anyway. And we had a nice log of all the attempted fingerd attacks!
The sendmail attack simply sent the "DEBUG" command to sendmail, which, being enabled by default, let you right in to where you could escape to a shell.
Immediately after the attack, "some random guy on the internet" suggested mitigating the sendmail DEBUG attack by editing your sendmail binary (Emacs hackers can do that easily of course, but vi losers had to suck eggs!), searching for the string "DEBUG", and replacing the "D" with a null character, thus disabling the "DEBUG" command.
But unfortunately that cute little hack didn't actually disable the "DEBUG" command: it just renamed the "DEBUG" command to the "" command! Which stopped the Morris worm on purpose, but not me by accident:
I found that out the day after the worm hit, when I routinely needed to check some bouncing email addresses on a mailing list I ran, so I went "telnet sun.com 25" and hit return a couple times like I usually do to clear out the telnet protocol negotiation characters, before sending an "EXPN" command. And the response to the "EXPN" command was a whole flurry of debugging information, since the second newline I sent activated debug mode by entering a blank line!
So I sent a friendly email to postmaster@sun.com reporting the enormous security hole they had introduced by patching the other enormous security hole.
You'd think that the Long Haired Dope Smoking Unix Wizards running the email system at sun.com wouldn't just apply random security patches from "some random guy on the internet" without thinking about the implications, but they did!
From Wikipedia [0]: "Robert Tappan Morris is an American computer scientist and entrepreneur. He is best known for creating the Morris worm in 1988, considered the first computer worm on the Internet.
1988 – Released the Morris worm (when he was a graduate student at Cornell University)
> You could message the owner of phone directly through a URL. The request would be handled by server on phone!
I know it's not like a web server on the phone or anything, and likely questionable to mention it at all (since I made it), but I made a thing that lets you send notifications to a phone (or desktop) with a simple PUT or POST via curl [0]. It's definitely not a cool protocol since it's simple HTTP, but it's in the spirit of other Unix tools since it's just one thing to do one job.
A. People do still come up with protocols. They've just moved up a level of abstraction. Why deal with the problems http already solves if you don't need to?
B. We now have big enough actors (corporations) that there is less incentive to unify, though even this isn't entirely clear cut. A lot of companies do seem to be trying to create standards for things like iot devices with some success
I think both of those are good reasons, but there's also:
C. Web applications are the way most applications are used on desktop nowadays. The creator already needs to eat the cost of hosting the servers, so you might as well go for control and monetization over delving into making peer to peer work.
It used to be the case that HTTP was used by web browsers (clients) and web servers to exchange mostly textual content. During those times, HTTP was used as a pure application-layer protocol, riding on top of TCP (a transport-layer protocol).
These days HTTP is used for everything: server-to-server API calls, binary data transfer, IPC, etc. A lot of these things get implemented on top of HTTP, though. HTTP is used much more as a transport-layer protocol now, an abstraction layer on top of TCP.
How did we end up here? It appears that there was an organic need to build an abstraction layer that's easier to work with than TCP, which is probably seen as too low level and much more difficult to work with. Browsers supporting HTTP out-of-the-box with AJAX made this a widespread practice.
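That "HTTP as transport" usage typically looks like this in practice - the interesting protocol is whatever JSON the two services agree on, with HTTP just carrying it (hypothetical endpoint, stdlib only):

    import json
    import urllib.request

    payload = json.dumps({"method": "get_quote", "symbol": "ACME"}).encode()
    req = urllib.request.Request(
        "https://internal.example.net/rpc",  # hypothetical service endpoint
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    # HTTP supplies framing, TLS, proxies, caching headers, status codes;
    # the application-level "protocol" is just the JSON contract.
    with urllib.request.urlopen(req, timeout=5) as resp:
        print(json.load(resp))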
I think the reason is because corporate firewalls allow http(s) out to the internet, so everyone just used that, negating the whole point of ports in the first place.
Does adding a header-only (C++) HTTP server to projectM’s visualizer [0] count? I added a very basic HTTP server to switch presets from any basic web browser (including a Kindle Paperwhite) using a static HTML page. [1]
It’s been great fun hosting parties and projecting visuals across the room onto an opposite wall. Then passing around a very old Android phone to go through the presets.
I highly encourage others to do the same with their favorite applications; it’s fairly straightforward and makes them a pleasure to use.
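For anyone tempted: the "tiny HTTP control surface for an existing app" pattern is only a couple dozen lines even with the Python stdlib. A sketch (the /preset route and the state dict are made up; wire it to whatever your application actually exposes):

    from http.server import BaseHTTPRequestHandler, HTTPServer
    from urllib.parse import parse_qs, urlparse

    STATE = {"preset": "default"}  # stand-in for the real application state

    class ControlHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            url = urlparse(self.path)
            if url.path == "/preset":
                name = parse_qs(url.query).get("name", ["default"])[0]
                STATE["preset"] = name  # here you'd poke the real app instead
                body = f"preset set to {name}\n".encode()
                status = 200
            else:
                body = b"not found\n"
                status = 404
            self.send_response(status)
            self.send_header("Content-Type", "text/plain")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    HTTPServer(("0.0.0.0", 8080), ControlHandler).serve_forever()

Then any browser (or an old phone passed around the room) can hit http://host:8080/preset?name=... from a static page.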
More than finger, I so miss the times of USENET and the user experience of its hierarchical system of groups with threaded, text-only pull messages (I accessed it from gnus (the emacs newsreader) via my HP 9000 715 running HP-UX 9.03).
The Gemini protocol is an interesting re-imagining of Gopher: extremely simple text-only content, with client-side styling. Identification via client-side certs.
The most important difference a Usenet-like client brings is per-message read/unread status, which is what enables long-running threads and discussions. Without most clients (most users) supporting such read/unread tracking, you won’t have long-running discussions, which changes the whole character of the medium.
Per-message read/unread status in practice requires keyboard navigation (or as an inferior alternative, paging with read/unread tracking like in web forums), which doesn’t work for mobile.
The lack of read/unread tracking is also why we don’t have long-running discussions on HN.
The only remaining medium with per-message read/unread tracking we currently have is mailing lists.
>Why have we stopped making cool protocols like this? It seems the Internet had really cool protocols back in the day, and we had so many possibilities. Now it seems we are stuck with HTTP.
Because of corporate network firewalls. Make no mistake, they would gladly break HTTP if they could, but it became too important, so now everything has to piggyback on top of HTTP.
Enforce the end-to-end principle and new protocols will flourish.
The Bitcoin wallet world is using Tor to tunnel from mobile apps back to full node software operating on home servers. Not necessarily an example of your first question, but definitely in the spirit of the HTTP server on phone.
> At her thoughtful suggestion, we are shipping a workstation out with us
He sure picked the right woman.
The Web being the Web, there's more, courtesy Wikipedia: "He met his wife, Katherine Anna Kang, at the 1997 QuakeCon when she visited id's offices. As a bet, Kang challenged Carmack to sponsor the first All Female Quake Tournament if she was able to produce a significant number of participants. Carmack predicted a maximum of 25 participants, but there were 1,500. Carmack and Kang married on January 1, 2000, and planned a ceremony in Hawaii. Steve Jobs requested that they would postpone the ceremony so he could attend the MacWorld Expo on January 5, 2000. Carmack declined and suggested making a video instead."
Okay, now I need a similar story telling me how Carmack is full of shit too, to restore the balance. Otherwise I will keep feeling sad that we don't have more of Carmacks (and, if I may, less of Jobs/Bezos)
I'd suggest reading this[1], it's a quick read and paints a slightly different picture, probably still biased as everything is in society in general but you get a different side of the coin regarding Carmack. It's then up to you to decide if you like him more or less :)
> Anna Kang left Id a couple weeks ago to found her own company - Fountainhead Entertainment.
> It wasn't generally discussed during her time at Id, but we had been going out when she joined the company, and we were engaged earlier this year. We are getting married next month, and honeymooning in Hawaii. At her thoughtful suggestion, we are shipping a workstation out with us, so I don't fall into some programming-deprivation state. How great is that? :)
Fun Fact: MIT still runs a finger server at mit.edu (no subdomain!) that lets you look up anyone with a registered account. Years ago it was publicly accessible, but now you need to make your request from within the MIT network to get a response.
You could also get the schedule for movies shown on campus with `finger @lsc.mit.edu`. That finger server is actually still running but looks like it's not being updated.
Nope, I checked the LSC website and confirmed they are indeed showing movies! So it’s just that the finger server isn’t being updated: http://lsc.mit.edu/
(Note the HTTP link. The HTTPS cert is expired. But even if you bypass that warning I don’t think it’ll ever work due to other certificate errors. On almost all .mit.edu sites HTTPS is broken if you don’t have an affiliate client-side cert installed...)
Similarly, CMU's finger server is still publicly accessible, although almost no one except for professors in the CS department actually has a .plan file.
Seems to be a very basic social network based on concepts from an old protocol called finger. This website seems to replicate it as a bit of nostalgia.
As you might know, finger is an old protocol (actively used well before my time) which in essence showed information about the users on a server running a finger daemon (usually a Unix-like system).
As I understand it, when you queried for information about a user, a piece of the information you got back would be the contents of a ".plan" file in the user's home directory.
In this file a user could provide what we now call "status updates", which would then be promulgated by finger. You might have put your location, or what you were working on.
To avoid doubt, you can do all of this on the website and don't need to actually use the finger protocol at all. But for true nostalgia you'll need to use the finger command - which works!
.plan is the historic unix equivalent of today's status on social media.
you would write your status (or what you plan to do or whatever) into a .plan file in your home directory,
and you could use the finger command to query the contents of that .plan file from other users on the machine. or on any other accessible machine.
plan.cat appears to be an attempt to make that feature accessible through the web, complete with the ability to create an account and add your own .plan file.
not what i would go for. i would much rather prefer something similar that i put on my own webpage. (well, technically, all it would take is to agree on a standard url like https://my.home.page/plan or something like that.)
Wow, it's one thing to read something and harken back to my early days at school and work, but actually running finger from a shell and seeing something real come back caused this deep wave of nostalgia to roll over me. Part of me really misses those days.
I haven't thought about .plan files in years. This takes me back to setting those on the HP-UX account I had at college, and fingering my friends' .plan files daily. That was my first real encounter with actual multi-user, family-tree Unix, though I had been horsing around with some of the various Linuxen available in the mid to late 90s and early 2000s.
Mine usually had snarky movie or TV quotes. Usually from MST3K, Babylon 5 or Army of Darkness.
We had a couple of SGI machines (Indigos) for architectural modelling and rendering, and would use .plan to let others know that we had a big project rendering; man, those were the days!
That's a pretty pathetic offering (a copy of the Wikipedia article about Catalonia in Catalan). They could have at least tried to pick something the reader wouldn't already know about.
Yeah, I'm from catalonia and I'm ambivalent about this kind of usage. On the one hand, you have some meme sites that are cool, but are often just "exploiting" the tld to make for a nicer url. Aesthetics. As a hacker, it doesn't seem a problem at all... but then you have to consider the cultural issues too: catalonia is not that big, with 7.5M population. Should we consider the "inappropriate" use of .cat domains as a form of cultural appropriation? Might seem a bit exaggerated, but it's not a ridiculous question to ask oneself.
That being said, fundació.cat is allowing the sites to exist and doesn't seem to care that much either about what's going on in their domains (I sent them an email once asking for a list of all / the_most_popular .cat sites to find sites in catalonia worth promoting, and they don't even have that kind of information available), so if they don't care themselves, what can lowly citizens even ask for.
The objection isn't to using content from Wikipedia, the objection is that .cat websites are supposed to include Catalan content and this one chose to satisfy that requirement in the laziest and least interesting way possible. Why not pull from an article about something actually interesting and relevant to the site at least, rather than a couple intro paragraphs in Catalan about the place where most speakers of that language already live? It's not a huge deal but it seems like a missed opportunity.
Afaik, it's up to the country to determine the rules for the tld. Some require strict rules (eg, .us requires making public a current mailing address, something making me consider getting rid of mine as I don't want that so public), and others (eg, .io and .fm) simply use it as a way to make some money.
To be accurate, it is not that you have to have content in Catalan (although that seems to be the _main_ purpose of the sponsored TLD), but you have to "be a part of the Catalan Linguistic and Cultural Community" as set forth in the registry agreement between ICANN and domini.cat [0]
My alma mater, Grinnell College, has a still-active social network, Grinnell Plans, which extended from .plan usage for social networking on the college's VAX computer system.
Nice to see another Grinnell alum in the wild! I want to try Plans, but lost access to my @grinnell.edu email after I graduated. What do you think about it? Should I check it out?
Man, this takes me back. I had a .plan file that was an ever-growing list of quotations, often in dialogue with each other. The only one I still remember was one from a friend who said, "you should put more quotes in your finger file."
Let's call it pretty open... I couldn't find one either, years ago, so I kind of went with what works for me.
Note that .plan files aren't exactly a micro-blogging platform, as some people seem to treat them. To me anyway, it's a way to capture work and todos in a simple digital journal.
This is the standard I followed for myself...
This is a rolling plan file where things get moved on completion against a specific date (no backtracking, sliding tasks)
- is todo
* is done (for grepping)
bugs are tagged [bug]
other tags can be used as [tag]
~~is a cancelled~~ task or bug
// is a comment or thought
@next is upcoming work
@later is backlog
if it is not done, it's in next, later or ~~cancelled~~
date is YYYY-MM-DD-ddd (weekday) followed by a rough recorded timesheet
:0000-0000-XXm as in :start-end military time -minus XX mins of AFK.
timesheet that is parsable by another program to get time spent /week /month /total (see the parsing sketch after the example below)
For example...
### 2021-01-10-sun :0900-2100-50m
* ditched ~~passport~~ [wtf]
* auth via bcrypt and jwt tokens
* new vue app, trying water.css - nice [noteworthy]
* JSDoc is awesome, makes typescript a lot less needed [noteworthy]
* sign in with email / pass against db
* register against db (not in vue yet)
* validating jwt tokens properly
* clean up package.json
Note that none of this is a replacement for actually doing the work ;)
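Since the timesheet suffix is meant to be machine-parsable, here is a rough sketch of what that parsing could look like, assuming heading lines shaped like the example above:

    import re

    # Matches headings like "### 2021-01-10-sun :0900-2100-50m"
    HEADING = re.compile(
        r"^### (\d{4}-\d{2}-\d{2})\S* :(\d{2})(\d{2})-(\d{2})(\d{2})-(\d+)m"
    )

    def minutes_worked(line):
        """Return (date, net minutes) for a heading line, or None if it doesn't match."""
        m = HEADING.match(line)
        if not m:
            return None
        date = m.group(1)
        sh, sm, eh, em, afk = (int(x) for x in m.groups()[1:])
        return date, (eh * 60 + em) - (sh * 60 + sm) - afk

    print(minutes_worked("### 2021-01-10-sun :0900-2100-50m"))  # ('2021-01-10', 670)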
It was just text. There was no definitive markup format, just what people conventionally used in other text-based forums (like newsgroups) at the time. So you'd have a variety of potential bullets (*, +, -), things like *foo* or _foo_ for types of emphasis, and whatever plaintext formatting they wanted besides that.
We used to run a site at quakefinger.com that scanned all the Quake and eventually gamedev .plan files. It was awesome. Sadly the Wayback Machine doesn't go back to the earliest days but you can get the flavor here:
I actually love finding simple, durable old sites that are still up. Sometimes the actual end product is still ghastly 90s graffiti art but the backend is usually so beautifully clean.
Times New Roman font on a white background. Every time I come across a site like this, these adjectives come to mind – robust, unadorned, resilient, pragmatic, resourceful, efficient, functional and fast.
When I was in college (1991-1995), I used to enjoy running finger on the unix terminals at the library and reading everyone's .plan. It was popular to put ASCII art in your .plan, which I always enjoyed. I collected all that ASCII art and, in 1994, made a website that featured all of it. It no longer has my college URL anymore, but it's still online and gets a respectable amount of daily traffic. (Although interest in ASCII art has steadily waned, since fixed-width fonts aren't really a thing on mobile devices.)
If I remember correctly, you could also finger some of the soda machines on campus and check their inventory.
My first impression of this was: I thought it was some sort of doxxing site where people's online pseudonym was matched with their legal name. Oh how I was wrong.
Hmf. I wanted to cobble together a little thing that could update the plan from a shell. Using curl, I can't get anything other than a 500 reply when posting to /login, even after storing and sending the cookie from an initial get request, retrieving the csrf token for the form, and matching all the headers from a normal browser session.
If anybody manages to get a working login with curl, I'd love to see the magic incantation you used.
Even better on MacOS is that if you create a .plan file in your home directory then you can read the file using the finger command! (the port is blocked for external access)
MacOS finger also reads .forward, .project and .pubkey
Twitter's version of it, you might say, is putting the whole world in one flat namespace.
Naturally (as per Tobler's law), we care more about our direct environment, but Twitter never created a view like finger where you could only see the direct members of your own lab, all content is external/public. I think that's because Twitter wasn't planned, but happened as an accidental side project. A more careful design would have created a multi-tiered visibility level with ways to choose what visibility is best for a message (default: local).
It would be cool to create .plan files dynamically using content pulled from social media status updates for people who do not maintain their own .plan.
Much as I bemoan the loss of elegant text-based protocols like this, I am simultaneously thankful that I can interact on a platform that retains a great deal of the spare, text-heavy aesthetic and doesn't come riddled with various "social" integrations, analytics, and auto-playing video trash.
A social network(?) based on .plan files -- back in the day, this was how you let people know your status; you'd use the finger [0] command to get them for a user.
The 2001 surprise indie drama hit “Freddy Got Fingered” tells the story of Frederick Gauswurth, an MIT professor whose melancholy plan updates about aging gather a surprising audience among computer science students.
[0]: https://linkdekho.in/254nl