FTA:

> Unfortunately, some of the open technologies for building more local-first, p2p applications just aren’t there yet. Like Peter’s point on webRTC.

I guffawed at this. I mean, seriously. In the days before the internet was widespread, like the mid-90s, there were actually multi-player games that worked over LANs, both IP and IPX. Somehow, people were able to configure multiple computers to talk to each other with these funny numbers called "addresses" or something. DNS existed of course, but you could also add entries to /etc/hosts, Novell had a thing, everyone had a thing.
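For the younger crowd, "adding to /etc/hosts" literally meant putting a couple of lines like this on each machine (names and addresses made up, obviously):

    # /etc/hosts on every box on the LAN
    192.168.1.10   quake-box
    192.168.1.11   fileserver

and then your game or file share just used the names.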

The problem is that somewhere around the mid 2000s or early 2010s we forgot that anything networking-related could be based around non-web protocols and stacks. What a disservice we've done ourselves.

Not that I'm advocating people type in IP addresses or have to set up their own DNS servers or admin their own LANs, but wow, it's like part of our collective brains is just missing.



I think it’s more than forgetting. I didn’t forget, for sure. But what is easily forgotten is why we moved from local p2p to internet-brokered services. It’s mostly simplicity. By hosting all the state and discovery, matchmaking, etc., in a remote service outside the local network and RF environment, you dramatically simplify everything to “configure the local tcp stack.” The ’90s-2010 world wasn’t awesome and things were super fragile, tedious, poor compatibility, etc etc. The move to a service-based approach meant protocols were homogeneous and compatible, implementations were simple, state management easy, reliability near perfect, etc. You even get to ensure everyone has basically the same software, since all the real software is in the service and there’s exactly one version on earth. In the bad old days you referred to, every device around you had some ragged-edge version dependency graph of everything from the device to its network card through to the end user experience. It was a miracle anyone ever played Red Alert on the LAN.

People always talk about “how expensive” it is to send packets to Oregon to turn on your lightbulb in Seattle. But the cost is already fully loaded and paid for. If you didn’t notice it, maybe that’s because it’s actually not more expensive? Yes, latencies are higher, and that’s literally the only downside I can imagine. But it’s a lot more reliable, more flexible, simpler to deploy in any environment, and more robust.


> It’s mostly simplicity.

You make points below that are good, but let's be clear. You're talking about simplicity of interop and configuration, which has gotten better, rather than simplicity of the software stack. So many variables changed when we adopted the web stack that I don't think simplicity along any other dimension has actually been achieved.

If we had instead focused networking protocol design on the simplicity of configuration, discovery, and naming, we'd have been better off. It's not like you need an Electron stack, an Apache stack, and a cloudy (planet megascale!) datastore for that.


I think the simplicity story is more than that. First, state management is a tricky thing to do in any environment, but in an environment of a billion random variables it’s tedious if not impossible. I’d assert that each of those complexities you highlight is actually a solution to a specific challenge in peer-to-peer interaction between computers, and that they’re simpler in the aggregate than if they didn’t exist and we had just focused on, say, making Bluetooth not be a massive annoyance. The ability to simply ensure a tcp stack is configured and then be done with the entire end-to-end problem of peer-to-peer collaboration across platforms, devices, stacks, local RF environment, and LAN configuration, with durability and availability assurances included, is enormous.

I also think it’s kinda not true that protocols have stayed stationary in that time. Certainly Bluetooth has improved, there’s a lot of innovation on small-device ad hoc networks, and there’s even been a lot of innovation on internet protocols over the last 5 or so years.

My observation (having been at Netscape at the time) was that the real cause of the demise of network and internet protocol work, until recently, was the MSFT destruction of Netscape. We were a huge driver of protocol innovation and had demonstrated there’s gold in them thar networks. We hired a lot of protocol and standards developers and built products that implemented the specs as closely as our talent allowed. It was a heady time. Microsoft realized at the time that this push really threatened their core business model. They so publicly and roundly pwn’ed us out of existence that it created a massive chill in anything internet-tech related (most of us went into e-commerce as a result). It wasn’t until sort of recently, when Google self-servingly started bullying standards out the door, that things got unwedged again, and despite the kind of sad way Google primed the pump, things are really exciting right now on the internet.

I’ll note that our Mozilla play was a really important effort on our part to ensure the culture we had built around open internet standards and technologies could stay afloat into the present. I’ve been happy to watch Gecko and XUL and the rest grow from their sapling state to where they are now. It also prevented Microsoft from establishing a real hegemony in internet standards. But most of all, I am so happy to see a real C/C++ replacement candidate (Rust) emerge from that effort. It was a real thumb in Microsoft’s eye to one-up their $1B free-browser anticompetitive play by taking our then-worthless IP and establishing a long-lived foundation around it.


> In the days before the internet was widespread, like mid-90s, there were actually multi-player games that worked over LANs, both IP and IPX. Somehow, people were able to configure multiple computers to talk to each other with this funny numbers called "addresses" or something.

In the scheme of things, very few people were able to set this up. It's definitely unfortunate that so much software no longer works without a server component you don't control, but one of the major reasons for it is that the typical user can now actually use it, because they don't have to set up the networking or understand much about it. A little later, software companies realised they'd actually rather keep control and stopped distributing the server component at all.


The bar is higher now, which is what has really changed. Your thing has to work on every network, through multiple levels of NAT, without copy-pasting IP addresses or editing system files, often across multiple platforms.

WebRTC is the focus because it won by a huge margin; other p2p protocols are a footnote. Everything uses WebRTC whether it actually has a web client or not.
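And even "just use WebRTC" still needs some rendezvous service for signaling, i.e. to swap offers and ICE candidates. A rough sketch of the browser side (TypeScript; stun.example.org and the signal() callback are placeholders, not any particular service or library):

    // Minimal WebRTC data channel setup -- a sketch, not a complete app.
    async function connectToPeer(signal: (msg: unknown) => void) {
      const pc = new RTCPeerConnection({
        iceServers: [{ urls: "stun:stun.example.org:3478" }], // STUN handles the NAT hole punching
      });
      const channel = pc.createDataChannel("game");            // the actual p2p pipe
      channel.onopen = () => channel.send("hello, peer");
      pc.onicecandidate = (ev) => {
        if (ev.candidate) signal({ candidate: ev.candidate }); // candidates still travel via your signaling server
      };
      const offer = await pc.createOffer();
      await pc.setLocalDescription(offer);
      signal({ offer });                                        // the peer's answer comes back the same way
      return { pc, channel };
    }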


What happened is that devs realized it's easier to write, maintain, and monetize multi-client software that runs primarily on the dev's computers and pushes some of the UI to the client than multi-client software that runs primarily on the clients' computers.



