
Here is a more interesting question:

How would you reinvent Usenet?

What Usenet did well was that it was completely decentralised, had zero cost of engagement (despite 'hundreds, if not thousands of dollars'), and was everywhere.

What Usenet did badly was that there was a complete absence of identity management or access controls, which meant no accountability, which meant widespread abuse; and no intelligence about transmitting messages, which meant that every server had to have a copy of the entire distributed database, which meant it wouldn't scale.

It's a tough problem. You need some way to propagate good messages while penalising bad messages in an environment where you cannot algorithmically determine what good or bad is, or have a single unified view of all messages, all users, or even all servers. And how do you deal with bad actor servers? You know that somewhere, there's a Canter and Siegel trying to game the system in order to spam everyone with the next Green Card Lottery...




I think reddit is the reinvention of USENET. It is mod-heavy and has enough critical mass of users to provide excellent results from its upvoting system. And many subreddits are extremely well maintained with a very high signal to noise ratio.

It even has its equivalent of alt.binaries.pics.* if one is so inclined.


Sort of, but voting rearranging the chronological stream of conversation makes it significantly different, IMO. There is also the phenomenon of the funniest image tending to win the votes. (Barring excellent moderation, but Reddit does very little to make moderation easy, or even to set goals for it.)

That's not great for discussion, but then Reddit was always designed as more of a system of briefly commenting on URLs than actual discussion.


This is only true for the front page and for things like /r/AdviceAnimals. The front page is, in and of itself, a separate phenomenon from the rest of the subreddits, in my opinion.

For many subreddits, there are truly fantastic discussions that are very relevant to the subreddit topic. /r/askHistorians or /r/askScience, for example, have an extremely high signal-to-noise ratio.


See also Aether, a decentralized Reddit-like application:

http://getaether.net

(Not involved with it myself, just thought it was relevant.)


Isn't reddit centralised? (Never used it myself.) If so, that disqualifies it.


reddit's low number of mods, and of users who browse r/new, creates the very opposite of usenet.

reddit is incredibly unilateral.


A few years ago I created http://www.newswebreader.com (still functioning), a website which is a web frontend for USENET. It has an NNTP server in the background connected to other NNTP servers, and it displays groups, headers and posts similar to three-pane Thunderbird. You can create an account and subscribe to groups, and it remembers which messages you have read.

The idea, in the end, was to make a frontend to USENET that would look like Stack Overflow, with voting, and your replies would propagate back to USENET.


> How would you reinvent Usenet?

Already done, it's called reddit. And the main problem with Usenet was its replication architecture and not its identity/authentication.

reddit doesn't have any identity system in place and it has hundreds of millions of users.

reddit improved on Usenet by adding voting, which is something that at least one Usenet client tried to implement (gnus) but which should have been implemented in the architecture itself.


You don't have to penalize bad messages. Just don't link to them. Curation and moderation seem to be higher level problems that don't need to be specifically addressed by underlying storage/transport layers.

ipfs[1] is an interesting project that could be used to develop applications in this area.

[1] http://ipfs.io


"Server not found." Not a solid start to a Usenet replacement.


Note that www.ipfs.io does not work. Did you perhaps type in www.ipfs.io rather than what was linked?


I clicked your link, then tried a few variations of it. Didn't work. It's online now. That means they can't do HA and rolling updates on the cheap, despite all the software/hardware available to do so. You can't rely on them for that, but they'll pull off:

"This forms a generalized Merkle DAG, a data structure upon which one can build versioned file systems, blockchains, and even a Permanent Web. IPFS combines a distributed hashtable, an incentivized block exchange, and a self-certifying namespace. IPFS has no single point of failure, and nodes do not need to trust each other."

Wouldn't rely on it for production. I'll go back later and check it out for curiosity, though.


Little bloopers like this always confuse me. The people who can create a "new peer-to-peer hypermedia protocol" can't configure basic DNS?


See, the thing is, something.com and www.something.com are different DNS records.

As far as DNS is concerned, there is nothing special about www. It could just as well be bob.something.com.

There is a cultural expectation (mostly from people who started using the internet after the late '90s) that www.something.com goes to the same place as something.com, but as far as DNS is concerned, the two are completely different records.
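A quick way to see it for yourself (a rough sketch using Python's standard resolver; example.com is just a stand-in name):

    import socket

    # The apex name and the www name are two independent DNS lookups;
    # neither one implies anything about the other.
    for name in ("example.com", "www.example.com"):
        try:
            print(name, "->", socket.gethostbyname(name))
        except socket.gaierror:
            print(name, "-> no answer (the records really are separate)")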

(In the late '90s, one of my tasks at my first programming job was to write a patch to mod_vhost_alias to implement company policy, e.g. to make www.ourcustomer.com go to the same place as ourcustomer.com. The patch was required because www.ourcustomer.co.uk also needed to go to the same place as ourcustomer.co.uk, so I couldn't just take the rightmost three chunks.)

The upshot is that people who have been around longer, and who like to be curmudgeonly about it will often configure www.mydomain.com and mydomain.com to go to different places, because they are different records. (of course, some would say that this is so they have a chance to explain this, and a chance to feel superior to those who need this explained.)


I get that this is how it works, and maybe for some reason people like to treat www as any other subdomain and send it somewhere else - but is there any reason, beyond simply not configuring DNS, to just blackhole www traffic like that site does?


If you are actually trying to understand this phenomenon, I suggest checking out the Silicon Valley LUG webpage. It's at http://www.svlug.org - http://svlug.org now has a 'hey stupid' note that redirects after a few seconds. This page was put up after a lot of moaning from some of the older LUG members who thought that the normal user expectation that www.something.com and something.com would go to the same place was, well, stupid, and a sign of the sort of person we don't really want or need to communicate with.

Of course, this is the opposite of the bit people in this thread were complaining about. http://www.svlug.org has always been live, it's http://svlug.org that was dark until the youngsters complained.


A well-understood phenomenon in DNS that most admins take care of to ensure users with a reasonable expectation end up in the right place. Further, it's a common case an admin should account for. Then there are these admins and their apologists...

And another try has root working but not www. Need I say more lol?


I get that they're different records, www just seems like the one subdomain that everyone expects to be synonymous with the root. Even if that isn't where you want your resources to "live," it's an essentially free way to help people get to your site -- like googel.com redirecting to google, except, as you said, with like 15 years of ingrained user training.


Maybe there are also some lessons to be learned from the FidoNet era.


Defining the goals is a key aspect. If re-invention is what we desire, then I would like to take a shot at outlining the positive aspects of usenet, as well as the negatives.

Positive:

* Anonymity possible (to an extent)

* Moderation possible (to an extent)

* Caching of desired content at the network edge

* Binary data (though obviously no more uuencode/yEnc/etc)

* Libre (as in freedom of speech)

* Free (as in beer)

* Useful, if probably illegal, content

* Distributed

The negatives:

* Impersonation/other false claim to identity.

* Spam

* Illegal content (to whom? how to identify? intractable)

* Flame wars

* Difficulty of setting up a 'feed'

I'd like to take a small stab at these various problems.

For identification I would specify the use of public key cryptography; it's the only decentralized option I know of. OpenPGP with some extensions (i.e. ed25519 signing keys) seems to be the obvious choice.
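As a rough sketch of the idea (raw ed25519 via the Python cryptography library, not actual OpenPGP packets; the post format here is made up):

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # A poster's identity is nothing more than a keypair; the public key
    # is the identity, and there is no central registry to consult.
    identity = Ed25519PrivateKey.generate()
    public_key = identity.public_key()

    post = b"Subject: reinventing usenet\n\nhello, world"
    signature = identity.sign(post)

    # Any server or reader can check the post against the claimed identity.
    try:
        public_key.verify(signature, post)
        print("signature valid")
    except InvalidSignature:
        print("signature invalid, discard or down-weight the post")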

With identification in place, spam filtering technologies can also be applied. Have users 'file' copies of messages into several training bins via flags. Flags would be ternary-state entities (true/false/null): Liked, On Topic, and 'harmful content' (the catch-all would be used, in a design sense, to cover any type of illegal content; however, for some groups that content /is/ the signal, so this is meant to inform users so they can choose, not to be a nanny for them).
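A minimal sketch of what one user's 'filing' might look like (invented field names; true/false/null maps naturally onto Python's Optional[bool]):

    from dataclasses import dataclass
    from typing import Optional

    # Ternary flag: True, False, or None (no opinion expressed).
    Flag = Optional[bool]

    @dataclass
    class Feedback:
        """One user's filing of a single message into the training bins."""
        message_id: str
        liked: Flag = None
        on_topic: Flag = None
        harmful: Flag = None  # catch-all; informs readers, doesn't censor

    vote = Feedback(message_id="<abc123@news.example>", liked=True, on_topic=True)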

The above tagging would allow for aggregation to determine the 'health' of a data-pool, as well as how useful it was to the user base of a given server.

Data pools would, in themselves, be another type of tag. The built-in base tags defined above would be the only 'required' ones, but a firehose of all data is crazy, so tags (similar to keywords) would also be attached. Advanced users (anyone who provides 'detailed' feedback) could 'vote' on the accuracy of applied tags, including the base tags (which would be inferred as necessarily existing).

Base tags become 'groups' in this distributed database.

Critically, servers aggregate and thus anonymize the tag weighting of their own userbase (even from their own userbase).

Every tag-sync period, an enumeration of all non-default tags (and their yes/no vote counts) would be computed and the result published.
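A sketch of what that computed digest might look like (invented structure; the point is that only aggregate yes/no counts ever leave the server):

    from collections import defaultdict

    def tag_digest(votes):
        """votes: iterable of (tag, vote) pairs, where vote is True/False/None.
        Returns {tag: (yes, no)} for one sync period; abstentions (None) are
        dropped, and nothing per-user is ever published."""
        counts = defaultdict(lambda: [0, 0])
        for tag, vote in votes:
            if vote is True:
                counts[tag][0] += 1
            elif vote is False:
                counts[tag][1] += 1
        return {tag: (yes, no) for tag, (yes, no) in counts.items()}

    print(tag_digest([
        ("retro-computing", True),
        ("retro-computing", True),
        ("harmful", False),
        ("off-topic", None),  # abstention, contributes nothing
    ]))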

Also published would be a list of the other 'servers' which the current server is aware of. SOME of these would be replication servers (which would have a non-zero weight that isn't required to be published), while others are just servers known by other servers. Each entry would have an age: the last time that remote server's tag stats were successfully polled (thus low-age entries are likely to be replication sources, BUT might just be 'validation' of other servers as obfuscation).

Servers might only share post contents with authorized connections. Anyone able to connect would be able to source the other server and therefore replicate whatever tagged data it chooses to cache. The other server may require something like providing account data before it will sync your server's userbase stats. Comparing the relative accuracy of stats would let it determine whether your userbase is real or not, as well as how your userbase votes on things its own userbase does not. This is why (semi-anonymous) peering between even not-like-sized servers would be permitted, particularly if your own server is frugal and normally doesn't download things that haven't been voted on.

Obviously, server-to-server communication would involve the automated use of signing keys /for the server/.



