
Hmm, they are sort of different things. Freenet basically has hash-addressed content plus some mapping between human-readable strings and the hashes, so you/can/refer/to/stuff/like/this, making it easy to use HTTP on top of Freenet for navigation. In contrast, IPFS makes the hashed content itself responsible for navigation by using git-like objects -- if you understand how git objects work (https://git-scm.com/book/en/v2/Git-Internals-Git-Objects) then you understand that this lets you navigate an immutable tree of content and an immutable history. On Freenet, by contrast, particular files are immutable, but that's as much as it guarantees.
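
To make the git-object analogy concrete, here is a toy sketch in Python -- not IPFS's actual Merkle-DAG encoding, just the shape of the idea. "Blobs" and "trees" are stored under the hash of their bytes, and a path is resolved by walking from hash to hash:

    import hashlib
    import json

    store = {}  # toy content-addressed store: hash -> raw bytes

    def put(data: bytes) -> str:
        """Store an object under the hash of its content; return the hash."""
        h = hashlib.sha256(data).hexdigest()
        store[h] = data
        return h

    def put_tree(entries: dict) -> str:
        """A 'tree' maps names to child hashes, like a git tree object."""
        return put(json.dumps(entries, sort_keys=True).encode())

    def resolve(root: str, path: str) -> bytes:
        """Walk /a/b/c down immutable trees to a blob, the way git does."""
        h = root
        for part in path.strip("/").split("/"):
            h = json.loads(store[h])[part]
        return store[h]

    # Build /docs/readme.txt and navigate to it purely by hashes.
    readme = put(b"hello")
    root = put_tree({"docs": put_tree({"readme.txt": readme})})
    assert resolve(root, "/docs/readme.txt") == b"hello"

Since every object hashes its children, changing any file changes every hash up to the root, which is exactly what gives you the immutable tree and history.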

They also differ in the way routing works. On Freenet you ask a (mostly) random neighbor whether they have the file with the hash you want. If they don't, they ask another (mostly) random neighbor. This can go on for a while, until the request either finds the content or hits a maximum number of hops, at which point it backtracks. The only point of these Rube Goldberg shenanigans is anonymity. Since IPFS is more concerned with performance, it flips this on its head: instead of blindly asking nodes for content, it carefully keeps track of what peers advertise they're looking for; aside from being much more efficient, this also lets you choose not to do business with leechers, as in BitTorrent.
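
In toy form, that lookup is a hop-limited depth-first walk over a hypothetical Node graph (purely random peer choice here; as a reply below points out, real Freenet biases the choice by key-space distance):

    import random

    class Node:
        def __init__(self):
            self.peers = []   # neighboring nodes
            self.files = {}   # hash -> content held locally

    def lookup(node, key, hops_left, visited=None):
        """Ask a random neighbor; on a dead end, backtrack and try the next."""
        visited = visited if visited is not None else set()
        visited.add(node)
        if key in node.files:
            return node.files[key]
        if hops_left == 0:
            return None
        for peer in random.sample(node.peers, len(node.peers)):
            if peer in visited:
                continue
            found = lookup(peer, key, hops_left - 1, visited)
            if found is not None:
                return found  # success propagates back along the request path
        return None           # dead end: the caller backtracks

No node on the path learns whether its neighbor was the original requester or just another relay, which is where the anonymity comes from.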

Maybe IPFS is a reinvention of a past technology, but certainly not Freenet. (Does anyone know of something closer?)

Freenet paper: http://www.cs.cornell.edu/courses/cs414/2003sp/papers/freene...

IPFS paper: https://github.com/ipfs/ipfs/blob/master/papers/ipfs-cap2pfs...



> The only point of these Rube Goldberg shenanigans is anonymity. Since IPFS is more concerned with performance, it flips this on its head

Right, but therein lies the biggest issue that holds these systems back. Do you:

- Replicate and cache data freely between nodes, and by doing so open up scenarios where unpleasant content is stored on and served from people's nodes without their consent, OR

- Limit replication and storage to elective manual choices made by the user and/or recent data they have explicitly accessed, and in doing so severely compromise your system's ability to retain and serve data as that data ages?

IPFS in its current state is prone to as much bit rot as the web as a whole, if not more: when nodes drop offline, the content they had pinned is unlikely to be present on any other nodes unless the original host has explicitly replicated it to other nodes they also control and pin content on.

The only solution IPFS currently has for this is manual, elective pinning of content by other network participants. Realistically, if your replication and robustness scheme depends on manual user intervention, it's not going to find wide adoption.

All of this is fine; IPFS still has usage scenarios it serves well in its current state. But as for the ideas being bandied about that it's producing a censorship-resistant, bit-rot-resistant, persistent storage infrastructure that might replace HTTP... nope, not unless a novel solution to this specific problem emerges.


Very interesting, thanks for the answer. I'll read the papers you linked.


> Freenet basically has hash-addressed content plus some mapping between human-readable strings and the hashes, so you/can/refer/to/stuff/like/this, making it easy to use HTTP on top of Freenet for navigation. In contrast, IPFS makes the hashed content itself responsible for navigation by using git-like objects [..] On Freenet, by contrast, particular files are immutable, but that's as much as it guarantees.

Actually, there isn't that much of a difference here. Freenet manifests are analogous to git trees: knowing the CHK (content hash key) of a manifest file gets you the metadata that identifies all the files under the tree. It's all immutable.

There are some noteworthy differences, though. One that stands out in particular is that Freenet breaks large (>32 kB) files into chunks. Everything is encrypted too, so finding the file you want is not quite as simple as taking the hash of the plain, unencrypted file.
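
To illustrate why hashing the plaintext isn't enough, here's a loose approximation of the CHK idea in Python. The toy XOR keystream stands in for the real cipher, and the actual construction differs in detail, but the gist is: each chunk's encryption key is derived from its own plaintext, while its network address is the hash of the ciphertext:

    import hashlib

    CHUNK = 32 * 1024  # files larger than this get split

    def toy_cipher(key: bytes, data: bytes) -> bytes:
        """XOR with a SHA-256 keystream; a stand-in for the real encryption."""
        out = bytearray()
        ctr = 0
        while len(out) < len(data):
            out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
            ctr += 1
        return bytes(b ^ k for b, k in zip(data, out))

    def store_file(data: bytes, store: dict) -> list:
        """Split, encrypt, and address each chunk by its ciphertext hash.
        Returns the (key, address) pairs a manifest/splitfile would record."""
        refs = []
        for i in range(0, len(data), CHUNK):
            chunk = data[i:i + CHUNK]
            key = hashlib.sha256(chunk).digest()    # derived from plaintext
            ct = toy_cipher(key, chunk)
            addr = hashlib.sha256(ct).hexdigest()   # what the network routes on
            store[addr] = ct
            refs.append((key, addr))
        return refs

    def fetch_file(refs, store: dict) -> bytes:
        return b"".join(toy_cipher(key, store[addr]) for key, addr in refs)

    store = {}
    refs = store_file(b"x" * 100_000, store)        # ~100 kB -> 4 chunks
    assert fetch_file(refs, store) == b"x" * 100_000

Nodes holding a chunk see only ciphertext and its hash; without the keys from the manifest, they can't read what they're storing.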

Either way, you can easily build a git-like hierarchy of immutable content (and history) on Freenet, and this is more or less what happens under the hood anyway with manifests and splitfiles.

As a slight deviation from the norm, Freenet can also address (signed) content by its public key rather than by content hash. This is one way to enable mutable data, not entirely unlike git heads. These keys can still link to immutable content-hash keys.
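
A sketch of that idea (using the third-party cryptography package for Ed25519 signatures; the real signed-key construction differs in its details): the slot's address is derived from the public key, so the address stays stable while the signed payload under it changes between versions:

    import hashlib
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey, Ed25519PublicKey,
    )
    from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

    store = {}  # address -> (payload, signature, raw public key)

    def publish(priv: Ed25519PrivateKey, payload: bytes) -> str:
        """Address = hash of the public key; only the key holder can update."""
        pub = priv.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)
        addr = hashlib.sha256(pub).hexdigest()
        store[addr] = (payload, priv.sign(payload), pub)
        return addr

    def fetch(addr: str) -> bytes:
        payload, sig, pub = store[addr]
        # Any node can check that the owner signed this version; forgeries raise.
        Ed25519PublicKey.from_public_bytes(pub).verify(sig, payload)
        return payload

    key = Ed25519PrivateKey.generate()
    addr = publish(key, b"-> chk:... (v1)")   # point at some immutable CHK
    publish(key, b"-> chk:... (v2)")          # same address, new signed target
    assert fetch(addr) == b"-> chk:... (v2)"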

> They also differ in the way routing works. On Freenet you ask a (mostly) random neighbor whether they have the file with the hash you want. If they don't, they ask another (mostly) random neighbor. This can go on for a while, until the request either finds the content or hits a maximum number of hops, at which point it backtracks. The only point of these Rube Goldberg shenanigans is anonymity.

It's worth pointing out that Freenet does have a simple but powerful routing system. Each network node has a virtual location in key space, and requests are routed towards the nodes closest to the requested key. With careful selection of peers, the network topology can make for very efficient routing. E.g., one could have a small number of nodes "far apart" in key space to facilitate routing towards far-away keys, plus a larger number of relatively close nodes, so that when a request comes in "your general direction" from a far-away node, you're likely to have the right peer to route to.
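
In toy form, that greedy key-space routing looks something like this (locations as points on a [0,1) circle; greedy forwarding only, whereas the real network backtracks when greedy fails):

    class Node:
        def __init__(self, location: float):
            self.location = location  # virtual position in [0, 1) key space
            self.peers = []
            self.files = {}

    def circ_dist(a: float, b: float) -> float:
        """Distance on the unit circle of key-space locations."""
        d = abs(a - b)
        return min(d, 1.0 - d)

    def route(node: Node, key_loc: float, hops_left: int = 20, visited=None):
        """Always forward to the peer whose location is closest to the key."""
        visited = visited if visited is not None else set()
        visited.add(node)
        if key_loc in node.files:
            return node.files[key_loc]
        candidates = [p for p in node.peers if p not in visited]
        if hops_left == 0 or not candidates:
            return None
        best = min(candidates, key=lambda p: circ_dist(p.location, key_loc))
        return route(best, key_loc, hops_left - 1, visited)

The peer mix described above -- a few long-range links plus many short-range ones -- is a small-world topology, which is what keeps these greedy paths short.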

It is true that some randomisation helps with anonymity.

EDIT:

I'll add that the flip side of a network that essentially enables leeching is that it's also good for retention of popular (or "popular") items. Requested data is cached en route, so you get a sort of automatic load balancing. Soon enough, popular resources are likely to be held by whoever is nearby. People don't need to manually pin the content, and it's hard to directly DoS those who share the content you're "after": asking for it just makes it more available.
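
In the toy lookup sketched upthread, this en-route caching is a one-line addition on the success path:

    import random

    class Node:
        def __init__(self):
            self.peers, self.files = [], {}

    def lookup(node, key, hops_left, visited=None):
        """Hop-limited walk where every relay keeps a copy of what it returns."""
        visited = visited if visited is not None else set()
        visited.add(node)
        if key in node.files:
            return node.files[key]
        if hops_left == 0:
            return None
        for peer in random.sample(node.peers, len(node.peers)):
            if peer in visited:
                continue
            data = lookup(peer, key, hops_left - 1, visited)
            if data is not None:
                node.files[key] = data  # the one-line change: cache en route
                return data
        return None

After one successful fetch, every node between the requester and the holder can answer the next request locally, so demand itself replicates the data.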

I think this is important to consider if we're discussing reliability of distributed networks.

I'm not saying what the implications are -- for they can be good or bad.


Thanks for the correction! I don't think manifests were mentioned in the whitepaper, but I found some info on the wiki: https://github.com/freenet/wiki/wiki/Simple-Manifest

It looks like I will also have to learn more about IPFS's routing. Clearly Freenet's routing has some merits, and I would hope that IPFS's is strictly faster since it sacrifices privacy, but I don't understand it well enough to make sense of how it scales.



