Amino – The Public IPFS DHT Is Getting a Facelift (ipfs.tech)
134 points by dennis-tra on Oct 3, 2023 | 111 comments


> The “Public IPFS DHT” is henceforth going to be called “Amino”. This follows along with the trend from 2022 in the IPFS ecosystem to use more precise language to create space for alternative options

I'd argue that "Public IPFS DHT", if less catchy, is far more precise than "Amino".


As the quoted part mentions, if they'd call it "The Public IPFS DHT", there isn't really any room for someone else to create something that could replace it, because There Could Be Only One.

With a specific name for the specific implementation of a general concept, others could provide alternative implementations implementing the same concept.


I am probably missing something then, but as far as I can tell most of the value proposition of IPFS is the single universal DHT. If you remove or fragment it, now all you have is basically a worse BitTorrent.

Most of the interesting things I want to do with ipfs involve the dht, any sort of file transfer is usually a secondary concern.


> I am probably missing something then

Yes :) There can be multiple implementations using the same specification and still be compatible. For example, the new "facelift" they talk about in the submission article would be a new implementation, but still compatible with the old one, as they would (publicly) mostly have the same interface, so they can still talk.

There are many IPFS and libp2p implementations already, and they mostly aim first to be compatible with each other, so even if they work differently inside, they can communicate.

Similar to BitTorrent, where many clients implement the same specification and can therefore still exchange data, just like TCP or basically any other protocol.


Call it "The Public IPFS DHT v1.0" then ;)


Because that wouldn't be confusing if a novel implementation named itself "The Public IPFS DHT v2.0"?

Specificity >> descriptiveness, when you're aiming for a distributed future with multiple parties

As much as I hate AWS' jungle of names, I get why they did it. And it's probably better than any alternative


It's less precise because there are many ways one could implement a "Public IPFS DHT". "Public IPFS DHT" is a concept, Amino is a concrete instantiation of that concept.


Agreed. One frustrating thing about PL is that they seem to make odd decisions that detract or distract from their main value proposition. In particular:

- Filecoin is not interesting. IPFS and libp2p are interesting.

- Renaming IPFS-the-application to Kubo is confusing

- Naming the IPFS DHT "Amino" is confusing. Why does it even need its own name?

I really wish PL would go through the occasional contraction phase where it prunes the bulk of its initiatives and re-focuses on what it does amazingly well. IPFS and libp2p are truly amazing.


> Why does it even need its own name?

So that other DHT implementations can exist and potentially replace the existing one.

Same for go-ipfs being renamed. We generally don’t have web browsers named after the protocol they use. And with multiple ipfs clients, one of them being named “ipfs” is itself confusing.

Frankly, both should have probably happened years ago.


>So that other DHT implementations can exist and potentially replace the existing one.

What's wrong with e.g. "The IPFS DHT was using Kademlia and is now using Coral"?

The result is that there are now more names for the same thing, and the old names have changed. This is more confusing, not less. Ditto for "IPFS" and "Foo's IPFS implementation".


> What's wrong with e.g. "The IPFS DHT was using Kademlia and is now using Coral"?

Because that’s not what is happening.

The Kademlia-like DHT is not going away. It just has a name (“Amino”) so you can refer to it as one implementation relative to other implementations that will coexist with, not necessarily replace, the original.

> The result is that there are now more names for the same thing, and the old names have changed.

They gave a name to a thing that didn’t have a formal name, because it can compete with other implementations. Having distinct names for a service and an implementation isn’t uncommon. The only criticism is that they didn’t do it sooner.

All of this seems pretty academic. The majority of developers and vast majority of users won’t know or care about the branding or abstractions allowing for different implementations. The tools work the way they did, hopefully faster. That’s it.


>Because that’s not what is happening.

It was an example, but I think my point still stands: there is no confusion other than the one created by changing the names of things in-flight.

>All of this seems pretty academic.

Agreed :/


I think the point is that Amino is just one public IPFS DHT, so they renamed it so that other public IPFS DHTs can exist without confusion.


Is IPFS working these days? I was very excited about it eight years ago, to the point where I made one of the first IPFS pinning services, but lost all my interest. IPFS is a great idea, but the implementation basically doesn't work, and it certainly doesn't work to the point where people can be running the node locally.

It used to have tons of problems discovering content from other nodes on the network unless it was directly connected to them, and it broke often. It also didn't seem like Protocol Labs worked on any of these problems at all, focusing on launching a cryptocurrency instead.

Has it changed now?


I have a similar take with slightly more recent experience.

When it came down to it, the resource requirements for an IPFS node were pretty insane relative to the "value" provided, and by many accounts it still basically didn't "work".

I understand it's not the same thing at all, but in the days of running a web server on nearly anything that can handle many thousands of requests/sec, an IPFS node running on the beefiest hardware we could throw at it ate tremendous amounts of system resources and bandwidth for double-digit requests per second. Even then it would frequently time out and/or get into various unrecoverable states necessitating a service restart. We had to run a cluster of them and watch IPFS nearly melt the hardware...

We tried every IPFS implementation available and ended up having to use the "least worst" while also adding a lot of instrumentation, health checks, etc around it just to keep it up and running in some kind of consistent, usable fashion.


I briefly ran an IPFS node, I believe working towards the same project that you are discussing. It ate my home network: drove my packet loss into the 10% range and somehow convinced my core switch (a Brocade ICX6610) to send all traffic to every port. When I saw every port on my upstairs switch blinking like crazy and tcpdump showed traffic intended for a downstairs server arriving at my upstairs workstation, I pulled the plug and told free he was on his own.


This was for an internal project, but yeah your experience sounds about right.


Depends on how you define working :). I'm a 6+ year vet of the IPFS ecosystem, we work on iroh these days, which I think addresses many core issues with the protocol design: https://iroh.computer

The biggest challenges still unaddressed are twofold, imho:

1. The network is very forgetful. Stuff you added 24 hours ago is likely gone unless you've taken specific steps to keep it up. This is hard because all CIDs in IPFS have equal weight, which makes it very hard to cache intelligently.

2. The implicit promise that IPFS will resolve _any_ of the 86-100k new CIDs it sees daily in "normal internet" time (sub-second TTFB). This doesn't work in practice, because mapping content addresses to location-based providers who are under high churn is, well, very hard.

Both of these problems are "content routing" problems, which are the core of the "get me stuff from this hash, I don't care where" interface IPFS offers. It's hard. With iroh we just don't make that promise at all right now.


I hope you succeed!


Well, I guess it depends on your use case. As a general and public discovery/providing/downloading network, it's been kind of overloaded for the last few years, and it finally seems like Protocol Labs is putting some effort into solving some of the most biting issues.

In this case, it's about the process of adding content to the network. It was nigh impossible to add large directories/files, as your connections got overloaded with provide messages. This seems to batch things up and parallelize better, so it should at least make it easier to add content, and for the peers who want it to subsequently find it.

But the implementation still works very well when you're running your own networks, which I think is a much better use of the protocol anyway. So when building an application with IPFS, you're using your own network composed only of nodes that are actually relevant to your application, instead of connecting to the public DHT.

Unless your scale is really big, it'll work a lot better than using the already huge public DHT.


That's kind of a shame, the appeal of a public, peer-to-peer, content-addressable network was very high to me personally, because of its relative uniqueness. For a personal network, IPFS becomes just another technology I could deploy, out of many options.


Yeah, I definitely feel the same about the public parts, but I still find the public-but-private network part of it useful in itself. Both have use cases, but currently the public one just isn't feasible to use :/


I think it would be better to fix the public network than to split the network into millions of local networks...

The public design today still allows actual file transfers to stay on your local network; it is only metadata that goes over the public internet.


Sure, ideally that'd be the case. But today that kind of isn't feasible, as discovering and providing content is slow; hopefully it gets better in the future with changes like this and more.

But what I wrote is how you can solve the issue today.


I hope the irony of a protocol called "Interplanetary File System" being more suitable for local usage isn't lost on people :)


I hope you don't believe I wrote that IPFS is only suitable for local usage. You can have your own remote network, spanning whatever nodes/computers/servers you want, remote or otherwise :)


Funny enough, "interplanetary" refers to the ability to know that a file created on Earth would still hash to the same value if an alien created it on Alpha Centauri. You could both work independently and combine your databases, and the distance wouldn't matter.

That seems pretty local to me.


I also was very active in the early IPFS days. I think two points really contributed to your experience:

1. Success: IPFS got tons of usage early on, so scaling the software (which back then was mostly a prototype) was challenging, especially with Benet's initial BDFL stance.

2. The need to codify an incentives market, which then led to the creation of Filecoin, took a lot of effort, and setting up a trustworthy org around both it and IPFS got even more challenging.

So yes, working with IPFS was not plain sailing (and still isn't), but it seems that by now the two projects have been set up to start iterating again, and I see a lot of great work happening on both fronts, so it looks like a promising future here.

Source: I have worked and am still working with both IPFS and Filecoin as part of my business


> 2. The need to codify an incentives market, which then led to the creation of Filecoin, took a lot of effort, and setting up a trustworthy org around both it and IPFS got even more challenging

I see this as a non-goal: HTTP is doing fine without an "incentives market", and that's the sort of core layer IPFS is suited for. When I switch off my HTTP servers, there's no expectation that the resources they're hosting remain accessible; and the same is true for IPFS. The advantage of IPFS is that it allows resources to remain accessible, e.g. if someone else cares enough to host it too, or if I happen to have copies buried on some old boxen (without having to coordinate some load-balanced shenanigans up-front).

For example, we could avoid "leftpad" fiascos if software companies hosted their own dependencies (as in, contributed to ensuring their canonical URLs resolve; rather than the current practice of re-hosting copies at myriad private URLs, or routing their network through caching proxies).

Good luck to other projects which want to work on such a thing (Filecoin, etc.), but it's mostly orthogonal to IPFS itself.


I bet IPFS could have gotten by with a mirroring system. Sites publish a list of files that they want mirrored, and supporters can copy and host the files. I think IPFS could be more useful for software, and mirroring or caches would work well.

Also, IPFS was supposed to have clients host files the way BitTorrent does. My impression is that most users don't host, they just download. It is quite possible that adding money to the system reduces people's desire to contribute. I also got the impression that flaws in the client make it harder to host.


> The advantage of IPFS is that it allows resources to remain accessible, e.g. if someone else cares enough to ho...

I think there is a key point here. This is exactly the reason why incentives markets built around IPFS are so relevant. They democratize the act of saying "I care about that data". Nowadays you need to be a power user to express this, but with the world's content becoming more and more digital media, this isn't the realm of the power user any more.


> I think there is a key point here. This is exactly the reason why incentives markets built around IPFS are so relevant.

I fundamentally disagree. If someone wants to make filesharing easier, go ahead (personally I think dragging files into a "Shared" folder, like eDonkey/Kazaa/Limewire/etc. would be easier for non-"powerusers" than minting and securing a crypto wallet and trying to convince others of its value in exchange for storage space, but whatever).

To me, such projects seem about as relevant for IPFS as, say, Geocities is for HTTP.


I'm having difficulty following this train of thought. How is any of this related to consumer level users wanting to keep IPFS data alive?


> How is any of this related to consumer level users wanting to keep IPFS data alive?

I'm saying that "consumer level users wanting to keep... data alive" is unrelated to IPFS.

If someone wants to work in that space, they can do so in separate projects (there are already many). Such projects could make use of IPFS; but they could also use HTTP, Bittorrent, GNUNet, Dat/Hyperdrive, Freenet, ED2K, FTP, or whatever seems appropriate (ideally, all of the above and more).

Presumably such projects could encode their text using UTF-8, to 'democratise saying things outside of the Anglosphere'; but that shouldn't de-rail the Unicode consortium into a multi-year crypto pivot.


That's all fine, but the fact of the matter is that a lot of web3 content is built on IPFS and not on all those other technologies (partially because IPFS is truly permissionless), so the "just use another tech" argument is quite moot/academic.


> the "just use another tech" argument is quite moot/academic

What "argument"? I said that filesharing/storage-incentivising projects can build on whatever they like; and that they have been doing, for decades.

My "argument" is that the IPFS project/protocol is not such a project. It is a network protocol (or suite thereof; for peer discovery, data transfer, hash representations, etc.).

> the fact of the matter is that a lot of web3 content is built on IPFS and not on all those other technologies

So what? Hopefully those projects chose to use IPFS due to its technical features, its stated goals, etc. How's that working out for them?

Note that, in particular, those technical features and stated goals do not include getting other people to host your shit. If some "web3" projects have decided to build their crypto bullshit on top of such a fundamental misunderstanding, then that's hilarious! More realistically, I'm guessing they know full-well that's not how any of this works; and are just keeping the illusion going until their inevitable rug-pull.

Either way, that's not IPFS's problem (which, again, is a (suite of) network protocols).


Incentive markets aren’t “democratization”. Something like BitTorrent where people freely choose to contribute simply to help others is democratic.

An incentive market basically makes it a business.


What's not democratic about having the power to use and operate businesses?

I think this is the ideal way. Instead of limiting "democracy" to power users, you also include consumers this way.


Can you say a bit about using it for your business? Curious how people are using IPFS in the real world.


Apart from the obvious web3 stuff, there are some interesting use cases for classical enterprise software.

Caveat: my experience is limited to clusters below 100 nodes.

You can run your own private IPFS DHT if you want, and the protocol has some cool load-shedding features. There is also IPFS Cluster, which adds data availability features to your IPFS deployments. When we tried private IPFS networks, the cluster project was relatively early, and we found that managing DA in the application that owns the data ended up being more straightforward, so we didn't use that, but I've heard good things lately.

If you don't need to enforce access management across your internal data calls, it can be an interesting alternative to stuff like MinIO or Ceph. YMMV, but my experience was that running a private IPFS cluster can be easier and much more flexible than running a similarly scaled MinIO or Ceph deployment. It can be seen as a shift left, like DevOps is. Instead of having each internal app on your project depend on a data team that manages Ceph or another data storage and access technology, you run (most of) it yourself.


I’m actually using it for sharing enterprise components, so that a document saved in one company can be sent to another company and the document loads content from IPFS. All this without needing a central server.


We've been doing quite extensive measurements over several parts of the architecture, which you can find here: https://probelab.io/

Still lots to optimise, but I wouldn't say it's unusable. In fact, performance is pretty good for a decentralised P2P network.


Well said. I too have looked at IPFS a few times, but could hardly find a place to use it. I'm a big fan of distributed storage and self-hosting, but IPFS is far too static for anything useful, except maybe archives of important static blobs.


What exactly are you unable to build with IPFS that requires it to be even more dynamic than it is? I've found it flexible enough for most use cases I've had in mind, as long as you're flexible on how the architecture should look.


That's the point. I can't quite wrap my head around it. I can't see it as anything more than distributed object storage. So calling it a future web is kind of an exaggeration, imo. Not sure. I host very simple web pages myself, and yet they are pretty dynamic in output.

Can you point me to some interesting projects that utilize IPFS?


See https://awesome.ipfs.tech/ and https://ecosystem.ipfs.tech/ for some projects using IPFS


Thx. Seems I'm too old-fashioned or something. Most projects are out of scope. Some are interesting, and some are weird, like IPFS chat or push-to-talk.

I think I will stick to my good old web for now :)


It works great for my use case

Since you haven't looked, people are using Protocol Labs' crypto version of IPFS to pin on IPFS.

Filecoin+IPFS is far more free than any of the IPFS SaaS pinning services

and it has decent replication too

I serve over CDNs, of which there are many, and they cache well enough.

I use it to stay on Vercel and Netlify’s free tiers for my static assets, so my sites can have huge spikes in traffic but my static assets are not loaded from them.

It's free on free; a big use case for exploratory projects.

https://web3.storage does that filecoin+ipfs pinning


I'm quietly using it in a side-project of mine that is intended to provide a cloud-esque environment to a permissioned p2p compute cluster. In my case, it's basically providing S3-like functionality, which works rather nicely in a datacenter environment.


I mostly lost interest in it when I learned that it's possible for a file published to IPFS to simply blink out of existence one day in the future (if no one is left pinning it).

At that point I'd rather stick something in an S3 bucket and pay for it myself.


I didn't mind that too much, because IPFS is strictly better than S3 in that regard. IPFS isn't meant to make it so that you don't need to host your own files, but rather that I can seamlessly host your files too.

In that regard, it's much more available than any HTTP server.


A file on S3 will simply blink out of existence one day too, if you stop paying the one specific someone "pinning" it, so I don't think this point makes that much sense without further context?


I care about making things available to other people. My initial attraction to IPFS was that it looked like it could help me do that - but then I learned that just publishing a file to IPFS doesn't permanently solve that problem for me.

It solves caching/distribution, but it doesn't solve make-this-thing-available.

I've been using and paying for S3 for this purpose for 15+ years. I was hoping IPFS could offer a better alternative, but I don't think it does.

I might consider IPFS in the future if I need to distribute a prohibitively large file and my target audience are the kind of people who can access IPFS.


The Cloudflare gateway works pretty well for this honestly. You can send an HTTP request yourself to the Cloudflare gateway after pinning a file and Cloudflare will generally have it available.
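For illustration, fetching pinned content over plain HTTP looks roughly like this (hypothetical sketch; the gateway hostname follows Cloudflare's public gateway and the CID is just a placeholder, so swap in your own):

```python
# Hypothetical sketch: fetch a pinned CID through a public HTTP gateway.
# Any gateway exposing the standard /ipfs/<cid> path should behave similarly.
import requests

CID = "bafybeigdyrzt5sfp7udm7hu76uh7y26nf3efuylqabf3oclgtqy55fbzdi"  # placeholder CID

resp = requests.get(f"https://cloudflare-ipfs.com/ipfs/{CID}", timeout=30)
resp.raise_for_status()
print(len(resp.content), "bytes fetched via the gateway")
```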


I don't mind pinning, but back in the day I was having issues using IPFS to transfer small files between devices. I admit I haven't investigated to see if the problems were ever resolved.

I've been watching https://github.com/ipfs/helia which is going to replace https://github.com/ipfs/js-ipfs and hoping they can get an IPFS node working in the browser.


IPFS has ruined the public perception of what a content addressable network could be.

Now when you mention CAD, people think IPFS and freak out about bad performance and flakiness. It's a real shame because we already have a fantastically reliable CAD and DHT in BitTorrent, and it's trivial to build on top of that to create excellent experiences.


IPFS is propped up by AI companies.

When it becomes clear that their models were trained on library genesis, they are betting that "but our web crawler stumbled into it through cloudflare's gateway" will be a good enough excuse to keep them out of prison.

This is basically the only thing IPFS offers that Bittorrent doesn't.


> This is basically the only thing IPFS offers that Bittorrent doesn't.

Now that's simply untrue. The main difference from BitTorrent is that it relies on content IDs (CIDs), not torrent links. IPFS is used by many organizations and individuals beyond just AI companies.


I've been looking into a private IPFS network as a way to share photos. It doesn't seem ready for that. Is there something out there that allows clients to update a mounted drive and keep in sync? Something that is transparent enough that ordinary users aren't intimidated to use it?


You can do that with Peergos [1]: mount a Peergos folder locally using FUSE, or log in to the web interface and share easily and privately.

[1] https://github.com/peergos/peergos


I think https://fission.codes/ecosystem/wnfs/ might do what you want (though I don't know about viewing photos in browser etc). Alternatively, IPFS supports unixFS and a mutable filesystem through the desktop client if you are happy to host them on your own machine (it acts like a unix dir)

edit: ah sorry, I see you actually asked for a private network. You could possibly look into https://ipfscluster.io/, though it might be a little heavyweight for what you're looking for


There's also Perkeep [1], though it seems like development has slowed down on it in recent years.

[1]: https://perkeep.org/


Maybe Syncthing would work for you? [1]

[1] https://syncthing.net/


Syncthing with "copycat" as a web UI and Samba access is what I give to my users. People who onboard to syncthing like it but usually need help on initial setup.


I'm building something that solves this problem. I'd love to hear more about your use case, is it something you'd like to discuss or join a beta down the line?

If so, reach out at marc@ at my username .net


The concept of an "Interplanetary Filesystem" is a good one.

The actual IPFS implementation doesn't live up to expectations though.

Expectations:

* I want to be able to mount / as IPFS and know that I can boot linux from anywhere.

* I want to have my photo library on IPFS and add to it from anywhere.

* I want to be able to share anything on IPFS, and if someone else has already uploaded it for the upload to be instant.

* I want all the storage on my phone/laptop/whatever permanently full of other people's stuff, earning me credits to store my own data.

* I want my stuff reed-solomon encoded with lots of other data, so that in case of a failure of a chunk of the network, my data is still recoverable.

* I want the network to be fast and reliable with excellent sharding of data and minimal hotspotting.


Are those expectations coming from reading the landing page at ipfs.tech, or where do they come from?

> * I want to be able to mount / as IPFS and know that I can boot linux from anywhere.

A starting point: https://github.com/magik6k/netboot.ipfs

> * I want to have my photo library on IPFS and add to it from anywhere.

I personally wouldn't keep my private photos on a public network, but everyone is different. You should be able to do this today, maybe you're saying that the client software for doing this is missing? Because the protocol would support it, but I'm not aware of any clients that would help you with this.

> * I want to be able to share anything on IPFS, and if someone else has already uploaded it for the upload to be instant.

You don't really "upload" anything to IPFS, ever, that's not how the protocol works. You "provide" something and then if someone requests it, you upload it directly to them. So in that way, "uploads" are already instant if the content already exists on the other node.

> * I want all the storage on my phone/laptop/whatever permanently full of other peoples stuff, earning me credits to store my own data.

> * I want my stuff reed-solomon encoded with lots of other data, so that in case of a failure of a chunk of the network, my data is still recoverable.

These are both "solved" by Filecoin rather than IPFS, although you can solve the second one yourself with IPFS by just running multiple nodes that you own. But the whole incentive part is (rightly) part of Filecoin rather than IPFS.

> * I want the network to be fast and reliable with excellent sharding of data and minimal hotspotting.

You and me both :)


> I personally wouldn't keep my private photos on a public network, but everyone is different.

IPFS needs transparent encryption yesterday. I tried to start a discussion and even made a rough design but they don't seem interested.

They have added some basic protection where a node won't serve content to another node that doesn't know the CID, but this isn't the same level of security as E2EE.

I think the encryption key should be transmitted with the CID but separable. So that you can pin data with just the raw CID but share data easily with CID+key.


You could already add E2E encryption yourself: just encrypt the content before you hash it, and share the hash?

Still, I wouldn't want anything I want to be private to be on a public network, be it IPFS, S3 or the internet at large. Who knows when the encryption will be broken? Simply too few benefits compared to the massive drawback in case the encryption doesn't hold in the future.


Yes, but having it be disjointed from the protocol adds friction. You can't just browse the files using native tools and gateways. I think it would be great to be built-in.

In fact I would argue that all data should be encrypted. But by default it could be encrypted with its own hash or similar so that it can still be deduplicated but has strong protection from people who don't already know the content (or hash) and can be pinned on untrusted nodes. This would resemble the Google Docs "public link" sharing. The only downside would be slight CPU overhead and longer keys.


You can do that and also require auth to retrieve the cipher text blocks: https://peergos.org/posts/bats


I'm surprised that nobody talks about content-addressed encryption with IPFS. It would be the perfect fit for IPFS. Content-addressed encryption uses the hash of the content as the encryption key, which means you don't need to transmit an extra key, and anybody who has the original hash can access the encrypted version.


Yeah, I think content-addressed encryption is a good default. It means that anything you add is only accessible to those you share it with.

I do think it is important to also support custom-key encryption. That way you can share publicly-known content without others knowing.


> Yeah, I think content-addressed encryption is a good default. It means that anything you add is only accessible to those you share it with.

My first thought about that is a problem around how hashes get passed around. When you add something to IPFS, it "provides" the resulting hash to the DHT (the submission article goes more into this), so other nodes know how to get it. If you then use the same hash for encrypting the content, it's basically as good as using no key, since other nodes already know the hash because your node told them about it.

So, let's not provide the hash when you add it then? But then the whole content-discovery part falls apart: how are nodes supposed to find the content if no one knows who has what hash?

In the end, it sounds like a simple idea, but I'm not sure it'd provide value on a public network like IPFS.


You use different hashes for discovery and encryption.

One method is to use HASH(0 || content) for discovery and HASH(1 || content) for encryption.

You could also use HASH(content) for encryption and HASH(HASH(content)) for discovery.

(Talk to a real cryptographer to ensure that this is both theoretically sound and robust against likely algorithm vulnerabilities)

As long as you can't go from the discovery key to the encryption key you should be fine.

IIRC this is already done. I think they do something like HASH(CID) for publishing on the network, but before the data is sent to a node, that node has to prove that it knows the CID. This provides protocol-level protection for this content-based encryption. (Although it has downsides, such as not being able to store encrypted data on untrusted nodes.)
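To make the two-hash scheme above concrete, here's a rough sketch. Assumptions: SHA-256 for both derivations, AES-GCM from the Python `cryptography` package, and a deterministic nonce so identical plaintexts stay deduplicable. As noted above, a real design would need review by an actual cryptographer.

```python
# Sketch of convergent encryption with separate discovery and encryption hashes.
import hashlib
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def convergent_encrypt(content: bytes):
    discovery = hashlib.sha256(b"\x00" + content).hexdigest()  # HASH(0 || content): publish to DHT
    enc_key   = hashlib.sha256(b"\x01" + content).digest()     # HASH(1 || content): 32-byte AES key
    nonce     = hashlib.sha256(b"\x02" + content).digest()[:12]  # deterministic nonce for dedup
    ciphertext = AESGCM(enc_key).encrypt(nonce, content, None)
    # Only `discovery` goes on the network; share `enc_key` out of band with readers.
    return discovery, enc_key, nonce, ciphertext

discovery, key, nonce, ct = convergent_encrypt(b"hello, convergent world")
assert AESGCM(key).decrypt(nonce, ct, None) == b"hello, convergent world"
```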


> I personally wouldn't keep my private photos on a public network, but everyone is different.

Well, that's exactly the problem, isn't it? IPFS could be extremely useful for local and private storage, as it provides a network file system with proper directories, an optional HTTP interface, content addresses and a FUSE implementation to mount it on Linux, along with automatic distribution and caching of the data. Those are all excellent features that I haven't really seen in any other system.

But the actual support for local or private hosting is basically non-existent. On IPFS everything is public all the time. The whole thing is way too focused on being a globally spread protocol, while it neglects the benefits it could provide on the local PC by just being a file format.

What I am missing is something like Git built on top of IPFS hashes. Something that allows me to manage my files on my local PC without any of the networking, but with the content addressing. Something that allows me to quickly publish them to a wider audience if I desire, but doesn't force me to. Or even just something I can use to access my local files via content address instead of filename.


You're looking for Peergos [1]. One of the authors is in the comments here also.

[1]: https://peergos.org/


> You don't really "upload" anything to IPFS, ever, that's not how the protocol works. You "provide" something and then if someone requests it, you upload it directly to them.

This model should be changed... I should be able to just send something to the network, having other users store it for me, and come fetch it back later.

The whole idea that I am constantly online 'pinning' files is a bad one. The whole idea that I must store the specific files I want to make available to others is also a bad one. The network protocol should mix file data beyond recognition, and the exact data on my hard drive should have little correlation to the data I specifically am sharing with others.


That particular thing is one of the fundamentals of the protocol. If you want something else, then IPFS really isn't what you're looking for. If that model is what you expected from reading the landing page, then I guess the landing page doesn't clearly communicate how it works.

What you're asking for is a bit like asking Bittorrent to suddenly change their model to a model where people can push data onto other nodes.

If that is truly what you want, probably something like Freenet (https://en.wikipedia.org/wiki/Freenet) would be more suitable, as it's highly unlikely IPFS would change the protocol in such a major way.


What you want then is the old Freenet.

But that model also sucks. People have to contribute a pool of storage to the network, and data is spread through it by uploads and usage. It's extremely prone to losing data especially with big files, and it's low capacity because content needs a lot of duplication to actually be reliably retrievable.

And if nobody wants it for long enough it'll fall off the network entirely.


Works fine for us so far. Discovery of newly added files is immediate. Downloading speed is fast. It’s quite easy to get this to work: you need a few or more instances with these objects pinned, and you need to make sure the bandwidth and other resources are sufficient and the servers are always online. Or use a reliable pinning service that can do this for you.


I still think the biggest problem with IPFS is that they put every block of every file in the DHT. It's just insane compared to BitTorrent, which only puts the top level torrent info in the DHT.

Having the option to pin just one file is useful, but they could greatly reduce DHT traffic if they didn't need to allow access to arbitrary resources without starting at some parent block.

BitTorrent requires you to access files via a collection, and only the collections are stored in the DHT, and the bandwidth use when idle is single-digit kB.

I think BitTorrent itself could be extended to cover most IPFS use cases, possibly better than IPFS itself, although IPFS's database-like stuff is pretty unique.
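Rough back-of-the-envelope numbers (assuming the commonly cited ~256 KiB default chunk size and one provider record per block, versus one infohash per torrent):

```python
# Back-of-the-envelope comparison of DHT announcements for a single large file.
FILE_SIZE = 10 * 1024**3        # 10 GiB file
BLOCK_SIZE = 256 * 1024         # assumed IPFS chunk size (~256 KiB)

ipfs_records = -(-FILE_SIZE // BLOCK_SIZE)   # ceiling division: one record per block
bittorrent_records = 1                       # one infohash for the whole torrent

print(f"IPFS provider records:  {ipfs_records}")        # 40960
print(f"BitTorrent DHT entries: {bittorrent_records}")  # 1
```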


Yes, you are completely correct. And BitTorrent is absolutely usable everywhere that IPFS is being used.

https://github.com/anacrolix/btlink

https://news.ycombinator.com/item?id=37771434


I clicked About and received a 500 Error “Importing a module script failed”.


Ah, so you pulled it from IPFS. That's the usual experience.


It would be nice if there was an IPFS implementation with much lower memory requirements. I tried a while back on something equivalent to a “free tier VM” and it quickly ate all available RAM.


Am I especially dumb, or is IPFS messaging really flaky?

For example, I still don't understand how to access their resources. Do I need a special client?

And this is coming from a person who LOVES torrents...


There are a few client options; the most widely used one (to my knowledge) is https://github.com/ipfs/kubo for CLI. There's also a desktop client that's pretty nice: https://docs.ipfs.tech/install/ipfs-desktop/


I'm the techy "self hosting" provider for my friend group.

Is kubo the right way to give my non-technical users access to ipfs? Do they need any extra tools like a browser extension?


Probably IPFS desktop is best for them. There is also a browser extension that hooks into (brings with it?) IPFS desktop


It is more complicated than it needs to be, but very similar to the BitTorrent ecosystem.

https://docs.ipfs.tech/install/

IPFS is a protocol, it needs a client. That client can be a web page or local software, just like torrents.

Curl now supports downloading IPFS resources (but it uses an HTTP gateway; it doesn’t talk to the DHT directly).


Brave has IPFS support built in now.


> assuming a network size of ~25k DHT Server nodes

I guess they've given up on the idea of end users running full nodes.

There still might be some value in having a federated CDN service, but I think they will struggle to compete with centralized CDNs for all the same reasons other federated services have struggled.


I never really understood IPFS... It seems to be something similar to torrents, but with a subtle smell of crypto bullshit attached to it.


It's like bittorrent (for sending chunks between peers) + DHT (for discovering which peers have which chunks) + magnet links (to identify files based on a hash of their chunks).

It's unlike bittorrent in that there are no '.torrent' files (only content hashes), no trackers, and chunks are global/pooled/shared (i.e. they're not specific to the file they came from).

The crypto bullshit is "filecoin", which tries to incentivise people to host other people's stuff. It can be safely ignored (I certainly do).


What makes it desirable as opposed to bittorrent?


Bittorrent is based around individual "torrents", which may contain many files. Separate torrents are completely unrelated to each other: for example, if I'm seeding an old Ubuntu release, and you're trying to download a newer release, then you won't connect to me; even if many of the files in that release are identical (e.g. config files from /etc, Python libraries that haven't seen an update, etc.).

Since IPFS is a global swarm (or, in principle, "interplanetary"), your download will fetch the files I'm seeding. Indeed, you can fetch chunks from different files, if they happen to share some contents (e.g. if a Fedora release patches the end of a script, you can still fetch the initial part without the patch).

Since we're not artificially partitioning/siloing our data, two people who happen to share the same file will end up generating the same URL: fetching that URL can get chunks from either. This is nice since it avoids any need to coordinate, or even know of each others' existence: we can just share whatever we like, and the network will ensure downloaders will find seeders. This can even happen across time: if some files lose all their seeders then their URLs will stop resolving; but if someone later happens to share the same files, then those same URLs will start resolving again.

This makes it easy to host Web sites without needing a reliable machine or connection (just seed it from a few machines; as long as one's up, it should work); it also lets us refer to URLs that are host-agnostic, and which we can even seed ourselves (rather than e.g. npm.org URLs, which may be deleted like "leftpad"; or github.com URLs which may be deleted, e.g. when projects jumped ship after Microsoft bought the site).
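A toy sketch of that content-addressing idea (real IPFS uses UnixFS merkle-DAG encodings and multihash CIDs; this only shows the underlying intuition that identical bytes yield identical addresses, regardless of who adds them):

```python
# Toy illustration: chunk a file, hash the chunks, hash the list of chunk
# hashes into a root identifier. Not the actual IPFS format.
import hashlib

def root_id(data: bytes, chunk_size: int = 256 * 1024) -> str:
    chunk_hashes = [
        hashlib.sha256(data[i:i + chunk_size]).digest()
        for i in range(0, len(data), chunk_size)
    ]
    return hashlib.sha256(b"".join(chunk_hashes)).hexdigest()

# Two parties who add the same bytes independently derive the same identifier,
# so downloaders can fetch chunks from either of them without coordination.
alice = root_id(b"ubuntu-22.04.iso contents..." * 1000)
bob   = root_id(b"ubuntu-22.04.iso contents..." * 1000)
assert alice == bob
```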


Thanks for the insights!



That's a pretty elaborate website just to say one word that's wrong.

The IPFS blockchain is called filecoin.

You don't have to use filecoin to use IPFS, but it's all tied together.


IPFS is a network protocol, and various parts of it have multiple inter-operating implementations. Can you show which part of the protocol has anything remotely close to a blockchain?

If I am serving content via IPFS, or downloading content via IPFS, or writing my own IPFS implementation, where is the point where I end up using this blockchain that is tied together with IPFS?


I didn't say it's directly part of the protocol. But it's part of the ecosystem, made by the same people concurrently.

Go to the Filecoin Wikipedia page. See how it says made by Protocol Labs. Click Protocol Labs, see how it actually takes you to the IPFS page. That's how synonymous the protocol and the company are. So anything made by the IPFS company, designed to operate with the protocol? I think it's reasonable to say it's something IPFS "has". At the very least it's misleading to say "no" without a footnote about Filecoin.


It is not.

IPFS exists entirely independently from Filecoin. There is no blockchain. Full stop.

That Filecoin uses some common tech with ipfs like libp2p and cids doesn’t change that.

If/when filecoin disappeared, ipfs would keep working exactly the same. Because ipfs does not have (or use or depends on) a blockchain.


IPFS doesn't build on top of a blockchain, but it has a blockchain.

It's not just common tech, it's made by the same people. It's not independent. Full stop.

And even if the question was "Does IPFS use/depend on a blockchain?", the appropriate answer would be "no*" or "no, but" and then something that mentions filecoin.

Let me put it in another context, would you say Brave has a blockchain? I'd definitely say it does.


Plenty of blockchains—Filecoin included—use some amalgamation of IPFS, but the reverse isn’t true.

Heck, even Bluesky uses some IPFS tech. Does that mean it has a blockchain? Of course not.

> Let me put it in another context, would you say Brave has a blockchain? I'd definitely say it does.

I don’t use Brave, but from a cursory glance I think probably yes, it “uses/depends on” a blockchain for some functionality.

https://brave.com/1.39-release/

Can you show an example where that is the case for IPFS?


> Plenty of blockchains—Filecoin included—use some amalgamation of IPFS, but the reverse isn’t true.

But one of them is directly tied to it.

> I don’t use Brave, but from a cursory glance I think probably yes, it “uses/depends on” a blockchain for some functionality.

It's just a web page, the actual browser functionality doesn't depend on it. But maybe it wasn't the best example.


The interesting part of it, IMO, can be pitched as "decentralized S3".


S3 + IPFS = Filebase (https://filebase.com)


Oh, that's very neat!


Are any of you currently using IPFS? What's your use case?


IPFS is trash. The APIs and interfaces, of which there are millions, change signatures every 6 months. Your 4-month-old code will not run anymore, and fixing it is a real slog.

Sigh.



