Except, you're gonna have to have servers unless you've got the entire web backed up on everyone's computer. Otherwise, you don't and can't know how many copies of a page or other file are out there. But who's going to pay for servers to retain random people's and companies' web detritus? This whole project exists because that's not feasible in the long term...
Because IPFS uses a distributed hash table like BitTorrent, you don't have to know where stuff is. That's the problem with HTTP, which is location-addressed. IPFS is content-addressed: the hash is the location.
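To make that concrete, here's a minimal sketch of content addressing, using plain SHA-256 where IPFS really uses multihash-encoded CIDs (the page bytes are made up for illustration):

```python
import hashlib

def content_address(data: bytes) -> str:
    # The address is derived from the bytes themselves, not from where
    # they live. (Real IPFS wraps this in a multihash/CID; plain
    # SHA-256 here is just for illustration.)
    return hashlib.sha256(data).hexdigest()

page = b"<html><body>hello</body></html>"
addr = content_address(page)

# Any node claiming to have `addr` can be checked: if the bytes it
# returns hash back to `addr`, they're the right bytes, no matter
# which host served them.
def verify(addr: str, data: bytes) -> bool:
    return content_address(data) == addr

assert verify(addr, page)
```

That verification step is what makes it safe to fetch from any stranger's node.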
You never host anything unless you want to; you don't host random stuff.
It's 2017: if you're using Google Docs or an instant messenger program and you lose access to the backbone, you can't communicate with somebody who's in the same room with you. That's kinda silly. IPFS solves that issue.
IPFS is censorship-resistant because it's a distributed protocol that can use a variety of transports. If I run example.com, people can DDoS it; it's much harder to do that when hundreds or thousands of nodes have the same content and you can connect to any of them. Sure worked in Turkey: http://observer.com/2017/05/turkey-wikipedia-ipfs/.
Filecoin is a cryptocurrency that will be mined by providing storage via IPFS.
Folks may want to read the white paper before they make assumptions about what's possible and what's hype: https://filecoin.io/filecoin.pdf
Someone has to have replicated the file though, else it can be lost. Yeah, you still have the hash, but if no one is left that stored the file, what are you gonna do - brute-force search for a preimage of the hash to get your content?
As long as IPFS requires replication to be voluntary on the side of the nodes, the argument of the parent holds.
That's the same thing we've got now, except that going through the process of "replicating the file" usually means either paying a hosting company and learning how to maintain a server, or signing your control and rights over to a company like Facebook.
There's nothing preventing anyone from going through these same measures with IPFS or dat. It's just that you don't have to in order to get started hosting something.
But that really only holds true for content that hundreds or thousands of nodes find interesting enough to hold on to. IPFS is definitely a step up in that it allows more than just the originating party to preserve content, but this isn't too different from HTTP mirroring, save for the obvious advantages in discoverability.
"You never host anything unless you want to; you don't host random stuff."
Then explain that to the author of TFA, because they seem to imagine that with IPFS, websites and whatnot are just somehow going to be out there forever, without anyone having to run a server. In reality, unless the content is popular enough that someone bothers to replicate it, it's still going to fall off the internet the moment the server goes down in an IPFS-based web.
Even for BitTorrent: in my experience, unless someone dedicates themselves to full-time "hosting" of a torrent, it will be seedless within a month or so of the initial wave of popularity.
Right, if you're running a site you'll still need to have your own infrastructure to host the authoritative source. IPFS is supposed to reduce the need for CDNs.
- Even assuming all this, a hybrid approach of HTTP + IPFS (or DAT) is still better than what we have now, since IPFS is essentially a worldwide CDN for static files. (Sorry: an inter-planetary one.)
- The content-addressing aspect makes it perfect for distributing commonly used libraries.
- We already cache all this content locally. What a waste! Why do I have to fetch jQuery from fricking California when it's sitting on my girlfriend's phone in the other room?
- This extends beyond the web: think about the benefits (in security, practicality, and performance) of content addressing introduced into package managers. Take it one step further, even: combine this idea with the new move toward reproducible builds (https://reproducible-builds.org) and package managers like Guix and Nix, and things get really interesting (see the sketch after this list).
- It's actually easier to use for the average person. If you don't think this is the case, I propose a simple experiment: download the Beaker browser and set up a simple static site. I recently did this. It really is one-click hosting! Considering how complicated web hosting is for the average person (ever try to walk a friend through setting up a website? not. fun.) -- people would love to be able to set up personal websites this easily... and for free?
- As others have mentioned, there are many solutions being worked on for the mirroring of data (Filecoin etc).
- For websites that are visited regularly, this is not an issue -- all content is cached temporarily. It suddenly becomes basically free to serve an audience of millions... again: with one click.
- If history serves as precedent, if this does fail it would be in spite of being an objectively superior, practical solution. Getting a critical mass of people onto this thing is the hardest problem to figure out. -- I suspect package management and academic data are the best places to start, then one-click personal hosting -- don't even think about "apps" for now.
- Didn't you just read the web is about to go permanent? Do you really want to be archived for all history as one more nay-sayer? ;)
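On the content-addressing and package-manager points above, here's a rough sketch (my own toy example, not any real tool's API) of what a pinned hash buys you: once the hash is fixed at release time, any mirror at all can serve the bytes.

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Pretend this hash was pinned in your lockfile at release time.
# (Placeholder bytes; a real lockfile would pin the real artifact.)
authoritative = b"/* jquery 3.x, minified */ ..."
pinned = sha256_hex(authoritative)

# Later, ANY mirror -- a CDN, a phone on your LAN, an IPFS peer --
# can serve the bytes; you accept them only if they match the pin.
def install(candidate: bytes) -> bytes:
    if sha256_hex(candidate) != pinned:
        raise ValueError("hash mismatch: refusing untrusted copy")
    return candidate

install(authoritative)        # accepted
# install(b"tampered bytes")  # would raise
```

That's also why fetching jQuery from a nearby device instead of California is safe in principle: the hash, not the host, is what you trust.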
> since IPFS is essentially a worldwide CDN for static files. (Sorry: an inter-planetary one.)
Sorry but IPFS is interplanetary in the same way a Boeing 747 is capable of orbital flight.
Last I checked IPFS will not tolerate minute-long latencies and requires bandwidth above several kilobits per second, which would immediately disqualify it for anything farther than the moon.
And I'm not sure it would work on the moon, since that's roughly a 2-second latency, and I had issues with it when I used it on a mobile phone network with 800ms latency.
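For scale, the light-delay arithmetic (my numbers, rounded):

```python
# Back-of-the-envelope light delay between bodies.
C_KM_S = 299_792          # speed of light, km/s
MOON_KM = 384_400         # Earth-Moon distance
MARS_MIN_KM = 54_600_000  # Earth-Mars at closest approach

print(MOON_KM / C_KM_S)            # ~1.3 s one way, ~2.6 s round trip
print(MARS_MIN_KM / C_KM_S / 60)   # ~3 minutes one way, best case
```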
>I recently did this. It really is one-click hosting!
Except it isn't hosted unless at least one person keeps a copy online; otherwise it goes offline, or you pay money to some hoster or to Filecoin (not that I think Filecoin is anything but a huge scam at this point).
>- Didn't you just read the web is about to go permanent? Do you really want to be archived for all history as one more nay-sayer? ;)
Since the number of people interested in the content of this page is declining with every passing decade, I'll make a bet it'll no longer be available on IPFS after a mere two decades.
> Last I checked IPFS will not tolerate minute-long latencies and requires bandwidth above several kilobits per second, which would immediately disqualify it for anything farther than the moon.
>
> And I'm not sure it would work on the moon, since that's roughly a 2-second latency, and I had issues with it when I used it on a mobile phone network with 800ms latency.
Fair points :) We'll be addressing this in the coming months with increased work on the network stack (libp2p).
> Last I checked IPFS will not tolerate minute-long latencies and requires bandwidth above several kilobits per second. [...] It isn't hosted unless at least one person keeps a copy online.
And the vacuum tubes in my Colossus might overheat at that rate too! -- Damn, you're right, we're just not smart enough to solve those problems.
---
edit:
> Does sarcasm prove your point?
Fair enough, sarcastic Parthian shot removed. I get overexcited sometimes.
Funny you should say that. Just hours ago I released an app that helps you pin your most important IPFS hashes to your phone[0]. Works well together with IPFSDroid[1].
Battery usage is definitely noticeable, but my phone has good battery life overall, and IPFS on the phone is a priority for me.
Data usage would be a problem, but I only use it when connected to a portable WiFi hotspot I carry with me.
In a post-carrier world, she would be paid with tokens that she could use to buy faster network access later in the day or sell any surplus to heavy network users (probably indirectly through a brokerage, perhaps even run by a company that used to be a carrier). The tokens might even buy electricity from the neighbor's solar panel to charge the phone.
If I understand correctly, the argument isn't against content addressing but against sharing the content with anyone. While you have an incentive to store it locally and reach for that first, you (currently) have a negative incentive (=cost of data/electricity) to share it with someone.
> Why do I have to fetch jQuery from fricking California when it's sitting on my girlfriend's phone in the other room?
Just one reason? Because if you have the ability to do that, your girlfriend's phone has the ability to detect whether anybody nearby is accessing any arbitrary file or page: it just has to host a copy of that page and see whether anyone pulls it.
I don't know the internals of IPFS's DHT implementation, but the whitepaper mentions Kademlia and Coral. Coral tries to optimize for ping latency (you're not literally fetching from your nearest geographical neighbor; I simplified to make a point).
Unless I misunderstand your point? Honestly, it seems like people here are engaging more in "gotcha" nay-saying than in honest criticism... it would've taken you two minutes of googling to find out this is a non-issue.
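For anyone curious, here's a toy version of the Kademlia distance metric mentioned above (nothing like the actual go-ipfs routing code): peers and content share one ID space, and a lookup walks toward the peers whose IDs are XOR-closest to the content's ID.

```python
import hashlib

def make_id(label: str) -> int:
    # Toy 256-bit ID space shared by peers and content.
    return int.from_bytes(hashlib.sha256(label.encode()).digest(), "big")

def xor_distance(a: int, b: int) -> int:
    # Kademlia's notion of "closeness" is XOR, not geography.
    return a ^ b

peers = [make_id(f"peer-{i}") for i in range(16)]
content = make_id("some-file")

# A lookup routes toward the peers closest to the content's ID.
closest = sorted(peers, key=lambda p: xor_distance(p, content))[:3]
print([hex(p)[:10] for p in closest])
```

Coral's contribution is layering latency awareness on top of that, which is where the "fetch from somewhere nearby" behavior comes from.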
I have, but I'm no security expert so perhaps I'm not seeing something obvious. -- Do you have a specific attack in mind? Is it an insurmountable vulnerability?
Nobody is saying we can't or shouldn't have servers.
"But who's going to pay for servers to retain random peoples' and companies' web detritus?"
The same people who do this now. Because nothing in this scheme says you can't have servers just like you do now.
The difference is that you don't need a server to get started collaborating. If you want to host something to have it available offline, that's a built-in feature. If you and some friends want to host an event invite, grass roots, you can do that. No need for Facebook, no need for big cloud platforms. Share around a web page. People will host it while it's needed.
Personally I'm more into dat than IPFS... but they each have their own use cases. I like stuff that's not going to be "permanent". There is plenty of room for stuff like that. Not everything has to be in some permanent public record for all time. We need more accessible ways to share stuff like that. We need good ways to share stuff privately. Skip this ridiculous idea that Mark Zuckerberg should be privy to everybody's private, personal information. No thanks. Share that stuff on LANs, on encrypted p2p connections. Keep it nearby.
If you want to publish, that's what IPFS is for. And if you want it to stick around, invest the resources to make sure there are servers seeding it, whether they're on digital ocean or a bunch of Raspberry Pis plugged into your and your friends' walls. That's on you as someone who's committed to publishing information.
"Because nothing in this scheme says you can't have servers just like you do now."
Pretty much all the defenders of IPFS here give me the impression they haven't bothered to read the article hyping IPFS that's linked at the top of this page.
I'm thinking of a "server" as a dedicated computer connected to the network. With IPFS, if you want to ensure your data is available, seed it with one or more dedicated computers, like you would now. Seed it on digital ocean or amazon even, if you want. Nothing prevents you from doing this.
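For instance, here's a minimal sketch of pinning one hash across several boxes you control, assuming each runs the go-ipfs daemon with its HTTP API on the default port 5001 (the hostnames are made up, and newer daemons require POST on this endpoint, so check your version):

```python
import urllib.request

# Hypothetical always-on nodes you control (droplets, Raspberry Pis...).
NODES = [
    "http://seed1.example.com:5001",
    "http://seed2.example.com:5001",
]

def pin_everywhere(cid: str) -> None:
    for node in NODES:
        # go-ipfs exposes pinning at /api/v0/pin/add.
        url = f"{node}/api/v0/pin/add?arg={cid}"
        req = urllib.request.Request(url, method="POST")
        with urllib.request.urlopen(req) as resp:
            print(node, resp.status)

# pin_everywhere("Qm...your content's hash here...")
```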
In the future, there will be new ways of incentivizing groups of people to seed data that isn't naturally viral. But nothing is stopping people from using the time-tested, old-fashioned ways in the meantime.
I don't buy the idea that IPFS stuff is inherently permanent, but I don't care. Even the way that IPFS handles broken links is way better than how they're handled now. With IPFS you at least get a hash of what you're looking for. That's a lot more useful than what you get now. The only thing you get now is "404".
I'm seeing a specific disconnect between the people who are into this and the people who don't get it: the former have ways of answering the question "why would someone want to host your content?"
The people who are into this idea realize that the content itself often carries its own incentive to share. Given the right infrastructure, a lot of stuff will host itself because people will want to share it. That's how bittorrent works.
Also, there will likely be stuff that falls out of fashion. The test of time will not disappear, but these technologies make it much easier for people who care about preservation.
There are over 5 billion files hosted on IPFS and over 500 GB per day going through the IPFS gateway. Not bad for something that supposedly doesn't work and that's only been around for a few years.
When people can get paid to make content available on IPFS… well, that's going to be quite a thing.
I can purchase a droplet on digital ocean that has gigabit and a terabyte of transfer every month for $5. A terabyte is more than enough content for me. I am sure most people could afford that to manage their interneting.
>I can purchase a droplet on digital ocean that has gigabit and a terabyte of transfer every month for $5
>I am sure most people could afford that to manage their interneting
And before the spec has even seen real adoption, we've already seen it centralize into a few major providers.
A bit tongue-in-cheek, but it's a real issue. HTTP isn't the reason things are centralizing so much as economies of scale and convenience are. I see nothing about IPFS that fundamentally changes that, and think we'd likely see similar centralization over time.
Git, one of the inspirational technologies, is in theory distributed as well and in practice hyper-centralized to only a few major providers.
Network infrastructure is also a powerful driver of centralisation.
I'd much rather host content from my home (and, living in a very sunny Australian city, power that with PV and battery storage at a low amortised cost) - but I can't, because the network to support that isn't there.
I get about 7Mb/s down and 1Mb/s up; my link is highly asymmetric. When I finally get off ADSL and on to the new National Broadband Network, that'll still be asymmetric.
I can see why networks are built that way, given the current centralisation of infrastructure, but the build also reinforces centralisation.
Think back to 20 years ago when most business network connections, even for small business, were symmetric. Hosting stuff on-site (email servers, web servers, ...) was far more common.
Distributed technology keeps centralized providers honest. If github got complacent their customers could migrate their most important data in a very short time.
GitHub is complacent, but people haven't moved because it's difficult. The issue tracker is proprietary, and losing all of that plus the account references for the comments makes moving non-trivial.
> Git, one of the inspirational technologies, is in theory distributed as well and in practice hyper-centralized to only a few major providers.
Git repositories are replicated all over.
My laptop has mirrors of all my work's projects and many open source projects.
Imagine how many secure mirrors of, say, the React repository are out there. GitHub is basically just a conveniently located copy.
That's real and tangible decentralization. It's a magical property of the DVCS architecture that it's decentralized even when it's centralized, so to speak.
I agree that there are issues with central hubs though. Maybe the most significant one is that organizational structures and rules are defined in a centralized way on GitHub.
If you look at blockchains as another kind of DVCS that's totally focused on authority, legitimacy, and security, then it seems pretty likely that we'll end up using those to control commits and releases.
> who's going to pay for servers to retain random people's and companies' web detritus?
You mean Filecoin?
Anyway, I'm also a skeptic about this model but I do think there is a sliver of chance that it may work. There ALWAYS is a sliver of chance that something crazy may work. That's how it's always been.
IBM laughed when personal computer vendors and OS creators thought they would put computers on everyone's desk, and I'm pretty sure that if I'd been around back then I would have thought the same.
Also, before criticizing some technology, it would help to understand how the technology actually works. As far as I know, IPFS is working on all the problems you mentioned. Now, whether they will succeed is a whole different issue, but it's not so trivially obvious that one could easily call it "thoughtless hype".
First, let me say that I think that IPFS is a good idea and that it has applications that are useful now. However, if I interpret the parent's "thoughtless hype" as "hopelessly naive", I'm pretty much in agreement.
Check out Freenet[0]. And while trying to maintain anonymity makes the problem even more difficult, there are fundamental problems in Freenet that make it basically unusable (mostly around cache coherency and latency). Freenet has been around since Ian Clarke's paper about distributed information storage and retrieval systems in 1999. They haven't managed to fix these fundamental problems in nearly 20 years of trying. I see absolutely no discussion of the same problems in IPFS (though abandoning anonymity is a good start).
It's one thing to say, "Hey distributed file system -- awesome". Then you can build all the easy bits and say, "Well, maybe cache coherency and latency won't be a big problem". But now look at what IPFS has to say about cache coherency on their wiki [1]. There is nothing at all that identifies or addresses the problems they will run into -- just a definition of the term and some links to random resources.
It's all well and good to say, "Eventual consistency", but what about guarantees of consistency? If I'm a vendor and I have a 1 day special offer, can I get a guarantee that caches will be consistent before my special offer is over? How do you deal with network partitions? Etc, etc, etc.
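To make the coherency worry concrete, here's a toy model (mine, nothing like IPFS's actual internals): the immutable, content-addressed blocks are trivially coherent, but it's the mutable pointer on top (IPNS, in IPFS's case) where staleness creeps in.

```python
import hashlib

def addr(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Immutable block store: trivially coherent, since a given hash can
# only ever refer to one blob of bytes.
store = {}
def put(data: bytes) -> str:
    h = addr(data)
    store[h] = data
    return h

v1 = put(b"special offer: 50% off, today only")
v2 = put(b"offer expired")

# Mutable name layer: each node caches its own "latest pointer".
node_a = {"shop/offer": v2}  # saw the update
node_b = {"shop/offer": v1}  # hasn't yet: a stale but *valid* read
assert store[node_b["shop/offer"]] == b"special offer: 50% off, today only"

# Both nodes serve bytes that verify against their hashes; neither is
# wrong at the block layer. The staleness lives entirely in the
# pointer, and that's where guarantees (the one-day offer, partitions)
# would have to be engineered.
```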
Before you start calling HTTP "obsolete", how about solving these kinds of problems? I have absolutely no problem with projects like these. They are awesome and I encourage the authors to keep working towards solving hard problems like the above. But announcing your solution before you've even realised that the problem is hard is pretty much the epitome of naivety.
First, it was not IPFS people who said HTTP was obsolete. If you check out the original post, it's from the Neocities blog.
Second, people tried "sharing economy" startups back in the web 1.0 era, when everything came crashing down. But in 2017 we have Uber.
The freenet project doesn't change my argument at all because like I said, I'm not saying IPFS will succeed. I'm saying there's always a chance because the world is constantly changing. If you're lucky, you're at the right place at the right time building the right thing. If you're not, you fail.
In 1999 this wouldn't have worked of course, and that's my point. Successful projects succeed not just because of the product but also because of luck, timing, etc. There are so many new powerful technologies coming out nowadays, not to mention the societal change.
This is definitely a different world than what it was in 1999 and I'm saying just because it didn't work in 1999 doesn't mean it won't work in 2017.
One very important vector for adoption that often gets overlooked is interoperability. The cost of adoption can be significantly reduced by making sure the new thing nicely interoperates with the existing deployments. We're attempting to do this well with IPFS and libp2p.
In fact, the "web" is consolidating around those who have the capital to run servers at scale.
In some ways this is good, as it makes computing power more accessible to the masses with good ideas; on the other hand, it puts the power over what happens with that business in the hands of far fewer people.
It's not "crazy", it's pure, thoughtless hype.