
> IPNS isn’t done yet, so if that link doesn’t work, don’t fret. Just know that I will be able to change what that pubkeyhash points to, but the pubkeyhash will always remain the same. When it’s done, it will solve the site updating problem.


Yeah, I saw that, but it's two years later and it still doesn't work? Not to be snarky, but my limited understanding is that IPNS is really the only novel part of IPFS anyway; if I just want to share a file peer-to-peer based on the hash of the content, bittorrent has existed for ages.

It just seems silly to talk about how HTTP is unreliable because your servers might go down, when the alternative "serverless" architecture you're hyping doesn't work either. I'm totally on board with the aims of IPFS and hope they accomplish all the things they're trying to do, but to say HTTP is obsolete when HTTP works and IPFS doesn't (yet) is just a little too much...


That's how these things always work, by saying "it will work one day."

If it worked, and the cost-benefit ratio were there, people would adopt it quickly. That's what happens with just about everything else.


> If it worked, and the cost-benefit ratio were there, people would adopt it quickly. That's what happens with just about everything else.

Great point! Just like:

* Betamax
* HD DVD
* MiniDisc
* Hoverboards
* IPv6
* DNSSEC
* PGP & PKI
* Linux desktops
* Dvorak keyboards
* The metric system
* Decimal time
* [flavour-of-the-month programming language]
* [flavour-of-the-month database]
* [flavour-of-the-month cypher]
* ...

The factors that influence the proliferation of a technology are wildly divergent from the criteria 'works well, cost/benefit'. I'm not even sure those are weakly correlated proxy indicators of technology uptake.


> The metric system

While I agree with your sentiment, this one is a bad example.

I grew up with the metric system, as did the vast majority of the world. I have an intuition for "meter", "kilograms", "seconds", and so on.

I need to convert to cumbersome stuff like "miles", "inches" or "pounds" only when reading articles written by, you know, inhabitants of that strange, large country over there.


It's still a good example, because the government in your country probably mandated it. People didn't just switch of their own accord.


IBTD. This is mostly an educational issue. Here in Germany the metric system was introduced in 1872 [1], and compared to other European countries we were already late to the party. That's plenty of time for a transition. The last generation that didn't work with the metric system has been dead for a very, very long time.

[1] The history is actually more complicated, but let's not get into that.


The metric system has been taught in US schools for decades. Still no one uses it, because no one else uses it. Breaking out of network effect traps requires coordination only a government can provide.


Nassim Nicholas Taleb on the logic of the imperial system: https://www.facebook.com/nntaleb/posts/10153932393103375


> A furlong is the distance one can sprint before running out of breath

That doesn't seem very logical at all, that's entirely subjective. I'm fairly certain a top sprinter would easily be able to sprint much further than my (admittedly) unfit self before running out of breath.


For one, "furlong" actually comes from "furrow length", which is how long an ox could plow before tiring.

The point isn't that the measurement is precise; the point is that it's useful. The unit has an intuitive and tangible meaning in the real world that lets people ballpark. This doesn't mean we should start doing precision work in furlongs, but demanding that everyone switch away from measures that are still useful is silly. As long as the measurements are standardized using metric units, who cares that you have a funny name for 201.168m?


If there is one thing that people know deep in their guts today, it's how long an ox could plow before tiring.


Which is why nobody really uses furlongs anymore, but there are plenty of other units that are still in use. One example I think we're all familiar with is the 'rack unit' for servers (i.e. 1U, 2U), where 1U is 44.45mm. I don't think there would be any additional clarity gained by saying, "I bought a few 88.9mm servers".


But that's a context-specific unit, not intended for general use.

Metric is great for general use simply because of its multipliers: (...) 1G = 1,000M = 1,000,000k = 1,000,000,000 (...)

And also because of the simple way many units are related, like 1L of water having 1kg of mass (yes, at a certain temperature, pressure, yada yada yada)
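A quick sketch of the point about multipliers: since every SI prefix is a power of ten, converting between scales is a single multiplication, with no conversion tables. (The prefix values below are the standard SI ones; the function name is just for illustration.)

```python
# Every SI prefix is a power of ten, so scale conversion is one multiply.
SI_PREFIXES = {"k": 10**3, "M": 10**6, "G": 10**9}

def to_base(value, prefix):
    """Convert a prefixed value (e.g. 5 k<unit>) to base units."""
    return value * SI_PREFIXES[prefix]

# 1 G<unit> = 1,000 M<unit> = 1,000,000 k<unit> = 1,000,000,000 base units
assert to_base(1, "G") == to_base(1000, "M") == to_base(10**6, "k")
```

Compare that with furlongs, chains, yards, and feet, where every pair of units needs its own conversion factor.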


I think the best situation is when you use sensible units for general situations, and when the funny units remain domain-specific.

Another example of a funny name is two-by-four, which - for some typically American reason - is understood not to actually be two inches by four inches...


> who cares that you have a funny name for 201.168m

You do care if you frequently have to convert between all those funny units.


Actually, it's pretty simple.

"Better" has to actually "be better ENOUGH" to warrant all of the retooling of existing systems. I've got plenty of clients who would happily run Windows 2003 ("it's paid for") if it weren't for changing standards that aren't compatible (newer TLS, Exchange, etc) and security breaches. They only upgrade because they have to. "E-mail is e-mail" to them.

But if you sell them some magical new technology that promises new features, like tons of data analysis tools and easy graphs and charts in a new version of CRM, they'll happily upgrade.


Another important factor for adoption/adoptability is how well the new system integrates with existing deployments of older systems. Ideally it completely interoperates with the older systems, while providing you with additional value right from the start.


Agreed with lgierth, and I believe this is what sets IPFS apart from many similar technologies: an integration path for existing technologies. As far as I can tell, it has been an important design decision for IPFS from early on.


That only answers some of those examples.

It's pretty visible in tech that it's not actually the only (or main) reason, especially when you see companies continuously switching from one crappy tool to another. Tech is a fashion-driven industry; companies use what is hot and/or what everyone else is using. Both of those create a positive feedback loop that amplifies brief spikes in popularity (easily exploitable through marketing) beyond any reasonable proportion.

The worst thing is, though, that it kind of makes sense from the POV of management. The more popular something is, the less risk there is in using it, especially when the decision-maker doesn't have enough knowledge to evaluate the options. Also, the more mainstream a given technology is, the cheaper and easier it is to replace programmers.


It was a good list until...

> The metric system

Really? You know that the whole world is on it, right? And that it makes far more sense than whatever nonsense someone came up with before.


You're implying that the metric system and IPv6 aren't being used in great numbers today, which is false.


I guess the metric system is there to drive the point home to the Americans in the audience, and IPv6 is an example of something used, but not enough to matter.

(Here's my new conspiracy theory: lack of adoption of IPv6 is caused by SaaS companies colluding to keep people and companies from being able to trivially self-host stuff.)


Wow! What an ignorant view of technology adoption! Almost every revolutionary technology you see today (right from radio and AC current to personal computers and deep learning) did not work fine once upon a time. It is because people kept saying "it will work one day", and continued working on them, that we have these technologies making our lives simpler these days.


> "it will work one day", and continued working on them that we have these technologies making our life simpler these days.

You unknowingly make my point. I have nothing against people working on new technologies until they work. That's a strawman on your part.

But, this isn't a case of working on something until it works. The headline of this blog is, "HTTP is obsolete. It's time for the Distributed Web." IPFS is not ready to replace http, and it won't be until the cost vs. benefit ratio works out for enough people.


True for revolutionary new technology. Not true for incremental technology that aims to replace an existing similar technology, especially if the incremental tech is something that most "normal" people don't really care about (such as hosting content on the internet).


Ask any non-technical person whether they've ever been bitten by link-rot! :)

Content-addressing doesn't alleviate the problem 100%, since content can still fall off the network - but it improves the structure of the network in a way that makes it tremendously easier to keep content around. It's not up to the original source of the content (the owner of the domain name) to keep the content around - anyone can help out by keeping a copy.
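The core idea can be sketched in a few lines. (This is a simplified illustration of the principle - real IPFS uses multihash-encoded CIDs, chunking, and a DHT, none of which are modeled here.)

```python
import hashlib

def address_of(content: bytes) -> str:
    """The address is derived from the content itself."""
    return hashlib.sha256(content).hexdigest()

# Two independent "peers" each keep a copy of the same page.
page = b"<html>my blog post</html>"
peer_a = {address_of(page): page}
peer_b = {address_of(page): page}

# If peer A disappears, the same address still resolves via peer B,
# and re-hashing proves the fetched bytes are untampered - no need
# to trust whichever server happened to hand them over.
addr = address_of(page)
fetched = peer_b[addr]
assert address_of(fetched) == addr
```

Because the address commits to the bytes, anyone's copy is as authoritative as the original publisher's.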

My colleague Matt addressed this beautifully in a recent talk at the NDSR Symposium: https://archive.org/download/ndsr-dc-2017/04_Speaker_3_Matt_...


A distributed web (including proper mesh networks) has the potential of changing the status quo from constantly worrying about data limits and "I don't have any wifi" to "normal" people having constant "internet" access everywhere they go.

I think they don't care much about the underlying technology, but they will notice when some apps work faster and without a mobile data connection while others don't.


> changing the status quo from constantly worrying about data limits and "I don't have any wifi" to "normal" people having constant "internet" access everywhere they go.

So will rapidly increasing data allowances and cellular coverage. Plenty of European countries have effectively unlimited data packages and effectively complete network coverage.

Crucially, whatever gaps there are in this are very likely to get filled through already underway progress much faster than a distributed web on mesh networks will get to a usable stage.


I am glad you are in a country where that seems to be the case or are just more optimistic.

The status quo, however, is colleagues of mine discussing which of the main carriers to choose to get decent 3G/LTE coverage in Berlin(!), after all these years of progress, so I think it's worth considering other options. Rural Germany is even worse. My data caps are also about the same (less than double) as they were 5 years ago.

Even if it won't take over as the dominant technology, it might create enough pressure on the carriers to act.


Couldn't people build webapps like this today by heavy caching and storing data locally?


To some degree yes, but with IPFS that becomes easier in my experience.

Even with static websites you usually need to have a web server you are able to connect to, or have to go out of your way to add a service worker that makes the site offline-capable. There, a single address served via an IPFS gateway behaves better with less additional tooling.


In terms of caching, think of IPFS as making every node in the network also a dynamic CDN, with content automatically moving closer to the people who use it - including into your LAN.
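A toy model of that "every node is a CDN" behavior (this is not the real IPFS exchange protocol, just the caching idea): whenever a node fetches a block from a peer, it verifies the hash and keeps a copy, so later requests from nearby nodes can be served locally instead of going back to the origin.

```python
import hashlib

def addr(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

class Node:
    def __init__(self, peers=()):
        self.store = {}          # local blocks, keyed by content address
        self.peers = list(peers)

    def fetch(self, a: str) -> bytes:
        if a in self.store:              # cache hit: served locally / from LAN
            return self.store[a]
        for peer in self.peers:          # otherwise ask peers
            try:
                data = peer.fetch(a)
            except KeyError:
                continue
            assert addr(data) == a       # verify before caching
            self.store[a] = data         # now *we* can serve it too
            return data
        raise KeyError(a)

origin = Node()
block = b"popular video segment"
origin.store[addr(block)] = block

edge = Node(peers=[origin])
edge.fetch(addr(block))                  # first fetch goes to the origin
assert addr(block) in edge.store         # the edge node now serves it locally
```

Popular content naturally ends up replicated close to where it's consumed, which is exactly what a CDN does - except here it happens on every node, automatically.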


No, it isn't. I was trying to get small-medium businesses interested in a thing called email in 1990 and it was a hard sell.


Of course it was a hard sell! The cost/benefit wasn't right for that business in 1990...


DNS does that too?



