If I understand this right, the article proposes IP addresses as a replacement for domain names in URLs. This would be a bad idea:
- They'd break if you wanted to load-balance with something like round-robin DNS (see the sketch after this list).
- They'd be a disaster whenever you changed hosts.
- They make it impossible to tell (without clicking) where a link points to.
- They're broken with respect to IPv6 (and any future protocols). By specifying an IPv4 address, you're mixing a network-layer detail into the semantics of the page.
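To make the round-robin point concrete, here's a rough Python sketch (the hostname is just an example): a single name can resolve to several rotating A records, which a hard-coded IP in a URL can never do.

    import socket

    # One domain name can map to several A records; resolvers rotate the
    # order, spreading load across hosts. A raw IPv4 address baked into a
    # URL is pinned to a single box forever.
    def resolve_all(hostname):
        infos = socket.getaddrinfo(hostname, 80, family=socket.AF_INET,
                                   type=socket.SOCK_STREAM)
        return sorted({info[4][0] for info in infos})

    print(resolve_all("example.com"))  # may print more than one address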
The point is that the connection between a domain name and what a site is about is broken.
All of these problems could be solved with the Domain Numbering System I mention in the comments: map a nondescript number (or another IP) to your real IP. That way, people don't even expect a connection to what you do; they'll just find you through a search engine anyway.
Yes. But that isn't to say people will be giving out strings of numbers as identifiers. The point is that those identifiers are pretty useless today, so we may as well stop using them for human consumption.
This has already been happening. I remember seeing a story on News.YC about ads in Japan (I think) that told you what to search for instead of what site to visit. And spammers have been doing it for a couple years.
37signals is a case in point. They're currently the #1 Google hit for the words "basecamp", "backpack", "highrise", and "campfire", despite not owning a single one of those domains.
Indeed. iminlikewithyou is already pretty close to 67.192.37.226
I'd like it to be a bit more formal, with institutional support for the forward naming. Otherwise, awkward company names will only become more common :)
People do it here too. In fact, at Startup School, Jessica told people just to search for something (I can't remember the specific topic, or find the video, since Omnisio clipped the intros out of their videos).
I don't think we should go to the extreme of losing domain names, though; I get to maybe half of the sites I visit in a day by typing the URL. Plus, legible domain names are an important part of human-readable URLs, which are pretty valuable things.
This is a UI issue that can be solved with good search.
When loading a page, the browser can do a "reverse lookup" on the URL through a search engine and display common terms associated with it instead of (or in addition to) the URL. If you're expecting Bank of America, and it shows "sex, porn, xxx" in your status bar, you may not want to click it.
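A rough sketch of what I mean, in Python; the lookup function is hypothetical and stands in for whatever search-engine API the browser could call:

    # Before showing a link, ask a search index what terms people associate
    # with the target, and surface them next to (or instead of) the URL.
    SUSPICIOUS = {"sex", "porn", "xxx"}

    def annotate_link(url, fetch_associated_terms):
        terms = fetch_associated_terms(url)          # e.g. ["bank", "login"]
        label = ", ".join(terms[:3]) or url          # text for the status bar
        warn = bool(SUSPICIOUS & {t.lower() for t in terms})
        return label, warn

    # Usage with a stubbed lookup:
    print(annotate_link("http://67.192.37.226/",
                        lambda u: ["games", "party", "chat"]))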
The problem isn't porn sites masquerading as banks, it's fake banks masquerading as banks.
A reverse lookup through a search engine is more akin to a blacklist of bad sites, whereas DNS and SSL are essentially a whitelist of good sites. The problem with blacklists is that bad sites inevitably slip through the cracks.
DNS solves the UI problem: the inability of humans to easily remember IP addresses.
As I mentioned in another comment, DNS isn't the problem; the allocation of domain names is. If squatters and spammers couldn't register domains so easily and cheaply, this wouldn't be an issue.
I don't think that'll work at all. Instead of competing for presence in domain names, squatters will simply compete for presence in keywords and phrases. Getting a user to a specific website could become kinda challenging.
As one example, in my soon-to-end part-time job, I had to ask a user almost every day to go to "www.whatismyip.com". Occasionally they would type this into their search bar instead of their URL bar, and would magically end up at something that wasn't helpful at all. (When I try it, it works, but who knows what they're typing.)
This would just move the problem of competition for recognition to a new area.
Hmm. So, it might work for specific cases, just not generically.
Maybe it's time for search engines to support a "certainty" statistic: an estimate of the probability that the top result is what the user is looking for, based on a combination of the user's query and the link profile of the top result.
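A toy version of the statistic, just to show the shape of it (all the scores below are invented):

    # Toy "certainty": how dominant is the top result relative to the rest?
    # The scores stand in for whatever relevance/link signal the engine
    # already computes, sorted best-first.
    def certainty(scores):
        total = sum(scores)
        return scores[0] / total if total else 0.0

    print(certainty([9.2, 0.4, 0.3, 0.2]))  # ~0.91: send the user straight there
    print(certainty([3.1, 2.9, 2.8, 2.7]))  # ~0.27: show the full result list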
The problem with an IP-address-based solution is that it wouldn't scale well for web farms, load balancers, failover equipment, etc.
Also, end users and web businesses can't really "own" an IP the way they own a domain name; technically only ISPs can be assigned IPs. IPs aren't portable like domain names, so you'd be stuck with one ISP once you'd built up your site.
This doesn't imply anything about load balancers not working. Load balancers work by creating what are called VIPs (virtual IPs), which aggregate the IP addresses of the balanced services. In a lot of cases, the individual services don't even need to be routable from the public internet, as long as they're reachable from the load balancer's internal interface.
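A trivial sketch of that indirection, with placeholder addresses: clients only ever see the VIP, and the balancer picks a backend per request.

    import itertools

    VIP = "203.0.113.10"                                # the only public address
    BACKENDS = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]  # private, not routable

    _pool = itertools.cycle(BACKENDS)

    def pick_backend():
        """Round-robin selection; real balancers also health-check backends."""
        return next(_pool)

    for _ in range(4):
        print(VIP, "->", pick_backend())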
The problem isn't with the domain name system itself, it's with the allocation of domain names.
Recently a huge problem was so-called "domain name tasting", which allowed squatters to register hundreds of thousands of domain names, monitor the traffic (typically from typos, etc) for a week, then return the duds for a refund. Fortunately ICANN is putting an end to that.
I think the best solution would be to raise the barrier to registering domain names. Sure, it's nice that it's convenient and cheap to register domain names, but that really doesn't matter if all the good ones are taken by squatters, now does it?
If you had to either pay a larger fee (something on the order of a trademark registration fee -- ~$300?) or demonstrate you had a legitimate use for a domain, I think the squatters would back off. Of course, ICANN and the registrars don't have any incentive to limit the number of domains registered.
Unfortunately I don't think there's a good way to reverse the current situation, short of creating new top-level domains and enforcing stricter guidelines. Perhaps certain TLDs could be designated for "premium" or "verified" domain names (basically like .edu) while .com, etc remain free-for-all.
I would like a way to blacklist domains that are being squatted. I've thought it might work to have a Firefox plugin for reporting and blocking these sites; if you could get a large enough community using it, the list would grow pretty quickly. You could maybe even have a script that grays those sites out on Google. Would it be enough of a disincentive to squat if the domain was effectively "removed" from the internet for everyone using the tool? If a domain was actually bought and given real content, it could be taken off the blacklist, but the process for doing so would have to be difficult enough not to negate the disincentive. Any other ideas on how to harm the value of a squatted domain enough to make squatting not worthwhile? (When I say squatting I also mean domains that are just parked and for sale. Technically I don't think that counts as squatting, but it seems to be getting lumped into the conversation here; I could be misunderstanding that, though.)
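Roughly the mechanism I have in mind, as a Python sketch; the threshold and data structures are invented:

    from collections import defaultdict

    REPORT_THRESHOLD = 50            # made-up number of independent reports
    reports = defaultdict(set)       # domain -> set of reporting user ids

    def report(domain, user_id):
        reports[domain].add(user_id)

    def is_blacklisted(domain):
        """The plugin would gray out or block domains past the threshold."""
        return len(reports[domain]) >= REPORT_THRESHOLD

    def delist(domain):
        """Run only after a (deliberately slow) review finds real content."""
        reports.pop(domain, None)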
You might have a web browser that allows the user to assign pseudonyms to websites. A user could have the word "news" mapped to "news.ycombinator.com" in his browser. If such a browser collected pseudonyms from enough people, then the pseudonyms applied to each site could start to carry weight and make it easy to take an intelligent guess at what the user wants. I guess the idea is to apply del.icio.us-style information to the URL bar.
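In code it isn't much more than a personal map plus an aggregate tally over everyone's maps (everything here is hypothetical):

    from collections import Counter, defaultdict

    my_aliases = {"news": "news.ycombinator.com", "g": "google.com"}

    crowd = defaultdict(Counter)     # pseudonym -> tally of sites users chose

    def record(pseudonym, site):
        crowd[pseudonym][site] += 1

    def guess(pseudonym):
        """The personal mapping wins; otherwise fall back to the crowd's pick."""
        if pseudonym in my_aliases:
            return my_aliases[pseudonym]
        common = crowd[pseudonym].most_common(1)
        return common[0][0] if common else None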
That is an interesting idea. It sounds kind of like a user-driven PageRank concept. Since the mappings would be different for everyone (most of the population would not map 'news' to 'news.ycombinator.com'), you could even introduce a recommendation system where your unmapped pseudonyms came from users whose pseudonym mappings were most like yours.
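Even something as crude as overlap between two users' mappings would do as a starting point (the data below is made up):

    def similarity(a, b):
        """Fraction of shared (pseudonym, site) pairs between two users."""
        pairs_a, pairs_b = set(a.items()), set(b.items())
        union = pairs_a | pairs_b
        return len(pairs_a & pairs_b) / len(union) if union else 0.0

    me   = {"news": "news.ycombinator.com", "g": "google.com"}
    them = {"news": "news.ycombinator.com", "g": "google.com", "/.": "slashdot.org"}

    if similarity(me, them) > 0.5:
        print({k: v for k, v in them.items() if k not in me})  # {'/.': 'slashdot.org'}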
Oddly enough, I have news.yc mapped to this site, /. to Slashdot, proggit to... proggit, trac to the Trac site of one of my projects, gmail to Gmail, g to Google, and so on, thanks to a convenient Safari add-on called Saft. :)
The problem with the group version of this is that while people here may map "news" to "news.ycombinator.com", someone else might map it to CNN or news.google.com or the Drudge Report or any other news site. It would be inconvenient.
(So I guess what I mean is that it's a sweet idea, but it needs to be overridable by the end user.)
I should mention that I thought of this in 10 minutes, and don't feel too strongly about my proposed solution. I do feel strongly that the current system is broken and in need of institutional change.
I think lots of people have this issue, and it's backwards.
Domains are used as identifiers, so people associate them with brands.
They are horrible identifiers. It needs to stop. Your domain is not your brand.
I'm incredibly lucky to have tipjoy.com.
It's such an excellent and short name, on a .com. I'm amazed it was available. That this is the exception is an indication of the problem.