And they aren't. The nic.ly page says about 10k have been registered, meaning they make about $750k/year. The country's GDP is five orders of magnitude larger.
The code looks great. The mapping features seem strong (thanks in part to the power of ActiveModel). However, by restricting the adapter interface to a simple key/value store, don't you lose access to many of the features that make each backend distinct?
I think Redis, Cassandra, MongoDB, etc. are great, but to me it's the differences amongst them that are interesting, not the similarities.
So John in the above link was talking about how, when you scale, you end up looking at bottlenecks, which are usually slow data accesses, and moving them into some sort of key-value store. By the time you're well into scaling your app, you notice that much of your data access has been reduced to key lookups to attain the performance you need.
Since (according to his experience scaling) most of the performance bottlenecks seem to follow this pattern, he asked, "what if you restricted your data access to just key-value pairs from the very beginning?" That way you avoid some of the data access headaches later on. This project is what came out of answering that question.
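To make the discipline concrete, here is a minimal sketch of what "key-value access from day one" might look like. The `Store` class and its method names are purely illustrative (not from any real library); the point is that every read is a key lookup, so swapping the in-memory Hash for Redis or memcached later changes only the backend, not the access pattern.

```ruby
# Illustrative sketch: the app is only ever allowed to read and write by
# key, mirroring the access pattern a key-value backend would enforce.
class Store
  def initialize(backend = {})
    @backend = backend # a Hash standing in for Redis/memcached/etc.
  end

  # The only read operation the app gets: fetch by key.
  def read(key)
    @backend[key]
  end

  def write(key, value)
    @backend[key] = value
  end
end

store = Store.new
store.write("user:42", { name: "Alice" })
store.read("user:42")  # => { name: "Alice" }
```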
The same tactic was used by Google for Google App Engine - "if it doesn't scale then we're not including it as a feature". This is why so much of Google App Engine's documentation focuses on scaling [1]. Depending on who you ask this is genius or folly.
By forcing yourself into this tactic you end up having to consider all the scaling complexity in the prototyping stage, before your app has even proven itself potentially successful. If your app becomes wildly popular, then the combination of your early work and Google's behind-the-scenes scaling means there are far fewer problems for you to worry about.
My major concern with this method is that people already focus far too much on premature optimization before they even know where the real bottlenecks in their application are. If premature optimization leads to burnout or fewer features, then it's a poison to our projects and should be avoided.
I think that's a good tactic; certainly a valid problem to solve. I think the headline on HN oversells the ability to use different backends, whereas the real value (as explained on railstips.org) is much more about destructuring data into this scalable key-value access pattern.
You'll also have to do 'bundle update <broken-gem>', commit the resulting Gemfile.lock and hopefully wait for your CI build to pass before deploying. Doing that 15 times would be onerous.
I wouldn't be swayed either way by a high Stack Overflow rating.
That said, asking and answering technical questions can be really beneficial. It's much harder to write about something clearly than some people realise, and a great skill to develop.
Also, rather than the whole reputation, I might take interest in particular answers: a single solid answer to a difficult question demonstrates a lot more than many point scoring answers to easier ones.
He should also define respond_to? on the DoNotDisturb class. Relying on method_missing to pass calls to respond_to? to the proxied class will not work, as it is defined in Object. Responding to a method when respond_to? returns false breaks the class contract.
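A sketch of what honouring that contract looks like, assuming a proxy shaped like the one in the post (`DoNotDisturb` and `@target` are stand-ins for the original's names). Modern Ruby prefers overriding `respond_to_missing?`, which `respond_to?` consults, over redefining `respond_to?` directly:

```ruby
# A delegating proxy that keeps respond_to? truthful for forwarded methods.
class DoNotDisturb
  def initialize(target)
    @target = target
  end

  # Forward unknown calls to the proxied object.
  def method_missing(name, *args, &block)
    if @target.respond_to?(name)
      @target.send(name, *args, &block)
    else
      super
    end
  end

  # Without this, respond_to? would return false for methods that
  # method_missing happily handles, breaking the class contract.
  def respond_to_missing?(name, include_private = false)
    @target.respond_to?(name, include_private) || super
  end
end

proxy = DoNotDisturb.new("hello")
proxy.respond_to?(:upcase)  # => true
proxy.upcase                # => "HELLO"
```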
From the OP: I agree with you guys (or gals) on both calling super and redefining respond_to?(). Those are two of the "caveats" I mentioned at the end of the original post. I considered mentioning them explicitly, but the post is long enough already.
That's still the wrong approach (if it's the only part of the solution), and I wouldn't be surprised if there's still a problem in there somewhere. That's the entirely wrong place to deal with this. The correct solution is the moral equivalent of "&lt;a href='" + html_escape(url) + "'&gt;", where "html_escape" converts the URL into a properly encoded HTML string regardless of its contents. For simplicity I'm assuming some other cleansing process has run on the URL elsewhere (to ensure http: or https: is the only legal beginning, etc). This is how you ensure you don't get XSS in your link; other security properties you may desire, such as controlling what the user can link to, get enforced elsewhere.
Then it simply doesn't matter what the user has managed to get down to the link generation code; the html_escape code should at least ensure that the user is stuck inside the link itself. There are some paranoia things such a function should still do, such as removing all characters that are not legal in links and removing all invalid characters (incorrect UTF-8, for instance); consult the relevant standards for a full description. But this is still far easier, and therefore more likely to correctly avoid XSS, than trying to pick up all possible badness at the parse step.
It continues to astonish me how hard people make this and how much developers resist being told that their code is problematic, and how surprised they are when their site gets taken down by the stupidest errors....
Also, if at all possible, I strongly endorse environments where you don't literally type "<a href='" + html_escape(url) + "'>", because you will forget the html_escape. There are a variety of ways to reach this goal, depending on language.
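In Ruby terms, a minimal sketch of escaping at the point of HTML generation might look like the following. `ERB::Util.html_escape` from the standard library stands in for whatever html_escape your framework provides; `link_to` is an illustrative name, not Rails's helper. The point is that the URL is escaped for the attribute context regardless of its contents:

```ruby
require "erb"

# Escape the URL at the moment the HTML is generated, so nothing the user
# smuggled into it can break out of the href attribute.
def link_to(url)
  escaped = ERB::Util.html_escape(url)
  "<a href=\"#{escaped}\">#{escaped}</a>"
end

link_to(%q{http://example.com/?q="><script>})
# The quote and angle brackets come out as &quot;, &gt;, &lt;, so the
# payload stays inert inside the attribute.
```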
I don't understand what they are doing. I don't recall @ having any special significance in a URL.
I can only guess that they have two separate steps for transforming URLs into links and transforming @replies into links. Then they first run the URL transformer and then the @replies transformer, which would of course mess up the URL.
I have solved that problem in one of my Twitter apps (transforming both in one go), maybe I should send them a code snippet...
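The one-pass approach can be sketched with a single regex alternation: because one pattern matches either a URL or an @reply, the @reply rule can never fire inside text a URL match has already consumed. The regexes here are deliberately simplified, and `linkify` is an illustrative name:

```ruby
require "erb"

# Transform URLs and @replies in one pass. The alternation guarantees a URL
# containing an @ is consumed whole before the @reply branch can see it.
def linkify(text)
  text.gsub(%r{(https?://\S+)|@(\w+)}) do
    if (url = Regexp.last_match(1))
      escaped = ERB::Util.html_escape(url)
      "<a href=\"#{escaped}\">#{escaped}</a>"
    else
      name = Regexp.last_match(2)
      "<a href=\"http://twitter.com/#{name}\">@#{name}</a>"
    end
  end
end

linkify("see http://example.com/@alice and ask @bob")
# The @alice inside the URL stays part of the URL link; only @bob becomes
# a twitter.com link.
```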
They are trying to match URLs so that they can turn them into links. The @ character is valid in a URL. What I don't understand is why they don't URL encode the matching text.
As Rule #5 of your own link states: "WARNING: Do not encode complete or relative URL's with URL encoding! URL's should be encoded based on the context of display like any other piece of data. For example, user driven URL's in HREF links should be attribute encoded."
URL encoding is for querystring parameters. The HTML escaping is for the inside of attributes. You need to do both, in the proper place; I assumed you already had a URL with the proper escaping at the time that I was discussing, again, for simplicity, because the full story doesn't really fit in an HN comment: http://www.jerf.org/iri/post/2548
That's also why I mention you need a separate phase specially for URLs, where you will for instance immediately reject any URL that does not start with one of your whitelisted protocols, which "javascript:" won't be on. "javascript:" is far from the only protocol that can get you in trouble, it's just the most obvious.
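A sketch of that separate vetting phase, assuming Ruby's standard `URI` library; `ALLOWED_SCHEMES` and `safe_url?` are illustrative names. Anything whose scheme isn't on the explicit whitelist is rejected before it ever reaches the link generator:

```ruby
require "uri"

# Whitelist of protocols we will ever turn into a link.
ALLOWED_SCHEMES = %w[http https].freeze

# Reject anything that isn't a parseable URL with a whitelisted scheme,
# which excludes "javascript:" along with the less obvious dangerous ones.
def safe_url?(url)
  scheme = URI.parse(url).scheme
  ALLOWED_SCHEMES.include?(scheme&.downcase)
rescue URI::InvalidURIError
  false
end

safe_url?("https://example.com")  # => true
safe_url?("javascript:alert(1)")  # => false
```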
The comments here remind me of an old Irish joke where a hopelessly lost tourist asks an old man by the side of the road "Can you tell me how to get to Dublin?". After a few minutes thinking, the man replies "Well, you don't want to start from here".
I doubt 37signals wanted to be in a place where an apparently simple change would involve so much work, but that's where they found themselves. They did what they had to do. There's no point snarking about their starting place without knowing how and why they got there.
Your French version feels authentic to me, but then so does the Irish one. I doubt one can say where jokes like these originate. All cultures and languages probably have versions of them.
The way I heard the joke in Vermont always involved a tourist asking an old farmer for directions and getting the answer "Well, you can't get there from here."
http://en.wikipedia.org/wiki/Oil_reserves_in_Libya