The examples in the OP, as in basically every article that describes these kinds of practices, are horribly contrived.
The lighter-weight frameworks we have now in languages like Ruby and Python do have abstraction to deal with changing out bits of technology, but they've reached a point where they abstract only the things that experience has shown are likely to change, and support only the changes that are likely to happen.
Nobody in the real world is suddenly going to decide to "persist" their employee records to volatile local memory instead of something permanent like a database. Introducing new layers of abstraction -- with the attendant increase in complexity and potential abstraction leak -- to support those types of contrived hypotheticals is how overabstracted systems like J2EE come to be.
> And make no mistake: it is always about supporting the hypotheticals, never about supporting what's actually really needed by the system.
> The lighter-weight frameworks we have now in languages like Ruby and Python do have abstraction to deal with changing out bits of technology, but they've reached a point where they abstract only the things that experience has shown are likely to change, and support only the changes that are likely to happen.
So is abstraction always about supporting hypotheticals or only when you're exaggerating?
Hardcoding a switch statement and direct references into a 10,000-line module with different behaviour per case (e.g. different rules for different jurisdictions) is untestable and unmaintainable. Abstraction has value in these scenarios. And like everything, it can be abused (e.g. abstracting over 3 scenarios, 5 lines of code in total, to support a one-off business case that will be discarded after running once). That doesn't mean abstraction no longer has value.
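To make that concrete, here is a minimal Python sketch (the names TaxRule, UkTaxRule and invoice_total are invented for illustration, as are the rates): jurisdiction-specific behaviour sits behind one small interface instead of being hardcoded into a giant conditional, so each rule can be tested on its own and a new jurisdiction is a new class rather than another branch.

    from abc import ABC, abstractmethod

    class TaxRule(ABC):
        """Per-jurisdiction behaviour behind one small interface (illustrative names)."""

        @abstractmethod
        def tax_due(self, amount: float) -> float:
            ...

    class UkTaxRule(TaxRule):
        def tax_due(self, amount: float) -> float:
            return amount * 0.20

    class DeTaxRule(TaxRule):
        def tax_due(self, amount: float) -> float:
            return amount * 0.19

    RULES = {"UK": UkTaxRule(), "DE": DeTaxRule()}

    def invoice_total(jurisdiction: str, amount: float) -> float:
        # Each rule is testable in isolation; adding a jurisdiction means
        # adding a class, not editing a 10,000-line switch.
        return amount + RULES[jurisdiction].tax_due(amount)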
Furthermore, languages with static type checking require different styles of testing and coding (e.g. more explicit abstraction) than languages with dynamic type checking. Neither approach is universally better for all problem solving. Criticizing features of well-designed code in one language that wouldn't be necessary in another language is like criticizing a car for having wheels because boats do fine without them.
Let's take a framework I know pretty well: Django.
It's not uncommon to switch from one database to another (say, MySQL to Postgres). Django's DB abstractions keep you from having to really worry about what actual database you're running on; unless you had hard-coded dialect-specific SQL somewhere, you just flip a couple settings and now you're talking to the other database.
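Roughly what that settings flip looks like in a typical settings.py (the names and credentials below are placeholders):

    # settings.py -- what "flip a couple settings" means in practice.
    # Before, pointing at MySQL:
    #
    #     DATABASES = {"default": {"ENGINE": "django.db.backends.mysql", ...}}
    #
    # After, pointing at Postgres; the ORM code above this layer is unchanged.
    DATABASES = {
        "default": {
            "ENGINE": "django.db.backends.postgresql",
            "NAME": "myapp",          # placeholder values
            "USER": "myapp",
            "PASSWORD": "change-me",
            "HOST": "127.0.0.1",
            "PORT": "5432",
        }
    }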
Same for changing replication setups, for changing authentication mechanisms, for changing logging setup, for changing how you do caching... all of these are things that can and in the real world do change, either from testing to production environments or over the life of a production application.
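Caching is the same story; a sketch of a settings.py that uses the local-memory backend everywhere except production (the DJANGO_ENV variable is just an illustration of however you distinguish environments):

    # settings.py -- local-memory cache while developing and testing,
    # memcached in production, with the calling code unchanged.
    import os

    if os.environ.get("DJANGO_ENV") == "production":   # env-var name is illustrative
        CACHES = {
            "default": {
                "BACKEND": "django.core.cache.backends.memcached.PyMemcacheCache",
                "LOCATION": "127.0.0.1:11211",
            }
        }
    else:
        CACHES = {
            "default": {"BACKEND": "django.core.cache.backends.locmem.LocMemCache"}
        }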
So it makes sense to abstract those, and the abstraction is backed by "these are things people have really needed to do frequently".
What I have a problem with, and what I criticize as overabstraction, is when someone then comes along and says "well, what if you replace the persistence layer with something that's not even persistent, like volatile memory or stdout (which is actually logging, not persistence, at that point -- a confusion of concerns!)". And then they write a blog post explaining how you really should keep abstracting until the code can "persist" data to those things.
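In code, the thing I'm complaining about tends to look something like this sketch (EmployeeStore and both implementations are invented names), where the "persistence" interface is stretched to cover backends nobody would actually deploy:

    from abc import ABC, abstractmethod

    class EmployeeStore(ABC):
        @abstractmethod
        def save(self, employee: dict) -> None:
            ...

    class InMemoryEmployeeStore(EmployeeStore):
        def __init__(self):
            self._rows = []

        def save(self, employee: dict) -> None:
            self._rows.append(employee)   # "persisted" until the process exits

    class StdoutEmployeeStore(EmployeeStore):
        def save(self, employee: dict) -> None:
            print(employee)               # logging wearing a persistence costume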
And that's why I say that the examples almost always feel incredibly contrived; it's like somebody didn't know when to stop, and just kept abstracting everything they could find until they ended up with an overengineered mess. Static/dynamic actually has very little to do with this, since even languages that do static typing in overly-verbose and un-useful ways can handle the kinds of abstractions people actually use.
So I don't see a point in re-architecting for these weird contrived hypotheticals, which always seem to be the focus of whatever we're calling the indirect-abstraction-for-everything pattern nowadays; it produces code that's more complex than necessary, has more layers of indirection (and hence bugs) than necessary, and doesn't actually gain any utility in the process.
Correct me if I'm wrong, but one of the points of using dummy persistence is testing. You can delay using a database for a long time this way, have tests that finish quickly, and so on. Doing this within the confines of a Rails-like MVC is next to impossible.
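What I mean is something like this rough sketch (PayrollService and FakeEmployeeStore are made-up names): the service only talks to whatever store it's handed, so unit tests can use a trivial in-memory fake and never hit a database.

    import unittest

    class FakeEmployeeStore:
        """In-memory stand-in for a database-backed store (illustrative)."""

        def __init__(self):
            self.saved = []

        def save(self, employee: dict) -> None:
            self.saved.append(employee)

    class PayrollService:
        """Hypothetical service that only talks to whatever store it is given."""

        def __init__(self, store):
            self.store = store

        def hire(self, name: str) -> None:
            self.store.save({"name": name})

    class HiringTest(unittest.TestCase):
        def test_hire_saves_employee(self):
            store = FakeEmployeeStore()
            PayrollService(store).hire("Ada")
            self.assertEqual(store.saved, [{"name": "Ada"}])

    if __name__ == "__main__":
        unittest.main()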