I have a deep dislike for overabstracting a system merely because someone has a list of hypothetical use cases. And make no mistake: it is always about supporting the hypotheticals, never about supporting what's actually really needed by the system.
Down this road lie dozens of layers of abstracted factories and strategy implementations which exist just in case someone wants to "persist" a relational object over a serial printer line. YAGNI.
I don't understand how anything on HN that discusses a more complex software architecture is immediately called J2EE/enterprisy and dismissed.
Is this because the majority of the community is self-taught? Or is this because most of you only build MVPs which are mostly CRUD apps (and therefore don't know from first hand experience the benefits of a modular system)?
The constant negative reaction to anything a little more complex is frankly laughable.
Similarly, any criticism of overcomplexity is immediately met with a dismissal as anti-intellectualism rather than a justification of why it's necessary.
> Or is this because most of you only build MVPs which are mostly CRUD apps (and therefore don't know from first hand experience the benefits of a modular system)?
This is what I use Rails for, and its entire reason for existing. It's a set of conventions for CRUD apps, not a general-purpose language for building distributed databases and coordinating space missions.
The problem with these complex software architectures is that they're decoupling Rails from the database, but why in the world would you want to do that? Rails is great because it manages the meeting point between web requests and relational databases in an elegant, repeatable, commonly-understood way. If you need a complex, general-purpose system that only sometimes will talk to a database for persistence, why in God's name did you use Rails?
Come on. Nobody is defending "overcomplexity", which is by definition a rather indefensible position. The issue is the frequent arrogance exhibited here by commenters who insist that certain commonly and successfully used design patterns have no place in the world.
Of course I mean defending against the charge of overcomplexity rather than defending overcomplexity itself.
I am sure there are arrogant commenters around, but these commonly used design patterns applied in this case have made his code worse for next to no practical benefit. Besides test speed, there's no reason you'd want to divorce your Employee model from the database it lives in -- the fact that ActiveRecord reflects against the database to decide what attributes an Employee has should be a clue that coupling to the DB is the point of using it in the first place.
> who insist that certain commonly and successfully used design patterns have no place in the world.
I don't think the architecture skeptics claim as much. I believe you're restating the claim in a more extreme way than it's commonly expressed, thus making it clearly indefensible.
The claim isn't that certain architecture patterns have no place in the world. Rather, the claim is that those patterns aren't a good fit for most Rails apps.
Pretty much any pattern (that's not a commonly-accepted antipattern) has some good use case, somewhere.
Because it is actually a very good, mature application server with lots of sane defaults and good, mature plugins, even if you aren't using a database. Basically, people are interested in using it for more complex applications than you are, because lots of its conventions are still quite good for those applications, even if it makes sense to reject or reconsider some other conventions that don't work quite as well.
> using it for more complex applications than you are
Are you sure? How complex do you mean? How complex do you presume the parent commenter's apps to be? Could you give an example of one of those more complex apps? I'm wondering if I'm currently building apps of the "simple" or "complex" variety, according to your terminology.
It actually wasn't my terminology... If you read the parent of my comment, the self-proclaimed complexity was, literally, "MVPs which are mostly CRUD apps". I think most Rails applications I've used defy that description, and I don't agree with that comment that it is "the entire reason for Rails existing" any more than I would agree that the entire reason for PHP existing is to make personal home pages.
Test speed is not the only limiting factor in your ability to produce better software faster. Readability, comprehensibility, and approachability all matter too, and this architectural technique is bad for all of those.
In principle, the current Rails architecture can support that. Rails permits the test environment to use a different database adapter. There's nothing stopping anyone from decoupling the database at that level. I.e. you can pick a super-fast persistence strategy for the test database adapter, if you so choose. If there isn't an adapter you find fast enough, you can even create one. (Such a project would no doubt be well-received.)
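Concretely, Rails reads per-environment settings from `config/database.yml`, so the test environment can already point at a faster adapter without any change to application code. A minimal sketch (the adapter choices here are illustrative, and an in-memory SQLite test database trades production fidelity for speed):

```yaml
# config/database.yml -- illustrative only
development:
  adapter: postgresql
  database: myapp_development

test:
  # A faster store just for tests, e.g. SQLite running in memory.
  adapter: sqlite3
  database: ":memory:"
```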
Anyway, for my part, I prefer to run my tests against the same kind of DB I use in production. It gives me greater assurances. Especially when a web app can sometimes depend on the peculiarities of a certain persistence layer. E.g., your app has a search engine that runs custom SQL--you really want to test that against the DB.
Would it be nice to run some of my tests without the DB? Absolutely. Some tests just don't need to test anything DB-related. My guess is that Rails' support for that use case will grow organically over time.
I think that was the conclusion the OP arrived at, actually: for a hexagonal application you don't need Rails anymore. Hence the "Rails is gone" in the title.
He clearly needs Rails still, unless he's going to distribute his app through IRB. The post is about pulling Rails out of the domain logic and making it replaceable; I'm saying that's pointless.
The examples in the OP, as in basically every article that describes these kinds of practices, are horribly contrived.
The lighter-weight frameworks we have now in languages like Ruby and Python do have abstraction to deal with changing out bits of technology, but they've reached a point where they abstract only the things that experience has shown are likely to change, and support only the changes that are likely to happen.
Nobody in the real world is suddenly going to decide to "persist" their employee records to volatile local memory instead of something permanent like a database. Introducing new layers of abstraction -- with the attendant increase in complexity and potential abstraction leak -- to support those types of contrived hypotheticals is how overabstracted systems like J2EE come to be.
> And make no mistake: it is always about supporting the hypotheticals, never about supporting what's actually really needed by the system.
> The lighter-weight frameworks we have now in languages like Ruby and Python do have abstraction to deal with changing out bits of technology, but they've reached a point where they abstract only the things that experience has shown are likely to change, and support only the changes that are likely to happen.
So is abstraction always about supporting hypotheticals or only when you're exaggerating?
Implementing a switch statement and hardcoding references into 10,000-line modules with different behaviour (e.g. different rules for different jurisdictions) is untestable and unmaintainable. Abstraction has value in these scenarios. And just like everything, it can be abused (e.g. abstracting over 3 scenarios with 5 lines of code in total to support a particular one-off business case that is going to be discarded after running once). That doesn't mean abstraction no longer has value.
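To make the contrast concrete, here is a hedged Ruby sketch of the jurisdiction example; the jurisdictions and tax rates are invented for illustration.

```ruby
# Hardcoded variant: every new jurisdiction grows this case statement,
# and each branch can only be exercised through this one entry point.
def tax_hardcoded(amount, jurisdiction)
  case jurisdiction
  when :us then amount * 0.07
  when :de then amount * 0.19
  else raise ArgumentError, "unknown jurisdiction: #{jurisdiction}"
  end
end

# Abstracted variant: each rule is a small object that can be added and
# tested in isolation; callers depend only on the #apply interface.
class FlatRateTax
  def initialize(rate)
    @rate = rate
  end

  def apply(amount)
    amount * @rate
  end
end

TAX_RULES = {
  us: FlatRateTax.new(0.07),
  de: FlatRateTax.new(0.19)
}.freeze

def tax(amount, jurisdiction)
  TAX_RULES.fetch(jurisdiction).apply(amount)
end
```

The abstracted version costs a class definition up front, but a new jurisdiction becomes one entry in a hash rather than another branch in a growing conditional.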
Furthermore, languages with static type checking require different styles of testing and coding (e.g. abstraction) vs languages with dynamic type checking. Neither approach is universally better for all problem solving. Criticizing features of well designed code in one language that wouldn't be necessary in code written in another language is like criticizing a car for having wheels given that boats do fine without them.
Let's take a framework I know pretty well: Django.
It's not uncommon to switch from one database to another (say, MySQL to Postgres). Django's DB abstractions keep you from having to really worry about what actual database you're running on; unless you had hard-coded dialect-specific SQL somewhere, you just flip a couple settings and now you're talking to the other database.
Same for changing replication setups, for changing authentication mechanisms, for changing logging setup, for changing how you do caching... all of these are things that can and in the real world do change, either from testing to production environments or over the life of a production application.
So it makes sense to abstract those, and the abstraction is backed by "these are things people have really needed to do frequently".
What I have a problem with, and what I criticize as overabstraction, is when someone then comes along and says "well, what if you replace the persistence layer with something that's not even persistent, like volatile memory or stdout (which is actually logging, not persistence, at that point -- a confusion of concerns!)" And then they write a blog post explaining how really you should keep abstracting to the point that the code can "persist" data to those things.
And that's why I say that the examples almost always feel incredibly contrived; it's like somebody didn't know when to stop, and just kept abstracting everything they could find until they ended up with an overengineered mess. Static/dynamic actually has very little to do with this, since even languages that do static typing in overly-verbose and un-useful ways can handle the kinds of abstractions people actually use.
So I don't see a point in re-architecting for these weird contrived hypotheticals, which always seem to be the focus of whatever we're calling the indirect-abstraction-for-everything pattern nowadays; it produces code that's more complex than necessary, has more layers of indirection (and hence bugs) than necessary, and doesn't actually gain any utility in the process.
Correct me if I'm wrong, but one of the points of using dummy persistence is testing. You can delay using a database for a long time this way, have tests that finish quickly, etc. Doing this within the confines of a Rails-like MVC is next to impossible.
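A hedged sketch of that testing style: the domain code talks to whatever object responds to a small repository interface, and a test hands it an in-memory stand-in. All of the names here are invented for illustration.

```ruby
Employee = Struct.new(:id, :name)

# An in-memory stand-in for a real repository: no database, no I/O.
class InMemoryEmployeeRepository
  def initialize
    @rows = {} # id => Employee
  end

  def save(employee)
    @rows[employee.id] = employee
    employee
  end

  def find(id)
    @rows[id]
  end
end

# Domain logic depends only on the repository interface, so a test can
# exercise it without booting Rails or touching a real database.
def rename_employee(repo, id, new_name)
  employee = repo.find(id)
  raise ArgumentError, "no employee #{id}" unless employee

  repo.save(Employee.new(employee.id, new_name))
end
```

In production you would swap in a repository backed by a real database; the tests above the persistence boundary never notice the difference.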
> I don't understand how anything on HN that discusses a more complex software architecture is immediately called J2EE/enterprisy and dismissed.
I wouldn't say that always happens. It often happens when a 500-word blog post suggests effectively re-architecting a mature, successful framework like Rails. Let's think about why such a post is problematic.
Rails has evolved over quite a few years in response to actual needs on the ground. Tremendous amounts of ink have been spilled, and tremendous amounts of brainpower have been expended to create a mature framework like Rails (or similar frameworks).
Despite the battle-tested history of the framework, so many of these architecture blog posts imply that Rails' architecture is somehow insufficient, and then propose to fix that architecture in, say, 500 words. There's hardly any discussion of the idea's wide-ranging implications, of the trade-offs, or of the conveniences that are lost. If this idea is so good, why hasn't it ever found its way into Rails, even in diluted form? Why has the Rails team built the architecture they have, instead of yours? (Hint: They probably have a good reason.) Do you have strong evidence that your proposed architecture will serve me better than the one that I've been using successfully for years?
All that being said, I have no problem with idiosyncratic Rails techniques that only step a little outside what the framework provides. For example, service objects. Used appropriately (which usually means sparingly), they can help with organization without fundamentally warping the Rails app. Using a few service objects is a departure similar in magnitude to writing your JS in TypeScript instead of CoffeeScript. It's not built into Rails, but it doesn't really change any of the core concepts either.
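For concreteness, a hedged sketch of what such a service object might look like; the class name, entry point, and return shape are all invented for illustration, not a Rails convention.

```ruby
# A minimal service object: one job, one public entry point.
class RegisterEmployee
  def initialize(name)
    @name = name
  end

  # In a real app this is where record creation, mail delivery, etc.
  # would be orchestrated; here it just validates and returns a result.
  def call
    return { ok: false, error: "name required" } if @name.to_s.empty?

    { ok: true, name: @name }
  end
end
```

A controller would call something like `RegisterEmployee.new(params[:name]).call` and branch on the result, keeping multi-step orchestration out of both the controller and the model without otherwise departing from Rails.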
It does get a little tiring when somebody tells you "you're doing it wrong. It has to be more abstract and complex because you might become the next Facebook." And then everything turns into a StrategySingletonProxyFactoryBeanFactory.
It totally is. And the opposite is also really tiring. I feel like the silent majority are just trying to find a good middle path that works well with their specific applications, while the loud minority on both sides are yelling about how the other side is doing it wrong.