The problem with software is that it is too easy to get it working. If you did completely stupid shit while designing an analog electronic circuit, you would get stuck in your "inventions" very quickly and would not be able to deliver anything working beyond the simplest stuff; the difficulty would force you to adopt a sensible design approach or give up on electronics altogether.
In software engineering, on the other hand, the kind of wankery presented here lives on for years because there is no reality check - as long as a group of people feels good about themselves inventing fancy words and "techniques" without putting much effort into anything of substance, fads of this kind can live on. They can even deliver products written in this manner and get paid, because the projects do work, as far as most business projects are concerned; it is simply that the code is awful to look at.
Meanwhile, serious software - from "firmware" for space shuttles, through huge 3D game engines, compilers, and operating systems, to scientific software - manages to get developed without any need for this sort of thing. Somehow it's always the rather trivial software that gets those super-"architectures".
None of your examples makes the point you're trying to make:
Ask anyone who's had to debug game engine code, compilers, operating systems, or the worst of the lot, scientific code.
Most of those systems would actually benefit from proper design and architecture, but people like you decrying their "wankery" have relegated such introspection to the dustbin.
And that attitude is why software engineering is mostly a dark joke.
I've written more scientific code than most people here, and the OP is right: it's the last place you want wanker abstractions.
KISS applies to scientific code in spades. The concepts are hard enough to get right without all sorts of meta-logic muddling your thinking. Only when applications are truly trivial (i.e. "boring") do developers go on architecture spaceflights to keep themselves entertained.
The best way to keep things simple is to keep them small and composable, right? Like, say, in the Unix way?
I'm all for brutally-efficient and single-minded code in domains like mathematics or simulation modeling, where things like object encapsulation maybe don't make a great deal of sense (classic array-of-structs vs struct-of-arrays sort of thing).
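The array-of-structs vs struct-of-arrays distinction can be sketched in plain Ruby (the `Particle` type and the data here are hypothetical, just to show the two layouts):

```ruby
# Array-of-structs: one object per record, natural for encapsulation.
Particle = Struct.new(:x, :y)
aos = [Particle.new(0.0, 1.0), Particle.new(2.0, 3.0)]

# Struct-of-arrays: one array per field, natural for bulk numeric work,
# since processing a single field walks one contiguous array.
soa = { x: [0.0, 2.0], y: [1.0, 3.0] }

sum_x_aos = aos.sum(&:x)    # touches every object
sum_x_soa = soa[:x].sum     # touches only the :x array
```

In a language with real memory layout (C, Fortran) the SoA version is also the cache-friendly one, which is why object-per-record encapsulation can work against you in simulation code.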
That said, keeping that stuff neatly boxed and then moving everything else (say, file I/O, logging, whatever) into neatly abstracted boxes makes sense. More sense than most of the monolithic piles of Matlab and Java and C and Fortran I've seen, written by people rushing to finish a paper.
But hey, it's not like the next generation of grad students has anything to do but spend long hours debugging poorly-documented and poorly-designed code, right?
And it's not like there is any sort of commercial financial or reaction modeling software that needs to be anything other than inscrutable monoliths, right--you know, something which could actually hurt somebody physically or fiscally. Performance is king, after all.
(Hint: people who sacrifice engineering and design for performance or simplicity deserve neither.)
The choice is not "monolithic piles of Fortran" or abstraction hell...there is a middle ground.
All of your comments on this thread seem to be predicated on the assumption that this blog post is an example of something good. It isn't. It's a complicated, overwrought non-solution to an imagined problem.
Nobody here is arguing against abstraction, but abstraction needs to be used judiciously, and the best abstractions don't stray far from the application domain. It's a subjective thing that comes largely from experience, but you've likely gone off the path to enlightenment when you start creating ("hexagonal") meta-frameworks to create framework frameworks.
You and I know there is a middle ground, but there is something very appealing about deriding the (hard) work of building abstractions and meditating on design principles in favor of "get shit done" coding.
This article was an interesting little exploration--mostly pointing out that by not using Rails, but by conforming to its interfaces, you could do some interesting things. It was a harmless little piece, not something you might want to do in production, but interesting nonetheless in its own right.
The tone of my comments is in regards to the top-level objections and phrasing these people are using--"wankery", "architecture astronauts", etc.
Look, I get it; most of us at one point or another have dealt with JavaEnterpriseFactoryBridgePatternAdaptorAnnotationAnnotationFlyweights and have had to meander through scores of abstracted calls to finally figure out something that should've been a direct function invocation. The dot-com bubble hurt many programmers, sure. Enterprise Jabba is a bloated king in a crumbling castle; whatever floats your boat.
There are people--in this very thread!--who are arguing against any abstraction. They do so in the name of performance, and in the name of domain modeling, and in the name of a dozen other half-articulated little fears and inconveniences.
I've had to deal with code written by academics that implemented brilliant solutions to some problems, and yet remained impenetrable. Ask for help, and the answer was "figure it out yourself". These are the same chucklefucks who don't comment their code, who are uncooperative on projects trying to clean up and make libraries out of their work, and who generally think that making it out of a doctorate program with a paper about their tiny slice of new human knowledge somehow grants them any weight whatsoever in discussions about software engineering or working on teams.
It makes sense that they'd be against abstraction--them and the trading folks and game developers and everyone else whose livelihood depends on excruciatingly-specific implementation details which are nothing more than an artifact of their time, a form of technical arbitrage which pays for their miserable existence.
Of course they hate abstraction, because it is very hard to convince somebody of something when their livelihood depends on them not understanding it!
Abstractions and "proper design architecture" of the sort in this article have very little place in scientific code meant for the efficient study of complicated systems, at least.
It usually gets in the way of what's important: performance and scientific correctness.
That might be fine if the majority of software developers worked on game engines, compilers, or operating systems. But we're talking about Rails applications here.
Yes, Web applications are somehow magically exempt from sound engineering practices.
If you want to make the argument that shitty Rails hairballs that don't scale aren't a problem, because they fulfill a business need and fill it quickly, I'll agree.
That said, that's a business decision, not a technical one.
Every application can benefit from sound engineering practices. Creating a complicated monstrosity when you're not going to need it is not sound engineering practice. At least I don't think it is, but then again I don't consider myself an "engineer."
Once again, Java is a great example. For over a decade, Java developers have piled on more and more "sound engineering practices." Maybe it's just me, but these Java applications are not easy to extend or modify in any way.
I have a deep dislike for overabstracting a system merely because someone has a list of hypothetical use cases. And make no mistake: it is always about supporting the hypotheticals, never about supporting what's actually really needed by the system.
Down this road lie dozens of layers of abstracted factories and strategy implementations which exist just in case someone wants to "persist" a relational object over a serial printer line. YAGNI.
I don't understand how anything on HN that discusses a more complex software architecture is immediately called J2EE/enterprisy and dismissed.
Is this because the majority of the community is self-taught? Or is this because most of you only build MVPs which are mostly CRUD apps (and therefore don't know from first hand experience the benefits of a modular system)?
The constant negative reaction to anything a little more complex is frankly laughable.
Similarly, any criticism of overcomplexity is immediately met with a dismissal as anti-intellectualism rather than a justification of why it's necessary.
> Or is this because most of you only build MVPs which are mostly CRUD apps (and therefore don't know from first hand experience the benefits of a modular system)?
This is what I use Rails for, and its entire reason for existing. It's a set of conventions for CRUD apps, not a general-purpose language for building distributed databases and coordinating space missions.
The problem with these complex software architectures is that they're decoupling Rails from the database, but why in the world would you want to do that? Rails is great because it manages the meeting point between web requests and relational databases in an elegant, repeatable, commonly-understood way. If you need a complex, general-purpose system that only sometimes will talk to a database for persistence, why in God's name did you use Rails?
Come on. Nobody is defending "overcomplexity", which is by definition a rather indefensible position. The issue is the frequent arrogance exhibited here by commenters who insist that certain commonly and successfully used design patterns have no place in the world.
Of course I mean defending against the charge of overcomplexity rather than defending overcomplexity itself.
I am sure there are arrogant commenters around, but these commonly used design patterns applied in this case have made his code worse for next to no practical benefit. Besides test speed, there's no reason you'd want to divorce your Employee model from the database it lives in -- the fact that ActiveRecord reflects against the database to decide what attributes an Employee has should be a clue that coupling to the DB is the point of using it in the first place.
> who insist that certain commonly and successfully used design patterns have no place in the world.
I don't think the architecture skeptics claim as much. I believe you're restating the claim in a more extreme way than it's commonly expressed, thus making it clearly indefensible.
The claim isn't that certain architecture patterns have no place in the world. Rather, the claim is that those patterns aren't a good fit for most Rails apps.
Pretty much any pattern (that's not a commonly-accepted antipattern) has some good use case, somewhere.
Because it is actually a very good, mature application server with lots of sane defaults and good, mature plugins, even if you aren't using a database. Basically, people are interested in using it for more complex applications than you are, because lots of its conventions are still quite good for those applications, even if it makes sense to reject or reconsider some other conventions that don't work quite as well.
> using it for more complex applications than you are
Are you sure? How complex do you mean? How complex do you presume the parent commenter's apps to be? Could you give an example of one of those more complex apps? I'm wondering if I'm currently building apps of the "simple" or "complex" variety, according to your terminology.
It actually wasn't my terminology... If you read the parent of my comment, the self-proclaimed complexity was, literally, "MVPs which are mostly CRUD apps". I think most Rails applications I've used defy that description, and I don't agree with that comment that it is "the entire reason for Rails existing" any more than I would agree that the entire reason for PHP existing is to make personal home pages.
Test speed is not the only limiting factor in your ability to produce better software faster. Readability, comprehensibility, and approachability all matter too, and this architectural technique is bad for all of those.
In principle, the current Rails architecture can support that. Rails permits the test environment to use a different database adapter. There's nothing stopping anyone from decoupling the database at that level. I.e. you can pick a super-fast persistence strategy for the test database adapter, if you so choose. If there isn't an adapter you find fast enough, you can even create one. (Such a project would no doubt be well-received.)
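Concretely, that kind of swap is just a per-environment entry in `config/database.yml`; a sketch (database names are made up), assuming the sqlite3 adapter is available:

```yaml
# config/database.yml (sketch): production talks to Postgres,
# while the test environment swaps in an in-memory SQLite database.
production:
  adapter: postgresql
  database: myapp_production

test:
  adapter: sqlite3
  database: ":memory:"
```

No application code changes; the models stay coupled to ActiveRecord either way, which is the point.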
Anyway, for my part, I prefer to run my tests against the same kind of DB I use in production. It gives me greater assurances. Especially when a web app can sometimes depend on the peculiarities of a certain persistence layer. E.g., your app has a search engine that runs custom SQL--you really want to test that against the DB.
Would it be nice to run some of my tests without the DB? Absolutely. Some tests just don't need to test anything DB-related. My guess is that Rails' support for that use case will grow organically over time.
I think that was the conclusion the OP actually arrived at: for a hexagonal application you don't need Rails anymore. Hence the "Rails is gone" in the title.
He clearly needs Rails still, unless he's going to distribute his app through IRB. The post is about pulling Rails out of the domain logic and making it replaceable; I'm saying that's pointless.
The examples in the OP, as in basically every article that describes these kinds of practices, are horribly contrived.
The lighter-weight frameworks we have now in languages like Ruby and Python do have abstraction to deal with changing out bits of technology, but they've reached a point where they abstract only the things that experience has shown are likely to change, and support only the changes that are likely to happen.
Nobody in the real world is suddenly going to decide to "persist" their employee records to volatile local memory instead of something permanent like a database. Introducing new layers of abstraction -- with the attendant increase in complexity and potential abstraction leak -- to support those types of contrived hypotheticals is how overabstracted systems like J2EE come to be.
> And make no mistake: it is always about supporting the hypotheticals, never about supporting what's actually really needed by the system.
> The lighter-weight frameworks we have now in languages like Ruby and Python do have abstraction to deal with changing out bits of technology, but they've reached a point where they abstract only the things that experience has shown are likely to change, and support only the changes that are likely to happen.
So is abstraction always about supporting hypotheticals or only when you're exaggerating?
Implementing a switch statement and hardcoding references into 10,000-line modules with different behaviour (e.g. different rules for different jurisdictions) is untestable and unmaintainable. Abstraction has value in these scenarios. And, like everything, it can be abused (e.g. abstracting over 3 scenarios with 5 lines of code in total to support a particular one-off business case that will be discarded after running once). That doesn't mean abstraction no longer has value.
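The jurisdiction case is the textbook spot where a strategy object beats a switch. A minimal Ruby sketch (the classes, rates, and jurisdictions here are all hypothetical):

```ruby
# One small object per jurisdiction: each rule set can be
# tested in isolation, and adding a jurisdiction adds a class,
# not another branch in a 10,000-line case statement.
class UkVat
  def tax(amount)
    amount * 0.20
  end
end

class DeVat
  def tax(amount)
    amount * 0.19
  end
end

RULES = { uk: UkVat.new, de: DeVat.new }.freeze

def tax_for(jurisdiction, amount)
  RULES.fetch(jurisdiction).tax(amount)
end
```

`RULES.fetch` fails loudly on an unknown jurisdiction, which a fall-through `case` branch often doesn't.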
Furthermore, languages with static type checking require different styles of testing and coding (e.g. abstraction) vs languages with dynamic type checking. Neither approach is universally better for all problem solving. Criticizing features of well designed code in one language that wouldn't be necessary in code written in another language is like criticizing a car for having wheels given that boats do fine without them.
Let's take a framework I know pretty well: Django.
It's not uncommon to switch from one database to another (say, MySQL to Postgres). Django's DB abstractions keep you from having to really worry about what actual database you're running on; unless you had hard-coded dialect-specific SQL somewhere, you just flip a couple settings and now you're talking to the other database.
Same for changing replication setups, for changing authentication mechanisms, for changing logging setup, for changing how you do caching... all of these are things that can and in the real world do change, either from testing to production environments or over the life of a production application.
So it makes sense to abstract those, and the abstraction is backed by "these are things people have really needed to do frequently".
What I have a problem with, and what I criticize as overabstraction, is when someone then comes along and says "well, what if you replace the persistence layer with something that's not even persistent, like volatile memory or stdout (which is actually logging, not persistence, at that point -- a confusion of concerns!)" And then they write a blog post explaining how really you should keep abstracting to the point that the code can "persist" data to those things.
And that's why I say that the examples almost always feel incredibly contrived; it's like somebody didn't know when to stop, and just kept abstracting everything they could find until they ended up with an overengineered mess. Static/dynamic actually has very little to do with this, since even languages that do static typing in overly-verbose and un-useful ways can handle the kinds of abstractions people actually use.
So I don't see a point in re-architecting for these weird contrived hypotheticals, which always seem to be the focus of whatever we're calling the indirect-abstraction-for-everything pattern nowadays; it produces code that's more complex than necessary, has more layers of indirection (and hence bugs) than necessary, and doesn't actually gain any utility in the process.
Correct me if I'm wrong, but one of the points of using dummy persistence is testing. You can delay using a database for a long time this way, have tests that finish quickly etc. Doing this within the confines of a Rails like MVC is next to impossible.
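A minimal sketch of what dummy persistence looks like, with hypothetical class names (in a real app the production store would wrap ActiveRecord or similar; here only the in-memory double is shown):

```ruby
# Test double: an in-memory "database" backed by a hash,
# so domain logic can run without a real database connection.
class InMemoryStore
  def initialize
    @rows = {}
    @next_id = 0
  end

  def insert(attrs)
    @next_id += 1
    @rows[@next_id] = attrs
    @next_id
  end

  def find(id)
    @rows[id]
  end
end

# The repository only assumes insert/find, so any store
# with that duck type can be swapped in.
class EmployeeRepository
  def initialize(store)
    @store = store
  end

  def add(name:)
    @store.insert(name: name)
  end

  def name_of(id)
    @store.find(id)[:name]
  end
end
```

Tests construct `EmployeeRepository.new(InMemoryStore.new)` and never touch a schema, which is where the speed comes from. The trade-off the thread keeps circling: within Rails conventions, the model *is* the persistence layer, so this split fights the framework.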
> I don't understand how anything on HN that discusses a more complex software architecture is immediately called J2EE/enterprisy and dismissed.
I wouldn't say that always happens. It often happens when a 500-word blog post suggests effectively re-architecting a mature, successful framework like Rails. Let's think about why such a post is problematic.
Rails has evolved over quite a few years in response to actual needs on the ground. Tremendous amounts of ink have been spilled, and tremendous amounts of brainpower have been expended to create a mature framework like Rails (or similar frameworks).
Despite the battle-tested history of the framework, so many of these architecture blog posts imply that Rails' architecture is somehow insufficient. And then propose to fix that architecture, spending, say, 500 words explaining the idea. There's hardly any discussion of the idea's wide-ranging implications, of the trade-offs, or of the conveniences that are lost. If this idea is so good, why hasn't it ever found its way into Rails, even in diluted form? Why has the Rails team built the architecture they have, instead of yours? (Hint: They probably have a good reason.) Do you have strong evidence that your proposed architecture will serve me better than the one that I've been using successfully for years?
All that being said, I have no problem with idiosyncratic Rails techniques that only step a little outside what the framework provides. For example, service objects. Used appropriately (which usually means sparingly), they can help with organization without fundamentally warping the Rails app. Using a few service objects is a departure similar in magnitude to writing your JS in TypeScript instead of CoffeeScript. It's not built into Rails, but it doesn't really change any of the core concepts either.
It does get a little tiring when somebody tells you "you're doing it wrong. It has to be more abstract and complex because you might become the next Facebook." And then everything turns into a StrategySingletonProxyFactoryBeanFactory.
It totally is. And the opposite is also really tiring. I feel like the silent majority are just trying to find a good middle path that works well with their specific applications, while the loud minority on both sides are yelling about how the other side is doing it wrong.
Completely agree with you. Many of the tests don't do any kind of assertion; they simply ensure that an exception isn't thrown on the invocation of a method. An interface defined in a static language would render the majority of these tests unnecessary.
Having once been a platonist about this sort of stuff, I've rarely seen heavy code architecture work out well in practice (not so with system architecture.)
I'm reminded of the various Evolution Of a Programmer jokes which end with the master programmer writing the same code as the beginner programmer.
I've always read into those Evolution of a Programmer jokes that what the master learned through their evolution is the right and wrong time for all the complicated stuff they did in the earlier phases, and that anything simple enough to form the basis of a joke is merely the wrong time.
Excellent. Proof that you can make your code an unapproachable mess to a new team member by throwing out Rails conventions, and in return you get the much greater benefits of shaving 100 ms off your test run times and the ability to run it from the terminal.
This code is painful to read. As somebody new to a team, how can you "extend" code when you don't have a clue how the rest of the system works because somebody else decided it'd be a good idea to unit test controllers and abstract everything away because "we might need it someday"?
I'm not a huge fan of Rails architecture necessarily, but I have to admit I have trouble following the 'hexagonal architecture' stuff, even in simple examples like this.
It does seem to be a lot of abstraction. Of course, with abstraction comes flexibility, that's the point, I get it. But with (some kinds of? all?) abstraction also comes complexity and cognitive load for the developer. If you're not careful, you end up in Java FactoryGeneratorFactory land.
(I hope the next step in the 'hexagonal architecture' isn't using an XML file to specify all the linkages and concrete classes in use. How else do you specify what concrete @db etc your controller is instantiated with?)
... and of course, for those who apply modern Java development patterns, we no longer see plenty of FactoryGeneratorFactories.
I find it a bit odd that the community that has railed hard against Java suddenly came up with something more complex than... gasp... the solution in Java-land.
I didn't mean to imply that everything in Java was over-engineered, just that certain historical common Java practices were examples of what happens when you over-abstract and over-engineer. Certainly over-abstracting and over-engineering happens in every language too though, it's a hazard of the trade.
What community do you think has "suddenly" come up with which solution that's more complex than "the" solution in Javaland? I'm not even sure what you're talking about. There are of course many solutions in Javaland, naturally, and in every other code land.
I think engineering code, especially code shared between multiple developers/installations, is a constant tension between simplicity and abstraction/flexibility. It's sometimes possible to optimize both, but it requires a lot of skill and a lot of domain experience, and domain experience especially seems to be under-valued and under-present in the current environment. (Plus if the domain changes fast enough, nobody ever has enough domain experience!)
I think individual developers, as well as teams and communities (language-based or industry/domain-based), often swing from one end to the other. This legacy thing is too complicated, let's start over with new principles to keep it simple! This simple thing isn't as flexible as I want, let's add in some abstraction to make it possible to do what everyone needs; then do it again; then do it again; then start over at 0.
Like others here I'm not sure if this is a joke post or not, but... the TerminalAdapter example is not an abstraction. It's simply reimplementing a very specific protocol that implements "render" and "redirect_to". This protocol is not something that could automatically be adapted to other scenarios than HTTP — say, to a mobile app or to a desktop GUI app. The terminal, for one, might "render", but it cannot "redirect".
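The protocol mismatch the comment describes can be made concrete with a sketch (class and method names beyond `render`/`redirect_to` are hypothetical; this is not the OP's code):

```ruby
# Anything responding to render/redirect_to can stand in for the
# controller's response target under duck typing -- but a terminal
# has no real analogue of an HTTP redirect, so the adapter can only
# fake that half of the protocol.
class TerminalAdapter
  attr_reader :output

  def initialize
    @output = []
  end

  def render(text)
    @output << text          # printing maps naturally to "render"
  end

  def redirect_to(path)
    # There is nothing to redirect in a terminal; the best we can
    # do is pretend, which is exactly the parent's objection.
    @output << "(redirected to #{path})"
  end
end
```

The adapter satisfies the protocol syntactically, but `redirect_to` has lost its meaning, which is why this isn't really an abstraction over delivery mechanisms.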
I might just not be very good at this programming thing, but to me these kinds of articles on testing often (not always) remind me of my annoying tendency to implement a comprehensive productivity approach (often combined with buying yet another task app) that, on paper, really should help me keep track of my life. Usually it's a variation on Getting Things Done, which stands out in its exhaustive, all-encompassing, fine-grained approach.
In practice the whole thing falls apart after a week or so because I forget to do things the right way or because I don't feel like processing my 'inbox'...
Instead, I seem most effective when I write down my top three tasks of the day on a piece of paper.
Maybe I get that feeling because I've mostly experienced companies where testing didn't really seem to have much of an effect. Whenever I asked about this lack of efficacy, the answer was usually "that's because we're not implementing it completely!"
Which is exactly what I hear (and suggest) whenever I or someone else falls off the GTD wagon.
Am I completely wrong about that feeling? I mean that as an honest question, as I truly don't want to be negative and I'm way too inexperienced to be cynical about these things :-).
I love thinking up intricate software design patterns as much as the next guy. They'll cater not only to the current requirements for my project, but also to all possible future eventualities. It'll allow me to, by simply changing one or two lines of code, change my entire database backend, implement system-wide magic caching and to expose my HTTP service as a custom telnet protocol.
The issue I have with posts like this is that they are decidedly NOT just about thinking up wild new designs. Instead, they claim that these designs are somehow BETTER. Unless you can give me a real world use case, I won't believe you.
I kind of skimmed the post, so maybe I'm missing something, but what exactly is the "architecture" described here? Dynamically-typed languages will allow the passing around of objects of different types as long as they implement the necessary methods. This is a nice property in some cases, but a lot of times I'd prefer to have static-typing and well defined protocols. It seems like that post ignored the more important parts of the discussion.
The delivery of the page has been abstracted via protocols, and is keeping the application logic independent of persistence and UI implementation details. The 50X is just a side-effect. :)