The pattern you talk about is mainly a product of not wanting to squeeze a whole essay into an HN comment.
I also suspect the problems with OOP are hard to communicate. I for one always had a problem with OOP, but I could never quite pin it down. Sure, when faced with an OOP design, I could almost always find simplifications. But maybe I never saw the good designs? Maybe this was OOP done wrong?
I do have reasons to think OOP is not the way (no sum types, cumbersome support for behavioural parameters, and above all an unreasonable encouragement of mutability), but then I have to justify why those points are important, and why they even apply to OOP (it's kind of a moving target). Overall, all I'm left with is a sense of uneasiness and distrust towards OOP.
> The pattern you talk about is mainly a product of not wanting to squeeze a whole essay into an HN comment.
That may very well apply to the GP's comment, but my observation of the pattern is derived from a mix of mini-essay comments and articles people are writing on Medium or their blogs or wherever, where the space constraints aren't so tight.
There are a few things you'll regularly find: laughably bad straw men (the GP is free of these), overly vague statements that only survive scrutiny because of their vagueness (e.g. when the GP says OOP produces "abstractions that do not easily model computation"), and unjustified claims.
The net effect is something that sounds damning but, looked at closely, carries very little force.
I suspect the reasons for it are:
1) Actually evaluating a language paradigm is more difficult than these folks suspect. Their view matches their experience and they assume their experience is more global than it really is. Additionally, we don't have a mature theoretical framework for making the comparisons.
2) People are arguing for personal reasons. They have committed themselves to some paradigm and they want to feel secure in their justification for doing so.
> Additionally, we don't have a mature theoretical framework for making the comparisons.
This is really the problem. As much as I have strong opinions and beliefs about how to architect code, every argument I come up with boils down to some flavor of "I like it better this way". Which is true -- I do like it better this way -- but hardly actionable, and it doesn't get at the essence of why I like it better.
The problem with making everything an object -- or more precisely, having lots of mutable objects in an object space with a complex dependency graph -- is that it becomes very hard to model both how the program state changes over time and what causes the program state to change in the first place. I think the prevailing OOP fashion is to cut objects and methods apart into ridiculously small pieces, which takes encapsulation and loose coupling to an extreme. This gives rise to the popular quip, "In an OOP program, everything happens somewhere else." I can't think straight in this kind of setting.
I believe that mutable state should be both minimized and aggregated. As much as is humanly possible, immutable values should be used to mediate interactions between units of code (be those functional or OO units), and mutation should occur at shallow points in the call stack. Objects can work well for encapsulating this mutable state, but within the scope of an object, mutation should be minimized and functional styles preferred.
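To make that concrete, here's a minimal Rust sketch (all the names are invented for illustration): the mutable state is aggregated into a single value, the interesting logic is a pure function, and the only mutation is a rebinding at the top of the call stack.

```rust
// Hypothetical example: one aggregate of state, mutation only at the top.

#[derive(Clone, Debug)]
struct AppState {
    balance: i64,
    history: Vec<i64>,
}

// Pure: takes the current state by reference and returns a *new* value.
// Nothing below this point in the call stack mutates anything.
fn apply_deposit(state: &AppState, amount: i64) -> AppState {
    let mut next = state.clone();
    next.balance += amount;
    next.history.push(amount);
    next
}

fn main() {
    // The one shallow mutation point: rebinding the aggregate to its successor.
    let mut state = AppState { balance: 0, history: Vec::new() };
    for amount in [100, 250, -40] {
        state = apply_deposit(&state, amount);
    }
    println!("{state:?}");
}
```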
Using a functional style doesn't mean giving up on loose coupling or implementation hiding. Rust, Haskell, and plenty of other languages support these same concepts in the form of bounded polymorphism, e.g. traits or typeclasses. It does mean giving up on the idea that you can mutate state whenever it's convenient. Instead, you have to return a representation of the action you'd like to take, and let the imperative shell perform that action.
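As a toy illustration of that last point (again, hypothetical names): the core returns a plain-data description of the effect, and the shell is the only place the effect actually happens.

```rust
// Hypothetical sketch: effects as data, performed only by the shell.

#[derive(Debug, PartialEq)]
enum Action {
    Print(String),
    SaveTo { path: String, contents: String },
}

// Functional core: no I/O, no mutation, trivially testable.
fn decide(input: &str) -> Action {
    if input.len() < 20 {
        Action::Print(input.to_uppercase())
    } else {
        Action::SaveTo { path: "out.txt".into(), contents: input.into() }
    }
}

// Imperative shell: the single place where effects are performed.
fn run(action: Action) -> std::io::Result<()> {
    match action {
        Action::Print(s) => {
            println!("{s}");
            Ok(())
        }
        Action::SaveTo { path, contents } => std::fs::write(path, contents),
    }
}

fn main() -> std::io::Result<()> {
    run(decide("hello"))
}
```

Note that `decide` can be tested by comparing `Action` values directly, with no filesystem or stdout involved.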
Speaking of imperative shells and functional cores, Gary Bernhardt's talk called "Boundaries" is an excellent overview of this kind of architecture [1]. There was also a thread here on HN about similar principles [2].
That makes a lot of sense to me. Looking forward to checking out "Boundaries".
Btw, one other idea I've had on the subject: the problems with mutable state could be mitigated if we were able to more easily see and comprehend the state as a program modifies it; without that capability, the only recourse we're left with is our imagination, which of course is woefully inadequate for the task. You can see more concretely what I'm talking about in my project here (video): http://symbolflux.com/projects/avd
From what I've seen, structuring a program to not modify state is almost always more difficult than the alternative[0]. There are certain problems where this difficulty is justified (because of, e.g., reliability demands), but I think most problems in programming are not of that kind; if we could just mitigate the error-proneness of state mutation, that might leave us at a good middle ground.
[0] The exception is when you're in a problem domain that can naturally be dealt with via pure functions, where you're essentially just mapping data in one format to another (i.e. no complex interaction aspects).
Oh, that's very cool! I had a similar idea years ago, but I didn't have the technical chops to pursue it at the time, and I ended up losing interest. I think this would actually be even more useful in the kind of architecture I'm describing, since the accumulated state has a richer structure, and many of the smaller bits of state that would be separate objects are put into a larger context.
> From what I've seen, structuring a program to not modify state is almost always more difficult than the alternative
You're not wrong! I don't think we should get rid of mutable state, but I do think we should be much more cognizant of how we use it. Mutation is one of the most powerful tools in our toolbox.
I've found that keeping a separation between "computing a new value" and "modifying state" has a clarifying effect on code: you can more easily test it, more easily understand how to use it, and also more easily reuse it. My personal experience is that I can more easily reason locally about code in this style -- I don't need to mentally keep track of a huge list of concepts. (I recall Joe Armstrong's quip about wanting a banana and instead getting a gorilla holding the banana, plus the entire jungle.)
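A contrived Rust example of the separation (names made up): the computation is a pure function you can test with zero setup, and the state change is one obvious assignment at the call site.

```rust
// Hypothetical example: pure computation kept apart from the state change.

fn discounted_total(prices: &[u32], discount_percent: u32) -> u32 {
    let total: u32 = prices.iter().sum();
    total - total * discount_percent / 100
}

fn main() {
    // The only mutation: folding the pure results into one accumulator.
    let mut grand_total: u32 = 0;
    for order in [&[100u32, 200][..], &[50][..]] {
        grand_total += discounted_total(order, 10);
    }
    println!("{grand_total}"); // 270 + 45 = 315
}

#[cfg(test)]
mod tests {
    use super::*;

    // No fixtures, mocks, or teardown needed to test the pure part.
    #[test]
    fn applies_discount() {
        assert_eq!(discounted_total(&[100, 200], 10), 270);
    }
}
```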
There is a large web app at my workplace that is written in this style, and it is one of the most pleasant codebases I've ever been dropped into.
Interestingly, I think I built that project with an architecture somewhat reminiscent of the 'boundaries' concept (still just surmising at this point). It's a super simple framework with two types of things: 'Domains' and 'Converters'. Domains are somewhat similar to packages, but with the boundaries actually enforced, so that you have to explicitly push or pull data through Converters to other Domains; Converters just transform data from one Domain's format to another's (they are queue-based, and sometimes no translation is necessary).
I'll quote from the readme:
> This Domain/Converter framework is a way of being explicit about where the boundaries in your code are for a section using one ‘vocabulary,’ as well as a way of sequestering the translation activities that sit at the interface of two such demarcated regions.
Inside each Domain I imagine something like an algebra... a set of core data structures and operations on them.
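In toy form, the shape is something like this (heavily simplified; these are not the framework's actual types, just my shorthand for the idea):

```rust
// Simplified sketch of the Domain/Converter idea; all types are stand-ins.

use std::collections::VecDeque;

// Domain A's vocabulary.
struct CelsiusReading { degrees: f64 }

// Domain B's vocabulary.
struct FahrenheitReading { degrees: f64 }

// The Converter: a queue plus a translation, the only path between Domains.
struct Converter {
    queue: VecDeque<CelsiusReading>,
}

impl Converter {
    // Domain A explicitly pushes data to the boundary...
    fn push(&mut self, reading: CelsiusReading) {
        self.queue.push_back(reading);
    }

    // ...and Domain B explicitly pulls it out, already translated.
    fn pull(&mut self) -> Option<FahrenheitReading> {
        self.queue.pop_front().map(|c| FahrenheitReading {
            degrees: c.degrees * 9.0 / 5.0 + 32.0,
        })
    }
}

fn main() {
    let mut boundary = Converter { queue: VecDeque::new() };
    boundary.push(CelsiusReading { degrees: 100.0 });
    if let Some(f) = boundary.pull() {
        println!("{}", f.degrees); // 212
    }
}
```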
But yeah, I have very frequently thought about visualizing its behavior while working on that visualizer :D
Is your research related to programming languages?
Also I'm going to have to think about "computing a new value" vs. "modifying state" -- not sure I quite get it...
> Is your research related to programming languages?
Yep: I just finished a Master's degree with a focus on programming language semantics and analysis. I'm interested in all kinds of static analyses and type systems -- preferably things we as humans can deduce from the source without having to run a separate analysis tool.
> Also I'm going to have to think about "computing a new value" vs. "modifying state" -- not sure I quite get it...
It's kind of a subtle distinction. A value doesn't need to have any particular locus of existence; semantically, we only care about its information content, not where it exists in memory. As a corollary, for anyone to use that value, we have to explicitly pass it onward.
On the other hand, mutation is all about a locus of existence, since ostensibly someone else will be looking at the slot you're mutating, and they don't need to be told that you changed something in order to use the updated value. (Which is the root of the problem, quite frankly!)
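A tiny Rust contrast, contrived on purpose: with a value, the result only exists where you explicitly hand it over; with a shared mutable slot, every holder silently sees the update.

```rust
// Contrived example of value-passing vs. mutating a shared locus.

use std::cell::RefCell;
use std::rc::Rc;

// Value style: the new total exists only where it is explicitly passed.
fn next_total(total: u32, item: u32) -> u32 {
    total + item
}

fn main() {
    let t = next_total(10, 5);
    println!("{t}"); // the caller received the value; nothing else changed

    // Mutation style: two handles to one locus of existence.
    let slot = Rc::new(RefCell::new(10u32));
    let observer = Rc::clone(&slot);
    *slot.borrow_mut() += 5;
    // `observer` was never told anything, yet it now sees 15.
    println!("{}", observer.borrow());
}
```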