The problem with this argument is that no matter how much you make developers aware that they are doing something wrong, there will be developers who do it anyway when under pressure. (They mean to go back and clean it up, but we know how that one goes.) And then when you break stuff, people OTHER than that developer will not be aware of how well you warned them.
In that situation, you'll be the one blamed. It is unfair, but that is what will happen.
For a case in point, Microsoft ran into this repeatedly in the 80s and 90s. (It didn't help that sometimes it probably was their fault... but most of the time it wasn't. It really, really wasn't.)
I'm in nearly complete disagreement with the author here. It probably helps that I've been burned more than once when using code from two third parties where party A reached into the "private areas" of party B's code. Rather than working with party B to resolve the incompleteness of the solution, A worked around it and pushed the workaround to everyone. Later, party B makes a change ("Hey, it's private, and the public interface doesn't change!") that suddenly breaks A's code, and a bunch of unrelated parties now have a nightmare on their hands.
Case in point (of which there are many, I'm sure): QExtSerialPort. The author needed access to underlying Windows functionality that Qt didn't publicly provide; however, there was a nice private header file lying around that they could use. The Qt team later decided to remove the contents of that file, because no one should ever have been using it. Anyone who wanted to build QExtSerialPort then had to go grab the original file and put it in the correct location. If the author had instead submitted a patch to Qt to fix the problem, many hours would have been saved.
The author might get more points with me if they added "keep private usage private," but instead they are advocating accessing private internals of third-party tools in new open-source projects, which prevents the original developer from making changes without impacting users of those tools. Privacy is important. If you want to go around it, fine, but you have to expect a price for you and your users; to hand-wave around that is naive at best.
"advocating accessing private internals of 3rd party tools in new open-source projects": no - I didn't advocate that as it's obviously insane :)
I said advise your users and trust they won't do insane things. This allows your users to patch/hack in their application code, as a temporary or exploratory thing. I didn't suggest releasing libraries that themselves monkey-patch dependencies (ugh).
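For illustration, the kind of temporary, application-local patch I mean might look like this (the library "somelib" and its internal `_tokenize` method are invented for the example):

    // Hypothetical library and internal method -- names invented for illustration.
    import { Parser } from "somelib";

    // A temporary, exploratory workaround kept in *application* code: wrap the
    // internal method rather than replacing it outright, so the hack is easy
    // to find and delete once upstream ships a proper fix.
    const originalTokenize = (Parser.prototype as any)._tokenize;
    (Parser.prototype as any)._tokenize = function (input: string) {
      const cleaned = input.replace(/\u0000/g, ""); // strip the input that triggers the bug
      return originalTokenize.call(this, cleaned);
    };

The point is that this lives in one application, at that application's own risk - not in a published library.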
Not really; it's the same thing seen from the other point of view. If you allow people to patch/hack in their application code, then someone will patch/hack in their application code without telling you (because it works and they're under time pressure), and then once you change something, it will break.
Exactly this. Whatever you allow people to do - they will, at some point. At some point monkey-patched code gets used in someone's library, at some point it gets put on GitHub, and at some point it gets forked.
The OP is literally suggesting replacing "private" with either "public" or "protected" in C++ code, and then expecting comments/documentation to communicate what "private" already communicated.
It seems that the post argues that enforced privacy is too theoretical and that, in practice, advice is better than enforcement.
I'd argue that the post itself is too theoretical: in practice, enforced privacy works much better. People will do stuff against your advice. That's OK, you say - they'll get in trouble eventually, and it was their decision. However, it affects you too, the library (app, etc.) developer: your clients might turn out to be more powerful than you.
Think of Linus' rant on not breaking userspace. He's right, I believe. In general, you are not allowed to break client code, even if the client did something they were discouraged from doing.
Interfaces are contracts. The ultimate documentation is the code, not the comment. You are saying that in the following case,
    // do not access
    public int getSize();

the comment has precedence over the visibility modifier. Well, no.
In practice, enforced privacy is rarely used in languages other than JavaScript. In Java, C#, Ruby, Python, and PHP, advisory privacy is the default: naming conventions, reflection, or other escape hatches let you reach private members if you insist.
You would enforce privacy when you need to run untrusted code in the same process (like the Servlets model), which is not the case in any JavaScript application.
Enforced privacy in JavaScript is also very inflexible, disabling many forms of extension and reuse and encouraging god objects, because you cannot fine-tune the access - other internal "classes" are just as unprivileged as some random application code.
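For concreteness, closure-based enforced privacy in JavaScript looks roughly like this sketch:

    // State captured in a closure is invisible to everyone -- including other
    // "classes" in the same library that might legitimately need to cooperate.
    function makeCounter() {
      let count = 0; // enforced-private: nothing outside the closure can touch it
      return {
        increment() { count += 1; },
        value() { return count; },
      };
    }

    const counter = makeCounter();
    counter.increment();
    // From here there is no way to read or patch `count` directly: internal
    // collaborators and random application code are equally locked out.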
So in my eyes there are plenty of downsides for no visible upside - in all other languages the developers seem to play well and not write code to access privates. Maybe that's due to incompetence though.
Advisory privacy is not usually a matter of comments. In Ruby, for example `some_object.private_method` won't work, but `some_object.send('private_method')` will.
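TypeScript draws the same line, for what it's worth: `private` is checked by the compiler but erased at runtime, so a determined caller has an explicit escape hatch. A minimal sketch:

    class Cache {
      private evict(): void {
        // internal bookkeeping
      }
    }

    const cache = new Cache();
    // cache.evict();         // compile error: 'evict' is private
    (cache as any).evict();   // compiles and runs -- the moral equivalent of Ruby's send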
The object's creator has made a machine. There are external controls and parts you need a screwdriver to get to. If you're using your screwdriver, you know it and should be OK with some risk. But the box isn't welded shut.
When I learned to drive, my tester required that I demonstrate competence with the controls, using them at the appropriate times, in the appropriate ways.
If my car exposed all of its private internal operations, I would have needed to know how to use them, and to demonstrate that knowledge. I haven't a clue about fuel flow, gear ratios, air/fuel mixture, or how the thermostat affects how the engine operates. I'm quite happy that I didn't have to demonstrate all of that too.
What the article ignores is that a good API provides everything the consumer needs while keeping the API small and easily comprehensible. A driver who has to keep track of 5 details will learn to use his car more quickly, and is less likely to crash, than one who has to keep track of 200 details and make decisions about each one.
You give a great analogy, but IMO draw the wrong conclusion. The car exactly follows the philosophy outlined in this article: the driver/user doesn't need to know the internals, but you can always open up the hood and adjust/fix/modify the inner workings.
There's one problem with the analogy - exactly the point you're trying to make: a car is almost never updated. There's no patch for the turbocharger, the piston design stays the same after manufacturing, etc. So it's OK to retrofit a custom turbocharger.
You would have had to know how the internals work? Why?
I often hear people nostalgically talking about how it was better in the olden days when people could fix/tweak their engines as they wished. If you lift the bonnet of my car it's a sheet of plastic and a few colour-coordinated filler lids so that I can top-up fluids. No hackability whatsoever.
There is a strong difference between exposing everything as part of the API and making it permanently inaccessible. Taking your car example: imagine if the engine were in a locked black box, so that any modification or change would require replacing or duplicating the ENTIRE ENGINE. Instead, those with the proper know-how can get under the hood when necessary. Making the engine accessible doesn't force drivers to know everything about it. It's hidden, but accessible when needed.
To round-trip this analogy: I can't edit the source code of a system library or purchased component unless I have the source and the rights to edit it, neither of which is a given in today's world.
Sure, an epoxy-encased engine isn't going to let me tinker with it, but I will never tinker with my engine. That's one of the wonderful things about the separation of concerns that OO gives me, including all the perceived badness of enforced privacy: I can work on my particular thing without needing to know anything about the innards, because it is not my responsibility to do so - whether explicitly, through delegation of concern and documentation, or implicitly, through presentation of a semi-private API.
If I'm able to see it, then I'm expected to see it, and that blurs the lines of the API.
You're right, but you misunderstand the OP. Ruby, for example, has "advisory privacy". Private methods aren't shown in generated documentation but can be called if you really want.
In your analogy, the car has a small number of controls visible, and they're all you need 99% of the time. But the engine compartment isn't welded shut.
Enforced privacy is advisement. It's a signal that if you want access to a library's internal behaviours or state you should either communicate your use cases to the library's maintainer or fork it and manually integrate upstream changes. This leaves the maintainer free to make changes to internal behaviours and state without breaking an implicit API contract they didn't realize they had made.
"Advisory privacy" allows all the options you listed. It also allows me to write the lightweight patch that I need and use it - immediately, temporarily, and at my own risk.
Unenforced privacy is a Bad Idea for (shared) libraries that may be upgraded independently of whatever uses them. It means that any internal change becomes a potential breaking change, and when some application stops working after a library upgrade, it is your fault even though you told the application developers "don't do that" (the users don't know or care that you said that; they only know that upgrading your library broke their stuff).
There's a middle ground between public and private, and some languages call it "protected".
In FOSS libraries that I maintain, methods and properties that I don't want to expose are prefixed with an underscore and designated as protected. Protected members are not directly accessible, but anyone who wishes to play with them can create a subclass to access them. They also don't need to make any further changes other than subclassing, whereas private members might need to be overridden or (even worse) reimplemented depending on the language. So I think "protected" hits a nice balance between simplicity, openness, and maintainability.
The requirement to create a subclass to access protected members might come across as an inconvenience, but it sends the same message as the article's "dodgy" JS syntax: Here be dragons, tread carefully and don't blame me if your app breaks. It would be very nice if users understood that the leading underscore is meant to send the same message, but since they're apparently not getting the message, a little more inconvenience might be needed.
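In TypeScript, for instance, the pattern would look something like this (class and method names invented):

    class Widget {
      // Underscore prefix signals "internal"; `protected` blocks direct access
      // while leaving the subclass door open.
      protected _layout(): void {
        // internal layout pass
      }
    }

    // new Widget()._layout();   // compile error: '_layout' is protected

    // Anyone determined enough can subclass to reach it -- a deliberate,
    // visible "here be dragons" step rather than a welded-shut box.
    class HackedWidget extends Widget {
      forceLayout(): void {
        this._layout(); // accessible from a subclass
      }
    }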
The two footnotes are good counter-arguments. The only time I can imagine making a class final (can't extend) is when it's a matter of security (e.g., Java's String class).
Then there is the matter of "well, it's not MY fault if you didn't use the public API and your code is now broken." I recall even Steve Jobs chastising developers for doing this.
Building something with a sensible yet strict privacy model takes a lot of upfront design. Makes sense for code that will be used by the masses, but maybe not for a small project.