I also agree with this; security is always in a balancing act with convenience. Yahoo fell too far toward the convenience side on this one, but that debate on security vs. convenience is happening everywhere. The issue I've seen is that many companies are bad at doing risk analysis about these choices. That's the bigger issue in my view.
> security is always in a balancing act with convenience
I don't think that's always the case. A whole lot of security can be had with little or no inconvenience, given an appropriate mindset, though one might argue that such a mindset is an inconvenience in itself. :)
> many companies are bad at doing risk analysis about these choices
Amen to that!
I think that having a basic, security-aware mindset goes a long way, even if there is very little 'budget' or 'ability' to do inconvenient things.
Philosophically speaking, you cannot improve security without sacrificing usability. What I mean by usability is the capability for someone to do something, not simply convenience for the users themselves. No amount of security can be added without a concurrent decrease in usability, even if the capability you give up is something you never expected or wanted to use.
For example, the user might not see a capability decrease if you hash passwords with MD5 or bcrypt, but you certainly do, because you can no longer see their passwords and you have to do extra work to maintain them securely. Sometimes security decisions are easy, like hashing passwords, because these days no one wants that capability. But sometimes they are not easy decisions.
You can pass a lot of convenience savings on to users by assuming the capability sacrifice yourself (for example, choosing the password hashing algorithm behind the scenes), but you can't do this for everything (for example, mandating two-factor authentication or password resets en masse).
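As a rough sketch of that "easy" trade-off, here it is in Python using the third-party bcrypt package (the function names are just illustrative):

```python
import bcrypt

def store_password(plaintext: str) -> bytes:
    # The capability we give up: nobody, including us, can read the
    # original password back out of storage.
    return bcrypt.hashpw(plaintext.encode("utf-8"), bcrypt.gensalt())

def check_password(plaintext: str, stored_hash: bytes) -> bool:
    # The capability we keep: we can still verify a login attempt.
    return bcrypt.checkpw(plaintext.encode("utf-8"), stored_hash)

hashed = store_password("correct horse battery staple")
assert check_password("correct horse battery staple", hashed)
assert not check_password("wrong guess", hashed)
```

The user types the same password either way; the capability that disappears is ours, not theirs.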
This might come across as pedantic, but it's very important to maintain a mental model this way because it helps you understand risk analysis for more complicated security and usability tradeoffs. Starting from the premise that you can have any security without a decrease in usability is not helpful in that regard.
Your argument is assuming something that I don't believe is true, which is that we're already on the Pareto optimality frontier for security/convenience. It is certainly true that you cannot forever increase security without eventually impacting usability, but I don't think many people are actually in that position.
I've improved a lot of real-world security by replacing functions that bash together strings to produce HTML with code that uses functions to correctly generate HTML. The resulting code is often shorter, easier to understand, easier to maintain, and would actually have been easier to write that way in the first place, given how much of the original function was busy tracking whether we'd added an attribute to this tag yet, with a melange of encoding styles haphazardly applied. What costs you can still come up with ("someone had to create the library, you have to learn to use it") are generally trivial enough to be ignored by comparison, because they can be recovered in a single-digit number of uses.
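To give a flavour of the kind of replacement I mean, here's a toy sketch in Python (not the actual library; the helper names are made up and only the stdlib's html.escape is used):

```python
from html import escape

# String-bashing style: every call site does its own quoting (or forgets to).
def link_unsafe(url, label):
    return '<a href="' + url + '">' + label + "</a>"

# Generator style: one small helper owns escaping and attribute bookkeeping,
# so call sites get shorter and can't forget to escape.
def tag(name, attrs=None, text=""):
    attr_str = "".join(
        ' {}="{}"'.format(k, escape(v, quote=True)) for k, v in (attrs or {}).items()
    )
    return "<{}{}>{}</{}>".format(name, attr_str, escape(text), name)

evil = 'https://example.com/?q="><script>steal()</script>'
print(link_unsafe(evil, "click me"))         # markup injection goes straight through
print(tag("a", {"href": evil}, "click me"))  # attribute value comes out escaped
```

The generator version is the shorter one at every call site, and it can't forget to escape.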
"Your argument is assuming something that I don't believe is true, which is that we're already on the Pareto optimality frontier for security/convenience. It is certainly true that you can not forever increase security without eventually impacting usability, but I don't think many people are actually in that position"
It's true that we aren't at the sweet spot yet, but that's what I meant by companies being bad at the risk analysis judgement of security versus usability.
On your second point, languages have gone through that cycle. Look at Java doing bounds checks. That helps avoid a whole class of security issues, but at the cost of making things that C could do easily more difficult. These tradeoffs happen at every layer.
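A trivial sketch of the check in question (shown in Python, which makes the same trade as Java here; the equivalent write in C would just silently scribble over adjacent memory):

```python
buffer = [0] * 4

try:
    # In C, buffer[10] = 1 can corrupt whatever lives past the end of the
    # buffer, which is the root of a whole class of exploits. A bounds-checked
    # runtime refuses and raises instead.
    buffer[10] = 1
except IndexError as err:
    print("out-of-bounds write rejected:", err)
```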
> No amount of security can be added without a concurrent decrease in usability, even if that usability is something you didn't expect or want to do.
It seems strange to describe it this way for something like fixing a memory corruption bug or switching from a vulnerable cryptographic algorithm to a less vulnerable one. The capability that you're giving up is ... potentially breaking your own security model in a way that you weren't even aware was possible?
I think I might not be conveying my point very well. Let me clarify this as succinctly as I can.
Usability doesn't just mean things users want to do. Usability means things anyone (users, developers) can do. By definition, "securing" things means limiting the capability of certain users or developers to do (hopefully) specific things. How precisely you do this determines whether, in removing the capabilities you don't want people to have, you also end up removing capabilities users or developers do want.
To give a concrete example: encrypting data immediately impacts usability along both performance and capability axes. Previously, you could arbitrarily read and manipulate that data because it was plaintext. Afterwards, you cannot. Now you need to be careful about handling that data and spend developer time and resources implementing and maintaining the overhead that protects it and reduces its direct usability.
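A minimal sketch of that capability loss, using the third-party cryptography package in Python (the details are purely illustrative):

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # new overhead: this key now has to be managed
box = Fernet(key)

record = b'{"email": "user@example.com"}'
token = box.encrypt(record)

# The capability that's gone: nothing can read or edit the record directly anymore.
print(token[:20], b"...")     # opaque bytes, not JSON

# Every legitimate access now pays the decrypt step (and needs the key).
assert box.decrypt(token) == record
```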
It doesn't matter if you wanted that capability - it's gone either way. That was a trade-off, and an easy decision to make, but not all decisions are. Every security decision can be modeled as a trade-off.
I fondly remember the convenience advantages of plaintext password storage, both as a user and as somebody supporting users.
Occasionally I wonder if there are user accounts in my life that are irrelevant enough I'd be happy to buy that convenience advantage with the necessary security risks ... but of course people's tendency towards password re-use makes that trade-off basically unofferable in any sort of ethical way.
At least bcrypt makes it moderately easy to not completely screw up the hashing part.
That's a good example, but a bit cherry-picked. I could just as easily point out the opposite with accessing an account. Even an insecure login still requires a certain amount of information and time up front just to identify the user. The server compares that to its local data. Network latency or server load means it usually takes a second or two anyway.
Adding a password that the application quickly hashes before sending costs a little extra time: almost nothing, given that libraries are available and CPU cycles are cheap. If the password is remembered, the user only has to type it once, or rarely. The hashing happens so fast that the user can't tell it happened on top of the already-slow network. Most of the time the user of this properly designed system will simply type the URL, the credentials will auto-fill, and the exchange will take the same time. There is no loss in usability except a one-time cost whose overall effect is forgotten across many interactions of identical, high usability.
Likewise, a developer coding on a CPU like SAFE or CHERI, with tagging for memory safety, in a memory-safe language, is not burdened more than someone coding in C on x86. They're burdened less, because less mental effort goes into both preventing and debugging problems. They could theoretically get performance benefits without the tagging, but only if incorrect software plus much extra work is acceptable. If the premise is that the software must be correct, which requires safety much of the time, then the more secure CPU and language are better and improve productivity. Easier to read, too.
A final example is web development. The incumbent languages are whatever crap survived and got extended to do things they were never meant to do. So people have to write multiple kinds of code, with associated frameworks, for incompatible browsers, server OSes, and databases. Many efforts to improve this failed to deliver productivity/usability and security. Opa shows you can get both by designing a full-stack, ML-like language with strong types that makes many problems impossible by default. Easier to write and read, plus more secure. Ur/Web does something similar, but it's a functional-programming research prototype rather than a production tool.
Conclusion: usability and security aren't always at odds. Sometimes they're at odds only in some technical, philosophical sense that doesn't apply to real-world implementations. Sometimes getting one requires a small sacrifice of the other. Sometimes it requires a major sacrifice, or several.
It's not consistently a trade-off in the real world.
Note: I cheat with one final example. An air-gapped Oberon System on Wirth's RISC CPU uses far fewer transistors and cycles, and far less energy and time, than a full-featured, Internet-enabled desktop for editing documents + many terminal-style apps. Plus you can't get hacked or distracted by Hacker News! :P
> Likewise, a developer coding on a CPU like SAFE or CHERI, with tagging for memory safety, in a memory-safe language, is not burdened more than someone coding in C on x86. They're burdened less, because less mental effort goes into both preventing and debugging problems.
In the parent commenter's framework, I suppose the safer language still comes at a cost in terms of the ability to use unsafe programming techniques -- like type punning and self-modifying code.
Hmm. You could use those as examples. There would be cases where type punning might save developer time. There would be cases where self-modifying code might buy you better memory or CPU efficiency. Yet self-modifying code is pretty hard to do, and do right, for most coders I've seen. Type punning happens automatically in a dynamic, safe language with decent conversion rules. You often only write the conversions once, or when you change the class/type, and you do that mentally anyway if you're analyzing for correctness. The difference is that you typed it out, with the conversions being mechanically checked.
The ones you bring up seem to be double-edged swords like the others: they can have almost no negative impact or a significant one, depending on context.
They already demonstrated that integrating POLA at the language and security level, with simple user authorizations, could knock out most problems automagically. They did a web browser that way, too. KeyKOS previously used that model for whole systems that ran in production on IBM mainframes, with checkpoints of application and system state every 30 seconds on top of that.
Still think you have to screw usability to improve security? And does it matter that it might be true in some absolute sense if, in practice, it makes no difference (e.g. the File Dialog on Windows vs. on E/CapDesk)?