This is a perfect case of iatrogenic security. When security systems become so complex and remote that even security experts are caught out, they do more harm than good.
It's also a consequence of solutionism, systematic monotonicity, mother-knows-best paternalism and the externalising of costs, such that we:

- Only add more security solutions on top of existing ones to fix their holes.
- Deny the user any choice or agency in setting their own security terms.
- Never revoke or remove a feature (that would be admitting defeat).
- Push the burden of every process onto the user.
- Create fear in the user: any misstep will bring them more inconvenience and trouble.
- Make security an authoritarian culture, so that users will not question it or be sceptical.

All of these are antithetical to the civic cyber-security we need, in which educated and empowered users can operate technology under their own control.