> but this feature is clearly gated behind user permission
You're right that the owner of the device should have the ultimate say. But the sad reality is that most owners aren't necessarily good caretakers of those devices. They don't understand what that permission entails, and they don't actually want to take responsibility for the outcome of the decision. But they will want to hold the manufacturer accountable for the damage.
I can't count how many times I heard people say "this decision should be mine to make" only to follow it up after some time with "somebody should have warned me not to do it". It's human nature and the solution for this can't/won't be technical.
Windows XP was a good example of letting the person decide what's good for their device and it was also the OS with the slowest adoption of updates. People collectively decided that the discomfort of rebooting once in a while was worse than letting malware completely wreck their device and data.
> I can't count how many times I heard people say "this decision should be mine to make" only to follow it up after some time with "somebody should have warned me not to do it".
The correct response, if (as in this case) they were warned, is to say “someone did warn you, pay more attention next time”, then walk away[1].
Just like if a beginner ignores the black piste markers and the “for good skiers only” sign at the top of a slope, then complains that they fell over.
It is problematic to create in users an expectation that if they blindly mash at their globally-networked, bank-account-connected devices without paying a modicum of attention to anything that appears on the screen, everything will be fine, and that if it’s not, it’s someone else’s fault.
> The correct response, if (as in this case) they were warned, is to say “someone did warn you, pay more attention next time”, then walk away[1].
In reality, this does far more harm than good. In almost all cases this goes wrong because of the 'little learning is a dangerous thing' problem. People tend to be in two camps:
- Don't care, don't want to fiddle with the thing, the manufacturer has to do everything
- Knowing just enough to break things, but not enough to fix them (and thus it is the fault of the manufacturer)
Other types, like the 'I am the owner, I make the rules' crowd, are vanishingly small.
This means that in the real world (so not in an echo chamber) you only get one realistic scenario at scale: the user creates problems (for themselves and for others) but cannot fix them, and everyone and everything that isn't the user then has to step in and deal with it.
In an ideal theoretical world we might say that the end-user has to be responsible, and they have to make infinite mistakes and learn everything so they can become good caretakers of their networked systems. But that is not reality, and is not realistic.
Harm reduction isn't always the most important goal, especially when it's other people's harm and reducing it also involves restricting what they can do.
You don't have to allow all users everything, but you should allow those who want to, to do as they please.
You can always hide the option behind some kind of mechanism, one a general user would never touch because they don't need it if the rest works as intended. Those who still do should suffer the consequences, but that is not the manufacturer's problem. They have all kinds of safeguards to prevent liability arising from those "special choices".
People would go to great lengths, following tutorials on the internet, to disable things they were told were bad for them. The less qualified the user, the more likely they were to fall for the "updates are bad, they ruin your computer" narrative. As long as there's an option that can be abused, people will be tricked into allowing it.
This is less relevant for the current discussion about the FireTV and this feature. It's for the more general discussion of being able to do whatever you want on a device you own.
But why should everyone else suffer because of that small fraction?
The real answer: users are captive. To the vendors, they're cattle. And as on any good big farm, it does not matter how much it sucks for the cattle, but it does matter that the cattle are safe, because a few bad cases can become known and risk getting your farm shut down.
'Krasnol argued for keeping powerful/dangerous features, but making them opt-in (and a bit of a hassle to enable). You countered that there will be "some fraction" of users incapable of not hurting themselves with those features, who "will still stumble upon it anyways and will still refuse to take any responsibility for enabling it". My counter to that is that we shouldn't remove such power features just because "some fraction" may find and misuse them.
That's the should/should not part. The rest is my take on why companies remove those features anyway - they have no incentive to provide anything above bare minimum, especially not when they could be on the hook for "some fraction"'s mishaps.
I didn't raise the 'should/should not' part at all; you are the one who brought that point up. I'm focused on actual facts and possibilities in this comment chain.