"You can only turn off this setting 3 times a year."
Astonishing. They clearly feel their users have no choice but to accept this onerous and ridiculous requirement. As if users wouldn't understand that they'd have to go way out of their way to write the code which enforces this outcome. All for a feature which provides me dubious benefit. I know who the people in my photographs are. Why is Microsoft so eager to also be able to know this?
Privacy legislation is clearly lacking. This type of action should bring the hammer down swiftly and soundly upon these gross and inappropriate corporate decision makers. Microsoft has needed that hammer blow for quite some time now. This should make that obvious. I guess I'll hold my breath while I see how Congress responds.
It's hilarious that they actually say that right on the settings screen. I wonder why they picked 3 instead of 2 or 4. Like, some product manager actually sat down and thought about just how ridiculous they could be and have it still be acceptable.
My guess is it was an arbitrary choice, and the limit exists because re-enabling triggers a mass scan of photos. Depending on whether they purge old data when it's turned off, toggling the switch could tell Microsoft's servers to re-scan every photo in your (possibly very large) library.
Odd choice and poor optics (just limit the number of times you can enable it and add a warning screen), but I wouldn't assume this was intentional bad faith.
I would be sceptical too, if I was still using Windows.
I’ve seen reports in the past that people found that syncing to the cloud was turned back on automatically after installing Windows updates.
I would not be surprised if Microsoft accidentally flipped the setting back on for people who opted out of AI photo scanning.
And if you can only turn it back off three times a year, it only takes Microsoft messing up and opting you back in three times against your will before you are stuck opted in to AI scanning for the rest of the year.
Like you said, they should be limiting the number of times it can be turned back on, not the number of times it can be turned off.
Yep. I have clients who operate under HIPAA rules who called me out of the blue wondering where their documents had gone. Microsoft left a cheery note on the desktop saying they had very helpfully uploaded ALL of their protected patient health data into an unauthorized cloud storage account, without prior warning, following a Windows 10 update.
When I used to work as a technician at a medical school circa 2008, updating OS versions was a huge deal that required months of preparations and lots of employee training to ensure things like this didn't happen.
Not trying to say that you could have prevented this; I would not be surprised if Windows 10 enterprise decided to "helpfully" turn on auto updates and updated itself with its fun new "features" on next computer restart.
Can't speak for the medical school, but my guess is familiarity. Can't remember what the Mac landscape was like at that point, but it probably wasn't vetted enough for HIPAA. And Windows 7 wasn't that shitty at the time.
And even so, let's say they didn't use Windows — I'd still expect the same rigor for any operating system update.
OneDrive is HIPAA, IRS-740, and FIPS compliant for this reason. It's an allowed store for all sorts of regulated data, so they don't have to care about compliance risk.
I'm not sure the next Joint Commission audit will be totally cool with them randomly starting to store files in the cloud with zero policy/anything around the change.
3 is the smallest odd prime number. 3 is a HOLY number. It symbolizes divine perfection, completeness, and unity in many religions: the Holy Trinity in Christianity, the Trimurti in Hinduism, the Tao Te Ching in Taoism (and half a dozen others)
I'd rather guess that they picked 3 as a passive-aggressive attempt to provide a false pretense of choice, in a "you can change it, but in the end it's gonna be our way" style, than that they attached some cultural significance to the number 3 behind this option. But that's still an interesting concept, though.
> I'd rather guess that they picked 3 as a passive-aggressive attempt to provide a false pretense of choice, in a "you can change it, but in the end it's gonna be our way" style
The number seems likely to be a deal that could be altered upward someday for those willing to rise above the minimal baseline tier.
Right now it doesn't say if these are supposed to be three different "seasons" of the year that you are able to opt-out, or three different "windows of opportunity".
Or maybe it means your allocation is limited to three non-surveillance requests per year. Which should be enough for average users. People aren't so big on privacy any more anyway.
Now would these be on a calendar year basis, or maybe one year after first implementation?
And what about rolling over from one year to another?
> Why is Microsoft so eager to also be able to know this?
A database of pretty much all Western citizens' faces? That's a massive sales opportunity for all oppressive and wanna-be oppressive governments. Also, ads.
I agree with you, but there's nothing astonishing about any of this, unfortunately; it was bound to happen. Almost all cautionary statements about AI abuse fall on deaf ears among HN's overenthusiastic and ill-informed rabble, stultified by YC tech lobbyists.
The worst part was that all the people fretting about ridiculous threats, like the chatbot turning into Skynet, sucked the oxygen out of the room for the more realistic corporate threats.
Right. But then the AI firms did that deliberately, didn't they? Started the big philosophical argument to move the focus away from the things they were doing (epic misappropriation of intellectual property) and the very things their customers intended to do: fire huge numbers of staff on an international, multi-industry scale, replace them with AI, and replace already limited human accountability with simple disclaimers.
The biggest worry would always be that the tools would be stultifying and shit but executives would use them to drive layoffs on an epic scale anyway.
And hey now here we are: the tools are stultifying and shit, the projects have largely failed, and the only way to fix the losses is: layoffs.
Actually, most users probably don't understand that this ridiculous policy is more effort to implement. They just blindly follow whatever MS prescribes and have long given up on making any sense of the digital world.
Can someone explain to me why the immediate perception is that this is some kind of bad, negative, evil thing? I don't understand it.
My assumption is that when this feature is on and you turn it off, they end up deleting the tags (since you've revoked permission for them to tag them). If it gets turned back on again, I assume that means they need to rescan them. So in effect, it sounded to me like a limit on how many times you can toggle this feature to prevent wasted processing.
Their disclaimer already suggests they don't train on your photos.
This is Microsoft. They have a proven record of turning these toggles back on automatically without your consent.
So you can opt out of them taking all of your most private moments and putting them into a data set that will be leaked, but you can only opt out 3 times. What are the odds a "bug" (feature) turns it on 4 times? Anything less than 100% is an underestimate.
And what does a disclaimer mean, legally speaking? They won't face any consequences when they use it for training purposes. They'll simply deny that they do it. When it's revealed that they did it, they'll say sorry, that wasn't intentional. When it's revealed to be intentional, they'll say it's good for you so be quiet.
A bug, or a dialog box that says "Windows has reviewed your photo settings and found possible issues. Press Accept now to reset settings to secure defaults".
This is how my parents get Binged a few times per year
This feels different though. Every time you turn it off and then on again it has a substantial processing cost for MS. If MS "accidentally" turns it on and then doesn't allow you to turn it off it raises the bar for them successfully defending these actions in court.
So to me it looks like MS tries to avoid that users ram MS's infrastructure with repeated expensive full scans of their library. I would have worded it differently and said "you can only turn ON this setting 4 times a year". But maybe they do want to leave the door open to "accidentally" pushing a wrong setting to the users.
As stated many times elsewhere here, if that were the case, it'd be an opt in limit. Instead it's an opt out limit from a company that has a proven record of forcing users into an agreement against their will and requiring an opt out (that often doesn't work) after the fact.
Nobody really believes the fiction about processing being heavy and that's why they limit opt outs.
Aren't these two different topics? MS and big tech in general make things opt-out so they can touch the data before users get the chance to disable them. I expect they would impose a limit on how many times you can go through the scanning process. I've run into this with various other services where there were limits on how many times I could toggle such settings.
But I'm also having a hard time giving MS the benefit of the doubt, given their history. They could have said, like GP suggested, that you can't turn it "on" rather than that you can't turn it "off".
> As stated many times elsewhere here .... Nobody really believes the fiction
Not really fair though, wisdom of the crowd is not evidence. I tend to agree on the general MS sentiment. But you stating it with confidence without any extra facts isn't contributing to the conversation.
A lot of people have a terabyte or more of OneDrive storage. Many people have gigantic photo collections.
Analyzing and tagging photos is not free. Many people don't mind their photos actually being tagged, but they are a little more sensitive about facial recognition being used.
That's probably why they separate these out, so you can get normal tagging if you want without facial recognition grouping.
If you have a large list of scenarios where Microsoft didn't respect privacy settings or toggles, I would be interested in seeing them.
I know there have been cases where software automated changes to Windows settings that were intended to only be changed by the user. Default browsers were one issue, because malicious software could replace your default browser even with lower permissions.
Are you talking about things like that, or something else?
If that's the case, limit opt ins so Microsoft doesn't have to pointlessly scan data. But they're limiting opt outs, which forces people into that endless scanning of their data.
Nobody. Absolutely nobody. Believes it's to save poor little Microsoft from having their very limited resources wasted by cackling super villain power users who'll force Microsoft to scan their massive 1.5 GB meme image collections several times.
If it were about privacy, as you claim in another comment, it would be opt in. Microsoft clearly doesn't care about user privacy, as they've repeatedly demonstrated, and making it opt out, and only three times, proves it. Repeating the same thing parent comments said is a weird strategy. Nobody believes it.
Because many people want it, expect it and value it.
Most moms and old folks aren't going to fuss over or understand privacy and technical considerations; they just want to search for things like "greenhouse" and find that old photo of the greenhouse they set up in the backyard 13 years ago.
It's one thing if all of your photos are local and you run a model to process your entire collection locally, then you upload your own pre-tagged photos. Many people now only have their photos on their phones and the processing doesn't generally happen on the phone for battery reasons. You CAN use smaller object detection/tagging models on phones, but a cloud model will be much smarter at it.
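For what it's worth, the local-only route is genuinely doable today. A minimal sketch (assuming Python with torchvision and Pillow installed; the folder path and the ResNet-18 choice are just illustrative, not anything OneDrive actually runs):

    # Tag photos locally with a pretrained ImageNet classifier, so only
    # self-generated tags ever leave the machine. Assumes torchvision >= 0.13.
    from pathlib import Path
    import torch
    from PIL import Image
    from torchvision.models import resnet18, ResNet18_Weights

    weights = ResNet18_Weights.DEFAULT
    model = resnet18(weights=weights).eval()
    preprocess = weights.transforms()      # resize, crop, normalize
    labels = weights.meta["categories"]    # ImageNet class names

    tags = {}
    for photo in Path("~/Pictures").expanduser().glob("*.jpg"):  # hypothetical folder
        img = Image.open(photo).convert("RGB")
        with torch.no_grad():
            scores = model(preprocess(img).unsqueeze(0)).softmax(dim=1)[0]
        top3 = scores.topk(3)
        tags[photo.name] = [labels[i] for i in top3.indices.tolist()]

    print(tags)  # e.g. {"backyard.jpg": ["greenhouse", "picket fence", "birdhouse"]}

It won't match a big cloud model on accuracy, which is exactly the trade-off above, but nothing gets scanned server-side.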
They understand some of this is a touchy subject, which is why they have these privacy options and have limitations on how they'll process or use the data.
I'm sorry, are you working for Microsoft? Because the level of commitment to explaining things the corporate way that you've shown in these comments is... quite impressive.
I'd be willing to believe this if they didn't repeatedly and consistently nuke the settings where I turned this off during some random Windows update, only for me to discover it after all my stuff got moved/uploaded to the cloud against my previous express wishes. And if Microsoft (and almost everyone else) weren't clearly buddy-buddy with the CIA, even if just in the form of In-Q-Tel.
Of course, the problem with your data being available even for a day or so, let's say because you didn't read your e-mails that day, is that your data will be trained on and used for M$'s purposes. They will have powerful server farms at the ready, holding your data at gunpoint, so that the moment they manage to fabricate fake consent they can process your data before you can even finish reading any late notification e-mail, if any.
Someone show me any case where big tech has successfully removed such data from already-trained models, or, if they are unable to do that with the black boxes they create, removed the whole black box because a few people complained about their data being in it. No one can, because it has not happened. Just as ML models are used as laundering devices, they are also used as responsibility shields for big tech, who rake in the big money.
This is M$'s real intention here. Let's not fool ourselves.
If that was the case, the message should be about a limit on re-enabling the feature n times, not about turning it off.
Also, if they are concerned about processing costs, the default for this should be off, NOT on. The default for any feature like this that uses customers' personal data should be OFF for any company that respects its customers' privacy.
> You are trying to reach really far out to find a plausible
This behavior tallies up with other things MS have been trying to do recently to gather as much personal data as possible from users to feed their AI efforts.
Their spokesperson also avoided answering why they are doing this.
Your comment, on the other hand, seems to be reaching really far to portray this as normal behavior.
Yeah exactly. Some people have 100k photo collections. The cost of scanning isn’t trivial.
They should limit the number of times you turn it on, not off. Some PM probably overthought it and insisted you need to tell people about the limit before turning it off and ended up with this awkward language.
If it were that simple, there would be no practical reason to limit that scrub to three (and in such a confusion-inducing way). If I want to waste my time scrubbing, that should be up to me -- assuming it is indeed just scrubbing tagged data, because if anything should have been learned by now, it is that:
the worst possible reading of any given feature must be assumed, to the detriment of the user and the benefit of the company.
Honestly, these days, I do not expect much of Microsoft. In fact, I recently thought to myself, there is no way they can still disappoint. But what do they do? They find a way damn it.
No, but the scanning is happening on Microsoft servers, not locally, I am guessing.
So if you enable the feature, it sends your photos to MS to scan... If you turn it off, they delete that data, meaning if you turn it on again, they have to process the photos again. Every time you enable it, you are using server resources.
However, this should mean that they don't let you re-enable it after you turn it off 3 times, not that you can't turn it off if you have enabled it 3 times.
Where does it say that turning it off deletes the data? It doesn't even say that turning it off stops them scanning your photos. The option is "do you want to see the AI tags". Google search history is the same: turning off or deleting history only affects your copy of the data.
Just because you can't personally think of a reason why the number shall be 3, and no more than 4, accepting that thou hast first counted 1 and 2, it doesn't mean that the reason is unthinkable.
I feel like you're way too emotionally invested in whatever this is to assess it without bias. I don't care what the emotions are around it, that's a marketing issue. I only care about the technical details in this case and there isn't anything about it in particular that concerns me.
It's probably opt-out, because most users don't want to wait 24 hours for their photos to get analyzed when they just want to search for that dog photo from 15 years ago using their phone, because their dog just died and they want to share old photos with the family.
This doesn't apply to your encrypted vault files. Throw your files in there if you don't want to toggle off any given processing option they might add 3 years from now.
It's easy for people to forget that being overly emotionally invested in their argument can cloud their judgement. Most of us do it at some point, I am not immune, but if someone has any reasonability in them then it can actually help at least reflect on why they are championing their position. They may not change their position, but they might try to form a better argument that has more solid grounds.
After all, sometimes an emotional reaction comes from a logical basis, but the emotion can avalanche and then the logical underpinnings get swept away so they don't get re-evaluated the way they should.
Since you seem to like advice on the internet: try engaging with people as if they are smarter and less biased than you, rather than the reverse. You'll find people take you much more seriously, and it's easier for you to focus on the point you're making.
Clearly, you personally can't think of a reason yourself based on that 'probably' alone.
<< I feel like you're way too emotionally invested
I think. You feel. I am not invested at all. I have... limited encounters with Windows these days. But it would be silly to simply dismiss it. Why? For the children, man. Think of the poor children who were not raised free from this silliness.
<< I only care about the technical details in this case and there isn't anything about it in particular that concerns me.
I can respect that. What are those technical details? MS was a little light on the details.
"Microsoft collects, uses, and stores facial scans and biometric information from your photos through the OneDrive app for facial grouping technologies. This helps you quickly and easily organize photos of friends and family. Only you can see your face groupings. If you share a photo or album with another individual, face groupings will not be shared.
Microsoft does not use any of your facial scans and biometric information to train or improve the AI model overall. Any data you provide is only used to help triage and improve the results of your account, no one else's.
While the feature is on, Microsoft uses this data to group faces in your photos. You can turn this feature off at any time through Settings. When you turn off this feature in your OneDrive settings, all facial grouping data will be permanently removed within 30 days. Microsoft will further protect you by deleting your data after a period of inactivity. See the Microsoft account activity policy for more information."
I turn all Co-Pilot things off and I've got all those AI/tagging settings off in OneDrive, but I'm not worried about the settings being disingenuous currently.
There's always a worry that some day, a company will change and then you're screwed, because they have all your data and they aren't who you thought they were anymore. That's always a risk. Just right now, I'm less worried about Microsoft in that way than I am with other companies.
In a way, being anti-government is GOOD, because overly relying on government is dangerous. The same applies to all these mega-platforms. At the same time, I know a lot of people who have lost a lot of data because they never had it backed up anywhere, and people who have the data but can't find anything because there's so much of it and none of it is organized. These are actual real-world problems, and Microsoft legitimately sees that the technology is there now to solve them.
> Their disclaimer already suggests they don't train on your photos.
We know all major GenAI companies trained extensively on illegally acquired material, and they were hiding this fact. Even the engineers felt this wasn't right, but there were no whistleblowers. I don't believe for a second it would be different with Microsoft. Maybe they'd introduce the plan internally as a kind of CSAM scanning but, as opposed to Apple, they wouldn't inform users. The history of their attitude towards users is very consistent.
I am aware that many companies train on illegally acquired content and that bothers me too.
There is that initial phase of potential fair use within reason, but the illegal acquisition is still a crime. Eventually after they've distilled things enough, it can become more firmly fair use.
So they just take the legal risk and do it, because after enough training the legal challenges should be within an acceptable range.
That makes sense for publicly released images, books and data. There exists some plausible deniability in sweeping up influences that have already been released into the world. Private data can contain unique things which the world has not seen yet, which becomes a bigger problem.
Meta/Facebook? I would not and will never trust them. Microsoft? I still trust them a lot more than many other companies. The fact many people are even bothered by this, is because they actually use OneDrive. Why not Dropbox or Google Drive? I certainly trust OneDrive more than I trust Dropbox or Google Drive. That trust is not infinite, but it's there.
If Microsoft abuses that trust in a truly critical way that resonates beyond the technically literate, that would not just hurt their end-user personal business, but it would hurt their B2B as well.
Then you would limit the number of times the feature can be turned on, not turned off. Turned off uses fewer resources, while turned on potentially continues using their resources. I also doubt they actually remove data that requires processing to obtain; I wouldn't expect them to delete it until they're actually required to do so, especially considering the metadata obtained is likely insignificant in size compared to the average image.
It's an illusion of choice. For over a decade now, companies have either spammed you with modals/notifications until you give up and agree to settings that compromise your privacy, or "accidentally" turned these settings on and pretended the change happened by mistake or bug.
Language used is deceptive and comes with "not now" or "later" options and never a permanent "no". Any disagreement is followed by a form of "we'll ask you again later" message.
Companies are deliberately removing user's control over software by dark patterns to achieve their own goals.
An advanced user may not want their data scanned, for whatever reason, and with this limit they cannot control the software, because the vendor decided it's just 3 times and after that the setting goes permanently "on".
And considering all the AI push within Windows and other Microsoft products, it is rather impossible to assume that MS will not be interested in training their algorithms on their customers'/users' data.
---
And I really don't know how else you can interpret this whole talk with an unnamed "Microsoft's publicist" when:
> Microsoft's publicist chose not to answer this question
and
> We have nothing more to share at this time
but as hostile behavior. Of course they won't admit they want your data, but they want it and will have it.
Both enabling and disabling incur a cost (because they delete the data, but then have to recreate it), but they wouldn't want to punish you for enabling it so it makes sense that the limitation is on the disabling side.
It's harder to find a reasonable use case to constantly opt-in and opt-out, incurring server side costs. Generally you either want it on or want it off. They do limit the cost of disabling it some, because they cache that data for 30 days, but that still means someone could toggle it ~11 times a year and incur those costs.
I don't know what they're seeing from their side, but I'm sure they have some customers that have truly massive photo collections. It wouldn't surprise me if they have multiple customers with over 40TB of photos in OneDrive.
It sounds like you have revoked their permission to tag(verb) the photos, why should this interfere with what tag(noun) the photo already has?
But really, I know nothing about the process. I was going to make an analogy about how it would be the same as Adobe deleting all your drawings after you let your Photoshop subscription lapse, but realized that this is exactly the computing future these sorts of companies want, and my analogy is far from the proof by absurdity I wanted it to be. Sigh, now I am depressed.
Honestly, I hated when they removed automatic photo tagging. It was handy as hell when uploading hundreds of pictures from a family event, which is about all I use it for.
Precisely. The logic could just as easily be "you can only turn this ON three times a year." You should be able to turn it off as many times as you want and no hidden counter should prevent you from doing so.
"You can only turn off this setting 3 times a year."
I look forward to getting a check from Microsoft for violating my privacy.
I live in a state with better-than-average online privacy laws, and scanning my face without my permission is a violation. I expect the class action lawyers are salivating at Microsoft's hubris.
I got $400 out of Facebook because it tagged me in the background of someone else's photo. Your turn, MS.
If you don't trust Microsoft but need to use Onedrive, there are encrypted volume tools (e.g. Cryptomator) specifically designed for use with Onedrive.
You seem to be implying that users won't accept this. But users have accepted all the other bullshit Microsoft has pulled so far. It genuinely baffles me why anyone would choose to use their products yet many do and keep making excuses why alternatives are not viable.
Yes, any non-E2EE cloud storage system has strict scanning for CSAM. And it's based on perceptual hashes, not AI (because AI systems can be tricked with normal-looking adversarial images pretty easily).
I built a similar photo ID system, not for this purpose or content, and the idea of platforms using perceptual hashes to potentially ruin people's lives is horrifying.
Depending on the algorithm and parameters, you can easily get a scary amount of false positives, especially using algorithms that shrink images during hashing, which is a lot of them.
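To make that concrete, here's roughly what a classic average hash does. A sketch in Python with Pillow; the filenames and the 10-bit threshold are made-up illustrations, not the parameters of any real deployment:

    # Sketch of an average hash (aHash). Fancier schemes like pHash use a DCT,
    # but the failure mode is similar: the image is shrunk so aggressively
    # that only coarse structure survives, so look-alikes can collide.
    from PIL import Image

    def average_hash(path, size=8):
        # Shrink to a tiny grayscale thumbnail; nearly all detail is discarded here.
        img = Image.open(path).convert("L").resize((size, size))
        pixels = list(img.getdata())
        mean = sum(pixels) / len(pixels)
        # One bit per pixel: brighter than the mean or not (64 bits total).
        return [1 if p > mean else 0 for p in pixels]

    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))

    h1 = average_hash("innocent_photo.jpg")           # hypothetical files
    h2 = average_hash("database_flagged_image.jpg")
    if hamming(h1, h2) <= 10:   # a loose but not unusual threshold
        print("treated as a match, even though the originals may differ")

Two unrelated images that happen to share a similar coarse brightness layout can land inside that threshold, and that's where the false positives come from.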
Yeah, it’s not a great system due to the fact that perceptual hashes can and have been tricked in the past. It is better than machine learning though because you can make any image trigger an ML model without necessarily looking like a bad image. That is, perceptual hashes are much harder to adversarially fool.
I agree, and maybe I'm wrong, but I see a similarity between phash quantization and DCT and ML kernels. I think you could craft "invisible" adversarial images similarly for phash systems like you can ML ones and the results could be just as bad. They'd probably replicate better than adversarial ML images, too.
I think the premise for either system is flawed and both are too error prone for critical applications.
I imagine you'd add more heuristics and various types of hashes? If the file is just sitting there, rarely accessed and unshared, or if the file only triggers on 2/10 hashes, it's probably a false alarm. If the file is on a public share, you can probably run an actual image comparison...
A lot of classic perceptual hash algorithms do "squinty" comparisons, where if an image kind of looks like one you've hashed against, you can get false positives.
I'd imagine outside of egregious abuse and truly unique images, you could squint at a legal image and say it looks very much like another illegal image, and get a false positive.
From what I'm reading about PhotoDNA, it's your standard phashing system from 15 years ago, which is terrifying.
But yes, you can add heuristics, but you will still get false positives.
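A sketch of the kind of layering suggested a few comments up (the hash families, the 10-bit distance, and the 7-of-10 vote are all arbitrary illustrative choices, not anyone's real tuning):

    # Combine several perceptual hash families and require a strong majority
    # before escalating. This cuts false positives but cannot eliminate them.
    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))

    def looks_like_known_image(hash_fns, candidate_path, known_hashes,
                               bit_threshold=10, votes_needed=7):
        # One vote per hash family whose distance falls inside the threshold.
        votes = sum(
            1 for fn, known in zip(hash_fns, known_hashes)
            if hamming(fn(candidate_path), known) <= bit_threshold
        )
        # A 2-of-10 hit is probably noise; even a strong majority should be
        # treated as grounds for human review, not proof.
        return votes >= votes_needed

Even with extra heuristics like sharing status layered on top, the underlying comparison is still fuzzy, so the error rate never reaches zero.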
I thought Apple’s approach was very promising. Unfortunately, instead of reading about how it actually worked, huge amounts of people just guessed incorrectly about how it worked and the conversation was dominated by uninformed outrage about things that weren’t happening.
> Unfortunately, instead of reading about how it actually worked, huge amounts of people just guessed incorrectly about how it worked
Folks did read. They guessed that known hashes would be stored on devices and images would be scanned against that. Was this a wrong guess?
> the conversation was dominated by uninformed outrage about things that weren’t happening.
The thing that wasn't happening yet was mission creep beyond the original targets. Because expanding beyond originally stated parameters is a thing that happens with far-reaching monitoring systems. Because it happens with the type of regularity that is typically limited to physics.
There were secondary concerns about how false positives would be handled. There were concerns about what the procedures were for any positive. Given governments' propensities to ruin lives now and ignore that harm (or craft a justification) later, the concerns seem valid.
That's what I recall the concerned voices were on about. To me, they didn't seem outraged.
> Folks did read. They guessed that known hashes would be stored on devices and images would be scanned against that. Was this a wrong guess?
Yes. Completely wrong. Not even close.
Why don’t you just go and read about it instead of guessing? Seriously, the point of my comment was that discussion with people who are just guessing is worthless.
> Why don't you just explain what you want people to know instead of making everyone else guess what you are thinking?
I’m not making people guess. I explained directly what I wanted people to know very, very plainly.
You are replying now as if the discussion we are having is whether it’s a good system or not. That is not the discussion we are having.
This is the point I was making:
> instead of reading about how it actually worked, huge amounts of people just guessed incorrectly about how it worked and the conversation was dominated by uninformed outrage about things that weren’t happening.
The discussion is about the ignorance, not about the system itself. If you knew how it worked and disagreed with it, then I would completely support that. I’m not 100% convinced myself! But you don’t know how it works, you just assumed – and you got it very wrong. So did a lot of other people. And collectively, that drowned out any discussion of how it actually worked, because you were all mad about something imaginary.
You are perfectly capable of reading how it worked. You do not need me to waste a lot of time re-writing Apple’s materials on a complex system in this small text box on Hacker News so you can then post a one sentence shallow dismissal. There is no value in doing that at all, it just places an asymmetric burden on me to continue the conversation.
The actual system is that they used a relatively complex zero-knowledge set-matching algorithm to calculate whether an image was a match without downloading or storing the set of hashes locally.
That said, I think this is mostly immaterial to the problem? As the comment you’re responding to says, the main problem they have with the system is mission creep, that governments will expand the system to cover more types of photos, etc. since the software is already present to scan through people’s photos on device. Which could happen regardless of how fancy the matching algorithm was.
Among many many issues: Apple used neural networks to compare images, which made the system very exploitable. You could send someone an image where you invisibly altered the image to trip the filter, but the image itself looked unchanged.
Also, once the system is created it’s easy to envision governments putting whatever images they want to know people have into the phone or changing the specificity of the filter so it starts sending many more images to the cloud. Especially since the filter ran on locally stored images and not things that were already in the cloud.
Their nudity filter on iMessages was fine though (I don’t think it ever sends anything to the internet? Just contacts your parents if you’re a minor with Family Sharing enabled?)
> once the system is created it’s easy to envision governments putting whatever images they want to know people have into the phone
A key point is that the system was designed to make sure the database was strongly cryptographically private against review; that's actually where 95% of the technical complexity in the proposal came from: to make absolutely sure the public could never discover exactly what government organizations were or weren't scanning for.
Sorry, but you're relaying a false memory. Conversation on the subject on HN and Reddit (for example) was extremely well informed and grounded in the specifics of the proposal.
Just as an example, part of my response here was to develop and publish a second-preimage attack on their hash function -- simply to make the point concrete that various bad scenarios would be facilitated by the existence of one.
> instead of reading about how it actually worked, huge amounts of people just guessed incorrectly about how it worked and the conversation was dominated by uninformed outrage
I would not care if it worked 100% accurately. My outrage is informed by people like you who think it is OK in any form whatever.
No amount of my device spying on me is acceptable, no matter how cleverly implemented. The fact that your comment said anything positive about it at all without acknowledging that it is an insane idea and should never be put into practice is what I was referring to.
I read the whitepaper they published and worked at Apple at the time this idea was rightly pulled. I understand it perfectly fine and stand by my words.