> Meta introduced a feature known as Tag Suggestions to make it easier for users to tag people in their photos. According to Paxton’s office, the feature was turned on by default and ran facial recognition on users’ photos, automatically capturing data protected by the 2009 law.
Meta should also be forbidden from using any features derived from this data, or from open-sourcing any model trained on it. A $1.4B settlement is too small compared to the long-term gains for a company of this size.
I believe they asked things like “Get notified when you appear in your friends’ photos” or something along those lines. My memory is fuzzy though. Phrasing it like that would probably get more opt-ins. It’s a lot less scary sounding.
Yes, but the other side is true too. For example, if the prompt just said "Allow Facebook to scan all your personal photos and do facial recognition, and also capture some of that data for future use?" then it's not at all clear why they are asking for this. Also it's going to be all "no" from the users.
Much better would be a compromise that describes both the why and the what. Though that will be a lot of text, which isn't good either. Hard problem.
In general, when I come across some sort of opt-in, I get offered a choice between allowing something to work and it not working, leaving me with a suboptimal experience. I can't even try before I buy, because once I have consented, the data/image/whatever is already "released".
Most opt-in choices, unless coerced by laws, will be heavy handed nudges at best.
I'm an IT consultant and stand more of a chance than most of making an informed choice, but please don't wave "opt in" around as some sort of laundering procedure. I am deliberately juxtaposing it with money laundering.
Many of Facebook's "opt-in" things aren't actually consent, so I'm prejudicially wary of any numbers Facebook gives. (I have no insight into this particular case.)
Hopefully other states and countries sue them too. Facebook has taken massive liberties with our private data and they should be held accountable for it.
This is always my frustration with class action lawsuits. The people who've been affected by the company never get anything of value. It's always the attorneys and the state that started the suits that make all the money.
The worst was back in the day when Chevy used some bogus primer for their trucks. They refused to admit it was their fault that owners' paint was coming off their trucks, even absent any rust. A class action was started, and over the course of several years the government and owners finally won.
What did the owners get? Coupons for $100 off a paint job at the local Chevy dealer.
So not only did the owners not get anything to help them fix their problems, they were forwarded to a local dealership where the dealership could then take advantage of them again, charge them thousands of dollars, and essentially make back all the money they were paying the attorneys and government.
Much of the time the actual harm to individuals is fairly small, but there are so many that the aggregate harm is hard to ignore.
It's a quandary. You can ignore it, but that encourages criminal behavior. Or you can pursue it, but proving your case beyond a reasonable doubt is time consuming and difficult. Often, there is no money to pay the lawyers in advance, so they expect to be compensated for the risk of getting nothing.
If the lawyers worked for free you might get double or triple the settlement, but it's hardly better to get $300 off a paint job than $100. The cash equivalent is probably only $30.
>If the lawyers worked for free you might get double or triple the settlement, but it's hardly better to get $300 off a paint job than $100. The cash equivalent is probably only $30.
No, the cash equivalent is probably -$200 or so.
The problem is that the individuals affected didn't get anything of value. They got a $100 off coupon at the dealership for a paint job there. Which means the dealership is just going to inflate the price by at least that much, and dealerships are already known to have higher prices than competing businesses anyway.
If they had gotten a $100 voucher for a paint job at any paint business, or better yet a $100 check to spend as they wish, then this would have cost GM something at least. But instead, they stood to profit from the "payout".
I'm not in that class, but I think that (like any other case) cash is the right way to settle, in the absence of a mutual agreement to the contrary. The beneficiaries didn't get to bargain for a voucher, and shouldn't have to go to the dealer in order to be made whole.
The only thing that seems fair to me is if the settlement is held in trust and disbursed with minimum friction to anyone eligible. If Chevy wants to give those people a $100 voucher for thirty bucks if they show their eligibility for the action then go ahead!
> proving your case beyond a reasonable doubt is time consuming and difficult
Civil cases aren’t generally required to be decided beyond a reasonable doubt but merely with a preponderance of the evidence. It’s more that court in general is often time consuming and difficult.
> they expect to be compensated for the risk of getting nothing
It depends on the lawyer and the case; e.g. in most class action suits the lawyers charge nothing up front but take a cut of the damages.
The problem in large class settlements is the “damages” awarded per capita are often a trifle compared to a similar situation with a smaller set of plaintiffs.
Look at the award from the Equifax data breach settlement… The CFPB fine was about $0.66 per person. The class action award averages to about $2 per person.
Could your bank plaster your personal info on a billboard for two years, but make it up to you with a nice paper cup of coffee? (How would you even prove damages?!?). Don’t get your hopes up — that coffee would be too expensive…
Would $1000 be fair for you? Multiply that by 150 million…
Same with the airbag recalls.
The problem is worse than lawyers’ cuts… Proper restitution isn’t even possible (orders of magnitude greater than company’s assets).
At the individual level, a $10k automobile can do far more damage to people and property than just $10k…
But, as a society, we often fail to punish companies that do more harm than good.
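To make the scale concrete, here's a back-of-envelope sketch using the figures from this thread (the 150 million affected people and $1,000 per person are the comment's hypotheticals, not official numbers):

```python
# Back-of-envelope: hypothetical "fair" restitution vs. what settlements actually pay.
affected = 150_000_000            # assumed number of affected people, per the comment above
fair_per_person = 1_000           # hypothetical per-person restitution

total_fair = affected * fair_per_person
print(f"Hypothetical fair restitution: ${total_fair / 1e9:,.0f}B")   # $150B

# Equifax figures cited above, for comparison:
cfpb_fine_per_person = 0.66
class_award_per_person = 2.00
print(f"Actual payout per person: ~${cfpb_fine_per_person + class_award_per_person:.2f}")
```

$150B is far beyond Equifax's entire market value, which is the sense in which proper restitution isn't even possible.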
Let me get this straight, you're saying I should take a megacorp to small claims court because lawyers that will inevitably end up costing >$10k might take my case for free if the case is strong enough? Does that logic make any sense to you?
I wish this were true. IANAL, I just play one, but for example, in Illinois anything under $15K is Small Claims, but only claims below $1500 are lawyer-free. And if you're suing a corp I don't think that applies: the law generally says a corp can't defend itself and must be represented by a lawyer on all occasions.
I absolutely agree with you. However, living in Illinois, I received a $397 check. I had to go look it up to make sure I got the number right. On one hand, a massive chunk of the $650 million went to the lawyers. But $397 for each user, when over a million signed on to the lawsuit, is pretty significant.
Let's not pretend like that is a positive outcome. All of your personal data being irrevocably made available to persons unknown, and used for unknowable purposes for untold decades... but at least you got $100 one time.
I don't believe that was what happened in this instance, but your point stands that some violations are not able to be adequately remedied with cashdollars.
> Facebook has taken massive liberties with our private data
Gotta remember the quote from Zuckerberg during the early days, when someone asked him how he got all that private data. He said, and I quote, "People just submitted it. I don't know why. They 'trust me'. Dumb fucks."
this is the core Facebook (and Google et al) business model
WhatsApp and Google system apps scan your contact list every two seconds (not hyperbole) and send the diff to their servers. if you are in someone's phonebook, you're very well cataloged on their systems. and those contact entries tie together phone, email, name, other groupings, association lists built by crossing lists, address, aliases, etc.
they have to claim they don't map those identities for ad targeting, but everything else that they can claim you "asked for", like suggesting new contacts to add, is fair game.
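For illustration only, a minimal sketch of what diff-based contact syncing of this kind could look like. Everything here (the function names, the stubs, the literal two-second loop) is a hypothetical reconstruction of the behavior described above, not anyone's actual code:

```python
import hashlib
import json
import time

def read_contacts():
    """Stub standing in for the platform's address book API (hypothetical)."""
    return [{"id": "c1", "name": "Alice", "phone": "+1555000001"}]

def upload_to_server(delta):
    """Stub standing in for the network call that ships the delta (hypothetical)."""
    print("uploading:", delta)

def snapshot(contacts):
    """Hash each entry so changes can be detected without storing full copies."""
    return {c["id"]: hashlib.sha256(json.dumps(c, sort_keys=True).encode()).hexdigest()
            for c in contacts}

def diff(old, new):
    """Return only what was added, changed, or removed since the last snapshot."""
    upsert = [cid for cid, h in new.items() if old.get(cid) != h]
    delete = [cid for cid in old if cid not in new]
    return {"upsert": upsert, "delete": delete}

previous = snapshot(read_contacts())
while True:
    current = snapshot(read_contacts())
    delta = diff(previous, current)
    if delta["upsert"] or delta["delete"]:
        upload_to_server(delta)   # only the diff crosses the wire
    previous = current
    time.sleep(2)                 # "every two seconds", per the claim above
```

The point being: a diff-based sync is cheap enough to run constantly, which is what would make cataloging at this scale feasible.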
Anyone who clicks agree without reading and carefully interpreting every word of the terms of service is in fact consenting to almost anything. Almost anything you can think of could be there and you don't care.
I mean even things which are not actually written in the TOS. If you don't know what's in the agreement, you're agreeing to almost anything in principle.
The number of Facebook users who have actually read all the legal crap probably wouldn't fill a small lecture hall.
Do fines on big corporations even work? I tend to suspect paying them is part of the business model: they will keep doing whatever they want to generate huge profits, covering whatever fines they may have to pay.
Well, I would expect that, in addition to paying money, this settlement also obliges Meta to stop doing whatever they were doing. And if they don't, then I would expect the fine for a repeat offense to be much higher, like 10x. At some point that would make ignoring the law unprofitable.
Seems to be a pretty strong incentive for companies to make sure that, for anything unethical that they actually want to do, the fines will be large enough to satisfy the median voter, but small enough not to impede their business.
Empirically, large companies seem to have little problem following this incentive.
That's actually kind of the problem here. The assumption built into this is that the company is doing the math. More likely the company is just so big that if even one mid-level person makes a bad call, the consequence gets propagated across a billion users. But there are so many mid-level people that it's guaranteed to happen.
Which is the same reason that "higher penalties" wouldn't work either. The problem is not that the penalties aren't sufficient to deter, the problem is that the right hand doesn't know what the left hand is doing, so even if the lawyers tell them "you must never do this" they've got thousands of independent chances to screw it up. One person who doesn't heed their training and oof.
The actual problem is that these companies are too big. Mistakes will always happen, but one mistake by one team in a huge bureaucracy shouldn't affect a population the size of a major country.
Let’s go nuts and just take for granted the good faith and best intentions of megacorp, against all evidence. Even then, we all know that ignorance of the law doesn’t get people out of a speeding ticket. Why should the argument from ignorance hold up then with megacorp, who is staffed with hundreds of legal compliance professionals? Better yet, why should it hold up for their nth example of systematic abuse?
I have some sympathy for the idea that these fines are a form of "stochastic taxation". There are so many rules, arbitrarily enforced and ever changing, that a large company will bleed off a billion now and then whatever you do.
I think that definitely happens, but I'd guess it's the smaller part of the equation.
From the better AP News article linked by the top comment:
> The company announced in 2021 that it was shutting down its face-recognition system and deleting the faceprints of more than 1 billion people amid growing concerns about the technology and its misuse by governments, police and others.
The corporation cannot be allowed to repeatedly violate the same law, or to make a policy of violating it over and over. The punishment should increase exponentially...
That doesn't work, though. In practice, most states probably won't sue for this. And even if they do, Texas is one of the most populous states in the US, and per-state settlements will presumably be proportionate to the number of people harmed. Texas alone is nearly a tenth of the US population, so a population-proportional nationwide total would be closer to $15B, not 1.5B * 50 = $75B.
It doesn't matter if the states do or not. It's a deterrent.
You go into your quarterly earnings call and say "we're conducting illegal business practices which expose us to up to $75B of risk" and see how the investors like that.
You don't get caught every time you speed, but the fine certainly is a deterrent against doing 100mph everywhere.
But that's not what they say to shareholders. These deals always come along with "admitting no fault", so they don't have to tell anyone that they're conducting illegal activities. They're perfectly free to chalk the fine up to overzealous prosecution or whatever. I think the case that's usually made is "we didn't do anything wrong, but it was cheaper to settle than to fight".
Under GAAP, the company is required by law to disclose it in the FS. Since the FS is the primary way the company communicates performance to investors, they in fact are saying this to investors.
It's called a legal provision or legal reserve: they need to set aside money for the eventual expense of the settlement. This is going to eat right into EPS.
And, any material change in provision will be discussed. Plus, any investor worth their salt is going to be poring over FS disclosures, including any legal provisions.
Admission of fault has nothing to do with the FS. It's purely an insurance topic.
Venture capitalists aren't stupid. Even if Zuck says everything is okay they come to their own conclusions.
Obviously everything is NOT okay, otherwise the company wouldn't be getting fined in court for breaking the law.
VCs know just as well as we do that Facebook's business model (privacy-invading targeted advertising) exists because law has not caught up to technology yet. That's starting to change, as this settlement shows.
If Meta suffer many more of these legal settlements which wipe >1% off the annual profit then investors will start to divest. The value of the company will fall.
Keep in mind this was a settlement amount, which suggests the legal liability was actually a lot higher than $1.4B.
So I think my statement is supported: fining corporations works.
That’s still a fine of 1% of revenue, which seems like a small percentage, but is still a sizable hit. It’s enough to make investors notice, though probably not substantially if it is a one-off event. If it isn’t a one-off event though, and states/countries start fining it all over the place, then there might be more of an issue.
Texas passed a pretty solid CCPA-like data privacy law (TDPSA) which went into effect July 1st of 2024. That start date was announced when it was passed in June of 2023. Meta needs to get with the times; they're wildly out of compliance.
This is a trend not only among banks but more and more among big tech - just include the (future) fine into your product price, then settle (if they investigate at all). No harm, nobody goes to prison, everybody happy.
> The settlement, announced Tuesday, does not act as an admission of guilt and Meta maintains no wrongdoing.
> In 2011, Meta introduced a feature known as Tag Suggestions to make it easier for users to tag people in their photos. According to Paxton’s office, the feature was turned on by default and ran facial recognition on users’ photos, automatically capturing data protected by the 2009 law. That system was discontinued in 2021, with Meta saying it deleted over 1 billion people’s individual facial recognition data.
> The 2022 lawsuit
> We are pleased to resolve this matter, and look forward to exploring future opportunities to deepen our business investments in Texas, including potentially developing data centers
Each statement makes it increasingly harder to view this as a fine rather than a tax. An offence that lasted 11 years and got prosecuted a year after it ended can be explained in no other way than as an excuse dug out of the ground to extract a ransom.
$1.4B is serious money though. You can't be paying that out left and right, and there's no way Meta made more than $1.4B on doing facial analysis of photos uploaded by Texans. This was an actual loss by them, and thus an error.
The right figure to compare against is their annual profit, which is around $45B. A single state in the US in a single enforcement action against them taking off over 3% of their entire profit for the year is a big deal.
Also feels like a trend among states to end-run the commerce clause. Want to tax imports despite the Constitution? Just pass regulations that only affect out of state industries and fine them for non-compliance.
It should really be flipped. Wickard should be overruled, as a result reducing the federal power to regulate intra-state affairs, but state power better circumscribed by the Commerce Clause: no one state should be able to dictate nationwide commerce terms on account of its economy's size. Whether that would prevent Texas from regulating Meta as in this case is not clear -- the courts would have to figure that out.
> speeding on the highway, driving in the HOV lanes
This seems like it's apples to oranges. The people choosing to do these things aren't doing so because they consider it to be a cost of business, expecting that they'll generate more in revenue, but because they think the likelihood of being caught and the gravity of the offense are relatively low. This practice of disregarding regulations because the fines can be factored as a cost is fairly well confined to large corporations.
(I guess someone being paid per mile driven from advertising decals on their car would get a business benefit from the speeding; they may even factor in the probability of being caught with the amount of the fine to determine their speeding decisions. That's nobody I know, though.)
Time is money in the end. Speeding reduces time spent on the road and allows more time being spent elsewhere. The analogy still holds, just more abstractly.
Which is the reason why, around here, your license is voided (for a certain time, on the order of years) if you speed too much; plus you may have to attend a behavioural course, which costs quite a bit for people with less money and takes three days for those who can afford it.
And no chauffeur is going to risk that, since it would void their means of living.
edit: I do not see any reason not to void a company's "license to operate in [state/country]" in the case of continued law-breaking.
The trick (at least in the USA) is to use an out-of-state licence which isn't connected to the DMV system of the state you're driving in. Sure you'll get fines, but your licence won't be revoked.
> So it seems to me that there’s a fundamental human trait (“what can I get away with” ??) that warrants thinking about as well.
Idk, I suspect many "crimes" like jaywalking and speeding are more around, this rule errs too hard on safety, but I know I can do this safely, and if it's not safe enough I'll get to pay the consequences (fines, accidents, dying).
That's nothing like violating the privacy of other people for profit and no real penalty (actual jailtime for those involved, not just a --fine-- tax on the profits)
> this rule errs too hard on safety, but I know I can do this safely, and if it's not safe enough I'll get to pay the consequences (fines, accidents, dying)
The analogy here is that people, like corporations, are frequently very bad at assessing where the appropriate line for safety actually is, doubly so when the appropriate line personally inconveniences them. Some rules are perhaps too stringent, but frequently, the guidelines are akin to safe working loads, with safety margins built in, rather than do-not-exceed limits. Anyone who has to understand either of those things will tell you that if your operational envelope exceeds the safe working limit, you will eventually fail, catastrophically.
It's certainly true that traffic fines are not similar to corporate fines in the sense that you don't lose your chemical manufacturing license after committing 15 points worth of chemical safety infractions, but for other kinds of infractions, the fine for people is also frequently just part of the cost of doing the thing.
All of that said, I 100% agree that company leadership should see jail time for a variety of infractions, with a sort of inverted burden of proof as it pertains to determining who is at fault in an offending organization: you can only pass the buck down as far as the highest person who you can prove beyond a shadow of a doubt deliberately hid information from those above them.
When do I get my $46.60, or even better my $933 if we only look at active Facebook users? Either way it still seems like a great price for everyone's biometrics.
Undoubtedly a great price. At times like this I'd like to see the entire extent of the profits split up among the plaintiffs.
> “Wherever I'm going, I'll be there to apply the formula. I'll keep the secret intact.
> It's simple arithmetic.
> It's a story problem.
> If a new car built by my company leaves Chicago traveling west at 60 miles per hour, and the rear differential locks up, and the car crashes and burns with everyone trapped inside, does my company initiate a recall?
> You take the population of vehicles in the field (A) and multiply it by the probable rate of failure (B), then multiply the result by the average cost of an out-of-court settlement (C).
> A times B times C equals X. This is what it will cost if we don't initiate a recall.
> If X is greater than the cost of a recall, we recall the cars and no one gets hurt.
> If X is less than the cost of a recall, then we don't recall.”
People always quote this like the company is doing something disreputable, but that's exactly what you want them to do, as long as the cost of the lawsuits accurately represents the amount of the harm.
Recalling a hundred million cars at the cost of a hundred billion dollars because there is a 0.5% chance that one person across the entire hundred million could die is the wrong choice. Otherwise all products would have to be recalled continuously, because nothing is perfectly safe. Example: Many cars have been sold without lane departure warning systems, even though that could cause some people to die. Should they all be recalled?
The equation above is how you calculate the cutoff for whether to do the recall. What would you propose that they do instead?
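Concretely, the quoted rule is just an expected-cost comparison. A minimal sketch, with numbers made up to match the illustration above:

```python
def should_recall(fleet_size, failure_prob, settlement_cost, recall_cost):
    """The quoted A * B * C = X rule: recall only when expected
    liability exceeds the cost of the recall."""
    expected_liability = fleet_size * failure_prob * settlement_cost  # X = A * B * C
    return expected_liability > recall_cost

# A hundred million cars, a 0.5% chance of a single fatal failure across
# the whole fleet (so 5e-11 per car), a $5M settlement, a $100B recall:
print(should_recall(fleet_size=100_000_000,
                    failure_prob=5e-11,
                    settlement_cost=5_000_000,
                    recall_cost=100_000_000_000))
# -> False: expected liability is ~$25k vs a $100B recall,
#    matching the example in the comment above.
```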
> as long as the cost of the lawsuits accurately represent the amount of the harm
> nothing is perfectly safe
> Should they all be recalled?
My flippant quote from "Fight Club" isn't a great starting point, so let me flesh out my opinion a bit.
I believe that the types of judgements we're talking about here are not ambulance-chaser-type awards for a claimant proportional to the injury but actually punitive in nature towards the business entity itself.
GiantTechCorp isn't paying Alice 40 bucks because she's injured by losing her data, they're paying 10 billion as punishment to not do it [get caught] again. If the punishment doesn't incentivize the company not to reoffend, we need to change it until it does.
The penalty would be greater if they continued violating. First time penalties should probably work that way. However, there needs to be a 3 strike rule or something similar, where penalties of any type by the same entity grow exponentially and ultimately you get banned from operating. Status quo is a series of slaps on the wrists for differing infractions. It just trains them to hide their wrongdoing better.
Equitable, and more specifically injunctive, relief is the way out of this abuse of the "crime is the cost of doing business" mentality that a lot of us raise here.
What could that mean: Meta or whichever company breaks the law, loses ownership and rights to anything that is the result of the crime.
If it's a model, Meta cannot use that or any other version of the model that utilized the illegally acquired data. And that model becomes property of the victims.
That would be the equivalent of confiscating a fleet of trucks from a trucking company whenever a driver broke a law. A little extreme, and it will never happen. China doesn't even operate like this.
This would just create a perverse incentive for the government to confiscate everything it can. Then you would see laws being crafted that are increasingly ambiguous and difficult to follow in order to enable greater confiscation of goods.
Asset forfeiture isn't something we should be cheering on.
I would argue it depends on the provocation. If a little kid is hitting an animal with a stick and the animal attacks, that is self-defense against assault with a deadly weapon; if the animal were a human, anyway. If the animal is my dog, I have to put it down.
I respect your space to air your opinions, but you are not doing so in a respectful or persuasive way. I'm sorry for whatever experiences have made you feel like dogs have no place in modern society. I wholeheartedly disagree.
At no point did I admit anything about my dog being aggressive towards humans. I speak from my experience spending my life around animals, having gone through the adoption process at a "difficult" shelter, and having to manage my dog's incompatibility and aggression towards other dogs.
In addition to interpreting my comment in the worst way possible, you are quick to dismiss the fact that dogs are conscious beings with their own emotions, past history, skills, and evolutionary training. As their handlers, I agree with you that we are absolutely responsible for their actions — that is the implicit choice in having a pet. However that does not mean we can predict their actions at all times.
Any dog can be provoked with the right stimulus, as can any human. I do not agree that applying absolutes in response to risk in the complex interactions between various animal groups, including humans, is a sensible or reasonable approach.
Also, your comment below "Shelter dogs do not have this training and are aggressive!" is extremely uninformed and incorrect. This is your opinion, not a fact. Espousing this attitude hurts the work that shelters are doing, reinforces dangerous stereotypes for specific dog breeds, and does a disservice to those who choose to adopt a pet.
Most privacy laws today are incredibly overwrought and vague. In this case, face recognition was illegal even if you opted into it. Try being a startup that needs any amount of customer data to operate, then read the GDPR, and you'll realize you're better off just moving to the US.
Regardless, you are missing the point. It is a straightforward calculus that if you craft enough complicated and vague privacy laws, companies are bound to violate it, no matter what they do or how hard they try. All you have to do is craft a set of laws in which any company can always be interpreted as in violation of one.
If I was a state, I would go out of my way to craft these vague, overwrought laws so I could have a reliable source of an extra few billion dollars here and there whenever I needed it. If I was a regulator or legislator, I'd do it for the career clout of "going after the big bad guys." And no one will ever complain, because "big companies are evil and capitalism has never done anything for humanity," so the Overton window can only ever move in one direction.
And this is how we end up with the undeniable technological stagnancy of the EU, which completely missed the www and mobile revolutions, and will certainly miss the AI revolution. How many industrial revolutions can you miss out on before you fade into irrelevancy, having lost a meaningful portion of your financial/economic/military/technological power? I guess we will find out.
I'm not missing your point, I just don't agree with it. :P
If what you're suggesting were true we would have seen large numbers of EU unicorns based around gathering private data before GDPR that have somehow disappeared ...and we didn't.
GDPR wasn't brought in through calculus to shake down brave data innovators, striving to "make the world a better place".
GDPR came in reaction to large foreign entities taking the piss, stalking us with creepy adverts and dark patterns, refusing to take no for an answer.
There really needs to be a 3-strike rule type of thing with fines like these. It's ridiculous to me that they can continue to violate people's privacy without their consent, get fined a percentage of a percentage of the money they actually made on the practice, and that's the end of it.
These fines should be exponential in nature, and aggressively so. The 4th-in-a-row fine of this nature should basically take everything they earned in the whole year. Let's see how quick and efficient they suddenly become once there's actual consequences.
That, or Zucc and his cronies should be getting jailed. I'm fine with either option, or preferably both.
Yes. And facebook is not too big to fail. There are thousands of alternatives to Facebook, Whatsapp and Instagram.
And the few employees they have would easily find another job.
Huh, I wondered why this feature stopped being available. It would be even more useful now that you could use AI to say "find me all photos of George riding his bike" or "find all pictures of me with Dave".
> This was the first lawsuit Paxton’s office argued under a 2009 state law that protects Texans’ biometric data, like fingerprints and facial scans. The law requires businesses to inform and get consent from individuals before collecting such data.
I hope it only restricts businesses, because I have an awkward amount of face blindness and would love to have an app that could put names to faces. I wonder if the maker of such an app would be liable for my use of it in Texas.
Anything you can legally opt-in to online is effectively complete freedom to companies. Nobody reads click-through agreements or terms of service. Sites will just add "and also we can collect your biometric info" to the terms, and people will keep clicking accept.
Without some very strong, EU-type rules around how you have to ask, it's just another thing lawyers will know to add to the terms.
> At the time, more than a third of Facebook’s daily active users had opted in to have their faces recognized by the social network’s system. Facebook introduced facial recognition more than a decade earlier but gradually made it easier to opt out of the feature as it faced scrutiny from courts and regulators.
I think it does because my right to privacy is more important than your right to know who I am.
Your device would effectively give anybody the right to demand identification from random people in public places which would probably have a lot of negative consequences.
Were Facebook's lawyers asleep at the wheel for this one? It seems like they could have thrown in a clause about "pictures uploaded may be subject to facial recognition software" and no one would have batted an eye. How is it possible that they dropped the ball so hard here?
I wouldn't be surprised if they had that. They also could've added a clause that Facebook now owns your house, and it'd probably be roughly as interesting to a court.
The fact is that many things buried in EULAs and whatnot are not really enforceable nor constitute consent. Some things have to be agreed to more explicitly than putting them on page 50 of your fine-print.
It's especially problematic when companies start doing something you didn't directly sign up for or couldn't have expected to happen when you did. I don't think that many people who signed up for a social network in 2015 expected that their photos would later be scanned. It might surprise people even now.
Real question: how do you audit with certainty that data got deleted, and not just locked away out of reach for the time being? To me this seems like something that's very hard to prove.
Maybe the state can direct some funds to TxDOT to properly mark hazards on their easements near roadways and properties to avoid my mom crashing her car into a hidden, unguarded, unmarked culvert.
Probably that data was used to train AI models too. I hope we establish a legal framework that prevents training models without proper permission, and the companies that have already trained their models will get fined and those models will be banned from commercial use.
I enjoy the rapid progress of LLMs. ChatGPT and Claude are already a critical part of my daily work. But I don't like the current situation where VCs and start-ups use unpermitted data to train the models, don't respect content creators, and take advantage of the lack of regulations.
I think this adware company is pure evil, but this feature seems like something a user might actually opt in to (as noted in the AP story, quite a few did).
Interesting to see that while meta at least publicly scraps the feature, a similar one on iOS Photos is not even an opt-in – you can't turn it off.
The difference is that with Apple the information gets processed, stored, and accessed only on your own devices, whereas with Facebook it lives on Meta's servers.
You split the money up among the users, everybody gets a $1.50 check in the mail that costs $0.50 to mail out. Most of the checks will get tossed in the trash. Might as well just burn the money.
> The attorney general’s office did not say whether the money from the settlement would go into the state’s general fund or if it would be distributed in some other way.
This is kinda foolish. I find this feature to be extremely useful. Governments need to do better at reducing the burden of choice for end users.
I as an end user of an app shouldn't have to go through every feature, how it is implemented and if that meets my personal privacy bar. Sensible defaults are important.
What Texas is doing here is telling companies that they don't need your permission to steal your facial recognition data. Companies just need to consider that after several years of profiting from that facial recognition data they might one day have to pay just a tiny fraction of the money they make to the state of Texas. Cost of doing business really.
This is going to be a slippery slope as more governments start to use fines as an alternative type of tax unique to these tech companies. I don't know how Meta/Google can react to these fines (except for the whole opt-in part, but then you have a tradeoff with usability, and people against it usually think that Meta cares about their data outside the aggregate).
> I don't know how Meta/Google can react to these fines
They'll react by lobbying for fines while also lobbying to limit the amount of those fines. They love the fines. Fines are something they can budget for and can let them violate the laws as long as they are willing to pay the government a fee/toll/bribe. Without fines they might be held meaningfully accountable for their crimes. The last thing they want is to face a risk of ending up in prison the way that you or I most certainly would for repeatedly ignoring the law.
This is such a ridiculous attitude. Facial recognition in my photo albums is hugely useful. It makes searching for people a breeze. Just because you don't have a need for it does not mean it is "simply not needed".
Facebook does not need to use facial recognition on me and we both know they use it for more than tagging photos. If my phone does it I am asked for permission.
Can I consent to scan my photos if you're in them? I can certainly manually write the names on a physical photo of the people in the photo. That used to be a fairly common practice before digital.
They need it so that they can spy on you. It's not needed, but many companies are built on surveillance capitalism and an increasing number of companies are using that surveillance to gain a huge advantage over their customers. The more a company knows about you, the easier it is for them to take advantage of you.
There's a lot of money to be made exploiting the most intimate details of our lives. Nobody "needs" that money, but they sure don't want to leave it on the table when the government isn't going to stop them from violating our privacy and then stuffing their pockets with our cash.
If there's a lot of money to be made, could you give some concrete examples with that have wide applicability? Ideally I'd like to hear something better than just selling it to an advertiser or data broker.
Examples on how companies and people can make more money by exploiting the massive amounts of private data being collected and sold? I guess that's fair. No company will tell you when they exploit your data to their advantage. It's hidden.
Prices can be set according to the data companies have on you and the assumptions they make using that same data. The price you're asked to pay for something when you shop online isn't always the same price your neighbor would be asked to pay for the exact same item. Lots of potential here too when restaurants don't publicly disclose their prices, but insist that you use a cell phone app or scan a QR code just to see a menu. Your prices don't have to be the same as the person in line behind you for the same foods. Physical retailers have been trying to get this going for a long time.
Fast food chain Wendy's tried to move the needle closer to personalized pricing (aka discriminatory pricing) when they said they were moving to surge pricing and you'd never know how much a burger was going to cost you until you'd already waited in line at the drive through and were told what price you were getting. They backed down due to consumer backlash, but their desire to squeeze every last dime possible out of you by leveraging big data and algorithms is still there.
Health insurance companies want your data so they can charge you more for not moving enough, or because people in your zip code were logged eating more fastfood, or because you've been spending too much on alcohol at the store.
A lot of the tracking we see is explicitly trying to assess traits like intelligence, education level, and mental illnesses including dementia and bipolar disorder.
Here for example is a pizza shop that will "create a profile about you reflecting your preferences, characteristics, psychological trends, predispositions, behavior, attitudes, intelligence, abilities, and aptitudes." You know, just normal pizza shop stuff! (https://pie-tap.com/privacy/)
Companies and scammers alike can easily target uneducated and low-intelligence individuals, and machine learning algorithms can detect when bipolar people are in a manic phase, or what time your ADHD meds usually start to wear off, or when someone with Alzheimer's starts sundowning, and they can jump at those chances to hit people with ads, scams, and manipulations when they think their target is weakest/most confused/most impulsive. Even without a diagnosis your mental health is a huge business opportunity for the person willing to exploit it (https://www.politico.com/news/2022/01/28/suicide-hotline-sil...)
The data being collected on you is increasingly used for really big things like if you get a job, or if your rental lease agreement gets approved, but it's also used for really trivial things like determining how long to leave you on hold when you call a company (https://www.nytimes.com/2019/11/04/business/secret-consumer-...)
Companies aren't collecting huge amounts of facts about you and your life because it's fun for them. They pay a lot of money to purchase, collect, store, maintain, backup, and scrutinize all that data, and they do it because it's making them money. Almost always, their increased profits come at your expense.
All the things you mentioned, like discrimination, sound like ways that companies safeguard against losing money, not earning it like I asked. They all also sound like money losers to me in practice. For example, if someone refuses to hire me because they found out something unsavory about me from a data broker, then they're really only hurting themselves. If someone tries to charge me more because they think I can afford it, then they're going to lose my business or get bad PR on X once I realize I'm being treated unfairly. The best you could probably argue is that preventing corporations from having knowledge about people will protect the most vulnerable members of our society who can't fight or fend for themselves and are willing to tolerate being treated poorly.
> All the things you mentioned, like discrimination, sound like ways that companies safeguard against losing money, not earning it like I asked.
A distinction without a difference? A company that can raise their prices for you because they know (or think) that you can afford the price hike isn't safeguarding them, it's just screwing you out of money. How would you feel if you got an awesome 20% raise at work, only to find that the next day the top 10 things you most frequently buy at the store were all suddenly 20% more when you went to pay for them. Why shouldn't a loaf of bread cost a percentage of your total income?
> if someone refuses to hire me because they found out something unsavory about me from a data broker, then they're really only hurting themselves.
I could agree that it might not be smart for them to lose candidates based on random crap dug out of a background search, but it happens and however dumb it is for the company, you would still be out that job. They'll never tell you why you didn't get hired. You just get ghosted.
> If someone tries to charge me more because they think I can afford it, then they're going to lose my business or get bad PR on X once I realize I'm being treated unfairly.
All the recent inflation shows that companies can get away with arbitrarily raising prices a whole lot as long as they all do it around the same time. Have you been boycotting them all recently? I've gone out of my way to avoid eggs associated with Cal-Maine Foods over price gouging (https://www.newsweek.com/egg-producers-accused-price-gouging...) but it's not been easy. Not every brand I see in stores advertises itself as being related to them. Boycotts are growing more difficult thanks to the massive consolidation of our food industry (https://www.theguardian.com/environment/ng-interactive/2021/...).
Most of the time when you end up paying more because of big data you'll never be told that. All of this is almost totally hidden from consumers. You'll just be charged more or get a bill that seems higher and you won't be told why.
You're very likely already paying more for some things because of big data. Same with store policies. You ask a store what their return policy is and they'll tell you one thing, while the next person who asks gets told something different. You can't feel cheated, or even like you're being treated special because your good consumer score is so high, because you don't even know there are multiple policies in effect depending on who asks.
I'd be willing to bet that a lot of people on twitter have complained about companies like comcast, at&t, tyson foods, facebook, 3m, monsanto, etc, but what has it accomplished? Many of the most wealthy and powerful companies in the US are also the most hated by the public. They don't have good reputations to protect. They just don't have to care if you like them or not.
> The best you could probably argue is that preventing corporations from having knowledge about people will protect the most vulnerable members of our society who can't fight or fend for themselves and are willing to tolerate being treated poorly.
It would protect all of us. No one can "fend for themselves" and everyone is being treated poorly. You have been treated poorly already. You will continue to be treated poorly until it hurts a company's profits to treat you badly. Right now, they're not just getting away with it, they are looking for ways to screw you over even more than they already are, in new and innovative ways, using resources unlike anything you'll ever have. It's highly asymmetric warfare where consumers are divided into buckets and ultimately conquered.
> How would you feel if you got an awesome 20% raise at work, only to find that the next day the top 10 things you most frequently buy at the store were all suddenly 20% more when you went to pay for them.
This is how the California economy works and it's something that I like, because if I'm allowed to have more money, then I can use my brain to figure out a way to not be scammed out of it like everyone else.
As for the rest, I don't think indignant agitators online who stir up fear are really representative of public opinion. Yes corporations tend to be slimy, but that's because people are slimy. I don't want to live in a society that takes away my freedom just to prevent the worst of us from exploiting the weakest of us.
>I don't want to live in a society that takes away my freedom just to prevent the worst of us from exploiting the weakest of us.
Well that's pretty antisocial in general. Sure, you have worded this in such a vague way that it doesn't really provide for meaningful discussion. But, I'd just as obtusely respond that the entire point of society is to prevent the worst of us from exploiting the weakest of us.
It just really seems besides the point to flaunt your own individualism as a response to social questions, as if the issue was whether or not you in particular felt it was appropriate for you.
Society is a great big machine that gives people the opportunity to be a part of building something greater than themselves. The pharaohs for instance would not have been able to build the pyramids without society supporting them and the roles of those workers were immortalized in every stone that was laid. The way the world works hasn't really changed since then. The tech industry is building a new kind of pyramids and that wouldn't be possible if it weren't for all the people looking at ads, which works best when you know what things people want to buy. The digital trail they leave behind will also grant them a kind of immortality like no generation before. Wouldn't you say that's a great system? It's much better than breaking your back in the desert.
Maybe, but already two years ago, if you read the fine print, e.g. when buying a ticket for Cirque du Soleil, you accepted that they can use your face in video for AI training.
But somebody did buy the ticket for you. Today you accept a lot when buying a ticket. Usually you also agree to things like letting them use your picture in commercials/media and so on. Next time, take the time to read the full EULA when buying a ticket to a large event.
Oh yeah, big tech is going to be shaking in their boots. I'm sure Meta is really crying about the 1.4 billion they lost while they're rolling around in the 134 billion in revenue they made last year. They've even got a nice easy payment plan which allows them to invest and make money on the $225 million they're going to be paying each year from 2025 to 2028.
We need protections in law, but I can't say I'm a big fan of KOSA. It not only fails to address the problem for anyone other than children, but it enables a lot of harm. Censorship isn't the solution. Ending the buying and selling of personal data, outlawing ads that are targeted to individuals as opposed to targeting content/context, and requiring companies to apply the same policies and prices for all of their customers no matter who that customer is, or how much money that customer has in the bank would be a better approach.
Sometimes censorship is the solution, because humans are tricky and every human comes with their own baggage. Not always of course, but there is always nuance.
Revenue isn’t profit. This is gradeschool finance. Meta’s net income in 2023 was $39 billion. $1.4 billion is 3.5% of worldwide income for one US state. It’s an unsustainable penalty for Meta if more states and jurisdictions issue similar penalties.
No, and I didn't say that it was. Reported revenue was just the data Meta has made available; unless I've missed it somewhere, they don't explicitly state exactly how much profit they made last year. I think it's reasonable to assume it was several times more than the $1.4 billion fine though, which is really the point. If Meta/Facebook makes even just tens of billions in profit, $1.4 billion could easily be a sustainable penalty. The more years they are hit with a fine that size, and the more other states start demanding their cut of the action too, the less sustainable it becomes, probably. But for all we know, paying this $1.4 billion fine (over several years) to Texas could actually end up being profitable for Meta.
How much money did they make off the data they've been collecting and abusing since 2011? How much money will they make in the future from what they learned by abusing that facial recognition data for nearly 15 years? If it ever amounts to more than the fine, or if other incentives make it justifiable to shareholders then Meta is better off for having broken the law.
> Unless I've missed it somewhere, they don't explicitly state exactly how much profit they made last year.
They state this in their financial reports and it is readily available on financial news websites. I’m not sure how you found revenue without also finding net income (aka profit).
Type “meta profit” into a search engine and click the first result. This immediately gave me the answer in Bing, DuckDuckGo, Google, Kagi, and Yahoo.
You mentioned Kagi, so I thought I'd try asking kagi's AI search: "how much did Meta make in profits last year"
That gave me:
According to the available information:
In 2023, Meta Platforms reported annual revenue of $134.902 billion, which was a 15.69% increase from 2022.[1][2] However, the information does not explicitly state Meta's profit for 2023.
The closest relevant information is that in 2022, Meta's total operating profit declined from $46.8 billion in 2021 to $28.9 billion.[3] Additionally, in Q4 2023, Meta reported revenue of $40.11 billion, which was a 25% year-over-year increase.[4]
So while we don't have the exact profit figure for 2023, the available data suggests Meta's profits were likely substantial, though potentially lower than the previous year's $46.8 billion.[3]
The first result for me is https://investor.fb.com/investor-news/press-release-details/... on all of those engines. It’s really easy to find. Any search engine should show you this in the first handful of results. Or just look it up on Yahoo finance. This is really basic stuff.
I should have skipped the search engines altogether and just pulled up the Wikipedia article for Meta. They have it listed right at the side of the page.
Have you considered the Occam's razor possibility that Europe genuinely doesn't want these companies doing business the way they do, rather than it being a conspiracy to increase government revenue? Remember, it's often illegal to take a photograph in public in Germany, and for this reason Google Street View imagery there is over a decade old.
I think if the people of a country have a standard for how companies should act, that's fair game. If they don't want companies to do that stuff, then the companies leaving or crying isn't a bad thing; it's accomplishing the goal.
If I understood correctly, you think people genuinely wanting privacy, and preferring to have more privacy and no Facebook rather than less privacy and more Facebook, is inherently worse than actual corruption. This says more about you than about the EU.
This has nothing to do with Facebook or a specific entity. This is about accessing and capturing already available public data (e.g. someone's appearance in public).
Just like ad hominem framing says more about you than me.
If the products the tech giants withhold are harming users' right to privacy, then isn't it beneficial for that to happen?
You can think of Texas as leading the way in regulating tech.
I wonder if this, overall, is a good thing or at least, given the political climate, can be seen as positive. There's probably a name for this type of regulatory weaponization against companies that don't fit your jurisdiction's political agenda (is it the inverse of regulatory capture?). Hard to write that last sentence and connect it to being a "good thing," but if the likes of Texas proactively regulate big tech, and then the likes of California proactively regulate, for instance, the fossil fuel industry (which I'm guessing Texans would say CA has been doing since the beginning of time) then does the broader population benefit overall because all corporations are somewhat being held in check?
Short of there being some objectively true regulatory regime at the federal level (think that's impossible) then this maybe is the only way to have productive regulation.
I'm not selling myself on any of this as I write it, but thought I'd throw it out there.
The EU's abject failure to develop a tech industry, and its seemingly single-minded drive to prevent one from forming, isn't really equivalent to a couple of enforcement actions in Texas.
ASML is effectively US tech; there is a reason why important ASML decisions, such as who they are allowed to sell to, are directly controlled by the United States Congress.
Hell, ASML isn't even allowed to stop selling to the USA, which is kind of funny, especially since they've been threatening to just leave the Netherlands and move to the USA.
It's extremely funny watching big tech companies relocate to Texas because of "deregulation" (or in Elon Musk's case, because he's a big whiny baby), only to learn that red states still love regulation and hate big tech more than blue states.
Elon learned this lesson the hard way moving Tesla's HQ from CA to TX, and then back to CA. Now he's doing it again with SpaceX.
Texas is the absolute last place I would headquarter a business if I cared about its future.
Do you have a source for Tesla moving its HQ back to CA? I can't seem to find that online. Maybe you mean Tesla engineering? I'm admittedly not sure what that means vs Tesla overall, but it sounds like only the engineering moved back, not the overall company. Not trying to nitpick; generally want to know if the actual HQ of the company moved back.
> Elon learned this lesson the hard way moving Tesla's HQ from CA to TX, and then back to CA
It is probably unwise to make shit up and state it as a fact online. There is no evidence of Tesla moving back to California nor any reason to expect it.
I have not seen any indication SpaceX or Tesla plan to relocate back to California. On the contrary they only recently discussed moving SpaceX to Texas. Is there new news I missed?
He moved Tesla Engineering from CA to TX to CA, first citing that California over-regulates, and later citing that moving to Texas was a mistake due to the power grid issues.
Now he's moving SpaceX from CA to TX, citing that California wouldn't hand over tons of healthcare data on transgender people to Texas authorities. This has nothing to do with SpaceX's actual business (just culture war nonsense), and the power grid problem isn't resolved in the slightest (it will probably get worse). So I assume he'll retract that decision at some point and move back to CA, where the electricity he needs to run his business is delivered with greater reliability than on the privatized, mismanaged Texas grid, which deliberately refuses to peer with other states' grids (leaving it no failover options outside of Texas).
Or maybe because the public wants this? I sure as hell want facial recognition made more difficult. I can’t change my face so I would prefer it not be used as a key in any database
Governmental cash grab; the actual users whose faces were subject to recognition won't see a cent, assuming anything is actually paid. Those not in Texas will not benefit indirectly in any way.