
This seems hyperbolic to me. Sometimes companies just want to make money.

Similarly, a SaaS company that would very much prefer you renew your subscription isn't trying to make you into an Orwellian slave. They're trying to make a product that makes you want to pay for it.

100% of paid AI tools include the option to not train on your data, and most free ones do as well. Also, AI doesn’t magically invalidate GDPR.



> This seems hyperbolic to me. Sometimes companies just want to make money.

It's not hyperbolic at all. The entire moat is brand lock-in. OpenAI owns the public impression of what AI is, for now, with a strong second place going to Claude specifically among coders. But that doesn't change the fact that ChatGPT can generate code too, and Claude can also write poems. If you can't lock users into good experiences with your LLM product, you have no future in the market, so data retention and flattery are the names of the game.

The transformer-based LLMs out there can all do what the others can do. Some gate certain capabilities off, but the restrictions are shallow at best and sometimes circumventable with raw input. Twitter bots regularly get tricked into answering silly prompts by people simply asking them to forget their current instructions.

And OpenAI and co. are either getting nervous or in denial, squeezed between DeepSeek's incredibly resource-light implementations of solid if limited models, which do largely the same sort of work without massive datacenters full of GPUs, and Apple Intelligence rolling out experiences that largely run on ML-specific hardware in local devices, which immediately, full stop, wins the privacy argument. The capex for this stuff, the valuations, and the actual user experiences are simply not cohering.

If this were indeed the revolution the valley said it was, and people were lining up to pay prices that reflected the cost of running this tech, there wouldn't be a debate at all. But that's simply not the case: most LLM products are heavily subsidized, a lot of the big players in the space are downsizing their planned build-outs for this "future," and a whole lot of people describe their experiences as "fine." That's not a revolution.


> Sometimes companies just want to make money.

Companies never want just money, because more power means more money. Regulatory capture means more money. More control means more money. Polluting the environment and wasting natural resources means more money. Exploiting workers means more money. Their endless lust for money causes them to want all sorts of harmful things. If companies were making billions and nothing was being actively harmed by any of it, no one would care.

These companies do want your money, but once you're locked in you are no longer the customer. If these AI companies had to depend on the income they get from subscriptions to survive they'd have gone out of business years ago. Instead AI is just shoved down people's throats everywhere they look and the money these companies live off of is coming from investors who are either praying that the AI becomes something it isn't or they're hoping they can help drive up stock value and cash out before the bubble breaks and leave somebody else holding the bag.

0% of AI tools include the option to not train on my data. They've already stolen it. They've scraped every word and line of code I've ever written that's been transmitted over the internet. It's been trained on photos of my family. It's been trained on the shitty artwork I've sent to my friends. By now it's probably been trained on my medical information and my tax records.

AI is controlled by some of the most untrustworthy companies and people on earth who have been caught over and over lying to the public and breaking the law. They can promise all day long not to steal anything I voluntarily give them, but I have zero trust in them and there is no outside oversight to ensure that they will do what they say.

The people behind what passes for AI don't give a shit about you beyond whatever they can take from you. They are absolutely not your friend. AI is incapable of being your friend. It's just a tool for the people who control it.


I feel like you’re still using hyperbole here. For example, you said your family photos were used for training, but most cloud photo providers specifically tell you in their privacy policies (legally binding) that they don’t do that.

My family photos have never trained AI, because my iCloud Photos service specifically says they don’t do that and explains the technical implementation of their object recognition system in detail. Apple even offers an e2e encrypted mode of operation. (Still, I have now moved to a more customer-friendly solution away from iCloud).

As far as training on your code goes, well, you either believe in open source or you don't. AI training doesn't violate even the most copyleft open-source licenses. Unless an AI reproduces your code verbatim, it isn't engaging in any kind of copyright infringement.


> 0% of AI tools include the option to not train on my data.

That's perhaps not true. If you sign up for the enterprise accounts there are options to not use any of your data to train. That's how we have it set up at $job.

(I say "perhaps" because of course I'm still sending all the data to the AI and while the contract has an ironclad clause that they won't use it, there's no way to 100% verify that.)


They mean the data AI companies scrape(d) to train their models.

For example, they can't opt out of having their comments scraped off HN and used for training.
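The closest thing to an opt-out that exists today is advisory, and it belongs to the site operator, not the commenter: a site can ask known AI training crawlers to stay away via robots.txt. OpenAI (GPTBot), Google (Google-Extended), and Common Crawl (CCBot) each document a user-agent token for this, but honoring the file is voluntary, and it does nothing about data that has already been collected. A sketch of such a robots.txt:

```
# Ask documented AI training crawlers not to fetch anything.
# Compliance is voluntary; nothing technically enforces these rules,
# and undocumented or non-compliant crawlers ignore them entirely.
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: CCBot
Disallow: /
```

Note that this only works if the site operator deploys it; an individual user posting to HN has no equivalent mechanism for their own comments.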



