
I'd like to offer a cautionary tale that involves my experience after seeing this post.

First, I tried enabling o3 via OpenRouter since I have credits with them already. I was met with the following:

"OpenAI requires bringing your own API key to use o3 over the API. Set up here: https://openrouter.ai/settings/integrations"

So I decided I would buy some API credits with my OpenAI account. I ponied up $20 and started Aider with my new API key set and o3 as the model. I got the following after sending a request:

"litellm.NotFoundError: OpenAIException - Your organization must be verified to use the model `o3`. Please go to: https://platform.openai.com/settings/organization/general and click on Verify Organization. If you just verified, it can take up to 15 minutes for access to propagate."

At that point, the frustration was beginning to creep in. I returned to OpenAI and clicked on "Verify Organization". It turns out, "Verify Organization" actually means "Verify Personal Identity With Third Party" because I was given the following:

"To verify this organization, you’ll need to complete an identity check using our partner Persona."

Sigh. I clicked "Start ID Check" and it opened a new tab for their "partner" Persona. The initial fine print says:

"By filling the checkbox below, you consent to Persona, OpenAI’s vendor, collecting, using, and utilizing its service providers to process your biometric information to verify your identity, identify fraud, and conduct quality assurance for Persona’s platform in accordance with its Privacy Policy and OpenAI’s privacy policy. Your biometric information will be stored for no more than 1 year."

OK, so now, we've gone from "I guess I'll give OpenAI a few bucks for API access" to "I need to verify my organization" to "There's no way in hell I'm agreeing to provide biometric data to a 3rd party I've never heard of that's a 'partner' of the largest AI company and Worldcoin founder. How do I get my $20 back?"


I actually contacted the California AG to get a refund from another AI company after they failed to refund me.

The AG office followed up and I got my refund. Worth my time to file, because we should stop letting companies get away with this stuff where they show up with more requirements after you've already paid.

Separately, they do not need my phone number when they already have my name, address, and credit card.

Has anyone got info on why they are taking everyone’s phone number?


(having no insider info:) Because it can be used as a primary key ID across aggregated marketing databases including your voting history / party affiliation, income levels, personality and risk profiles etc etc etc. If a company wants to, and your data hygiene hasn't been tip top, your phone number is a pointer to a ton of intimate if not confidential data. Twitter was fined $150 million for asking for phone numbers under pretense of "protecting your account" or whatever but they actually used it for ad targeting.

>> Wednesday's 9th Circuit decision grew out of revelations that between 2013 and 2019, X mistakenly incorporated users' email addresses and phone numbers into an ad platform that allows companies to use their own marketing lists to target ads on the social platform.

>> In 2022, the Federal Trade Commission fined X $150 million over the privacy gaffe.

>> That same year, Washington resident Glen Morgan brought a class-action complaint against the company. He alleged that the ad-targeting glitch violated a Washington law prohibiting anyone from using “fraudulent, deceptive, or false means” to obtain telephone records of state residents.

>> X urged Dimke to dismiss Morgan's complaint for several reasons. Among other arguments, the company argued merely obtaining a user's phone number from him or her doesn't violate the state pretexting law, which refers to telephone “records.”

>> “If the legislature meant for 'telephone record' to include something as basic as the user’s own number, it surely would have said as much,” X argued in a written motion.

https://www.mediapost.com/publications/article/405501/None


Tangential: please do not use a phone number as a PK. Aside from the nightmare of normalizing them, there is zero guarantee that someone will keep the same number.

Also fun fact, people mistype and mistranscribe data. Some people even... put down fake phone numbers because they don't want to be tracked!

I would think in a world where we constantly get spam calls and texts that people would understand that a phone number is not a good PK. I mean, we literally don't answer calls from unknown numbers because of this. How is it that we can only look at these things in one direction but not the other?
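To make the normalization point concrete, here's a quick sketch using the `phonenumbers` library (the Python port of Google's libphonenumber), with reserved 555 example numbers:

    # Sketch: one number, three user spellings. Normalizing to E.164 with
    # the `phonenumbers` library (pip install phonenumbers).
    import phonenumbers

    for raw in ["(415) 555-2671", "415.555.2671", "+1 415 555 2671"]:
        parsed = phonenumbers.parse(raw, "US")
        print(phonenumbers.format_number(parsed, phonenumbers.PhoneNumberFormat.E164))
    # Prints "+14155552671" three times. Key on the raw strings and one
    # person becomes three rows; and even the normalized number can be
    # reassigned to a stranger later.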


Even better: phone numbers are redistributed after some time of inactivity.

OpenAI doesn't (currently) sell ads. I really cannot see a world where they'd want to sell ads to their API users only. It's not like you need a phone number to use ChatGPT.

To me the obvious example is fraud/abuse protection.


You're thinking ads are to advertise products. Ads are to modify behavior to make you more likely to buy products.

ChatGPT has the capacity to modify behavior more subtly than any advertising ever devised. Aggregating knowledge on the person on the other end of the line is key in knowing how to nudge them toward the target behavior. (Note this target behavior may be how to vote in an election, or how to feel about various hot topics.)


> Aggregating knowledge on the person on the other end of the line is key in knowing how to nudge them toward the target behavior.

It also, as Google learned, enables you to increase your revenue per placement. Advertisers will pay more for placement with their desired audience.


> It’s not like you need a phone number to use ChatGPT.

I’m pretty sure you do. Claude too. The only chatbot company I’ve made an account with is Mistral specifically because a phone number was not a registration requirement.


They also require it now.

> It’s not like you need a phone number to use ChatGPT.

When I signed up I had to do exactly that.


The fact they don't sell ads doesn't mean they are not in the business of selling users' data to third parties.

Also, Netflix wasn't initially selling ads either. After drastic price increases over the last few years, the ad-supported subscription is probably their #1 plan, because most people aren't willing to shell out 15 to 25 USD/EUR every month to watch content that is already littered with ads.


If you sell ads, you're actually incentivised not to sell data, because then your competitors would be able to reach your users without paying you.

You're incentivized not to sell targeting data, but you're very incentivized to collect and maintain as much of it as you can, and then offer access using it as a service.

So, at the end of the day, company X has an overdetailed profile of you, rather than each advertiser. (And also, at least in the US, it can repackage and sell that data into various products if it chooses.)


  > To me the obvious example is fraud/abuse protection.
Phones are notorious for spam...

Seriously. How can the most prolific means of spam be used to prevent fraud and abuse? (Okay, maybe email is a little more prolific?) Like have you never received a spam call or text? Obviously fraudsters and abusers know how to exploit those systems... it can't be more obvious...


It costs money to get a phone number. It’s about friction, not elimination.

What would you do instead?


I'm saying it clearly isn't enough friction, and it's not worth the privacy cost. And let's be real: those numbers are then commonly sold to those same spammers, even if indirectly.

You are also forgetting that it is easy to mask, obscure, and hijack numbers. So it doesn't necessarily cost money per number; many times they can use their own.

There isn't a universal solution, which is the main problem here. Sometimes numbers make sense, most of the time not.


Ok so you don’t like phone numbers. I get it.

But you’re OpenAI. You need to do _something_. What do you do?


What are they trying to prevent again? Requiring a phone number is one kind of friction for free services like twitter, but this is a service where a user registers a credit card and authorizes charges, they have the legal name of the person paying for a service, what's the phone number for? It's not like OpenAI gives me their phone number so I can call when I'm having an issue.

lol dude, they already have my credit card. Look back at the OP. You're arguing that a phone number costs money and so pushes people out. You know what else costs money?...

Stop doing things just because others do them. You'll never find a better way if you're always following, if you just accept things as they are, if you never push back. Let's be real, the number isn't about identity verification. They have my name and credit card. Unless by "verification" you mean cross-matching me with other databases with the intent to sell that information.

You keep pestering me but you won't ask why they need that data. Like you just accept things at face value?


They don't need to. It's totally sufficient that they can correlate your chat history with your identity. That makes other identifiers more valuable, if they can extract your interests.

The typical use case of an API is not that you personally use it. I have hundreds of clients all going through my API key, and in most cases they themselves are companies who have n clients.

It’s a good conspiracy theory, but of course it’s scoped to only ChatGPT users who are also developers and using specifically the o3 model via API. So if it is a conspiracy, it’s a fairly non-ambitious one.

They may not sell ads.

They may still buy data from ad companies and store credit cards, etc.

Many of them link users based on phone number.


But to do what with api users? Most api users won’t be individuals…

I bet there's way more individuals than companies that use the API

Obvious goal is to know the identity of users.

Thank you for this comment… a relative of mine spent a ton of money on an AI product that never came, a license he cannot use. I told him to contact his state's AG just in case.

Source: have dealt with fraud at scale before.

Phone number is the only way to reliably stop MOST abuse on a freemium product that doesn't require payment/identity verification upfront. You can easily block VOIP numbers and ensure the person connected to this number is paying for an actual phone plan, which cuts down dramatically on bogus accounts.

Hence why even Facebook requires a unique, non-VOIP phone number to create an account these days.

I'm sure this comment will get downvoted in favor of some other conspiratorial "because they're going to secretly sell my data!" tinfoil post (this is HN of course). But my explanation is the actual reason.

I would love if I could just use email to signup for free accounts everywhere still, but it's just too easily gamed at scale.


On the flip side it makes a company seem sparklingly inept when they use VOIP as a method to filter valid users. I haven’t done business with companies like Netflix or Uber because I don’t feel like paying AT&T a cut for identity verification. There are plenty of other methods like digital licenses which are both more secure and with better privacy protections.

I wish we could all agree on a better way of auth -- but unfortunately this is all we have. Asking normal people to do anything outside of phone number or email (or "login with [other account based on phone number or email]") for auth is basically impossible.

Maybe they should look into a non-freemium business model. But that won't happen because they want to have as much personal data as possible.

- Parent talks about a paid product. If they want to burn tokens, they are going to pay for it.

- Those phone requirements do not stop professional abusers, organized crime, or state-sponsored groups. Case in point: Twitter is overrun by bots, scammers, and foreign info-ops swarms.

- Phone requirements might hinder non-professional abusers at best, but we are sidestepping the question of whether those corporations deserve so much trust that they can compel regular users to sell themselves. Maybe the business model just sucks.


I don't like requiring phone numbers either, but saying OpenAI shouldn't do a freemium model for the hottest tech product of this century (AI) is a fundamental misunderstanding of how humans and the world work.

Also, if they don't do freemium they're getting way more valuable information about you than just a phone number.


What part of this thread relates to freemium? Use of the API requires tokens that are paid for. General use of the AI via the web interface does not require a phone number.

Only requiring the phone number for API users feels needlessly invasive and is not explained by a vague "countering fraud and abuse" for a paid product...


The discussion wasn't about freemium products though. Someone mentioned that they paid 20 bucks for OpenAI's API already and then they were asked for more verification.

Personally I found that rejecting disposable/temporary emails and flagging requests behind VPNs filtered out 99% of abuse on my sites.

No need to ask for a phone or card -- or worse, biometric data! -- which also removes friction.
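If anyone wants the flavor of it, the email half is a few lines. A minimal sketch (the domains below are just examples; in practice you'd load a maintained blocklist and refresh it periodically):

    # Minimal sketch of the idea: reject signups from disposable-email
    # domains. DISPOSABLE_DOMAINS is illustrative, not a real list.
    DISPOSABLE_DOMAINS = {"mailinator.com", "guerrillamail.com", "sharklasers.com"}

    def is_disposable(email: str) -> bool:
        domain = email.rsplit("@", 1)[-1].strip().lower()
        return domain in DISPOSABLE_DOMAINS

    assert is_disposable("bot123@mailinator.com")
    assert not is_disposable("alice@example.org")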


> I'm sure this comment will get downvoted in favor of some other conspiratorial "because they're going to secretly sell my data!" tinfoil post (this is HN of course). But my explanation is the actual reason.

Your explanation is inconsistent with the link in these comments showing Twitter getting fined for doing the opposite.

> Hence why even Facebook requires a unique, non-VOIP phone number to create an account these days.

Facebook is the company most known for disingenuous tracking schemes. They just got caught with their app running a service on localhost to provide tracking IDs to random shady third party websites.

> You can easily block VOIP numbers and ensure the person connected to this number is paying for an actual phone plan, which cuts down dramatically on bogus accounts.

There isn't any such thing as a "VOIP number"; all phone numbers are phone numbers. There are only some profiteers claiming they can tell you that in exchange for money. Between MVNOs, small carriers, forwarding services, number portability, data inaccuracy, and foreign users, those databases are practically random number generators with massive false positive rates.

Meanwhile, major carriers are more than happy to hand phone numbers in their ranges to spammers in bulk, to the point that this now acts as a profit center for the spammers and lets them expand their operations: they can get a large batch of numbers that those services claim aren't "VOIP numbers", use them to spam the services they want to spam, and then sell cheap or ad-supported SMS service at a profit to other spammers, or to privacy-conscious people who want to sign up for a service they haven't used that number at yet.


Doesn't Sam Altman own a cryptocurrency company [1] that specifically collects biometric data to identify people?

Seems familiar…

[1] https://www.forbes.com/advisor/investing/cryptocurrency/what...


GP did mention this :)

> I've never heard of that's a 'partner' of the largest AI company and Worldcoin founder


The core tech and premise doesn't collect biometric data; biometric data is collected for training purposes, with consent and compensation. There is endless misinformation (willful and ignorant) around Worldcoin, but it is not, at its core, a biometric collection company.

Collecting biometrics for training purposes is still collecting biometrics.

The original claim was "it collects biometrics to identify people", and that's just factually wrong. Worldcoin in general is not about identification; in fact, it's specifically designed to not identify people. Its only purpose is to verify "does this private key have an association to any hash that was created after we scanned a unique set of human retinas". It can't even tell you which retinas it's associated with - the data simply doesn't exist.

I also am using OpenRouter because OpenAI isn't a great fit for me. I also stopped using OpenAI because they expire your API credits even if you don't use them. Yeah, it's only $10, but I'm not spending another dime with them.

Hi - I'm the COO of OpenRouter. In practice we don't expire the credits, but we have to reserve the right to, or else we have an uncapped liability literally forever. Can't operate that way :) Everyone who issues credits on a platform has to have some way of expiring them. It's not a profit center for us, or part of our P&L; just a protection we have to have.

If you're worried about the unlimited liability, how about you refund the credits instead of expiring them?

From my experience with billing systems it is usually not possible to refund a transaction after 6 or 12 months.

It's even possible with some of them, but even in that case they're usually not "refunding" so much as "making a new transaction for the same amount the other way", which looks the same on the surface until reversals, voids, or rejections happen and it all becomes a mess.

Seems like a weird question to ask OpenRouter Inc, a for-profit company.

Really a shame OpenAI left their non-profit (and open) roots, could have been something different but nope, the machine ate them whole.


Why is it a bad thing to ask for a company to do right by their paid customers? This type of policy absolutely causes the company to lose more business in the future because it shows customers that they don't care about customers.

I had never heard of OpenRouter prior to this thread, but now I will never use them, and I'll advocate that they never be used either.


A fair refund policy is not in conflict with a company being for-profit. I (and it seems many others) would be much less inclined to buy credits from a company that will expire them if I don't use it, and more inclined to buy credits from a service that will refund them if I end up not using it. Once I've bought them I'm more likely to use them. And in addition to reducing that purchasing friction and gaining market share, they can get the time-value of the money between when I bought the credits and when they eventually refund them.

Enlightened self-interest is when you realize that you win by being good to your customers, instead of treating customer service like a zero-sum game.


Pretty sure OP was talking about OpenAI expiring their credits (just had mine expire).

Btw, unsurprisingly, the time for expiry appears to be in UTC in case anyone else is in the situation of trying to spend down their credits before they disappear.


Out of curiosity, what makes you different from a retailer or restaurant that has the same problem?

Why only 365 days? Would be way fairer and still ok for you (if it's such a big issue) to expire them after 5 years.

I wonder if they do this everywhere, in certain jurisdictions this is illegal.

Then you shouldn't use OpenRouter. From the ToS: "4.2 Credit Expiration; Auto Recharge. OpenRouter reserves the right to expire unused credits three hundred sixty-five (365) days after purchase."

That is so sleazy.

After how long do they expire?

IIRC, 1 year

Same for anthropic

I suspect their data collection might not be legal in the EU.

https://withpersona.com/legal/privacy-policy

To me it looks like an extremely aggressive data pump.


There are stories about e.g. Hetzner requiring all sorts of data from people who want to open/verify accounts, so perhaps not. Might just be an anti-money-laundering thing. Especially if the credit card company ends up refunding everything.

Hi there, During our KYC process, we do sometimes ask customers to submit additional information, including IDs, so that we can verify their identity. However, we follow the General Data Protection Regulation in the EU, amongst other regulations. So we only keep that data for the account verification process. We also have a data protection officer and team who can answer questions potential customers have about data protection measures that we take. Their contact information is here: https://www.hetzner.com/legal/privacy-policy/ --Katie, Hetzner Online

What stories? Can you back up that claim with some sources please?

Infomaniak did request my personal ID or passport for creating a company account. I'm not going to provide any extra source for you; this is what happened to me. You can either believe it or not.


That's not a source, and it is not my responsibility to back up the claims you made. That is yours. If you don't have any sources, and admit to just saying things that are not provable, I can also live with that.

Hetzner is famously notorious for this, just not enough for publications to pick it up. So by your definition, YEARS of people talking about their experiences with this is nothing?

What's the source on OpenAI doing the same? How is it any more legitimate in any way?

Which would kind of make the entire "discussion" moot and pointless.


As someone not in the US, I do a straight nope out whenever I see a Persona request. I advise everyone else to do the same. Afaik, it's used by LinkedIn and Doordash too.

Oh, I also recently got locked out of my LinkedIn account until I supply data to Persona.

(So I'm remaining locked out of my LinkedIn account.)


> How do I get my $20 back?

Contact support and ask for a refund. Failing that, a chargeback.


KYC requirement + OpenAI preserving all logs in the same week?

OpenAI introduced this with the public availability of o3, so no.

It's also the only LLM provider which has this.

What OpenAI has that the others don't is SamA's insatiable thirst for everyone's biometric data.


I think KYC has already been beaten by AI agents: according to RepliBench [0], obtaining compute requires KYC, and passing it shows a high success rate in their graphic.

[0] https://www.aisi.gov.uk/work/replibench-measuring-autonomous...


KYC has been around for a few months, I believe. Ever since they released some of the additional thought logs, you had to be verified to see them.

Meanwhile the FSB and Mossad happily generate fake identities on demand.

The whole point of identity verification is for the same Mossad to gather your complete profile and everything else they can from OpenAI.

Since Mossad and the CIA are essentially one organization, they already do it, 100%.


You are even lucky to be able to verify. Mine has given me a "Session expired" error for months!! Support does not reply.

I was more excited by the process, like, there exists a model out there so powerful it requires KYC

which, after using it, fair! It found a zero day


I think they're probably more concerned about fake accounts and people finding ways to get free stuff.

China is training their AI models using ChatGPT. They want to stop or slow that down.

Why? It seems counterproductive given OpenAI's mission statement: "We are building safe and beneficial AGI, but will also consider our mission fulfilled if our work aids others to achieve this outcome."

Because OpenAI's actual mission statement is "money, and lots of it".

Yeah. I think they need money to deliver AGI

I actually think they’re worried about foreign actors using it for…

- generating synthetic data to train their own models

- hacking and exploitation research

etc


What free stuff? It requires a paid API.

With no intention to tarnish your pure world view: paid services with low registration requirements are ideal for account laundering and subscription fraud with stolen credit cards.

> which, after using it, fair! It found a zero day

Source?


Recently, Sean Heelan wrote a post "How I used o3 to find CVE-2025-37899, a remote zeroday vulnerability in the Linux kernel’s SMB implementation". It might be what they are referring to.

Link: https://sean.heelan.io/2025/05/22/how-i-used-o3-to-find-cve-...


Yep, that’s the one!



Bit strange to see such new accounts already priming everyone to think this is acceptable and normal, which it isn't; OA started doing this months ago and still not a single other provider has it, despite offering just as powerful models. It's not like Google and Anthropic have launched their own Worldcoin either.

Especially, what does it have to do with the claim that "these models are getting more powerful"?

Basically the earmark of a narrative OpenAI loves to push covertly, and often overtly. I wouldn't be shocked if these are bots.

There is an ISO standard for digital ID, the same one used by the TSA, and it’s coming to the web soon! Human verification is going to continue to grow in importance.

which standard is that?


I was excited about trying o3 for my apps, but I'm not doing this validation. Thanks for the heads up.

> OK, so now, we've gone from "I guess I'll give OpenAI a few bucks for API access" to "I need to verify my organization" to "There's no way in hell I'm agreeing to provide biometric data to a 3rd party I've never heard of that's a 'partner' of the largest AI company and Worldcoin founder. How do I get my $20 back?"

This should be illegal. How many are going to do the same as you, but then think that the effort/time/hassle they would waste to try to get their money back would not be worth it? At which point you've effectively donated money to a corp that implements anti-consumer anti-patterns.


Yeah, same. I am a paying API customer but I am not doing biometric KYC to talk to a bot.

This is in part "abuse prevention"[1] and in part marketing. Making customers feel like they're signing up to access state secrets makes the models seem more "special". Sama is well known to use these SV marketing tricks, like invite-only access, waiting lists, etc to psychologically manipulate users into thinking they're begging for entry to an exclusive club instead of just swiping a credit card to access an API.

Google tried this with Google Plus and Google Wave, failed spectacularly, and have ironically stopped with this idiotic "marketing by blocking potential users". I can access Gemini Pro 2.5 without providing a blood sample or signing parchment in triplicate.

[1] Not really though, because a significant percentage of OpenAI's revenue is from spammers and bulk generation of SEO-optimised garbage. Those are valued customers!


Gemini doesn't give you reasoning via API though, at least as far as I'm aware.

If by reasoning you mean showing CoT, Gemini and OA are the same in this regard - neither provides it, not through the UI nor through the API. The "summaries" both provide have zero value and should be treated as non-existent.

Anthropic exposes reasoning, which has become a big reason to use them for reasoning tasks over the other two despite their pricing. Rather ironic when the other two have been pushing reasoning much harder.
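A rough sketch of what reading that looks like with Anthropic's Python SDK (the model name and token budget below are just illustrative):

    # Sketch: request extended thinking, then read the thinking blocks
    # back out of the response. Assumes ANTHROPIC_API_KEY is set.
    import anthropic

    client = anthropic.Anthropic()
    resp = client.messages.create(
        model="claude-sonnet-4-20250514",  # example model name
        max_tokens=2048,                   # must exceed the thinking budget
        thinking={"type": "enabled", "budget_tokens": 1024},
        messages=[{"role": "user", "content": "Why is the sky blue?"}],
    )
    for block in resp.content:
        if block.type == "thinking":
            print("REASONING:", block.thinking)
        elif block.type == "text":
            print("ANSWER:", block.text)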


Google exposes their reasoning. You can use their new gemini python sdk to get thought traces.

Google does not expose their reasoning any more. They give "thought summaries" which provide effectively zero value. [1][2]

[1] https://ai.google.dev/gemini-api/docs/thinking#summaries [2] https://discuss.ai.google.dev/t/massive-regression-detailed-...


Works for me?

Maybe you’re thinking of deep research mode which is web UI only for now.


HN Don’t Hate Marketing Challenge

Difficulty: Impossible


This feels eerily similar to a post I've read within the last month. Either I'm having deja vu, it's a coincidence that the same exact story is mentioned, or there's something else going on.

What should be going on? A regular Google search for "openai persona verify organization" shows withpersona.com in the second search result.

Yeah, OK, guess I misremembered it a bit, but I was curious too and found the previous one I thought of: https://news.ycombinator.com/item?id=43795406

Good eye! In this case, complaints are numerous. See a web search for:

openai persona verification site:community[.]openai[.]com

e.g. a thread with 36 posts beginning Apr 13:

"OpenAI Non-Announcement: Requiring identity card verification for access to new API models and capabilities"

But always good to be on look out for shenanigans :)


This is OpenAI’s fairly dystopian process, so the exact same thing happens to lots of people.

It's a concerted attempt to de-anonymise the internet. Corporate entities are jostling for position as id authorities.

This is just the process for OpenAI. It's the same process I went through as well.

this reminds me of how broadcom maintains the “free” tier of vmware.

Can you explain? Is it not actually free?

there are so many non-functional websites and signups required to get to the end of the rainbow that any sane person quits well before getting to any freely distributed software, if, in fact, there still is some.

With all this, plus the saving of all chats, they can't operate in the EU. But they do ...

Interesting, it works for me through OpenRouter, without a configured OpenAI integration. Although, I have an OpenAI account and did verification with OpenAI before. A conspiracy theory would say that they are exchanging PII, so OpenRouter knows who I am :)

I don't recall Persona being in the mix last year when I signed up. Interesting development.

I think it was a recent thing [1], but I thought they were only considering it

[1] https://techcrunch.com/2025/04/13/access-to-future-ai-models...


Thank you for reminding me why I’ll never touch these LLM services.

>ID Check

Just send them a random passport photo from the Internet, what's the deal? Probably they are just vibe-verifying the photo with an "Is it legit passport?" prompt anyway.


I think modern face verification has moved on, it's been video in all my encounters.

Still, no real human is involved; as they mention, their verification is automated and probabilistic - which is especially funny to hear in the context of verification. I'm pretty sure even a kid could get around it, e.g. by showing on the video a photo of a person holding their passport, which you can find online.

No. You have to turn your head, and stuff. Also, even if this worked, they allow only one verification per person per 90 days.

>one verification per person per 90 days.

This is absurd. How do they define "person"? On the internet I can be one person from one country this minute, and a different person from a different country the next.


No, that's not how it works.

So you're saying it's not just feeding your video to the AI model and blindly trusting its outcome? Any evidence of how it works, then?

You can't just hold up a 2d object (passport) and wave it about to try and trick it. There are heuristics at work.

To a regular camera, all objects are 2D; it is not equipped with tools to capture depth. What heuristics are you talking about? There is an ML model at work that tries to tell whether the object is legit, but it cannot have any real sense of what is in the image: it just relies on a statistically plausible outcome when fed pixels from your camera, which means you definitely can trick it.

You have to align your face in certain ways; a random video you found on the internet won't work.

If you don't match your face to the markers overlaid on the screen in a certain way, to gather heuristics about your eye distance, nose, etc., then it won't work. That's impossible to do with a 2D object you're holding. So yeah, it does matter.

There's a literal industry that's popped up to make face identification from your smartphone a thing. You might want to research and catch up on the way these systems work.


It requires video and an app. They are collecting facial biometrics.

App? So you cannot verify without a mobile phone?

You can’t sign up in the first place without a mobile phone number. They require a phone number and block VoIP services.

A real mobile phone number that can receive a code costs 10¢. Maybe a bit more, but certainly not more than a dollar.

Crazy, I already gave up registering for chatgpt because they asked for my phone number. I'm not giving that to any random startup.

Welcome to tech dystopia.

Hello, Human Resource. We have all your data; please upload your biometric identity, as well as your personal thoughts.

Building the next phase of a corporate totalitarian state, thank you for your cooperation.


Follow-up. I pasted the above into ChatGPT because I genuinely wanted to know how to get my 20 bucks back. That request is in and they say I'll get a response in 2-3 days. I then asked it "What are the potential risks of agreeing to their terms?" Here's the response:

Agreeing to Persona's terms, especially for biometric identity verification, involves both privacy and long-term data security risks. Here's a clear breakdown of the main risks you should be aware of:

1. Biometric Data Collection

Risk: Biometric identifiers (like facial recognition, voiceprints, etc.) are extremely sensitive and irreplaceable if compromised.

- What they collect: Persona may collect a selfie, video, and metadata, and extract biometric templates from those for facial comparison and liveness detection.

- If leaked or abused: Unlike passwords, you can't change your face. A future data breach or misuse could lead to permanent identity compromise.

2. Data Storage & Retention

Risk: Persona says biometric data is kept for up to one year, but:

- You're relying on their internal policies, not a legal guarantee.

- There's no technical detail on how securely it's stored or whether it's encrypted at rest.

- Worst-case scenario: Poorly secured biometric templates could be stolen, reused, or matched against other data sets by bad actors or governments.

3. Third-Party Sharing and Surveillance Risks

Risk: Your biometric and ID data may be shared with subprocessors (partners/vendors) that you haven't explicitly vetted.

- Persona may transfer your data to cloud providers (like AWS, GCP), verification specialists, or fraud prevention services.

- Depending on jurisdiction, data could be subject to subpoenas, surveillance laws, or government backdoors (especially in the U.S.).

4. Consent Ambiguity & Future Use

Risk: The fine print often includes vague consent for "quality assurance", "model improvement", or "fraud detection".

- This opens the door to retraining algorithms on your biometric data - even if anonymized, that's still a use of your body as data.

- Their privacy policy may evolve, and new uses of your data could be added later unless you opt out (which may not always be possible).

Should You Agree?

Only if:

- You absolutely need the service that requires this verification.

- You're aware of the privacy tradeoff and are okay with it.

- You trust that Persona and its partners won't misuse your biometric data - even a year down the line.

If you're uneasy about this, you're not alone. Many developers and privacy advocates refuse to verify with biometrics for non-critical services, and companies like OpenAI are increasingly facing criticism for requiring this.

Huh, I never considered using ChatGPT to read terms of serivce for me. Nice idea.

o3 is really powerful. I understand it, tbh. They don't want scammers and abusers easily accessing it.

Just like with human engineers, you need to start with a planning session. This involves a back-and-forth discussion to hammer out the details before writing any code. I start off as vague as possible to see if the LLM recommends anything I hadn't thought of, then get more detailed as I go. When I'm satisfied, I have it create 2 documents, initialprompt.txt and TODO.md. The initial prompt file includes a summary of the project along with instructions to read the TODO file and mark each step as complete after finishing it.

This ensures the LLM has a complete understanding of the overall goals, along with a step by step list of tasks to get there. It also allows me to quickly get the LLM back up to speed when I need to start a new conversation due to context limits.
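For example, the initialprompt.txt for a made-up project might boil down to something like this (project name and steps invented purely for illustration):

    You are working on weather-cli, a small command line tool that
    fetches and pretty-prints weather forecasts. (Hypothetical project.)

    Before writing any code:
    1. Read TODO.md in the repo root.
    2. Work through its tasks in order, one at a time.
    3. After finishing a task, mark it complete in TODO.md and
       summarize what changed.
    4. If a task is ambiguous, stop and ask before implementing.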


In essence, I need to schedule a meeting with the LLM and 'hammer out a game plan.' Gotta make sure we're 'in sync' and everybody's 'on the same page.'

Meeting-based programming. No wonder management loves it and thinks it should be the future.


LLMs are stealing the jobs of developers who go off half-cocked and spend three days writing 2000 lines of code implementing the wrong feature instead of attending a 30 minute meeting

That's dumb, of course, but sometimes people really just do the bare minimum to describe what they want and they can only think clearly once there's something in front of them. The 2000 lines there should be considered a POC, even at 2000 lines.

and the jobs of developers that want to schedule another breakout session to discuss the pros and cons of a 2-line change.

Yeah... I'm gonna need to circle back on that

My manager has been experimenting with having AI first write the specs as architecture decision records (ADRs), then explain how it would implement them, then slowly actually implement, with lots of breaks, review, and approval/feedback. He says it's been far superior to typical agent coding, but not perfect.

Meetings are how managers keep everyone else aligned with their goals.

> This ensures the LLM has a complete understanding of the overall goals

Forget about the overall goal. I have this simple instruction that I send on every request:

"stop after every failing unit test and discuss implementation with me before writing source code "

but it only does that about 7 times out of 10. Other times it just proceeds with implementation anyways.


I've found similar behaviour with stopping at linting errors. I wonder if my instructions are conflicting with the agent system prompt.

System prompts themselves have many contradictions. I remember hearing an Anthropic engineer (possibly Lex Fridman's interview with Amanda Askell) talking about using exaggerated language like "NEVER" just to steer Claude to rarely do something.

That doesn't work (at least not anymore).

So it behaves just like a person.

That's why we replaced people with machines: so we can have some predictability.

Keyword: Some

humans don't ignore an instruction 4 times out of 10 unless they have a reason to do it on purpose.

I congratulate you in that you only work with humans that never misunderstand, never forget a step in a long process they think they know by heart etc.

I guess you also think that we should get rid of checklists for pilots because they would never ignore an instruction they were clearly given during training except on purpose?


> I guess you also think that we should get rid of checklists for pilots because they would never ignore an instruction they were clearly given during training except on purpose?

Pilots ignore items in checklist 4 times out of 10? wtf


Sadly this just doesn't pan out in larger, more complex projects. It will write an implementation plan, not follow it, then lie and say it did.

What tool and/or model are you calling "it"?

I'm using Claude Code on a large legacy monstrosity, and don't have this problem. There are problems and my flow automatically has it reviewing its own work in phased implementations, but even in the worst situations it's easy to get back on track.


  > I have it create 2 documents, initialprompt.txt and TODO.md.
That is an amazing approach. Which (AI) tools are you using?

I anticipate the opposite. Humans will struggle to differentiate themselves from autonomous output. In so doing, their "voice" will shift more and more away from what we recognize now. That might simply mean more profanity, since the majority of LLMs have guardrails against it, or it could be something less recognizable.

That's pretty amusing, considering Trump spent decades as a NY Democrat.

Rules for thee, not for me.

Elon too, until 2022.

>crap shoot

Funny you say that:

"It is a major constituent of bile and can be found in the large intestine. It is named after Latin taurus (cognate to Ancient Greek ταῦρος, taûros) meaning bull or ox, as it was first isolated from ox bile in 1827 by German scientists Friedrich Tiedemann and Leopold Gmelin."

https://en.wikipedia.org/wiki/Taurine


Void is basically the same thing, but open source and better. It's easy to use with any provider API key, even LM Studio for local models. You can use it with free models available from OpenRouter to try it out, but the quality of output is just as dependent on the model as any other solution.

https://voideditor.com/

https://github.com/voideditor/void


Have you used it? Haven't heard about it, but tbh I can see how it could eventually outperform Cursor and/or Windsurf. As LLMs get better and more background tasks etc. arrive, I don't see a sustainable moat around IDEs generally (except switching cost, but if it is mostly VS Code...).

Saw you did below. What is your experience so far? Fast requests are great. Anything big lacking?

I was using roo code for a bit and it was cool to see how fast it was going compared to windsurf.


>What is your experience so far?

I cancelled my Cursor subscription and haven't used it since. I experimented with Aider for a bit, it's also pretty great. Their architect mode seems to be the way of the future. It allows you to use a pricier model for its reasoning abilities, and it directs a cheaper model to make the code changes.

That said, I keep going back to Void because Aider's system prompts have a tendency to introduce problems. If Void had Aider's architect mode, it would be perfect.


People like Windsurf and Cursor because they offer them a flatrate.

Yeah, for sure. That's why I tried Cursor for a month. But as soon as I ran out of fast requests it became unusable. It had me waiting several minutes for the first token. I didn't realize how bad the experience was, fast requests included, until I used Void. It makes Cursor fast requests seem slow, and I tend to need fewer of them. The differences being that Void requests go straight to the provider instead of their own server first, and their system prompts seem to keep the models more on task.

How much do you spend per month on Void, though? Your testimonial is great but incomplete without that information.

That's fair. It is not the cheaper option, unless you use free or low cost models from a provider like OpenRouter. Of course, that comes with a performance hit. What I like about it is the flexibility. I run Devstral Small locally for easier tasks and it's free except for electricity. On the other hand, you can use up $20 with Gemini Pro in a couple hours with large contexts. Grok 3 mini has been a good middle ground- more capable than Devstral and less expensive than Gemini Pro.

So I tend to start with Devstral unless I know it will struggle, and then move on to paid models as needed. When I do switch to a paid provider, it's noticeably better than the same provider on a Cursor paid plan, so even though it's more expensive, I feel like there's more value there.

I honestly don't have a proper monthly tally yet. I've used $30 in OpenRouter credits over a couple weeks, but a lot of that was experimenting to compare quality of output.


To point 4: "aren't these cables better?"

Any honest installer would give the honest answer: "No."


I have worked in hi-fi sales and something I learned very early on is if you talk someone out of something they want, then whatever you talked them into is on You. If a customer came in asking for my (considerable) expertise, that is very different and I would provide it. If someone asked me if this cable was better than that one, I would answer honestly in my expert opinion. But when someone is buying a $70,000 stereo and they want the ridiculous $900 speaker cables, I would sell them. And it made people happy. And it made me happy because the markup on Audioquest cables is generally around 67%.

The audiophile world is a lot like the wine world. Tell someone the bottle is more expensive and it does taste better. Serve a quality meal on a white table cloth in a dim restaurant with an attentive wait staff and it does taste better than if it's on a paper plate on the floor. That's how our brains work. Sound is a sense like any other and context matters a lot.

The shop I worked at was an Audioquest dealer. One day we did double blind A/B tests of cables like this alongside other quality cables that were maybe a tenth of the price. In our shop and under those circumstances both pros and customers alike couldn't tell the difference. We also did the same thing with the same results testing a $500 stereo amplifier against a $5,000 one.

The fascinating thing to me was when we knew which was which, everyone picked the pricier gear every single time. Even I, who organized and proctored these tests was able to be influenced this way. I could swear up and down the "better" gear sounded better when I knew which was which, but when someone else was proctoring the test and I was blind, then I could not.

I'm not defending Audioquest. They make good speaker cables, but the prices are outrageous. And the digital stuff is laughable. I couldn't write that marketing copy and sleep at night. But I do understand the market. In a way the marketing copy and the price is the value of the product. The mind is a funny thing.


> And it made people happy. And it made me happy because the markup on Audioquest cables is generally around 67%.

The fact that wealthy people don't resent the price of expensive things because they can afford expensive things is a fact it took me decades to wrap my head around.

Significant wealth still wants salespeople to solve their problems, however the price of things is not among those problems. They don't mind someone else making money because it has no bearing on what they can do for their children.

More ordinary economic statuses come with many experiences of not being able to afford really nice things either outright or because it would limit what we might do for our children.

---

The other piece of the psychology is the pleasure humans can find in exclusivity/tribalism/etc. Brand identity can form a significant part of this and some people want to wear a Monster Cable dozer cap.


If they are what the client wants, they are better because the client decides what has value.

Part of what wealth pays for is other people’s agreement with their opinions. Telling them they are foolish to believe what they want to believe is not a hill worth dying on.

That’s pretty much true in general. But you do you.


Are we Americans just going to sit around while our greatest ally starves 14,000 kids to death with the help of our tax dollars? Are any of you OK with this?


Apparently we’re going to bury our heads in the sand and prevent any discussion of the topic!


I take issue with the qualifier "these days". On day one, it was mostly fake accounts set up by the founders.

https://m.economictimes.com/magazines/panache/reddit-faked-i...


Pre-2023, it took real human effort to make shit up, there was much less incentive for the amount of effort, and you could more easily guess what was made up by judging whether a human would bother making it up. These days it's literally anything, all the time, zero effort. You're right that there's always been fake shit, but more than half the posts on /r/all these days are misleading, wrong, or just fake.


This seems to imply that the only thing we import from China is junk. That hasn't been the case for decades. Beyond the junk we have pretty much the entire consumer electronic market, and beyond that the equipment running the infrastructure required for many of those electronics to operate. Beyond that, we have equipment for communication and navigation networks for government and first responders, and the countless components required for their vehicles or an effective response to crisis. Then we have the vast variety of equipment required for modern farming, each piece containing countless Chinese components, even if it's an American made tractor.

There is no possible way for anyone to foresee the totality of effects from a serious trade war with China, but I can assure you, it will be far worse than a lack of junk on store shelves.


Search: