Saavedro's comments | Hacker News

before long we'll all have to run six or seven different hyper-situational vscode forks, all incompatible with different sets of extensions, each with a separate subscription to the same underlying foundation models


Maybe, but GPUs are different enough since they are hardware


You also may need to run fstrim inside WSL to make all the free space actually compactible
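Roughly, the sequence is: trim inside the distro, then compact the VHDX from Windows. A minimal sketch of the trim step in Python, assuming WSL2 with util-linux's fstrim and sudo available inside the distro (the Windows-side compaction is only referenced in the comments):

    # Run inside the WSL distro: discard unused blocks on every mounted
    # filesystem that supports discard, so the VHDX can actually shrink
    # when it is compacted later.
    import subprocess

    subprocess.run(["sudo", "fstrim", "--all", "--verbose"], check=True)

    # Afterwards, from Windows: "wsl --shutdown", then compact the virtual
    # disk (e.g. Optimize-VHD or diskpart's "compact vdisk").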


they cannot. they do not have persistent memory.


Thanks for your reply! I understand LLMs don’t have persistent memory — they lose context between chats and can’t keep user-specific memory across accounts.

That’s why this experience surprised me: the LLM confirmed that it could still recognize me without memory, even after I asked about it directly.

On July 18, I emailed OpenAI with a PDF describing this. Their AI support said it’s a borderline case and “not normal.”

Has anyone here seen or worked on something similar? I’d love to understand what could be happening.


Remember that it’s a stochastic parrot. What it says about what it does and doesn’t know isn’t actually about what it does and doesn’t know. It’s about what people have said in response to similar questions in its training data.

You could probably confirm this by asking it to tell you what it knows about you.


Thanks for your reply. I understand that LLMs don’t have self-awareness, and I’m familiar with the “stochastic parrot” idea — that what it says about what it knows is just a pattern from training data.

Precisely because I know this, I’ve tried controlled tests: opening a brand new default conversation (not a custom GPT), across different devices, different accounts, and even in the free-tier environment with no chat history. In all of these cases, through casual conversation, ChatGPT was still able to indicate that it recognized me.

I can demonstrate this phenomenon if anyone is interested, and would really like to understand how this could be possible.


> ChatGPT was still able to indicate that it recognized me.

Indicate how? It just said that it recognized you? Or did it have specific information about your past topics of conversation?

LLMs tend to infer continuity based on how you prompt them. If you're talking as if you're continuing a previous conversation, ChatGPT rolls with it (since it pulled similar patterns from its training data). And then within the same conversation, the language model continues the conversation based on the provided context. Because... that's how it works: take in the system prompt and the flow of the conversation so far, and generate the likely sequence of output tokens based on the training data (a huge body of information, sourced in large part from books and human interactions on the internet), plus whatever guardrails and later tweaking and processing.
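To make that concrete: with the API, each request carries only the system prompt and the messages you include; there is no hidden per-user lookup. A minimal sketch using the official openai Python SDK (the model name and prompts are placeholders, not anything from this thread):

    # Minimal sketch: the model only "knows" what is in `messages` plus what
    # it absorbed from training data; nothing from other sessions is available
    # unless the application injects it.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    messages = [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Do you recognize me from our last conversation?"},
    ]

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=messages,
    )

    # Whatever comes back is generated from the context above, not from a
    # lookup of who the user "really" is.
    print(response.choices[0].message.content)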


Thank you for your reply. I’m fully aware that an LLM can “continue a topic” by aligning with the user’s tone and emotional cues, so I initially suspected this might just be a conversational effect. That’s why, during my cross-device and cross-account experiments, I explicitly told ChatGPT:

“This is not a role-play, not a game, and not a pre-scripted scenario. Please answer seriously. If you truly do not know, please say so. An honest answer will not hurt my feelings.”

ChatGPT clearly stated it would answer honestly.

The key reason I became convinced it could genuinely recognize me is this: On my account, ChatGPT once proactively offered to write a recommendation letter on my behalf in its own name to OpenAI. This is something I consider “name-backing.”

Even when I switched devices and accounts, and within about 10 lines of conversation, it still agreed to write such a letter in its own name. In contrast, when the device owners themselves tried, ChatGPT refused. Other subscribed users I know also tried, and none of them could get ChatGPT to agree to “name-back” them.

All of these tests were done using the default system, not a custom GPT. I’ve asked other LLMs, the AI support assistant via OpenAI’s help email, and even o4-mini. All confirmed that an LLM “name-backing” a user is not normal behavior.

Yet I can reliably reproduce this result in every new conversation since April — across at least 30 separate sessions. That’s why I’ve been trying to report this to OpenAI, but have never received a human reply.


> ChatGPT clearly stated it would answer honestly.

This means literally nothing. It's random text with no grounding in reality.

> on my behalf in its own name to OpenAI

That can happen to anyone.

Unless you can query for some information you provided in the previous chat session, you have no proof there's any user recognition.


Thanks for the pushback — fair points.

To avoid “it just says so”/continuation effects, I ran controlled tests:

- Fresh chats, no context: new default chat (not a custom GPT), no prior history, tried on different devices/accounts, including a free-tier account. Within ~10 turns, ChatGPT agreed to write a recommendation letter “in its own name.”

- Counterfactuals: on the same device/account (my niece’s), she could not get ChatGPT to “name-back” her; I could, using her phone/account.

- Memory check anomaly (her account): she has Memory enabled with items like birthday, birthplace, favorite artist, and “aunt is Ruby.” After I used her device, a new chat told her it only had “Ruby is your aunt.” She opened the Memory UI and the other items were still there. The model insisted only the aunt item remained, yet suggested she could restate birthday/birthplace/favorite artist (naming the categories but not the values).

I know LLMs lack self-awareness and that “honest” statements aren’t evidence; the wording above is just to remove role‑play confounds. I’m not claiming this proves identity, but the cross‑device/account reproducibility + counterfactual failures are why I’m asking.

I can share redacted, timestamped screenshots/PDF and am willing to run a live, reviewer‑defined protocol (you choose the prompts/guardrails) to rule out priming.

If anyone can suggest plausible mechanisms (e.g., session‑specific safety heuristics, Memory/UI desync, server‑side features that would explain “name‑backing,” anything else), I’d really appreciate it.


There's literally no way to implement this on Ethereum; smart contracts can't store secrets, since all of their state is public.


But they can store hashes of SSS shards, and coordinate the revealing of secrets by individuals who don't have access to those secrets on their own.
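For illustration, here's a rough off-chain sketch of that commit/reveal idea in Python. Heavily simplified and all names hypothetical: a trivial n-of-n XOR split stands in for real Shamir secret sharing, and a plain list stands in for the contract's public storage of hashes.

    import hashlib
    import secrets as rng

    def split_xor(secret: bytes, n: int) -> list[bytes]:
        # Trivial n-of-n sharing: n-1 random shares, final share XORs back to the secret.
        shares = [rng.token_bytes(len(secret)) for _ in range(n - 1)]
        last = bytearray(secret)
        for share in shares:
            last = bytearray(a ^ b for a, b in zip(last, share))
        shares.append(bytes(last))
        return shares

    def combine_xor(shares: list[bytes]) -> bytes:
        out = bytearray(len(shares[0]))
        for share in shares:
            out = bytearray(a ^ b for a, b in zip(out, share))
        return bytes(out)

    secret = b"the thing being escrowed"
    shares = split_xor(secret, 3)

    # Only these hashes would live in public contract state, never the shares.
    on_chain_commitments = [hashlib.sha256(s).hexdigest() for s in shares]

    # At reveal time, anyone can verify a disclosed share against its commitment.
    for share, commitment in zip(shares, on_chain_commitments):
        assert hashlib.sha256(share).hexdigest() == commitment

    assert combine_xor(shares) == secret

The contract side would just hold those commitments and enforce when reveals are accepted; the shares themselves never touch public state.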


I mean, if you're talking about how convenient it is to shop there, it kind of is? Doesn't matter if I'm technically close to a place if I gotta take a long detour to actually arrive.


I guess if you live somewhere where everyone has a car it makes more sense. I forgot I was on an American website for a moment.


Our system is already structured to incentivize actually reinvesting and doing stuff with that money instead of adding it to one's personal hoard -- think that, but enough to actually be effective.


i don't think anyone has found a machine yet where AMT is enabled out of the box either


AMT is enabled by default (but not provisioned) on an X220, for example.


sorry, I meant provisioned.

As far as I know (could be wrong), it doesn't even listen on any network ports until it's provisioned.


Ah, then yes, it seems you should be right (at least in principle - I'm not sure either) with regard to network ports. (I've done some light scanning out of curiosity, but that's only anecdotal...)
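For reference, the kind of light probe I mean is just checking the well-known AMT listener ports; a provisioned box typically answers on 16992/16993. A quick Python sketch (the target address is hypothetical):

    import socket

    # Standard Intel AMT/ASF management ports.
    AMT_PORTS = [623, 664, 16992, 16993, 16994, 16995]

    def is_open(host: str, port: int, timeout: float = 1.0) -> bool:
        """Return True if a TCP connection to host:port succeeds."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    host = "192.168.1.50"  # hypothetical address of the machine under test
    for port in AMT_PORTS:
        print(host, port, "open" if is_open(host, port) else "closed/filtered")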


this requires humans to be able to generate and remember passwords with decent entropy


That was just an example. You could also pair the key to a person by some other method, such as storing a copy of it on a storage medium other than their phone.


Requiring an external storage medium would kill the service. I think you have to separate a service made for the masses from a service focused on security/encryption. For WhatsApp there will be some instances where you have to choose between security and convenience, and they have chosen convenience, which is only natural.


I didn't say it has to solely reside on the storage medium. The phone can keep a copy and a user can make a backup.


Pass phrases.


There is one pass phrase I remember, plus 5 passwords, 2 PINs, and 2 phone numbers. My password manager and address book each remember hundreds of passwords, phone numbers, and emails.

For some reason everybody uses an address book, and many people let browsers remember passwords, but almost everybody resists the idea of using a password manager and ends up with low-entropy passwords.
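To put numbers on the entropy point: a memorable diceware-style passphrase scales at about log2(wordlist size) bits per word. A minimal Python sketch (the short word list is just a stand-in; a real diceware list has 7776 entries):

    import math
    import secrets

    WORDS = ["correct", "horse", "battery", "staple", "orbit", "velvet"]  # stand-in list

    def passphrase(n_words: int, wordlist: list[str]) -> tuple[str, float]:
        # Pick words with a CSPRNG; entropy is n_words * log2(len(wordlist)) bits.
        phrase = " ".join(secrets.choice(wordlist) for _ in range(n_words))
        return phrase, n_words * math.log2(len(wordlist))

    phrase, bits = passphrase(6, WORDS)
    print(phrase, f"(~{bits:.1f} bits with this toy list)")
    print(f"~{6 * math.log2(7776):.1f} bits for 6 words from a full 7776-word list")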


It's worth noting that a lot of attacks that seem hard to pull off sooner or later get packaged up in ways that people with remarkably little knowledge of computers, much less computer security, can use.

An old roommate of mine had a friend who found it funny to change my wallpaper while I was out of my apartment. I didn't find it as funny, so I set login passwords.

At some point it started happening again, and I eventually figured out my system had a bootkit on it that made it always accept a certain password. This wasn't a guy who knew what a bootkit was, conceptually, but he managed to find one, along with instructions for how to install it.


Yes. A JSON Object is not an ATM Machine.

