
> Not sure why you are so upset about a small and neat study

This article is yet another example of someone misunderstanding what an LLM is at a fundamental level. We are all collectively doing a bad job at explaining what LLMs are, and it's causing issues.

Only recently I was talking to someone who loves ChatGPT because it "takes into account everything I discuss with it". Only, it doesn't. They think that it does because it's close-ish, but it is literally not doing the thing they are relying on it to do for their work.

> If you ask it to summarize (without feeding the entire bible), it needs to know the bible.

There's a difference between "knowing" the bible and its many translations/interpretations, and being able to reproduce them word for word. I would imagine most biblical scholars can produce better discourse on the bible than ChatGPT, but that few if any could reproduce exact verbatim content. I'm not arguing that testing ChatGPT's knowledge of the bible isn't valuable. I'm arguing that LLMs are the wrong tool for the job of verbatim reproduction, and that testing that (while ignoring the actual knowledge) is a bad test, in the same way that asking students to regurgitate content verbatim is a much less effective way of testing understanding than testing their ability to use that understanding.




To add a little context here, I (the author) understand LLMs aren't the right tool (at least by themselves) for verbatim verse recall. The trigger for me doing the tests was seeing other people in my circles blindly trusting that ChatGPT was outputting verses accurately. My background allowed me to understand why that is a sketchy thing to do, but many people do not, so I wanted to see how worried we really should be.


Thanks for the response, it does sound like you've seen similar treatment of LLMs by others to what I've observed.

I think though that an important part of communicating about LLMs is talking about what they are designed to do and what they aren't. This is important because humans want to anthropomorphise, and LLMs are way past good enough for this to be easy, but, much as with pets, not being human means they won't live up to expectations. While your findings show that current large models are quite good at verbatim answers (for one of the most widely reproduced texts in the world), this is likely in no small part down to luck and the current way these models are trained.

My concern is that the takeaway from your article is somewhere between "most models reproduce text verbatim" and "large models reproduce popular text verbatim", where it should probably be that LLMs are not designed to be able to reproduce text verbatim and that you should just look up the text, or at least use an LLM that cites its references correctly.


Check out this video at the 22:20 mark. The goal he’s pursuing is to have the LLM recognize when it’s attempting to make a factual statement, and to quote its training set directly instead of just going with the most likely next token.

https://youtu.be/b2Hp0Jk9d4I?si=SwfKJrck5_0LPgTH


>or at least use an LLM that cites its references correctly.

Are there implementations where the LLM just outputs the text of the references (or the first 100 words)? I'm sure someone has implemented this already.


Gemini cites web references, NotebookLM cites references in your own material, and the Gemini APIs have features around citations and grounding in web search content. I'm not familiar with OpenAI or Anthropic's APIs, but I imagine they do something similar, although I don't think ChatGPT cites content.

All these are doing, however, is fact-checking and linking out to those fact-checking sources. They aren't extracting text verbatim from a database. You could probably get close with RAG techniques, but you still can't guarantee it, in the same way that if you ask an LLM to repeat your question back to you exactly, you can't guarantee that it will do so verbatim.
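To sketch what I mean by "close with RAG" (a toy example, not anyone's actual implementation; the VERSES store, retrieve, and build_prompt are all invented names): you retrieve the passage from a local store and paste it into the prompt so the model only has to quote it, not recall it.

```python
# Toy sketch only: VERSES, retrieve, and build_prompt are invented for
# illustration; a real RAG setup would use embeddings and a vector store.
VERSES = {
    "Genesis 1:1": "In the beginning God created the heaven and the earth.",
    "John 3:16": "For God so loved the world, that he gave his only begotten Son...",
}

def retrieve(query: str, k: int = 1) -> list[tuple[str, str]]:
    """Rank stored verses by naive word overlap with the query, return the top k."""
    words = set(query.lower().split())
    ranked = sorted(
        VERSES.items(),
        key=lambda item: len(words & set(item[1].lower().split())),
        reverse=True,
    )
    return ranked[:k]

def build_prompt(question: str) -> str:
    """Paste the retrieved text into the prompt so the model quotes rather than recalls."""
    context = "\n".join(f"[{ref}] {text}" for ref, text in retrieve(question))
    return f"Quote the relevant verse exactly as given below.\n{context}\n\nQuestion: {question}"

print(build_prompt("Who created the heaven and the earth?"))
# Even with the exact text in-context, the model may still paraphrase it,
# which is why this gets you close but gives no hard guarantee.
```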

Verbatim reproduction would be possible with some form of tool use, where rather than returning, say, a bible verse, the LLM returns some structure asking the orchestrator to run a tool that inserts a bible verse from a database.
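Roughly like this (a hypothetical sketch: get_verse and VERSE_DB are made-up names, and the model's JSON output is hard-coded rather than coming from a real API): the model emits a structured tool call, and the orchestrator splices in the exact text from the database, so the verse is verbatim by construction.

```python
import json

# Hypothetical names throughout (VERSE_DB, get_verse); the model's JSON
# output is hard-coded here instead of coming from a real API call.
VERSE_DB = {
    ("John", 3, 16): "For God so loved the world, that he gave his only begotten "
                     "Son, that whosoever believeth in him should not perish, "
                     "but have everlasting life.",
}

def get_verse(book: str, chapter: int, verse: int) -> str:
    """Tool: exact database lookup, so the returned text is verbatim by construction."""
    return VERSE_DB[(book, chapter, verse)]

# Instead of generating the verse token by token, the model asks for a tool call.
model_output = '{"tool": "get_verse", "args": {"book": "John", "chapter": 3, "verse": 16}}'

call = json.loads(model_output)
if call["tool"] == "get_verse":
    # The orchestrator, not the model, inserts the verse into the final reply.
    print(get_verse(**call["args"]))
```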



