I'm kind of concerned about the concept of "bad at prompting".

The hypothesis I'm working from right now is that natural language has structure that happens to match some problem spaces. This makes sense: people naturally want to talk succinctly, with a convenient flow, about the problems they encounter most often. Thus jargon is reborn many times over across different domains.

LLMs are encoding this structure.

So a good prompt is one that gives the LLM additional information about what you expect the answer to be, while a bad prompt provides no information, or disinformation.
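
To make that concrete, here's a toy pair of prompts for the same underlying question (both made up for illustration); the second encodes expectations about the answer, the first leaves the model guessing:

    # Two prompts for the same question. The second carries information
    # about the expected answer (input size, prior structure, desired
    # output shape), narrowing the space of plausible completions.
    vague = "Tell me about sorting."
    informative = (
        "I have a Python list of about a million integers that is "
        "already nearly sorted. Which standard-library sort exploits "
        "that, and why? Answer in two sentences."
    )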

This isn't to say that being good at prompting is somehow being disingenuous about the power of LLMs. Which is better: to remember a mass of redundant data, or to remember the right sorts of ways to search for the classes of information you are after?

My concern, though, is that the structure of reality doesn't have to match the way that we talk about it. The Novel and the Inexpressible* will tend to yield hallucinations.

[Although I've had this concern since long before I encountered LLMs. My feeling is that many people can only solve problems that match the way they talk about them.]

* - technically, the difficult or unnatural to express, but I couldn't fit that into a single word.




>I'm kind of concerned about the concept of "bad at prompting".

I have met many people in my life who are terrible at asking questions, so it does have some conceptual reality. But this is also why analogy is so powerful for people. It takes the way a person thinks about $A and applies parts of it to $B so they can more easily wrap their mind around it.

Has anyone written a paper about testing and expressing the power of analogy in LLMs?
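
Even a crude probe would make that testable. A minimal sketch, assuming a complete(prompt) -> str wrapper around whatever model API you use (the wrapper and the word list here are hypothetical):

    # Four-term analogy probe: A is to B as C is to ?
    # `complete` is a hypothetical wrapper around an LLM API.
    def analogy_prompt(a, b, c):
        return f"Complete the analogy in one word. {a} is to {b} as {c} is to"

    TESTS = [
        ("hand", "glove", "foot", "sock"),
        ("puppy", "dog", "kitten", "cat"),
        ("france", "paris", "japan", "tokyo"),
    ]

    def analogy_accuracy(complete):
        # Substring match on the lowercased completion; crude but enough
        # to compare models or prompt phrasings against each other.
        hits = sum(
            expected in complete(analogy_prompt(a, b, c)).strip().lower()
            for a, b, c, expected in TESTS
        )
        return hits / len(TESTS)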



