Hacker News new | past | comments | ask | show | jobs | submit login

One could say, for instance… A pattern matching algorithm detects when patterns match.





That's not what's going on here? The algorithms aren't being given any pattern of "being evaluated" / "not being evaluated", as far as I can tell. They're doing it zero-shot.

Put it another way: Why is this distinction important? We use the word "knowing" with humans. But one could also argue that humans are pattern-matchers! Why, specifically, wouldn't "knowing" apply to LLMs? What are the minimal changes one could make to existing LLM systems such that you'd be happy if the word "knowing" was applied to them?


Not to be snarky but “as far as I can tell” is the rub isn’t it?

LLMs are better at matching patterns than we are in some cases. That’s why we made them!

> But one could also argue that humans are pattern-matchers!

No, one could not unless they were being disingenuous.


What about animals knowing? E.g. dog knows how to X or its name. Are these things fine to say?

>Not to be snarky but “as far as I can tell” is the rub isn’t it?

From skimming the paper, I don't believe they're doing in-context learning, which would be the obvious interpretation of "pattern matching". That's what I meant to communicate.

>No, one could not unless they were being disingenuous.

I think it is just about as disingenuous as labeling LLMs as pattern-matchers. I don't see why you would consider the one claim to be disingenuous, but not the other.




Consider applying for YC's Fall 2025 batch! Applications are open till Aug 4

Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: