That's not what's going on here? The algorithms aren't being given any pattern of "being evaluated" / "not being evaluated", as far as I can tell. They're doing it zero-shot.
Put it another way: Why is this distinction important? We use the word "knowing" with humans. But one could also argue that humans are pattern-matchers! Why, specifically, wouldn't "knowing" apply to LLMs? What are the minimal changes one could make to existing LLM systems such that you'd be happy if the word "knowing" was applied to them?
>Not to be snarky but “as far as I can tell” is the rub isn’t it?
From skimming the paper, I don't believe they're doing in-context learning, which would be the obvious interpretation of "pattern matching". That's what I meant to communicate.
>No, one could not unless they were being disingenuous.
I think it is just about as disingenuous as labeling LLMs as pattern-matchers. I don't see why you would consider the one claim to be disingenuous, but not the other.