Sure. Neural nets in general can, once they've been trained on billions of examples first.
It also really helps if they've previously seen the same or a similar "single example". And, let's be fair, the larger the training set, the higher the chance that they have.
>> This seemed, at first, quite impossible. It would imply that the model was learning to recognise inputs from just one or two examples
To be more precise: the article is talking about fine-tuning a pre-trained LLM, so that's a-few-billion-plus-one-or-two examples.
Btw, what model was that? The article doesn't say.
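For concreteness, here's a minimal sketch of what "fine-tuning a pre-trained model on one or two examples" amounts to in practice. The model name, texts, labels, and hyperparameters below are placeholders I've picked for illustration, since the article doesn't name the model or the setup; the point is just that the one or two new examples sit on top of whatever the pre-training corpus already taught it.

```python
# Sketch (not the article's setup): a few gradient steps on a pre-trained
# model with only two labelled examples.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "distilbert-base-uncased"  # assumption: any small pre-trained LM
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# The "one or two examples" the thread is talking about (invented here).
texts = ["an input the model should now recognise",
         "an unrelated input it should not"]
labels = torch.tensor([1, 0])

batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

model.train()
for step in range(10):  # a handful of updates is enough to memorise two examples
    optimizer.zero_grad()
    out = model(**batch, labels=labels)
    out.loss.backward()
    optimizer.step()
```

Nothing magical happens in those ten steps; the heavy lifting was done during pre-training, which is the whole point being made above.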