It depends on what you count as learning - you told it something, it then applied that new knowledge, and if you come back to that conversation in 10 years it will still have that knowledge and be able to use it.
Then when OpenAI does another training run it can also internalise that knowledge into the weights.
This is much like humans - we have short-term memory (which doesn't get written into the internal model), and then things get baked into long-term memory during sleep. AIs have context-level memory, and that learning then gets baked into the model during additional training.
Although whether or not it changed the weights is not, IMO, a prerequisite for whether something has learned. I think we should be able to evaluate whether something can learn by treating it as a black box, and we could build a black box that meets this definition: you talk to an LLM, limited to its max context length each day, and then run an overnight training run to incorporate the learned knowledge into the weights.
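
To make that black box concrete, here's a rough sketch of the loop I have in mind - plain Python with a made-up Model stand-in rather than any real vendor API, where respond is the in-context chat and fine_tune is the overnight run:

    from dataclasses import dataclass

    @dataclass
    class Model:
        weights_version: int = 0

        def respond(self, context: list[str], user_msg: str) -> str:
            # Stand-in for calling the LLM with the running conversation as context.
            return f"(reply from weights v{self.weights_version}, {len(context)} turns of context)"

        def fine_tune(self, transcripts: list[list[str]]) -> "Model":
            # Stand-in for the overnight training run that folds the day's
            # conversations into the weights ("sleep").
            return Model(weights_version=self.weights_version + 1)

    MAX_CONTEXT_TURNS = 8  # stand-in for the model's context limit

    def run_day(model: Model, user_msgs: list[str]) -> list[str]:
        # "Short-term memory": everything lives in the (truncated) context window.
        context: list[str] = []
        for msg in user_msgs:
            reply = model.respond(context[-MAX_CONTEXT_TURNS:], msg)
            context += [msg, reply]
        return context

    # The black-box loop: chat within the context limit by day, train by night.
    model = Model()
    for day, msgs in enumerate([["fact: the office moved to Leeds"],
                                ["where is the office now?"]]):
        transcript = run_day(model, msgs)
        model = model.fine_tune([transcript])  # bake it into "long-term memory"
        print(f"day {day}: weights now v{model.weights_version}")

From the outside, that system remembers what you told it yesterday even though any single conversation is context-limited, which is the black-box behaviour I'd call learning.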