Yes. In a recent HN submission, ChatGPT was listed as a “scientific advance” of the past year. While it’s certainly some kind of advance, to me it seems to be more on the engineering side than on the scientific understanding side.
Definitely engineering. It’s not entirely wrong to say that the two reasons it took us until 2022 to make a ChatGPT are 1) the computing power needed and 2) the size of the training corpus needed. The same goes for other generative AI – it took a corpus of a couple billion images to train a Stable Diffusion model.
If we discovered a species of parrot that could learn to use language in the manner of recent LLMs, that would count as a scientific advance (though not a breakthrough, at least until we achieved a good grasp of how it is possible).
Science advances first by finding something in want of an explanation, and then by coming up with one.
I don’t follow. IMO coming up with a working explanation is a necessary part of a scientific advance. And that’s exactly what we’re currently missing with LLMs, and with your parrot example.
Case in point: when Zwicky discovered that galaxies in the Coma cluster were moving faster than could be explained in terms of what we knew, that was an advance - an increase in our scientific knowledge. When we come up with a satisfactory explanation of why that is the case, we will have another advance - an increase in our scientific understanding. You can't get to the latter until you have advanced to the former, and we will probably need further advances in our scientific knowledge before we can understand the phenomenon Zwicky identified.
I get where you're coming from, but we might also be like ants walking over an iPhone, wondering where the vibration is coming from. They might eventually figure it out, but if so, it will take an extremely long time, and in the meantime they would do better to focus on other things if they seek enlightenment.