The goalposts never moved, but you're right that we're catching up quickly.
We always thought that if AI can do X then it can do Y and Z. It keeps turning out that you can actually get really good at doing X without being able to do Y and Z, so it looks like we're moving the goalposts, when we're really just realizing that X wasn't as informative as we expected. The issue is that we can't concretely define Y and Z, so we keep pointing at the wrong X.
But every indication is that we're getting closer.