The concern is the velocity. GPT-4 can solve tasks today that it couldn't solve one month ago. And even one month ago, the things it could do made GPT-3.5 look like a silly toy.
Then there's the question of how much this can be scaled further simply by throwing more hardware at it to run larger models. We're not anywhere near the limit of that yet.