There is a belief that we've peaked in terms of bigger model = better results. If I recall correctly, GPT-4 is actually smaller in parameter count than 3.5 yet performs better, for instance. There is also thought to be a finite amount of useful training data, which some people believe we have already hit, so adding more parameters doesn't help if you have no new data to train them on.
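
For what it's worth, the data-bottleneck half of this is roughly what the Chinchilla paper (Hoffmann et al. 2022) formalized. Here's a minimal sketch using their published fitted loss formula and constants: once the dataset size D is fixed, growing the parameter count N only chases a loss floor set by the data term, which is the "more parameters without more data doesn't help" effect.

```python
# Chinchilla scaling law (Hoffmann et al. 2022): L(N, D) = E + A/N^alpha + B/D^beta.
# Constants below are the fitted values reported in that paper.
E, A, B = 1.69, 406.4, 410.7
alpha, beta = 0.34, 0.28

def loss(n_params: float, n_tokens: float) -> float:
    """Predicted pre-training loss for n_params parameters and n_tokens tokens."""
    return E + A / n_params**alpha + B / n_tokens**beta

# Hold the dataset fixed (1 trillion tokens here, an arbitrary illustrative
# choice) and scale up the model.
D = 1e12
for N in (1e9, 1e10, 1e11, 1e12):
    print(f"N = {N:.0e} params -> predicted loss {loss(N, D):.3f}")

# The gains shrink toward the data-limited floor E + B/D**beta (~1.87 here);
# no amount of extra parameters gets below it without more tokens.
print(f"data-limited floor: {E + B / D**beta:.3f}")
```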