
Maybe it's simply training time, plus the good prompt training data OpenAI has been gathering for so long already? I.e. there's a bottleneck on how fast you can train and how fast you can gather good data.



Possibly, but wouldn't Google and Meta have access to way more compute resources and data than OpenAI? Google has been touting their TPUs for several years now.


OpenAI has access to Microsoft and Azure. That’s bigger than Meta, roughly on par with Google in terms of capability and higher in terms of market cap.


Google has the compute; from the comparisons I have seen, Bard smokes GPT-3.5-Turbo on response times. So my guess is that internal politics prevents them from putting out something better. There must be immense pressure from the search division not to release something that would make search obsolete.


Bard is also a fair bit worse than GPT-3.5, though, so that could be a function of model size.


Without Nadella footing the compute bills, nobody would be talking about OpenAI. He's brilliant: he let the startup take on the huge risk while quietly claiming the gains for m$.



