The coders at OpenAI are all ML people who only know Python. They have no idea how "proper" software is written, or how infrastructure works.
They had glaring errors in their APIs for so long that it's almost comical. For example, when they increased the context window from 8K to 32K tokens, they forgot to raise the request-body limit on their CDN's WAF for a while. If you actually tried to submit that much data, you'd get an HTTP error code back. They never noticed because their internal traffic doesn't go through the CDN.
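A back-of-the-envelope sketch of that mismatch. All the numbers here are illustrative assumptions (bytes per token, the WAF cap), not OpenAI's or any CDN's actual limits:

```python
# Rough arithmetic: a full 32K-token prompt vs. a request-body cap.
# All constants are illustrative assumptions, not real published limits.

AVG_BYTES_PER_TOKEN = 4          # common rule of thumb for English text
CONTEXT_TOKENS = 32_000          # the new 32K context window
WAF_BODY_LIMIT_BYTES = 100_000   # hypothetical WAF request-body cap

payload_bytes = CONTEXT_TOKENS * AVG_BYTES_PER_TOKEN
print(f"~{payload_bytes:,} byte payload vs. {WAF_BODY_LIMIT_BYTES:,} byte cap")

if payload_bytes > WAF_BODY_LIMIT_BYTES:
    # The WAF rejects the request (e.g. HTTP 413) before it ever
    # reaches the API backend, so the model never sees the prompt.
    print("request blocked at the edge")
```

Under these assumptions a maxed-out prompt is roughly 128 KB, comfortably over a 100 KB edge limit, which is exactly the kind of thing that never shows up when your own testing bypasses the CDN.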
Similarly, the "web browsing" feature was comically bad, with a failure rate exceeding 80% for months after it was released. Even when it worked, it was glacially slow and timed out constantly. Meanwhile Phind was doing the same thing with near-100% success rates and lightning-fast response times... with a fraction of the budget and manpower.
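For what it's worth, the standard client-side mitigation for an endpoint that's slow and times out constantly is a bounded timeout plus exponential backoff. A minimal sketch (the URL and constants are hypothetical):

```python
# Minimal retry sketch for a flaky, slow HTTP endpoint:
# bounded per-attempt timeout plus exponential backoff between attempts.
import time
import urllib.error
import urllib.request

def fetch_with_retries(url: str, timeout_s: float = 10.0,
                       max_attempts: int = 3) -> bytes:
    for attempt in range(max_attempts):
        try:
            # Each attempt is capped at timeout_s seconds.
            with urllib.request.urlopen(url, timeout=timeout_s) as resp:
                return resp.read()
        except (urllib.error.URLError, TimeoutError):
            if attempt == max_attempts - 1:
                raise  # out of attempts, surface the failure
            time.sleep(2 ** attempt)  # back off: 1s, 2s, ...
    raise RuntimeError("unreachable")
```

This papers over transient failures but does nothing about an 80% baseline failure rate; retrying a service that bad mostly just multiplies the load on it.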