Doesn't matter. In fact, it makes it even funnier: all these investors spending billions of dollars on OpenAI just end up subsidizing the competing models.
Is this not the same argument? There are like 20 startups and cloud providers all focused on AI inference. I'd think the application layer will see the most value accretion over the next 10 years, versus AI inference. Curious what others think.
For programming, GPT4+. I was excited to switch to Claude after hearing all the positive anecdotes. Having tried it, I'm very unimpressed. It spouted complete, confident-sounding nonsense when I prompted it with a bug I was trying to solve. GPT4 did not get it right initially either, but it was more suggestive, rather than wrongly declaring the fault, and led me to the answer after a few more prompts. Will not be renewing my Claude subscription.
This has been my experience as well. I'm surprised at how many people in the thread prefer Claude. I'm also planning to cancel my Claude subscription.
I only tried Claude because the ChatGPT UI is really buggy for me (Firefox, Linux). It frequently blocks all interactions (entering new text or even scrolling) and I have to refresh the page to resume asking questions. But Claude just crashed altogether when I went to open the sidebar. Seems like traditional engineering is still a problem for these AI companies.
There are definitely some shills all over HN now... But even aside from that, the sheer novelty of it (plus less robotic ethical alignment) is enough for many.
This is maybe 1/3 of my use of GPT4. Quite often, the log dump and nearby code are enough, often even without explicit instructions. Being able to do this task is similar to GitHub Copilot's code autocomplete working well. Still not 100%, but right often enough that it flipped my use from not-at-all with GPT 3.5 to quite-often with GPT4.