Hacker News new | past | comments | ask | show | jobs | submit | stri8ed's comments login

Trained at least in part on ChatGPT data.


Doesn't matter. In fact, it makes it even funnier: all these investors spending billions of dollars on OpenAI just end up subsidizing the competing models.


That was trained on data scraped from the web. I'd say it's fair.


It would happen in China regardless of what is done here. Removing billionaires does not fix this. The ship has sailed.


The benchmarks agree as well.


Isn't that how previous models worked, before the "Attention Is All You Need" paper?


Or be in the business of building infrastructure for AI inference.


Is this not the same argument? There are something like 20 startups and cloud providers all focused on AI inference. I'd think the application layer receives the most value accretion over the next 10 years vs. AI inference. Curious what others think.


Or be in the business of selling .ai domain names.


You can modify this by setting the GPT-4 system prompt with instructions describing your preferred response style.
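As a sketch of what that looks like: the chat-completions request carries a `system` message ahead of the user's message, and that message steers the style of every reply. The prompt wording and the helper names below are just illustrative, not anything from the thread.

```python
import json
import urllib.request

# Hypothetical system prompt -- adjust the wording to your preferred style.
SYSTEM_PROMPT = "Answer tersely. Skip apologies, caveats, and filler."

def build_request(user_text: str) -> dict:
    """Chat-completions payload with a system message that sets response style."""
    return {
        "model": "gpt-4",
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_text},
        ],
    }

# Sending it requires a real API key, roughly:
# req = urllib.request.Request(
#     "https://api.openai.com/v1/chat/completions",
#     data=json.dumps(build_request("Explain Python decorators.")).encode(),
#     headers={"Authorization": "Bearer <key>",
#              "Content-Type": "application/json"},
# )
```

In the ChatGPT web UI the equivalent knob is the "Custom Instructions" setting rather than a raw system message.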


For programming, GPT-4+. I was excited to switch to Claude after hearing all the positive anecdotes. Having tried it, I'm very unimpressed. It spouted complete, confident-sounding nonsense when I prompted it with a bug I was trying to solve. GPT-4 did not get it right initially either, but it was more suggestive instead of wrongly declaring the fault, and led me to the answer after a few more prompts. Will not be renewing my Claude subscription.


This has been my experience as well. I'm surprised at how many people in the thread prefer Claude. I'm also planning to cancel my Claude subscription.

I only tried Claude because the ChatGPT UI is really buggy for me (Firefox, Linux). It frequently blocks all interactions (entering new text or even scrolling) and I have to refresh the page to resume asking questions. But Claude just crashed altogether when I went to open the sidebar. Seems like traditional engineering is still a problem for these AI companies.


> I only tried Claude because ChatGPT UI is really buggy for me (Firefox, Linux).

It is buggy on every platform in my experience.


There are definitely some shills all over HN now... But even aside from that, the sheer novelty aspect (plus less robotic ethical alignment) is enough for many.


I think it's a little questionable to prompt language models with "bugs you're trying to solve".


Curious why?

This is maybe 1/3 of my use of GPT-4. Quite often, the log dump and nearby code are enough, often even without explicit instructions. Being able to do this task is similar to GitHub Copilot code autocomplete working well. Still not 100%, but right often enough that it flipped my use from not-at-all with GPT-3.5 to quite-often with GPT-4.


LLMs aren't logical machines, so any non-trivial bug fix is just likely to introduce more bugs.

It's a bit of a misunderstanding of how LLMs are supposed to be used.

One caveat is if you're very untalented, it might be able to solve very common patterns successfully.


It's impossible to prove.


Is it? Watching animals attack other animals, suffering seems very likely.

What’s the reason why science would dismiss it? That the animal doesn’t fill out a survey afterwards?


You can't even prove if humans suffer.

What is suffering?


If I were to make dinner out of you, you'd probably agree that that's suffering. No need to get more philosophical than that.


I don't think you can infer someone's age and skin color from that post.


In humans, a software problem can become a hardware problem, due to plasticity in the hardware. See alcoholism for example.

