People have stopped using LLMs? I wasn't aware of that. Can you share a source for that?


Anecdotal, but this is the exact consensus I saw among my non-tech peers. They find it fun for a few days or weeks, then basically never touch it again once the novelty wears off. The only normies I know still using LLMs are students using them to write papers.


I know a lot of people who went through the "Oh, wow - wait a minute..." cycle. Including me.

They're approximately useful in some contexts. But those contexts are limited. And if there are facts or code involved, both require manual confirmation.

They're ideal for bullshit jobs - low-stakes corporate makework, such as mediocre ad copy and generic reports that no one is ever going to read.


> And if there are facts or code involved, both require manual confirmation.

The hidden assumption here seems to be that the model needs to be perfect before it has utility.


Now you're the bullshit machine. No one said that. We expect basic reliability/reproducibility. A $4 drugstore calculator has that to about a dozen 9s, every single time. These machines will give you a correct answer and walk it right back if you respond the "wrong" way. They're not just wrong a lot of the time, they simply have no idea even when they're right. Your strawman is of no value here.


There's also a hidden assumption (or perhaps a lack of clear perception of reality) that most jobs on the market strongly depend on factual correctness.

And another assumption: that this is any different from the human relationship with empirical truth.


Clearly generative AI can currently only be used when verification is easy. A good example is software. Not sure why you think that I claimed otherwise.



