
A use case that LLMs actually do worse, especially in terms of reliability. Translation is already done without a hallucinating LLM and can be done offline.

Summarization of existing text is the *only* safe and serious use-case for LLMs.



How is summarization "safe"? The summary might be wrong just as well.

The use case is anything where occasional bullshit output is an acceptable trade-off for the speedup. More reliable outputs will enable more use cases.


And what kind of business is fine with occasional (or frequent) bullshit output? Fake news and spam.


Every business is fine with some frequency of bullshit output at some level. The question is how often exactly it happens and how much harm the bullshit can cause.


My point was that spam is the perfect use case for this tech. Of course there are other possible use cases, but spam and fake news content creation are the perfect fit. AI will enable one to easily clone the writing style of any publication and insert whatever bullshit content and keep up with the publishing cycle with almost zero workforce.

Want a flat-earther version of New York Times (The New York Flat Times)? Done. Want a just slightly insidiously fascist version of NPR? Done. Want a pro-Nato version of RussiaToday (WestRussiaToday)? Done.

And we already know people share stuff without checking for veracity and reliability first.


Machine translation is already unreliable even without LLMs, so that's not a weak point specific to LLM translation.


GPT-4 translates much better than anything else out there, especially when it comes to idioms and manner of speech.





