
Never mind the Grok-ness of it; I can't seriously believe a thinking human being would knowingly spend 2 hours reading something written by AI.


It's for the intersection of people who want LLM summarization and people who want confirmation bias explicitly built in. It's not for thinking people.


"A machine which simulates thought for people who don't want to think" is an adequate summation of LLM-generated text.


I decided to read through a subject I already knew a lot about.


I'm unsurprised that a human being would glibly dismiss the utility of the most powerful new form of knowledge representation since the written word, since we are all deeply in the grip of motivated reasoning.


> the most powerful new form of knowledge representation since the written word

1. an LLM is a representation of language, not knowledge. The two may be highly correlated, but they are probably not coterminous and they are certainly not equivalent.

2. the final "product" is still the written word

3. whether or not LLMs are the most powerful new form of knowledge representation, their output is so consistently inconsistent in its accuracy that the power is difficult to utilize, at best.


No one is being glib here; this is a serious concern. Think about it, please: a human being choosing to spend hours of their time reading something produced by an amorphous, unanswerable, unaccountable agglomeration of weights, formed not by a human's lived experience but by a for-profit company's selection of inputs and tuning. It's completely dystopian.



