Hacker News

It’s so good. I spent 2 hours reading articles on there last night and the consistency was excellent, although a little verbose at times.


It took me less than a minute on the site to run into factually wrong information, broken citations, etc. Cannot imagine rotting my brain with knowingly bad information for over 2 hours


I’ve run into the exact same set of issues on Wikipedia.


Did you fix them?


This is sometimes hard when the editors keep reverting edits which attempt to fix those errors. It will be interesting to see how Grokipedia - a bad name, surely they can come up with something better - deals with this.

I often come across out-of-place or clearly ideologically driven content on Wikipedia and normally just leave it alone - I have better things to do with my limited time than to fight edit wars with activist editors. Having said that, I did a number of experiments some 5 years ago with editing Wikipedia, where I removed clearly ideologically driven sections from articles where those sections really had no place. One of these experiments consisted of removing sections about 'queer politics and queer viewpoints' from articles about popular cartoon characters. These sections - often spanning several paragraphs - had been inserted relatively recently and were nothing more than attempts to use those articles to push a 'queer' viewpoint on the subject matter, and as such not relevant for a general-purpose encyclopedia.

I commented my edits with a reference to the NPOV rules. My edits were reverted without comment. I reverted the reversion with the remark to either explain the revert or leave the edits in place, and was reverted again, no comment. I reverted again with an invitation to discuss the edits on the Talk pages, which was not accepted while my edits were reverted yet again. This continued for a while, with different editors reverting my edits and accusing me of vandalism.

Looking through the 'contribs' pages of the users responsible for adding the irrelevant content showed they were doing this to hundreds of articles. I just checked and noticed the same individuals are still actively adding their 'queer perspectives' to articles where such perspectives are not relevant for a general-purpose encyclopedia.


Do you happen to remember any of the articles where you performed this experiment? I ask because specifically around 5 years ago, I know there were a number of cartoons where the creators intentionally wrote characters with queer representation in mind (She-Ra is the first to come to mind). So, if the sections you were removing had been properly cited and relevant to the actual series, then the removal for being "nothing more than attempts to use those articles to push a 'queer' viewpoint on the subject matter" probably did not represent a neutral viewpoint.

Of course, this depends on you opening up your research to some peer review.


Correct. That's the main reason I dove into reading subjects I was already knowledgeable of to see how it did.


Suggesting that people being able to make mistakes means there's no qualitative or quantitative difference in how AI makes mistakes is either disingenuous or stupid. I don't know which place you're coming from or what kind of gotcha you think you pulled, but it doesn't make a strong argument either way.


Maybe he meant that the information was consistently incorrect, which was entertaining.


Never mind the Grok-ness of it, I can't seriously believe a thinking human being would spend 2 hours knowingly reading something written by AI.


It's for the intersection of people who want LLM summarization and people who want confirmation of their biases explicitly built in. It's not for thinking people.


"A machine which simulates thought for people who don't want to think" is an adequate summation of LLM-generated text.


I decided to read through a subject I already knew a lot about.


I'm unsurprised that a human being would glibly dismiss the utility of the most powerful new form of knowledge representation since the written word, since we are all deeply in the grip of motivated reasoning.


> the most powerful new form of knowledge representation since the written word

1. an LLM is a representation of language, not knowledge. The two may be highly correlated, but they are probably not coterminous and they are certainly not equivalent.

2. the final "product" is still the written word

3. whether or not LLMs are the most powerful new form of knowledge representation, their output is so consistently inconsistent in its accuracy that it makes that power difficult to utilize, at best.


No one is being glib here, this is a serious concern. Think about it, please. A human being choosing to spend hours of their time reading the output of an amorphous, unanswerable, unaccountable agglomeration of weights formed not by a human's lived experience, but by a for-profit company's selection of inputs and tuning. It's completely dystopian.


I checked it out based on this comment. It's funny how in some ways it feels like a lazy student-assignment copied from Wikipedia: the subheadings and the structure are exactly the same as the Wikipedia article on the topic, and sometimes it even leaves in the citation numbers as normal text like a careless copy paste.

However, it also seemed less eurocentric, mentioning non-Greek non-Roman side of origins of fields where relevant, when the corresponding Wikipedia article doesn't. Wikipedia is generally pretty bad at this, but I had expected "Grokipedia" to be worse, not better in this regard!


If I was rewriting Wikipedia pages with an LLM I'd maybe use all the different languages' Wikipedias as input.
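A minimal sketch of how you could gather that multilingual input via the MediaWiki API (the endpoint and query parameters are real; the helper function names are my own invention):

```python
import urllib.parse

API_TEMPLATE = "https://{lang}.wikipedia.org/w/api.php?{query}"

def langlinks_url(title, lang="en"):
    # Build an API URL listing the other-language versions of an article
    params = {
        "action": "query",
        "titles": title,
        "prop": "langlinks",
        "lllimit": "500",
        "format": "json",
    }
    return API_TEMPLATE.format(lang=lang, query=urllib.parse.urlencode(params))

def extract_url(title, lang):
    # Build an API URL for the plain-text extract of an article in one language
    params = {
        "action": "query",
        "titles": title,
        "prop": "extracts",
        "explaintext": "1",
        "format": "json",
    }
    return API_TEMPLATE.format(lang=lang, query=urllib.parse.urlencode(params))
```

You'd fetch the langlinks URL first (e.g. with urllib.request), collect the (lang, title) pairs from the response, then fetch an extract per language and feed all of them to the model as context.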


Of course, it cribbed the best!


If it works, it works.


Yeah, but it doesn't work. It's full of inaccuracies.


Unsupervised Source of Truth(tm), what could possibly go wrong?


I don't understand what supervision you want. Worried about inaccuracies? Double check and use it in conjunction with other sources.


I want some supervision. Why is that hard for you to grasp?


There is supervision, just not community-based.


Just like Wikipedia?


With Wikipedia there is the talk page, which will alert you to controversies about a topic, as well as citations you can check. Grokipedia has "citations" too, but when I checked, many of them didn't actually have anything to do with what they were supposed to be citing.


Wikipedia has 7 billion potential editors. Grokipedia explicitly says we can't edit it.

So, absolutely the opposite thing.



