
GPT is a language model, not an oracle.

So my first take is that people querying it for research are doing it wrong.

Then again, if there’s a large economic incentive to use it that way, we may very well end up with the kind of feedback loop that the author describes.



You’re absolutely right on all counts, and yet people *are* doing it wrong, and will increasingly do so, because there *are* large economic incentives for using it that way.


Based on exactly that thinking - the economic incentive to get to knowledge and answers faster is inevitable - we have built a tool that verifies statements against source materials. You have to put in the legwork to upload your PDFs/web pages/videos, but once you do, you can be confident in the answers.

If it can't verify an answer, it simply won't answer, or won't tick-mark the answer as checked (this happens 16% of the time... and always for maths). That stops the feedback loop, in the sense that it relies only on your documents as its base and can operate entirely without OpenAI (though it still uses other GPT models).
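
For anyone curious, the gating idea looks roughly like this. This is a minimal sketch, not our actual implementation: the lexical-overlap check stands in for the real verification step, and every name here (answer_or_abstain, verify_against_sources, Answer) is made up for illustration.

    from dataclasses import dataclass

    @dataclass
    class Answer:
        text: str
        supported: bool            # did any source chunk back this up?
        source: str | None = None  # the chunk that supports it, if any

    def verify_against_sources(candidate: str, chunks: list[str],
                               min_overlap: float = 0.6) -> str | None:
        """Return the first chunk that lexically supports the candidate
        answer, or None if no chunk clears the overlap threshold
        (a crude stand-in for a real entailment / citation check)."""
        cand_tokens = set(candidate.lower().split())
        if not cand_tokens:
            return None
        for chunk in chunks:
            chunk_tokens = set(chunk.lower().split())
            overlap = len(cand_tokens & chunk_tokens) / len(cand_tokens)
            if overlap >= min_overlap:
                return chunk
        return None

    def answer_or_abstain(candidate: str, chunks: list[str]) -> Answer:
        """Only surface the model's candidate answer if it can be pinned
        to a source chunk; otherwise abstain instead of risking a
        hallucinated answer."""
        support = verify_against_sources(candidate, chunks)
        if support is None:
            return Answer(text="Can't verify this from your documents.",
                          supported=False)
        return Answer(text=candidate, supported=True, source=support)

    if __name__ == "__main__":
        sources = ["The invoice total for March 2023 was 4,120 GBP.",
                   "Payment terms are 30 days from the invoice date."]
        # Supported by the first chunk, so it is answered and cited.
        print(answer_or_abstain("The March 2023 invoice total was 4,120 GBP.", sources))
        # Not supported by any chunk, so the system abstains.
        print(answer_or_abstain("The CEO resigned in April.", sources))

The point of the gate is that the unverifiable case produces an abstention rather than a confident-sounding answer, which is what keeps the feedback loop from feeding on its own output.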

It's Fragen.co.uk - we believe that more answers formerly missed by Ctrl+F will be found with this technology than false answers will be taken as true. And if that's the case, you're bettering knowledge. And if not, you're enshittifying it more slowly than the higher-hallucinating alternatives.



