Maybe for your use cases. I've found perplexity.ai wrong a few times just today:

* Misunderstanding one of its citations, it claimed that using `ParamSpec` would always raise a warning in Python 3.9

* When asked why some types of paper adhere to my skin if I press my hand against them for a few minutes (particularly glossy paper), it gave two completely different answers depending on how the question was worded, one of which doesn't necessarily make sense.
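For what it's worth on the `ParamSpec` point: `ParamSpec` was added to the standard `typing` module in Python 3.10 (PEP 612). On 3.9 it isn't in `typing` at all, so importing it raises an ImportError rather than a warning, and the usual route is the `typing_extensions` backport. A minimal sketch of the standard pattern:

```python
import functools
import sys

# ParamSpec is in `typing` from 3.10 onward; on 3.9 the import below
# would raise ImportError (not a warning), so fall back to the backport.
if sys.version_info >= (3, 10):
    from typing import Callable, ParamSpec, TypeVar
else:
    from typing import Callable, TypeVar
    from typing_extensions import ParamSpec  # backport for <= 3.9

P = ParamSpec("P")
R = TypeVar("R")

def logged(func: Callable[P, R]) -> Callable[P, R]:
    """Decorator that preserves the wrapped function's signature via ParamSpec."""
    @functools.wraps(func)
    def wrapper(*args: P.args, **kwargs: P.kwargs) -> R:
        print(f"calling {func.__name__}")
        return func(*args, **kwargs)
    return wrapper

@logged
def add(x: int, y: int) -> int:
    return x + y
```

Under a type checker, `add` keeps its `(x: int, y: int) -> int` signature instead of degrading to `(*args, **kwargs)`, which is the whole point of PEP 612.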



That's a good point about use cases.

In my usage of ChatGPT, in areas where I'm very knowledgeable, I've mostly received answers that were stylistically excellent, creatively plausible, and maybe even transcendent. The boilerplate around the answer tends to keep the answers grounded, though.

In areas where I have some experience but not much theoretical knowledge, after multiple exploratory questions, I better understand the topic and feel ok adjusting my behavior appropriately.

I haven't relied on it in areas where I am ignorant or naive, e.g. knitting, discriminatory housing policy, or the economy in Sudan. Since I have no priors in those areas, I may not feel strongly about the results, whether they are profound, hallucinatory, or benign.

I also haven't used it for fact checking or discovery.



