Hacker News

That’s great to hear! What are you asking it?


These are the types of questions I want to ask it:

> What degrees are held by each of the current Fortune 100 CEOs?

> What job did each of the current NFL GMs hold before their current position?

> Which genre would each of the current Billboard Hot 100 songs be considered part of?

> How many recipients of the Presidential Medal of Freedom were born outside of the US?

> Which US car company has the most models in its 2025 line-up across all of its brands?

It can't handle those directly right now.

You need to break the problem down step by step and sort of walk it through gathering the data with follow-up questions.

But much better than it used to be.


There is a method that could help immensely when answering questions like these. E.g. some of these questions may be answered quite quickly using WikiData [0] (answer to the question about the recipients of the Medal of Freedom, query written with the help of Claude), instead of just scraping and compiling information from potentially hundreds of websites. I believe this idea is quite under-explored compared to just blindly putting everything into the model's context.

[0] https://query.wikidata.org/#SELECT%20%28COUNT%28DISTINCT%20%...
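For what it's worth, hitting the Wikidata SPARQL endpoint programmatically is only a few lines of Python. A minimal sketch below: it builds a GET URL for the public query service. The SPARQL itself is illustrative, not the query from the link above; the property/item IDs (P166 = "award received", Q17144 assumed to be the Presidential Medal of Freedom) should be verified against Wikidata before relying on them.

```python
# Sketch: constructing a request URL for the Wikidata Query Service.
# The IDs in the query are assumptions; check them on wikidata.org.
import urllib.parse

WDQS_ENDPOINT = "https://query.wikidata.org/sparql"

# Illustrative SPARQL: count distinct recipients of one award.
# wdt:P166 = "award received"; wd:Q17144 assumed to be the
# Presidential Medal of Freedom (verify before use).
query = """
SELECT (COUNT(DISTINCT ?person) AS ?count) WHERE {
  ?person wdt:P166 wd:Q17144 .
}
"""

def build_request_url(sparql: str) -> str:
    """Return a GET URL that asks WDQS for JSON results."""
    params = urllib.parse.urlencode({"query": sparql, "format": "json"})
    return f"{WDQS_ENDPOINT}?{params}"

url = build_request_url(query)
print(url)  # fetch this URL (with a User-Agent header) to get JSON back
```

Fetching that URL with any HTTP client returns the count as JSON, which is exactly the kind of structured answer the LLM could then summarize instead of scraping hundreds of pages.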


Yeah, I've used gpt to create wikidata queries for me, it worked great :-)


Part of the problem won't be solved by LLMs themselves, but maybe by hiding aspects of one running: LLMs basically "think out loud" as they process and produce tokens.

The amount of thought required to answer any of those questions is pretty high, especially because they are all sizeable lists. It is going to take a lot of thinking out loud, and detailed training data covering all those items, to do that well.


Thanks! Have you tried enabling “multi-query” mode in the search box?




