There is a method that could help immensely when answering questions like these. For example, some of these questions can be answered quite quickly by querying Wikidata [0] (e.g., the question about the recipients of the Medal of Freedom; the query was written with Claude's help), instead of scraping and compiling information from potentially hundreds of websites. I believe this approach is quite under-explored compared to just blindly stuffing everything into the model's context.
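To make the idea concrete, here is a minimal sketch of such a query against the public Wikidata SPARQL endpoint. The award entity ID Q17144 (which I believe is the Presidential Medal of Freedom) is an assumption worth double-checking on wikidata.org; P166 ("award received") is the standard property for this kind of question.

```python
import json
import urllib.parse
import urllib.request

# Public Wikidata Query Service endpoint.
WDQS = "https://query.wikidata.org/sparql"

def recipients_query(award_qid: str) -> str:
    # P166 = "award received"; the wd:/wdt: prefixes are predeclared
    # by the endpoint. The label service resolves entity IDs to names.
    return (
        "SELECT ?person ?personLabel WHERE { "
        f"?person wdt:P166 wd:{award_qid} . "
        'SERVICE wikibase:label { bd:serviceParam wikibase:language "en". } '
        "}"
    )

def fetch_recipients(award_qid: str) -> list[str]:
    # One HTTP request replaces scraping hundreds of pages.
    params = urllib.parse.urlencode(
        {"query": recipients_query(award_qid), "format": "json"}
    )
    req = urllib.request.Request(
        f"{WDQS}?{params}",
        # WDQS asks clients to send a descriptive User-Agent.
        headers={"User-Agent": "sparql-example/0.1 (demo)"},
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        rows = json.load(resp)["results"]["bindings"]
    return [row["personLabel"]["value"] for row in rows]

# Usage (hits the network): fetch_recipients("Q17144")
```

The point is that the model only has to produce one short, structured query; the hard part (the exhaustive list) comes back as data rather than being recalled token by token.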
Part of the problem will not be solved by LLMs themselves, but perhaps by hiding aspects of one while it runs: LLMs basically "think out loud" as they process and produce tokens.
The amount of thought required to answer any of those questions is pretty high, especially because they are all sizeable lists. It is going to take a lot of thinking out loud, and detailed training data covering all those items, to do that well.