Literally anything a philosopher or mathematician invented without first needing to ingest billions of examples of existing logic to emulate.
If you need me to spell out examples: try having an LLM figure out quaternions as a solution to gimbal lock, or the theory of relativity, without using any training data produced after those ideas were formed.
Are you saying “reasoning” means making scientific breakthroughs that require genius-level human intelligence? Something that 99.9999% of humans are not smart enough to do, right?
I didn’t say most humans “would” do it. I said humans “could” do it, whereas current AI paradigms like LLMs are incapable of performing at that level by the very nature of their architecture.
If you want to continue this conversation I’m willing to do so, but you will need to lay out an actual argument for how AI models are capable of reasoning, or quit it with the faux outrage.
I laid out reasoning and explicit examples in support of my position; it’s time for you to do the same.
I personally cannot “figure out quaternions as a solution to gimbal lock or the theory of relativity”. I’m just not as smart as Einstein. Does that mean I’m not capable of reasoning? Because that seems to be what you are implying. If you truly believe that, then I’m not sure how I could argue anything, since arguing would itself require reasoning ability.
Does having this conversation require reasoning ability? If not, then what are we doing? If it does, then LLMs can reason too.