Hacker News

I think gwern gave a good hot take on this: it’s super rare for humans to do this; it might just be moving the goalposts to complain the AI can’t.


No, it’s really not that rare. There are new scientific discoveries all the time, and all from people who don’t have the advantage of having the entire corpus of human knowledge in their heads.


To be clear, the “this” is a knowledge-based “aha” that comes from integrating information from various fields of study or research and applying it to make a new invention or discovery.

This isn’t common even among billions of humans. Most discoveries tend to be random or accidental, even in the lab, or are the result of massive search processes, like drug development.


Regardless of goalposts, I'd imagine that a persistent lack of "intuitive-discovery-ability" would put a huge dent in the "nigh-unlimited AI takeoff" narrative that so many people are pushing. In such a scenario, AI might optimize the search processes quite a bit, but the search would still be bottlenecked by available resources, and ultimately suffer from diminishing returns instead of the oft-predicted accelerating returns.


> I think gwern gave a good hot take on this: it’s super rare for humans to do this; it might just be moving the goalposts to complain the AI can’t.

Super rare is still non-zero.

My understanding is that LLMs are currently at absolute zero on this metric.

The ratio between a tiny probability and zero probability is literally infinite!

It's the difference between "winning the lottery with a random pick" and "winning the lottery without even acquiring a ticket".

