
AGI is a matter of when, not if

Probably true, but the statement would also be true if "when" is 2308, which would defeat the purpose of saying it. When the first cars started rolling around, some mates around the campfire were saying "not if but when" we'd have flying cars everywhere, and 100 years later (with amazing progress in car manufacturing) we are nowhere near that… I think "when, not if" is one of those statements that, while probably indisputable in theory, is easily disputable in practice. Give me a "when" here and I'll put up $1,000 to a charity of your choice if you're right, and you agree to do the same if you're wrong.



If you look at Our World in Data's "Test scores of AI systems on various capabilities relative to human performance" https://ourworldindata.org/grapher/test-scores-ai-capabiliti...

you can see a pattern of fairly steady progress across different capabilities: AI matched humans at image recognition around 2015, while 'complex reasoning' is still much worse than human level but rising.

Looking at the graph, I'd guess maybe five years before it can do all human skills, which is roughly AGI?

I've got a personal AGI test: being able to fix my plumbing, given a robot body. They are way off from that just now.


It is already here, kinda. I mean, look at how it passes the bar exam, solves math-olympiad-level questions, and generates video, art, and music. What else are you looking for? It has already penetrated the job market, causing significant disruption in programming. We are not seeing flying cars, but we are witnessing things that weren't even talked about around the campfire. Seriously, even 4 years ago, would you have thought all this would happen?


> What else are you looking for?

To begin with, systems that don't tell people to use Elmer's glue to keep the cheese from sliding off the pizza, displaying a fundamental lack of understanding of... everything. At minimum it needs to be able to reliably solve hard, unique, but well-defined problems the way a group of the most cohesive, intelligent people could. It's certainly not AGI until it can do a better job than the most experienced, talented, and intelligent knowledge workers out there.

Every major advancement (which LLMs certainly are) has caused some disruption in the fields it affected, but that isn't a useful criterion for differentiating a "crude but useful tool" from "AGI".


The majority of people on earth don't solve hard, unique, but well-defined problems, do we? I don't expect AGI to solve one of the problems on Hilbert's list (yet). Your definition of AGI is a bit too demanding. That said, I believe you would get better answers from an LLM than from an average human. IMHO the trend is obvious, and we will see whether it stalls or keeps pace.


I don't mean "hard" in the sense that it can easily solve novel problems that no living human knows how to solve, although any "general" intelligence should certainly be capable of learning and making progress on those just like a human would, but without the limitations of human memory, attention span, a relatively short lifetime, and other human needs.

I mean "hard" in the sense that it can reliably replace the best software developers, civil engineers, lawyers, and diagnosticians. Not just in the economic sense, but by reliably matching the quality of their work 100% of the time.

It should be capable of methodically and reliably arriving at correct answers without expert intervention. It shouldn't be the case that some people claim that they don't know how to code and the LLM generated an entire project for them, while I can confidently claim that LLMs fall flat on their face almost every time I try to use them for more delicate business logic.


AGI is here?????! Damn, me and every other human must have missed that news… /s


Such things happen.




