
> humans can extrapolate and make reliable predictions about the future based on really small sample sizes

You severely underestimate the bandwidth of your eyes, ears, and other senses, and the capacity of your brain's memory (despite its uber-lossy compression). That's probably terabytes a day; if that isn't big data, I don't know what is. Yeah, 99% of it is thrown away while passing through the first few hundred layers of your neural networks, but those layers still know what to throw away...
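As a back-of-envelope sanity check on the "terabytes a day" figure, here's a rough sketch; the ~10 Mbit/s per optic nerve is a commonly cited estimate, and the other numbers are assumptions for illustration:

    # Rough estimate of daily visual input volume. The ~10 Mbit/s
    # post-retina rate per optic nerve is a commonly cited estimate;
    # the raw photoreceptor stream before retinal compression is
    # orders of magnitude larger.
    OPTIC_NERVE_BITS_PER_SEC = 10e6   # assumed: ~10 Mbit/s per eye
    EYES = 2
    WAKING_SECONDS = 16 * 60 * 60     # assumed: ~16 waking hours

    bits = OPTIC_NERVE_BITS_PER_SEC * EYES * WAKING_SECONDS
    print(f"~{bits / 8 / 1e9:.0f} GB/day post-retina")   # ~144 GB/day

Even after the retina's aggressive compression, that's on the order of 100+ GB/day from vision alone; counting the raw photoreceptor signal and the other senses, terabytes a day is plausible.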

To get a digital computer on "equal" terms with the zillions of hacky optimizations your semi-analog brain uses, you need a shitton of raw power and data volume ("if you don't know what to throw away from the input data, you have to sift through all of it, and more") to compensate for not having N million years of evolution to devise similar hacky optimizations.

Also, humans work as a "network of agents", one that's also recurrent (aka "culture"). Current sub-human-level AI agents are far from any sort of reliable interop.

My guess is that we'll get human-level performance on AGI tasks when we learn to build swarms of AI agents that cooperate well and "model each other", and few people are working on this... Heck, when it happens it will probably be an "accident" of some IoT optimization thing, like, "oops, the worldwide network of XYZ industrial monitoring agents just reached sentience and human-level intelligence because that was the only way it could meet the energy-efficiency goals it was tasked to optimize for" :)
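For what "model each other" could mean in the simplest possible terms, here's a toy sketch (all names hypothetical; each agent keeps a crude predictive model of its peers and conditions its own action on that prediction; purely illustrative, not an AGI recipe):

    # Toy illustration of agents that "model each other": each agent
    # keeps a trivial model of its peers (assume a peer repeats its
    # last observed action) and uses it when choosing what to do.
    class Agent:
        def __init__(self, name):
            self.name = name
            self.peer_models = {}   # peer name -> predicted next action

        def observe(self, peer, action):
            self.peer_models[peer] = action  # update the peer model

        def act(self):
            # Coordinate with a modeled peer if we have one, else default.
            if self.peer_models:
                return next(iter(self.peer_models.values()))
            return "explore"

    a, b = Agent("a"), Agent("b")
    a.observe("b", "reduce-power-draw")
    print(a.act())   # "reduce-power-draw": a coordinates with its model of b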



> You severely underestimate the bandwidth of your eyes and ears and other senses

This is so common there's a term for it: https://en.m.wikipedia.org/wiki/Moravec%27s_paradox


> You severely underestimate the bandwidth of your eyes and ears and other senses

Sample size and record size are two different things. A lifetime of high-bandwidth sensory input is still one heavily correlated stream from one life: the records are huge, but n is small.



