
> It's something a clever fourth-grader would write.

This level of cope and denial is amazing to witness.

The most powerful (multi-trillion-dollar) companies on the planet are pouring practically infinite resources into developing systems that will ultimately make you redundant.

An early version of AGI is staring you in the face while you call it a "fourth-grader". It won't stay in fourth grade forever.



I don't think I'm particularly in denial about the prospects of AI. I think it's going to be hugely disruptive and could possibly put me out of a job.

But I'd like to posit a hypothetical counterpoint, just to get you thinking. So far, all of the work on AGI has been the result of brute forcing. We've tried to develop a structural understanding of how the human brain works, and we've failed. So we've fallen back to torturing circuits into reorienting themselves into compression algorithms for human knowledge. The mechanisms these tortured circuits use to do so, the structures they produce in N-dimensional space to embody that knowledge -- we have very little understanding of how any of it actually works under the hood.
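
To make "brute forcing" concrete, here's a toy sketch of the kind of loop I mean -- my own illustration in plain numpy, nothing like a production training stack. Gradient descent squeezes data through a bottleneck until the weights happen to encode a compression scheme, and nothing in the loop tells you which structure they found:

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(256, 8))            # toy "knowledge" to compress
    W1 = rng.normal(scale=0.1, size=(8, 2))  # encoder: 8 dims -> 2 dims
    W2 = rng.normal(scale=0.1, size=(2, 8))  # decoder: 2 dims -> 8 dims

    lr = 0.5
    for step in range(5000):
        Z = X @ W1                  # low-dimensional code
        X_hat = Z @ W2              # reconstruction
        err = X_hat - X
        grad = 2 * err / err.size   # d(mean squared error) / d(X_hat)
        dW2 = Z.T @ grad
        dW1 = X.T @ (grad @ W2.T)
        W2 -= lr * dW2
        W1 -= lr * dW1

    # The weights now embody a compression of X, but the loop itself gives
    # no account of which structure they settled on.
    print(((X @ W1 @ W2 - X) ** 2).mean())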

I think a lot of the grandiose hypotheses about the future of AGI emerging from this avenue of invention are overly optimistic. Why are we so confident that this brute-force approach will continue to bear fruit for us? At what point will it overcome the long tail of inadequacy that it's currently exhibiting?

The 20th century bears several notable examples of would-be-transformative technologies that have since stalled, and failed to live up to their promise. Nuclear power. Space travel. Industries buckling under the weight of their own complexity, suffering from the human inability to keep the emergent externalities in check. Why would AI be any different?

I predict a future where increasing global hardship, conflict and scarcity renders the current type of energy-intensive AI approaches infeasible.


>So far, all of the work on AGI has been the result of brute forcing. We've tried to develop a structural understanding of how the human brain works, and we've failed. So we've fallen back to torturing circuits into reorienting themselves into compression algorithms for human knowledge. The mechanisms these tortured circuits use to do so, the structures they produce in N-dimensional space to embody that knowledge -- we have very little understanding of how any of it actually works under the hood.

And this is the way. (Machine) learning theory is in some way a meta-science about how to do science from facts in order to construct theories that effectively explain those facts. What you are asking for will never amount to a short set of equations. There is no elegant theory of how to perceive numbers, and this is why symbolic artificial perception, rule engines, spam detection, RDF ontologies, etc. never took off. You're idealizing knowledge as a set of representations without ever reifying how those representations come into existence. We're departing a world of representation toward a world driven by "incarnations": you can't make sense of how a brain works without the help of another brain, and this is why there is so much being researched at the intersection of deep learning and neuroscience. I'd even go as far as to say this is in fact how brains work: they can be composed and decomposed monoidally.

In short:

>a structural understanding

There is no such thing

>the structures they produced in N-dimensional space [...] this brute force approach

This is a contradiction. I'm not saying there won't be "structural insights along the way", nor that throwing categories into the machine learning mix won't be useful, but the learning-like aspect you denote by "brute force" is more fundamental, and in some ways sits above the very process of science.


That's all very well and good from a theoretical, scientific perspective. But we're hooking these things up to real-world applications that often call for deterministic, structural understanding of their inner workings for safety reasons.


Part of me hopes this is true, that AGI (or even worse - ASI) will never be fully realized. Too disruptive.

A counterexample to nuclear power or space travel is the integrated circuit. This technology has transformed our society, and we haven't reached the end of it yet.

Our own brains are living proof that intelligence is possible at far lower power consumption. I watched a recent lecture by Geoffrey Hinton where he mentioned that future AI hardware based on analog integrated circuits could reduce power consumption by orders of magnitude [1].
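
For a sense of the gap he's pointing at, here's my own back-of-envelope -- rough, commonly cited orders of magnitude, not figures from the talk:

    brain_watts = 20        # commonly cited estimate for a human brain
    gpu_watts = 700         # roughly one modern datacenter accelerator
    cluster_gpus = 10_000   # order of magnitude for a large training run

    # A big training cluster draws several orders of magnitude more power
    # than the one working example of general intelligence we have.
    print(cluster_gpus * gpu_watts / brain_watts)   # ~350,000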

It is possible that we will hit a wall and never achieve anything more than ChatGPT++++, but the smartest people in town mostly believe that we will create machines that exceed human intelligence and capability.

We have some understanding of how neural networks work under the hood. The scale of the current models is too vast to comprehend in their specific details, but I think we understand them in principle.
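
To put numbers on "too vast to comprehend in their specific details": a standard decoder-only transformer has roughly 12 * layers * d_model^2 weights plus an embedding table, so a GPT-3-sized configuration (my rough sketch below, not anyone's official accounting) already lands around 10^11 individual parameters:

    def approx_params(layers, d_model, vocab):
        # 4*d^2 for attention (Q, K, V, output) plus 8*d^2 for the MLP
        # block, per layer, plus the token-embedding matrix.
        return 12 * layers * d_model ** 2 + vocab * d_model

    print(approx_params(layers=96, d_model=12288, vocab=50_000))  # ~1.75e11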

[1] Prof. Geoffrey Hinton - "Will digital intelligence replace biological intelligence?" https://www.youtube.com/watch?v=N1TEjTeQeg0



