One of the things I wonder about is whether “intelligence” can be linearly scaled or if it’s just a way of solving an optimization problem. In other words, humans have come pretty close to the peak of Mt. Smarts and therefore being 1000x as intelligent is more like the difference between 1 meter from the peak and a millimeter from the top. You’re both basically there.
In other words, maybe humans have basically solved the optimization problem for the environment we live in. At this point the only thing to compete on is speed and cost.
You don't see many von Neumanns walking around (or any, really), so there's probably still significant room to improve, especially with all the benefits of having intelligence neatly packaged in a computer.
Yeah, imagine spinning up 100 von Neumanns to attack a problem. They can all instantly share their thoughts & new skills, coordinate, choose new exploration directions, and spend decades developing new tools -- all within moments after pressing 'Enter'.
Even if our AI systems have only a minute fraction of von Neumann's intellect, we still have no idea what tomorrow will be like. I'm terrified and excited.
Even if all the computers can do is ask the right questions, and it still takes a big research project to find the answers, that would be an improvement in productivity.
I actually think it will come from the other direction: people will get better at asking questions, because there will be an automated tool that builds systems to answer larger problems than a single person could tackle quickly.
I don't think there is such a thing as general intelligence, there are only capabilities. What we call "general" intelligence is really just the set of capabilities that a human has, because we're self-centered.
If we had more intelligences around to compare with I think we'd find that some are "more intelligent" in that they have all of our capabilities, plus some. And that others are "less intelligent" in that we have all of the capabilities that they have, plus some. And then there would be the "differently intelligent" which have at least one capability that we don't and which lack at least one capability that we have.
Under this lens, I don't know if there's much utility in fine-grained comparisons of intelligence in terms of meters and millimeters. The space is discrete: subsets, not metrics.
I don't know if you could ever prove something like this (or maybe we just lack that capability). It seems more like an axiom-selecting notion than something to be argued. Anyhow, it's what my gut says.
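To make the "subsets, not metrics" framing concrete, here's a toy sketch in Python. The capability names and the compare() helper are invented purely for illustration; set containment stands in for "more", "less", and "differently" intelligent:

    # A toy version of "subsets, not metrics": compare intelligences by their
    # capability sets instead of a single score. Capability names are invented.
    human   = {"language", "planning", "tool_use", "theory_of_mind"}
    super_h = human | {"protein_folding_intuition"}        # everything we have, plus some
    animal  = {"planning", "tool_use"}                      # a strict subset of ours
    alien   = {"language", "planning", "magnetoreception"}  # overlapping but incomparable

    def compare(a, b):
        if a == b:
            return "equally capable"
        if a < b:                       # proper subset
            return "less capable"
        if a > b:                       # proper superset
            return "more capable"
        return "differently capable"    # neither contains the other: no ordering

    for name, other in [("super_h", super_h), ("animal", animal), ("alien", alien)]:
        print(f"{name} relative to human: {compare(other, human)}")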
I think it's an interesting thought. But for the sake of argument, can you name or imagine some examples of problems, or forms of problems, that we as humans cannot solve? Or cannot solve efficiently?
If you can find none, is that not proof that our intelligence is general?
The first things to come to mind are the traveling salesman problem (solvable in principle, but not efficiently at scale, assuming P ≠ NP) and the host of unsolved math problems which we suspect may be unsolvable-by-us.
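To make the "cannot solve efficiently" half of that concrete, here's a minimal brute-force sketch in Python (the city coordinates are made up). The search is correct, but it enumerates every tour, so it stops being feasible long before the question stops being well-defined:

    from itertools import permutations
    import math

    # Hypothetical city coordinates, purely for illustration.
    cities = [(0, 0), (2, 3), (5, 1), (6, 4), (1, 5), (4, 6)]

    def tour_length(order):
        """Length of the closed tour that visits the cities in the given order."""
        return sum(
            math.dist(cities[order[i]], cities[order[(i + 1) % len(order)]])
            for i in range(len(order))
        )

    # Exhaustive search: try every ordering of the cities after the first.
    # It's correct, but there are (n-1)! tours to check, so the runtime explodes:
    # 10 cities -> 362,880 tours; 20 cities -> ~1.2e17; 60 cities -> roughly as
    # many tours as there are atoms in the observable universe.
    best = min(
        permutations(range(1, len(cities))),
        key=lambda rest: tour_length((0,) + rest),
    )
    print("shortest tour:", (0,) + best, "length:", round(tour_length((0,) + best), 2))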
There are also problems of self-reference. A Turing machine may be able to solve the halting problem for pushdown automata, but it can't solve the halting problem for Turing machines. Whether or not we're as capable as Turing machines, there's a halting problem for us, and we can't solve it.
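The self-reference argument can be sketched in a few lines of Python. This assumes, purely for the sake of contradiction, a hypothetical oracle halts(f, x) that always answers correctly; the usual diagonal construction shows no such oracle can exist:

    # Suppose, for contradiction, someone hands us a total oracle halts(f, x)
    # that always correctly reports whether f(x) eventually halts.
    def halts(f, x) -> bool:
        ...  # assumed to exist; this is the hypothesis we'll contradict

    def diagonal(f):
        # Do the opposite of whatever the oracle predicts f does on itself.
        if halts(f, f):
            while True:   # oracle says "halts", so loop forever
                pass
        return "halted"   # oracle says "loops forever", so halt

    # Now consider diagonal(diagonal):
    #   if halts(diagonal, diagonal) is True,  diagonal(diagonal) loops forever;
    #   if halts(diagonal, diagonal) is False, diagonal(diagonal) halts.
    # Either way the oracle answered wrongly, so no such halts() can exist.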
I'm restricted to mathy spaces here because how else would you construct a well-defined question that you cannot answer? But I see no reason why there wouldn't be other perspectives that we're incapable of accessing; it's just that in those cases the ability to construct the question is just as out of reach as the ability to answer it.
You may have heard talk about known unknowns and unknown unknowns, but there are also known unknowables and unknown unknowables, and maybe even unknowable unknowables (I go into this in greater detail here: https://github.com/MatrixManAtYrService/Righting/blob/master...).
In any case, I don't think it's ever valid to take one's inability to find examples as proof of something unless you can also prove that the search was exhaustive.
Instead of AGI, we should call it AHI: artificial human intelligence, or SHI: super human intelligence. That would be much clearer and would sidestep the generality issue.
We already have technology that beats anything we would ever be capable of doing, and does it almost instantaneously.
If you want a concrete example, astronomical image processing would be one: impossible for humans without AI.
By that same logic, if we invent AGI and it then solves a problem for us, does that count for humanity? (And of course it does, but here we're talking about something humans wouldn't solve without AGI.)
Yes, we did, but there's a difference between delegating a task (asking a computer to do it) and executing the task (running the calculation). Otherwise you might as well say humans can run 40 mph because we can ride horses.
Also, no one person invented the calculator. The calculator is the culmination of hundreds or even thousands of years of invention. It’s not like the knowledge or creativity is in each of our brains and we could each build a calculator given the requisite materials. It took thousands of lifetimes of ingenuity. So there’s another answer to your question of things we aren’t efficient at solving: building a calculator.