We’re rapidly approaching problems (AP Calculus BC, etc.) that are within the same order of magnitude of difficulty as “design and implement a practical self-improving AI architecture”.
Endless glib comments in this thread. We don’t know when the above prompt leads to takeoff. It could be soon.
And funnily enough, with the AI community’s dedication to research publications being open access, it has all the content it needs to learn this capability.
Since when was "design and implement a practical self-improving AI architecture" on the same level as knowing "the requisite concepts for getting Transformers working"?
This is such garbage logic. The semantics of that comment are irrelevant. Creating and testing AI node structures is well within the same ballpark. Even if it weren't, the entire insinuation of your comment is that the creation of AI is a task that is too hard for AI, or for any AI we can create anytime soon, which is a refutation of the feedback hypothesis. Well, that's completely wrong. On all levels.
We can't predict what is coming. I think it probably ends up making the experience of being a human worse, but I can't avert my eyes. Some amazing stuff has and will continue to come from this direction of research.
I passed Calculus BC almost 20 years ago. All this time I could have been designing and implementing a practical self-improving AI architecture? I must really be slacking.
In the broad space of all possible intelligences, those capable of passing calc BC and those capable of building a self-improving AI architecture might not be that far apart.
Hey, I'm very concerned about AI and AGI, and it is so refreshing to read your comments. Over the years I have worried about and warned people about AI, but there are astonishingly few people to be found who actually think something should be done, or even that anything is wrong. I believe that humanity stands a very good chance of saving itself through very simple measures. I believe, and I hope that you believe, that even if the best chance we had at saving ourselves was 1%, we should go ahead and at least try.
In light of all this, I would very much like to stay in contact with you. I've connected with one other HN user so far (jjlustig) and I hope to connect with more, so that together we can effect political change around this important issue. I've created a Twitter account for this, @stop_AGI. Whether or not you choose to connect, please do reach out to your state and national legislators (if in the US) and convey your concern about AI. It will be more valuable than you know.
That's a pretty unfair comparison. We know the answers to the problems in AP Calculus BC, whereas we don't even yet know whether answers are possible for a self-improving AI, let alone what they are.
> Yeah, I know about LLaMA, but as I understand it, it's not exactly legal to use and share it.
For anyone keeping track, this is when you update your cyberpunk dystopia checklist to mark off "hackers are running illegal AIs to compete with corporations".
Where singularity = something advanced enough comes along that we can't understand or predict or keep up with it, because it's so far beyond us and changing so much faster than our ape brains can perceive, and (hopefully) it brings us along for the ride.
By that definition, I wonder if we've already surpassed that point. Things on the horizon certainly feel hazier to me, at least. I think a lot of people were surprised by the effectiveness of the various GPTs, for example. And even hard science fiction is kinda broken: humans piloting spaceships seems highly unlikely, right? But it's a common occurrence there.
The idea is that eventually we build something that, when it plateaus, builds its own successor. That’s the singularity: when the thing in question builds its successor and that builds its successor and this happens far outside our ability to understand or keep up.
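To make that loop concrete, here's a toy sketch of the recursion being described. Everything in it (the Model class, train_until_plateau, design_successor) is hypothetical; it's just the shape of the argument, not anything that exists.

```python
# Toy illustration of the recursive-successor loop described above.
# All names here are hypothetical; nothing like this API exists today.

class Model:
    def __init__(self, generation: int, capability: float):
        self.generation = generation
        self.capability = capability

    def train_until_plateau(self) -> None:
        # Stand-in for "improve until returns diminish".
        self.capability *= 1.5

    def design_successor(self) -> "Model":
        # The hypothesized step: the plateaued model builds the next one.
        return Model(self.generation + 1, self.capability)

model = Model(generation=0, capability=1.0)
for _ in range(10):  # humans drop out of this loop entirely
    model.train_until_plateau()
    model = model.design_successor()
    print(model.generation, model.capability)
```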
Can GPT9 build GPT10, with zero human input?
I’d give 50/50 odds it can.
Can GPT15 build something that isn’t a large language model and is far superior in every way?
I’d give 50/50 odds it can.
Can both the above steps happen within one solar rotation of each other?
I’d give 50/50 odds they can.
Because at some point these models won’t need humans to interact with them. Humans are very slow; that’s the bottleneck.
They’ll simply interact with their own previous iterations or with custom-instantiated training models they design themselves. No more human-perceptible timescale bottlenecks.
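For what it's worth, if you take those three 50/50 calls at face value and assume they're independent (an assumption the comment doesn't make explicit), the compound odds are a quick calculation:

```python
# Odds that all three 50/50 steps happen, assuming independence.
p_step = 0.5
p_all = p_step ** 3  # GPT10 with zero human input, a superior non-LLM
                     # successor, and both within the stated window
print(p_all)         # 0.125, i.e. roughly 1-in-8
```

So under those numbers the scenario as a whole sits at roughly 1 in 8.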
Well for Homo sapiens the odds are probably a hundredth or a thousandth of that.
It’s 50/50 that in 150 years some version of our descendants will exist, i.e. something that you can trace a direct line from Homo sapiens to. Say a Homo sapiens in a different substrate, like “human on a chip”.
The thing is if you can get “human on a chip” then you probably also can get “something different and better than human on a chip”, so why bother.
By the 24th century there’ll be no Homo sapiens Captain Picard exploring the quadrant in a gigantic ship that needs chairs, view screens, artificial gravity, oxygen, toilets and a bar. That’s an unlikely future for our species.
More likely whatever replaces the thing that replaces the thing that replaced us won’t know or care about us, much less need or want us around.
I honestly don't think it will be quite like that, at least not terribly soon. There is so much work being done to hook up LLMs to external sources of data, allow them to build longer-term memories of interactions, etc. Each of these areas is going to have massive room to implement competing solutions, and even more room for optimization.
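To be concrete about the "longer-term memories" part: the simplest version of that idea is embedding past interactions, retrieving the nearest ones, and stuffing them back into the prompt. A minimal sketch, with a deliberately toy embed() standing in for a real embedding model (everything here is hypothetical):

```python
# Minimal sketch of an embedding-based memory layer: store past
# interactions as vectors, retrieve the closest ones, and prepend
# them to the next prompt. embed() is a trivial stand-in so this
# runs as-is; a real system would call an embedding model.
import math

def embed(text: str) -> list[float]:
    # Toy bag-of-characters embedding.
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a)) or 1.0
    nb = math.sqrt(sum(x * x for x in b)) or 1.0
    return dot / (na * nb)

memory: list[tuple[list[float], str]] = []

def remember(text: str) -> None:
    memory.append((embed(text), text))

def recall(query: str, k: int = 2) -> list[str]:
    q = embed(query)
    ranked = sorted(memory, key=lambda m: cosine(q, m[0]), reverse=True)
    return [text for _, text in ranked[:k]]

remember("user prefers concise answers")
remember("user is building a Rust CLI tool")
print(recall("how should I answer this user?"))
```

Real systems swap embed() for an actual model and the list for a vector store, but the loop (embed, store, retrieve, prepend) is the same shape, and every piece of it has room for competing implementations.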
> He was an uninformed crackpot with a poor understanding of statistics.
There's a lot you can say about Kurzweil being inaccurate in his predictions, but that is way too demeaning. Here's what Wikipedia has to say about him and the accolades he received:
Kurzweil received the 1999 National Medal of Technology and Innovation, the United States' highest honor in technology, from then President Bill Clinton in a White House ceremony. He was the recipient of the $500,000 Lemelson-MIT Prize for 2001. He was elected a member of the National Academy of Engineering in 2001 for the application of technology to improve human-machine communication. In 2002 he was inducted into the National Inventors Hall of Fame, established by the U.S. Patent Office. He has received 21 honorary doctorates, and honors from three U.S. presidents. The Public Broadcasting Service (PBS) included Kurzweil as one of 16 "revolutionaries who made America" along with other inventors of the past two centuries. Inc. magazine ranked him No. 8 among the "most fascinating" entrepreneurs in the United States and called him "Edison's rightful heir".
I’ve been a Kurzweil supporter since high school, but to the wider world he was a crackpot (an inventor who should have stuck to his lane) who had made a couple of randomly lucky predictions.
He wasn’t taken seriously, especially not when he painted a future of spiritual machines.
Recently, on the Lex Fridman podcast, he himself said as much: his predictions seemed impossible and practically religious from the late '90s up until fairly recently, but now experts in the field are lowering their projections every year for when the Turing test will be passed.
Half of their projections are now coming in line with the guy they had dismissed for so long, and every year this gap narrows.
That would be my response but without the /s. Of course, depending on the definition it can always be said to be "happening", but to me it feels like the angle of the curve is finally over 45 degrees.