
At this rate, I have no idea what the state of things will be even 6 months down the line.



We’re rapidly approaching problems (AP Calculus BC, etc.) that are within the same order of magnitude of difficulty as “design and implement a practical self-improving AI architecture”.

Endless glib comments in this thread. We don’t know when the above prompt will lead to takeoff. It could be soon.


And funnily enough, with the AI community’s dedication to research publications being open access, it has all the content it needs to learn this capability.

“But how did skynet learn to build itself?”

“We showed it how.”


Since when was AP Calculus BC on the same order of magnitude as "design and implement a practical self-improving AI architecture"?


Assuming the range of intelligence spanning all the humans that can pass Calculus BC is narrow on the scale of all possible intelligences.

It’s a guess, of course. But the requisite concepts for getting Transformers working are not much broader than calculus and a bit of programming.
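
(For reference: the core of a Transformer is scaled dot-product attention, which a student who passed Calculus BC could read:

  Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V

i.e. matrix multiplies, a softmax, and a square root, wrapped in feed-forward layers and trained by gradient descent.)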


Since when was "design and implement a practical self-improving AI architecture" on the same level as knowing "the requisite concepts for getting Transformers working"?


This is such garbage logic. The semantics of that comment are irrelevant. Creating and testing AI node structures is well within the same ballpark. Even if it wasn't, the entire insinuation of your comment is that the creation of AI is a task that is too hard for AI, or for any AI we can create anytime soon -- a refutation of the feedback hypothesis. Well, that's completely wrong, on all levels.


Sorry, what is the "feedback hypothesis"? Also, despite my use of quotes, I'm not arguing about semantics.


We can't predict what is coming. I think it will probably end up making the experience of being a human worse, but I can't avert my eyes. Some amazing stuff has come, and will continue to come, from this direction of research.


I passed Calculus BC almost 20 years ago. All this time I could have been designing and implementing a practical self-improving AI architecture? I must really be slacking.


In the broad space of all possible intelligences, those capable of passing calc BC and those capable of building a self-improving AI architecture might not be that far apart.


Hey, I'm very concerned about AI and AGI, and it is so refreshing to read your comments. Over the years I have worried about and warned people about AI, but there are astonishingly few people to be found who actually think something should be done, or even that anything is wrong. I believe that humanity stands a very good chance of saving itself through very simple measures. I believe, and I hope that you believe, that even if the best chance we had at saving ourselves was 1%, we should go ahead and at least try.

In light of all this, I would very much like to stay in contact with you. I've connected with one other HN user so far (jjlustig) and I hope to connect with more, so that together we can effect political change around this important issue. I've formed a Twitter account to do this, @stop_AGI. Whether or not you choose to connect, please do reach out to your state and national legislators (if in the US) and convey your concern about AI. It will be more valuable than you know.


That's a pretty unfair comparison. We know the answers to the problems in AP Calculus BC, whereas we don't even yet know whether answers are possible for a self-improving AI, let alone what they are.


> Endless glib comments in this thread.

Either the comments are glib and preposterous, or they are reasonable and enlightening. I guess they are neither, but our narrow-mindedness makes it so?


A few hundred people on Metaculus are predicting weakly general AI to be first known around September 2027: https://www.metaculus.com/questions/3479/date-weakly-general...


Hopefully a fully open-source LLM that can be run on consumer hardware, like Stable Diffusion.

Yeah, I know about LLaMA, but as I understand it, it's not exactly legal to use and share it.
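
Something in the spirit of the sketch below is the dream: the whole pipeline running locally, no API key, no licensing questions. This uses the Hugging Face transformers API; the model name is a placeholder for whatever genuinely open checkpoint eventually fills the niche.

  from transformers import AutoModelForCausalLM, AutoTokenizer

  # Hypothetical model name; substitute a genuinely open, redistributable checkpoint.
  model_name = "some-org/truly-open-llm-7b"
  tokenizer = AutoTokenizer.from_pretrained(model_name)
  model = AutoModelForCausalLM.from_pretrained(model_name)

  # Generate locally on consumer hardware, the way Stable Diffusion does for images.
  inputs = tokenizer("The future of open models is", return_tensors="pt")
  outputs = model.generate(**inputs, max_new_tokens=50)
  print(tokenizer.decode(outputs[0], skip_special_tokens=True))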


> Yeah, I know about LLaMA, but as I understand it, it's not exactly legal to use and share it.

For anyone keeping track, this is when you update your cyberpunk dystopia checklist to mark off "hackers are running illegal AIs to compete with corporations".


For the record, I’ve saved the first comment where I mistook a real person’s comment for an LLM, and it was not today.


Dear AI gods, all I want for this Christmas is this.


Fortunately, within 9 months I expect we'll get this for Christmas.


Note that GPT-3 is 2.5 years old (counting from the beta), and that from what is publicly known, GPT-4 was already in development in 2021.


Singularity /s


Singularity no /s

Somewhere in the range of 6 months to 6 years.

Where singularity = something advanced enough comes along that we can't understand, predict, or keep up with it, because it's so far beyond us and changing far faster than our ape brains can perceive, and (hopefully) it brings us along for the ride.

No promises it'll be evenly distributed though.


By that definition, I wonder if we've already surpassed that point. Things on the horizon certainly feel hazier to me, at least. I think a lot of people were surprised by the effectiveness of the various GPTs, for example. And even hard science fiction is kinda broken: humans piloting spaceships seems highly unlikely, right? But it's a common occurrence there.


When we’ve surpassed that point you’ll likely know it, unless the Master(s) is/are either malicious or covert for benevolent reasons.


I would imagine that large language models will plateau like smartphones did, until a next step happens that unlocks something bigger.


The idea is that eventually we build something that, when it plateaus, builds its own successor. That’s the singularity: when the thing in question builds its successor and that builds its successor and this happens far outside our ability to understand or keep up.

Can GPT-9 build GPT-10, with zero human input?

I’d give 50/50 odds it can.

Can GPT-15 build something that isn’t a large language model and is far superior in every way?

I’d give 50/50 odds it can.

Can both of the above steps happen within one trip around the sun of each other?

I’d give 50/50 odds they can.

Because at some point these models won’t need humans to interact with them. Humans are very slow; that’s the bottleneck.

They’ll simply interact with their own previous iterations or with custom-instantiated training models they design themselves. No more human-perceptible timescale bottlenecks.


I would wager that GPT-6 or 7 will become sufficiently capable to drive an independent agenda and evolve, for instance, into a cybercrime gang.

50/50 chance of Skynet.


50/50 are not good odds for Homo sapiens, not good at all.


Well, for Homo sapiens the odds are probably a hundredth or a thousandth of that.

It’s 50/50 that in 150 years some version of our descendants will exist, i.e. something that you can trace a direct line from Homo sapiens to. Say a Homo sapiens in a different substrate, like “human on a chip”.

The thing is, if you can get “human on a chip”, then you can probably also get “something different and better than a human on a chip”, so why bother?

By the 24th century there’ll be no Homo sapiens Captain Picard exploring the quadrant in a gigantic ship that needs chairs, view screens, artificial gravity, oxygen, toilets and a bar. That’s an unlikely future for our species.

More likely whatever replaces the thing that replaces the thing that replaced us won’t know or care about us, much less need or want us around.


I honestly don't think it will be quite like that, at least not terribly soon. There is so much work being done to hook LLMs up to external sources of data, let them build longer-term memories of their interactions, and so on. Each of these areas is going to have massive room for competing solutions, and even more room for optimization.
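
To make that concrete, most of the "long-term memory" work reduces to some variant of the retrieval pattern sketched below: embed past interactions, pull back the most similar ones, and prepend them to the prompt. This is a toy illustration; real systems use learned embeddings and a vector database.

  import math
  from collections import Counter

  memory = []  # (embedding, text) pairs; grows with every interaction

  def embed(text):
      # Toy bag-of-words "embedding"; real systems use a neural encoder.
      return Counter(text.lower().split())

  def cosine(a, b):
      # Cosine similarity between two sparse word-count vectors.
      dot = sum(a[w] * b[w] for w in a)
      norm_a = math.sqrt(sum(v * v for v in a.values()))
      norm_b = math.sqrt(sum(v * v for v in b.values()))
      return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

  def remember(text):
      memory.append((embed(text), text))

  def recall(query, k=3):
      # Return the k stored interactions most similar to the query.
      q = embed(query)
      ranked = sorted(memory, key=lambda item: cosine(q, item[0]), reverse=True)
      return [text for _, text in ranked[:k]]

  def build_prompt(user_input):
      # Retrieved history rides along with every new request.
      context = "\n".join(recall(user_input))
      return f"Relevant history:\n{context}\n\nUser: {user_input}"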


Ray Kurzweil predicted in 1999 that all of this would happen roughly now-ish, with 2029 being when something passes a hard version of the Turing test.

He was an uninformed crackpot with a poor understanding of statistics. And then less so. And then less so.

Something passing the Turing test 6 months to 6 years from now? Lunacy.

But give it 6 months and talk to GPT-5 or 6, and then this might seem a lot more reasonable.


> He was an uninformed crackpot with a poor understanding of statistics.

There's a lot you can say about Kurzweil being inaccurate in his predictions, but that is way too demeaning. Here's what Wikipedia has to say about him and the accolades he received:

Kurzweil received the 1999 National Medal of Technology and Innovation, the United States' highest honor in technology, from then President Bill Clinton in a White House ceremony. He was the recipient of the $500,000 Lemelson-MIT Prize for 2001. He was elected a member of the National Academy of Engineering in 2001 for the application of technology to improve human-machine communication. In 2002 he was inducted into the National Inventors Hall of Fame, established by the U.S. Patent Office. He has received 21 honorary doctorates, and honors from three U.S. presidents. The Public Broadcasting Service (PBS) included Kurzweil as one of 16 "revolutionaries who made America" along with other inventors of the past two centuries. Inc. magazine ranked him No. 8 among the "most fascinating" entrepreneurs in the United States and called him "Edison's rightful heir".

https://en.wikipedia.org/wiki/Ray_Kurzweil


I’ve been a Kurzweil supporter since high school, but to the wider world he was a crackpot (an inventor who should have stuck to his lane) who had made a couple of randomly lucky predictions.

He wasn’t taken seriously, especially not when he painted a future of spiritual machines.

Recently on the Lex Fridman podcast he himself said as much: his predictions seemed impossible and practically religious in the late 90s and up until fairly recently, but now experts in the field are lowering their projections every year for when the Turing test will be passed.

Half of their projections are now coming in line with the guy they had dismissed for so long, and every year this gap narrows.


That would be my response but without the /s. Of course, depending on the definition it can always be said to be "happening", but to me it feels like the angle of the curve is finally over 45 degrees.



