
I don't understand how most of the comments here seem to be along the lines of "these interview questions are useless now" or "we need to rethink education, it hasn't kept up". These all seem absurdly myopic to me.

What we're seeing is the first instance, still very limited and imperfect, of AGI. This is not going to make some interview questions obsolete, or give students more tools to cheat with their homework. It is effectively proving that acquiring knowledge and professional skills is becoming useless, for good. In a few years (3, 5, 10 at most) this is going to defeat the entire purpose of hiring, and therefore of completing most forms of professional education. At the current rate of progress, most intellectual professions will be obsolete before today's sixth-graders are ready to enter the job market.

I can't even picture a functional world where humans are cut out of most professions that don't involve manual work; where any amount of acquired knowledge and skills will be surpassed by machines that can produce better results at a thousandth of the cost in a thousandth of the time a human can. And even if such a world can function, I can't imagine a smooth transition to that world from its current state.

I'm worried, does it show? :)



I agree 1000000%. Watching all the art AIs change so radically fast. Watching ChatGPT being adopted by so many people at my company. Watching people say art will shift to prompt engineering, then watching prompt engineering get automated away already! Watching the rise of Stable Diffusion tooling like InvokeAI, ControlNet, and more. Watching YouTube videos of people integrating OpenAI and Stable Diffusion together. Hearing my old university has created an emergency council to deal with AI and its implications for education and students. Seeing artists on Twitter say their commissions have dropped dramatically. Hearing stories that 4channers have been trying to steal anime-creation AIs. Reading articles that ChatGPT passed Wharton MBA tests and various lawyer exams. An article on Hacker News TODAY talks about running these giant models on a home PC!

The speed of AI adoption, its immediate practical usefulness, and the ever-accelerating pace of innovation around it make me extremely nervous about the future. Extremely nervous.

This is a global paradigm shift.


It is a paradigm shift and whoever happens to be on the good side of this may survive just a bit longer before they too become redundant.


Do we know if anything, other than luck, will determine who's on which side? Or, if any decisive action on anybody's part is more likely to pay off than just staying mentally flexible, watching, and waiting?


This take is widely prevalent in every single ChatGPT discussion.

I'll quote myself from the discussion of a previous article: https://news.ycombinator.com/item?id=34746348

> I've found two groups of people on this subject: the first has taken a graduate-level stats course and/or ML course, and may have work experience in machine learning/data science. The second camp is more numerous and has not done those things.

> The latter group is far more hyped about ChatGPT et al, despite explanations from the first group.

> Don't get me wrong, ChatGPT is very exciting, just not in the way it is frequently portrayed to be. In particular, development on this model will not lead to general intelligence, which is not a data/training/statistics problem at all. With how the ML field is shaping up today, it doesn't even look to be a machine learning problem.

And let me just add a ridiculous comparison, since you are assuming that it is a path towards AGI:

> The iPhone 4 isn't fusion energy. It's incredibly useful and a tectonic shift in the cellphone industry, but it's not producing energy. The analogy might sound completely insane, but that's how different today's machine learning and general intelligence are.

While the AI of today feels like a black box, at every stage of its training we know to a degree what it is doing, and once trained, it is as deterministic as a few billion interconnected if/else statements wired to a random seed generator. It can do a lot of things; acquiring a mind of its own isn't one of them.

I wouldn't be worried just yet, because AGI is something researchers barely have an idea of how it would look outside of our brains, or even inside them. I'm not saying that it will never come, just that I highly doubt it will be invented in my lifetime. In the meantime, I have something that none of today's bots have: initiative.


AGI is defined as human-level performance across a wide variety of tasks. ChatGPT achieves that. It's somewhat stupid and limited (no vision, hearing, manipulation), but still, in principle, it's AGI.


I don't want to argue about the semantics or definition of AGI, since that is a rabbit hole that I don't believe contributes anything to the subjective danger people are feeling.

The only thing I want to say about ChatGPT and other LLM models is that they don't know what they are doing. Give me something that knows what it's doing, and perhaps more importantly, what it wants to do. Then I'll acknowledge my personal obsolescence, if not at that point then in quick succession. But until then.


Right now everything is in such a grey area that we could take this discussion in so many ways. Writing it off because it doesn't mean what it says misses the point, in my opinion. The point is that anyone who knows how to write the correct questions can work with ChatGPT to develop things they couldn't before, improve their own writing, and potentially replace their need to consult others, for a quick list.

You are the one saying that belief behind the writing matters. What if the audience decides to upvote ChatGPT instead of you? Who says conviction is essential?

I would also contest you by suggesting that the fearful ones are the ones who feel a need to downplay the threat, or, to rephrase it, the advanced usefulness of current, let alone near-future, versions.

And I realize ChatGPT could have written this post quicker than I did, including a version that contains my typo patterns, as well as a more concise and grammatically improved version...


It is unsurprising to find wisdom in 570GB of human-generated text.

It is unsurprising to find useful information being returned when a statistical process is used to extract value from that text. It's a similar process to how search engines work, except more costly and with more natural-sounding results.

However, if you choose to believe that the above process signifies the development of artificial intelligence that will start to have its own consciousness, then good for you.

Again, I'm not saying ChatGPT is useless or won't replace many jobs; it is amazing in its own right. It's like a refined and more useful Google, which is huge. I'm only arguing that it is not AGI and will not develop into AGI; attempting to argue for its usefulness in other areas is not arguing against me - no strawman please. As for myself, I'm not too worried for my job from this angle (there are plenty of other angles to worry about, that is), for as with all tooling that develops, it benefits the people who can best put the said tool to work.


It is not in any sense whatsoever AGI. AGI is an unsolved problem. You are defining it to be something it’s not. I don’t know where this idea comes from.


The big question I'd ask is "do we need AGI?"

Surely an LLM could take over the ordering flow at say, Jack in the Box today. That's gotta be true for a lot of different situations.


I don’t think ChatGPT is AGI, nor do I think LLMs can be AGI, but I don’t see why any person with any sort of reasonable value system and understanding of reality would think we need AGI; of course, I’m not convinced it’s even possible within a Turing Machine framework. Someone else said “it’s just another type of automation.” No, it’s fundamentally different from any other kind of automation. It would eliminate the economics of knowledge work if left unchecked, and probably many other areas of work. I don’t think plumbers would be called the “rich” ones anymore if that sort of work is all that’s left. In fact, it could be much worse than that. This would have a profoundly deleterious effect on society. The economic conditions that cause unrest and violence outside of the West would probably become the norm throughout the world. Some people call that something like “post scarcity”; I think it is the new version of “perpetual motion”.


I see this as another step in automation, like the sewing machine, the internal combustion engine, the microchip, internet, etc.


I'm not happy to say this, but to these kinds of posts I say the truth:

An old version of ChatGPT could have written a better version of your post than you in 2-5 seconds.


Of course, except it said so because the training weights aligned on previously learned material, or you led it to say so by tweaking those probabilities with your input.

You can also easily make it say the complete opposite, which you'll have a hard time doing with me, because I actually meant what I said.

In fact, the next iteration of ChatGPT might have used my very words right here to construct its response.


But from what I’ve seen, many skilled AI researchers who have even reimplemented these models themselves do in fact believe that a scaled-up ChatGPT could be AGI, or at least that it’s a step towards AGI.


> many skilled

citation needed, I'm only aware of one case which might be characterized as such (though I digress): https://www.engadget.com/blake-lemoide-fired-google-lamda-se...


>What we're seeing is the first instance, still very limited and imperfect, of AGI

Absolutely not. Language-model text generation is actually about as non-general as it gets -- these models are fundamentally incapable of understanding anything at all, ever. They can't do math, work through basic logic problems, or produce any output that isn't just an assumed logical continuation of the input.


People seem so convinced of this and I just don't get it. I'm seeing this comment through my eyeballs, generating some pertinent text in my brain, and outputting it back out. But so many people seem convinced this process is something radically, fundamentally, irreducibly different from what ChatGPT is doing internally, and I don't get why.

Is it because I have a consciousness with an internal narrative and ChatGPT does not? Because that seems like more of a result of how we've wired up ChatGPT to operate than a fundamental structural difference; nothing stopping us from making ChatGPT talk to itself in its brain to generate synthesis.


LLMs (and deep learning in general) are to AGI what bogosort is to sorting. For some reason beyond me, people think it's very important not to try to understand anything about the structure of the problem you're trying to solve and to just make a very general algorithm which will be suboptimal in just about every way except for the generality of its code.

Seeing the world as a trillion dimensional token soup is definitely quite general and at the same time very very weak in terms of expressivity.


Sure, I think I buy this interpretation. As long as we can agree that bogosort and quicksort are both still sorting algorithms. My brain definitely has some structures for understanding the world that are more immediately useful than trillion-dimensional token soup.

But I'm also not convinced that it's impossible those structures could be successfully emulated by quadrillion dimensional token soup. And a lot of folks seem to be convinced that it's some kind of fundamental impossibility.


No other approach has worked as well for natural language, and not for lack of trying.


It's because you and I also have mental resources that are structured very differently from ChatGPT. Even if some part of our brain might resemble an LLM, an LLM is a very poor representation of other parts of the brain. If you take ChatGPT too far, it falls into repetition and demonstrates that it obviously has no comprehension of the material it's remixing.

Maybe some combination of an LLM and other mental machinery would result in AGI with real comprehension.


I dunno - I hear what you're saying about how it seems to lose track of things over long conversations. But part of the mental machinery that it lacks is the ability to learn from those conversations and insert new ideas into its base model; every new conversation starts fresh. It has no long-term storage; it can't "keep notes" for itself, and we don't give it the ability to alter its fundamental state based on new input after we're done feeding it training data.

I guess we're mostly agreeing, because those things are probably part of the "other mental machinery" we'd want to provide it, but I'd push back a little on "real comprehension". It kinda seems to me like the "comprehension engine" is working just fine, it's the structure we've built around that comprehension engine that's limiting it right now.


It's because whatever you and I are doing, it's not simply statistical analysis. That is all that ChatGPT is doing.

It may be possible to create a machine that flirts with actual intelligence, but this is simply not it. There's not even room for doubt about this.


I can't claim to understand everything my brain is doing, but accepting input and filtering it through a bunch of neuron chains to result in some kind of output sounds like "statistical analysis" to me.

You seem convinced that ChatGPT will never have "actual intelligence" — care to make a prediction about something that LLMs will never accomplish? We know they can write code, write essays, generate artwork, and play chess. What's a task that requires "actual intelligence"? Parenting a child? Running for President? Making a steak sandwich?


An LLM will never be able to produce something that cannot be statistically derived from its training data.

So... creating a new language, maybe?


https://maximumeffort.substack.com/p/i-taught-chatgpt-to-inv...

My bet would be that in ten years GPT would get pretty good at it, but eh, I dunno.


I see where you are going with this about it not being able to come up with original things. But I think that falls apart when you realize almost nothing humans do is original; everything is just building on other things.


The vast majority of what people do is this, yes. Just like how most of the time, we're all running on autopilot and our behaviors are pretty much just following scripts. "Meat robots".

But it's not 100%.


ChatGPT's attempt: https://gist.github.com/iameli/d9a5b715ec9baa5b11063888e054d...

Are those two responses enough to say it's invented a language? Probably not. But it's already farther along than I would have gotten if you asked me to do such a thing.

If I kept prompting it for hours, I bet it would start to contradict itself and lose track of the rules it had already established, but so would I. If I were actually inventing a language, I'd take months and keep extensive cross-referenced documentation on grammar and syntax and vocabulary. We don't _let_ ChatGPT do that, it has no mechanism for persisting its ideas like that. But like... neither would I if you took away my notebook and the parts of my brain that persist long-term memory.

I guess my interpretation here - I see differences in _capabilities_, but not differences in actual _intelligence_.


> whatever you and I are doing, it's not simply statistical analysis.

How do you know?


Because I have seen genuine creativity and invention.


You’re comparing an LLM to yourself, really? I wonder how many of the people talking about this know much about computers. Your cognitive capability is much more complex than ChatGPT’s. I’m not sure why people are convinced that a Turing Machine can simulate it, let alone do it efficiently. Do you believe that the way you think is at all like ChatGPT? That you are both processing text is not evidence. How many watts of power have you used in your whole life up to this point? How much did it take to train ChatGPT and how much does it take to run it, even if you could run it on your home PC? How much heat does it generate? You may not like it, but you could live and solve intellectual problems for a week on only water and a handful of rice. Do you really think these two things, human brains and computers based on TMs, are the same type of thing?


Your argument is that my brain is more energy-efficient than ChatGPT, therefore... ChatGPT is incapable of "understanding"?


I’m not arguing that it is more energy efficient. I’m pointing out the obvious fact that there is a massive scale difference in the amount of energy, and that one is capable of much more complex problem solving, which seems to indicate that the two things are fundamentally different.


Okay. Same question I asked elsewhere in this thread; care to make a prediction about a problem that won't ever get solved by an LLM? We know they can write essays, create art, write code — what's something they'll never do?


I don't agree. Besides the fact that I can't do much math either, except with a lot of effort and tools, at this point there are masses of examples of GPTs doing basic reasoning and logic and solving problems. Of course you can say that they do it by "finding the most probable continuation" - and you're right. But that doesn't change the fact that it works. Simply because to find the most probable continuation you ultimately need semantic understanding, and somewhere, back-propagating from the training text, the NNs have managed to build a decent model of the world. There's no other way to explain their performance; these are not Markov chain text generators.


Have you actually used any of these products? GPT et al are perfectly capable of taking knowledge from one domain and applying it to the solution of problems in another domain, through various kinds of data abstraction, reasoning by analogy, and other techniques similar to what humans do. It makes plenty of goof-ups along the way, just like humans do. But if your requirement is that it performs absolutely perfectly in new domains without making any mistake or needing any supervision... well, that is certainly a requirement no human could ever meet, either.


> through various kinds of data abstraction, reasoning by analogy, and other techniques similar to what humans do.

No, that's exactly not how LLMs work. They are extremely good at predicting what sentences resemble the sentences in their training data and creating those. That's all.

People are getting tripped up because they are seeing legitimate intelligence in the output from these systems -- but that intelligence was in the people who wrote the texts that it was trained with, not in the LLM.


I see a lot of comments along the lines of "it's just predicting the next word".

But there's evidence that's what humans do as well:

"In the last few decades, there has been an increased interest in the role of prediction in language comprehension. The idea that people predict (i.e., context-based pre-activation of upcoming linguistic input) was deemed controversial at first. However, present-day theories of language comprehension have embraced linguistic prediction as the main reason why language processing tends to be so effortless, accurate, and efficient."

https://www.psycholinguistics.com/gerry_altmann/research/pap...

https://www.tandfonline.com/doi/pdf/10.1080/23273798.2020.18...

https://onlinelibrary.wiley.com/doi/10.1111/j.1551-6709.2009...


Sure, but that's not what makes humans intelligent.


LLMs are not fancy Markov chains. They are more than mere statistical prediction. They contain large, deeply layered attentional networks which are perfectly capable of representing complex, arbitrarily structured models, trained from the data set or assembled on the fly based on input. I'm sorry, but I think you are about a decade or so out of date in your intuitions for how these things work. (And a decade is a long time in this field.)


I will grant that my understanding is not complete (and would argue that pretty much everyone else's is incomplete as well), but it's not out of date. I have deliberately avoided forming any opinion about this stuff until I learned more about what the modern approach is. I'm not relying on what I learned a decade ago.


I’ve just asked GPT-3 to sum two large random numbers and it gave me the correct sum. Then I defined a Fibonacci-like sequence (f1 = 1, f2 = 1, fn = f(n-1) + f(n-2) + 7) and it correctly gave me the value of the 10th element. It’s not just a statistical model generating something resembling the training set; it understands the training set, to a similar extent as we understand the world around us…
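(As a side note, here is a minimal Python sketch, not part of the comment above, that just evaluates the recurrence as stated; by that definition the 10th element works out to 433, so that is the value the model would have had to produce.)

  # Evaluate the commenter's recurrence: f1 = 1, f2 = 1, fn = f(n-1) + f(n-2) + 7.
  def modified_fib(n):
      a, b = 1, 1              # f1, f2
      for _ in range(n - 2):
          a, b = b, a + b + 7  # advance the window by one term
      return a if n == 1 else b

  print(modified_fib(10))      # 433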


I don't see how your example demonstrates your hypothesis, though. Summing two numbers and telling the next number in the Fibonacci sequence would be expected from a deep and complex statistical modelling of the existing internet data.


Both of these examples show GPT not merely approximating outputs based on the training set (outputs which don’t exist anywhere in the real world for these inputs), but understanding algorithms and being able to apply them. I don’t believe our brains are doing anything different from that.


I feel like there are two parallel discourses going on here, and it's crazy.

On the one hand, we have LLM, and people arguing that they are simply memorizing the internet and what you're getting is a predictive regurgitation from what actual people have said.

On the other hand, you have AI Art, and people arguing that it's not just copy-pasting the images it's recognized, and it's actually generating novel outputs by learning 'how to draw'.

Do you see a commonality?

It's that people are arguing whatever happens to be convenient for them.

If a model can generate human-like responses, and it has a large input token size that effectively allows it to maintain a 'memory' by sticking the history in as the input rather than being a one-shot text generator...

Really.

What is the difference between that and AGI?

Does your AGI definition mean you have to have demonstrated understanding of the underlying representations that are put in as text?

Does it have to be error free?

What fundamental aspect of probabilistic text generation means that it can't be AGI?

...because, it seems to me that it's incredibly convenient to define AGI as something that can't be represented by a LLM, when all you have really is a probabilistic output generator, and a model that currently doesn't do anything interesting.

...and it doesn't. It's not AGI, right now; but your comment suggests that, because of the technical process by which the output is generated, LLMs are fundamentally unable to produce AGI, and I think that's not correct.

The technical process is not relevant; it's simply that these models are not sophisticated enough to really be considered AGI.

...but a 5,000-billion-parameter model with a billion-character context? I dunno. I think it might start looking pretty hard to argue about.
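(To make the "memory by sticking the history in as the input" point above concrete, here is a rough Python sketch, mine rather than the commenter's, of how a stateless one-shot text generator gets wrapped in a chat loop; generate() is a hypothetical stand-in, not a real API.)

  # The model itself is a one-shot text generator; the "memory" is just the running
  # transcript being re-sent inside every prompt, bounded by the context size.
  def generate(prompt: str) -> str:
      # Hypothetical placeholder for an actual LLM completion call.
      return f"(model reply to a {len(prompt)}-character prompt)"

  def chat(turns):
      history = []                                  # (speaker, text) pairs
      for user_text in turns:
          history.append(("User", user_text))
          prompt = "\n".join(f"{who}: {text}" for who, text in history) + "\nAssistant:"
          reply = generate(prompt)                  # all context travels in the prompt
          history.append(("Assistant", reply))
      return history

  print(chat(["Invent a word.", "Now use it in a sentence."]))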


I have the same sentiment. To me, there are two kinds of groups in most recent discussions about GPT: those who don't understand the underlying functionality at all and those who think they deeply understand it down to its bits.

The second group seems to be very stubborn in downplaying the capabilities of GPT et al. What's curious is that, for the first time in the history of the AI field, the source of general amazement is coming straight from the AI's responses, rather than from some news or corporate announcement about how the thing works or what it will be able to do for you.


>No, that's exactly not how LLMs work. They are extremely good at predicting what sentences resemble the sentences in their training data and creating those. That's all.

It's a little hard to take this argument entirely at face value when you can ask it to produce things that aren't in its training data to begin with, but are synthesized from things that are in the training data. I remember being pretty impressed reading the one where someone asked it to write a parable in the style of the King James Bible about someone putting peanut butter toast in a VCR, and it did a bang-up job. I've asked it to explain all sorts of concepts to me through specific types of analogies/metaphors and it does a really good job at it.

I think the semantics around whether it itself possesses or is displaying "intelligence" isn't the point. I treat it kind of like an emulator. It's able to emulate certain narrow slice of intelligent behavior. If a gameboy emulator still lets me play the game I want to play, then what does it matter that it's not a real gameboy?


“People are getting tripped up because they are seeing legitimate intelligence in the output from these systems -- but that intelligence was in the people who wrote the texts that it was trained with, not in the LLM.”

This is the real magic. Let’s train ChatGPT on absolute garbage information and compare the intelligence of the two.


Let's take a kid and teach them garbage information as they are growing up... minimize as much 'real knowledge' as possible and see what comes out.


Right. There are various examples of what growing up isolated does to your mind.

Intelligence comes from a mix of the universe's stream of data hammering your senses and the teachings of another previous intelligent being.

There's nothing fundamentally wrong in kickstarting a chatbot with lots of pretrained data. It’s Chinese Rooms all the way down.


Agree. What happens? Is a human by default accepted to be an “AGI” entity?


I do agree. But being able to combine old ideas in new ways is also intelligence. LLMs have memorized a ton of information, and learned “information combinators” to compose them. All that’s missing is a clean way for LLMs to engage in the scientific method.


The vast majority of the knowledge any one of us has comes from cultural heritage. We all stand on the shoulders of giants. And knowledge, not computation, is behind intelligent behavior.


Nope, they are good at pretending they do. Ask it something that can't be googled in 30 seconds and you will see this.

I'm skeptical you can simply scale up this approach to full AGI.


To be completely ridiculous, we'll have AGI when it can convince a human to mate with it over any other human followed by raising its offspring to a successful adulthood. This is the game we're all playing.


Parasites are also lifeforms. It doesn't have to convince you to breed, it has to convince you to keep it plugged in and feed it ever-increasing amounts of computing power.

In theory GPT is there with the right prompt.


>It doesn't have to convince you to breed, it has to convince you to keep it plugged in and feed it ever-increasing amounts of computing power.

It's already succeeding at that.


What you're saying is outright false. If you read LLM papers you'll see that a lot of the benchmarks used are related to math and logic.

The GPT-3 paper, for example: https://arxiv.org/abs/2005.14165

I recommend you read that paper, as it addresses a lot of misconceptions you may have about LLMs.


> they can't do math, work through basic logic problems

ChatGPT begs to differ:

Me: How do I measure 4 liters when I have a 3 liter bottle and a 5 liter bottle?

ChatGPT: Here are the steps to measure 4 liters using a 3 liter and a 5 liter bottle:

1. Fill the 5 liter bottle completely with water.
2. Pour 3 liters from the 5 liter bottle into the 3 liter bottle, leaving 2 liters in the 5 liter bottle. Now you have 3 liters of water in the 3 liter bottle, and 2 liters of water in the 5 liter bottle.
3. Empty the 3 liter bottle, and pour the 2 liters from the 5 liter bottle into the 3 liter bottle.
4. Fill the 5 liter bottle again with water, and pour 1 liter from the 5 liter bottle into the 3 liter bottle, which already has 2 liters of water.
5. You will be left with 4 liters of water in the 5 liter bottle, which is the desired amount.


I’m fairly sure that’s more reciting a solution learned from memory than doing math. What does it answer to

“How do I measure 12 liters when I have a 15 liter bottle and a 9 liter bottle?”

or

“How do I measure a liter when I have a 750 ml milk carton and a 12½ dl tea pot?”

I think we expect an AGI to be able to answer them, even though quite a few intelligent humans wouldn’t be able to do that.
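(A mechanical way to check these variants, not from the thread: a small breadth-first-search sketch in Python over jug states, assuming the usual rules of fill, empty, and pour until one bottle is full or the other is empty.)

  from collections import deque

  def jug_path(cap_a, cap_b, target):
      # BFS over (a, b) water amounts; returns a shortest sequence of states reaching the target.
      start = (0, 0)
      parent = {start: None}
      queue = deque([start])
      while queue:
          a, b = queue.popleft()
          if a == target or b == target:
              path, state = [], (a, b)
              while state is not None:          # walk back to the start
                  path.append(state)
                  state = parent[state]
              return path[::-1]
          moves = [
              (cap_a, b), (a, cap_b),           # fill A, fill B
              (0, b), (a, 0),                   # empty A, empty B
              (a - min(a, cap_b - b), b + min(a, cap_b - b)),  # pour A into B
              (a + min(b, cap_a - a), b - min(b, cap_a - a)),  # pour B into A
          ]
          for nxt in moves:
              if nxt not in parent:
                  parent[nxt] = (a, b)
                  queue.append(nxt)
      return None                               # target not reachable

  print(jug_path(3, 5, 4))          # the original puzzle
  print(jug_path(15, 9, 12))        # 15 L and 9 L bottles, target 12 L
  print(jug_path(750, 1250, 1000))  # 750 ml carton and 12.5 dl (1250 ml) pot, target 1 L

All three come out reachable, which matches the usual rule of thumb: the target is reachable when it is a multiple of the gcd of the two capacities and no larger than the bigger container.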


OTOH, ChatGPT also agrees with kurisufag.

I entered your prompt and got the same reply from ChatGPT. Then I changed "4 liters" to "4 gallons" and got this:

  To measure 4 gallons of liquid using a 3-liter and a 5-liter bottle, you would need to convert gallons to liters first. One gallon is approximately equal to 3.785 liters, so 4 gallons is approximately equal to 15.14 liters.
   
  Since the available bottles have capacities of 3 liters and 5 liters, you will need to fill and empty the bottles several times to reach the target of 15.14 liters. Here's one way to do it:
  
    Fill the 5-liter bottle with water.
    Pour the water from the 5-liter bottle into the 3-liter bottle until it's full, leaving 2 liters of water in the 5-liter bottle.
    Pour the 2 liters of water from the 3-liter bottle into a separate container.
    Repeat steps 1-3 two more times, which will leave you with a total of 6 liters of water in the separate container.
    Fill the 5-liter bottle with water again and pour as much as possible into the separate container until it's full. This will add 3 liters of water to the container, for a total of 9 liters.
    Fill the 3-liter bottle with water and pour it into the separate container until it's full. This will add another 3 liters of water to the container, for a total of 12 liters.
    Fill the 3-liter bottle with water again and pour as much as possible into the separate container until it's full. This will add 1 liter of water to the container, for a total of 13 liters.
    Fill the 5-liter bottle with water and pour as much as possible into the separate container until it's full. This will add another 2 liters of water to the container, for a total of 15 liters, which is approximately 4 gallons.
  
  Keep in mind that this method is approximate and assumes that 1 gallon is equal to 3.785 liters. If you need a more precise measurement, you may need to use a measuring container that is calibrated in gallons.
Starts off OK. Notice that its eventual goal is to get 15 liters: good! But then it doesn't understand what it's doing, so it messes around with the 3 liter bottle instead of knowing that 3*5=15. Additionally, one of the times it fills the 3 liter bottle and pours all of it out, but somehow only produces 1 liter.


And it uses the 5 liter bottle to top up the separate container leaving 2 liters, and then proceeds to fill up the separate container.

Looks like it's mixing up elements of the solution to the original problem without understanding how the topping up works, with the solution to a different problem that it felt was sufficiently related. It's a wild guess based on similarity.


I used to think so.

But then I reconsidered.

Those poo-pooing LLMs say it is merely ‘a fancier version of autocompletion.’ Or, they make comments (correctly) that ‘it isn’t reasoning… it’s just guessing which word ought to come next.’

Such a point of view is similar to thinking, in regard to a circular saw, “it doesn’t ‘want’ to cut off your hand! It’s just a circle of spinning serrated steel!”

The human race is about to get its hand cut off.

We are in a bad place.

So much time is being wasted debating how to make LLMs ‘safe’ by ensuring they don’t inadvertently say something racist! People are utterly missing the point as to the true danger.


What about Toolformer?


I think what an LLM is best at is fooling people into thinking it’s intelligent. It is really good at saying things in a natural-sounding way, and statistically it often gets them right, because certain strings of tokens are encoded. But it’s clear when you start poking that it will just as easily tell you that 5/2=3, or that 2+2 != 4. It doesn’t model math or any sort of knowledge at all.


Something that I don't quite understand is why the tendency of ChatGPT to be inaccurate sometimes is a fundamental flaw rather than something that can be improved on iteratively, if it's just a matter of improving the statistical likelihood of accuracy. The question of whether it's AGI or not is, to paraphrase the famous quote, a bit like the question of whether a submarine can swim.


Because ChatGPT isn't thinking. It's not reasoning at all. It's assembling sentences that are statistically predicted from using existing writings as the template.

Accuracy isn't a part of the process except in terms of how accurate the training data is. ChatGPT is not making any sort of truth or accuracy determination, let alone doing so poorly.


Can you point to where the thinking happens in a human?


Nope. But I don't have to do that to understand that LLMs do not assess truth or accuracy of anything.


We don’t have any proof of that, nor do we have proof of its opposite. We have no idea why neural nets work, or how our brain works in the context of this. There is definitely something human-like in neural networks, but we don’t have any idea why, or what exactly. It’s a completely empirical field and not a theoretical one. We don’t have any good idea of what would happen if we could build a 180-billion-neuron neural net, because there is no theory that would predict what happens even with the current ones. That’s why, over the past 40 years, I’ve seen almost every single prediction about what AI would solve in the following years fail. We have no clue.


There is research that shows humans are also predicting the next word.

I posted that here: https://news.ycombinator.com/item?id=34875324


But you don't know that humans don't reason by stringing words together and seeing how statistically likely they seem.

Related to this topic, see "Babble and Prune": https://www.lesswrong.com/s/pC6DYFLPMTCbEwH8W


I don’t think it’s a stretch to say humans aren’t great at assessing the truth or accuracy of anything either.


My point isn't about how good or bad this is being done. Humans, at least some of the time, attempt to assess truth and accuracy of things. LLMs do not attempt to do this.

That's why I think it's incorrect to say they're bad at it. Even attempting it isn't in their behavior set.


Where is the organ that does that? My impression is everything the brain does is homomorphic to what LLMs do.


Isn’t this the whole point of John Searle’s “Chinese room” thought experiment? But does it matter what is actually going on inside the room, if the effect and function are indistinguishable? Edit: after conferring with ChatGPT, Searle’s point, like yours, is that the man in the room doesn’t understand Chinese, he is just manipulating symbols, but from the outside, the man in the room seems to speak fluent Chinese.


I think a better analogy is asking if a water bottle can swim. It floats most of the time, and can move around if pushed.

The reason “can be inaccurate sometimes” is a fundamental flaw is that my assumption is it will never stop being inaccurate sometimes. I think it will always be inaccurate sometimes and never be accurate always.

This doesn’t mean it isn’t useful for a lot of applications. But I don’t think it is a holy grail technology, it’s not AGI, and it isn’t going to replace professions.


The whole point of using a computer instead of doing something yourself is to do something quickly and accurately. If I need someone to give me maybe-correct-maybe-not information, I'll just ask one of my coworkers.


Well, that was the point of computers until now. That doesn't mean computers can't be other things, too. ChatGPT is a lot cheaper and faster than your coworker, and it's available (almost) 24 hours a day. And the accuracy may improve!


I mean, only if you want accurate information; but if you're building a misinformation network of bots to cause problems in an enemy state, then a human-sounding bullshit machine sounds like something any number of governments would buy into.


The best feature of this LLM is that it goes from fooling people into having people make fools of themselves when they turn around and predict the end of the world/education/programming/whatever thing they don't quite understand based on what a confidently incorrect charlatan machine told them. It's like a viral marketing gag.


I could say the same things about some of my coworkers...


Fair point, I think we would also call them frauds.


I don't know if this is an AGI-like experiment, because LLMs are trained on human knowledge. I'd expect that real AGI wouldn't need such a thing and would improve on its own. That's the moment when we become obsolete.


"you can invent human cognition from first principles via billion years of parallel evolution. or since it’s already been invented, applied, recorded at scale you can just observe its behavior to learn it"


Mammals did not evolve from single cell creatures in order to replace the dinosaurs.


I'm just going to quit my programming job and take up flint knapping, make some quality stone tools -- that's what homo genus did for 2 million years before computers, I figure I can always fall back on that.


> I can't even picture a functional world where humans are cut out of most professions that don't involve manual work

Many people do manual work. It may very well be the case that general fine motor skills are a far more complex and difficult operation than the entire edifice of human intellect. Philosophically, it would be an immense blow, the mother of all existential crises. But regardless, it suggests that the first AGI would be incapable of independent survival and we'd still be relevant and in control for a while.

> where any amount of acquired knowledge and skills will be surpassed by machines that can produce better results at a thousandth of the cost in a thousandth of the time a human can.

I don't think we can necessarily extrapolate it to be that cheap. It could be. But it is also possible that the increases in cost and resources to scale these models bigger and bigger will outstrip hardware progress and that this technology will run into a dead end. To put it differently, I think it is far from clear that the kind of hardware that we build is actually better than an organic substrate for this sort of computation. Imagine an optimized organic neural implementation of ChatGPT, for instance. Would it be slower or more expensive than ChatGPT? Perhaps not. Likewise, the very best that the current paradigm can offer may not be faster or cheaper than humans are at quite many valuable tasks.


This all seems very unlikely to me. Better tools have always made humans more useful, just doing different things. I do think it's interesting to consider whether there is some singularity where that trend dramatically reverses (Vonnegut's Player Piano is one of my favorite books...), but I think the better prior is that this is just another step up the abstraction ladder for humanity.


Software development isn't really about just acquiring and memorising knowledge to later spit out. You need reasoning, logic, and creativity. That AGI can develop complex new software (not just some code that was scraped off the internet years ago) in 10 years is something I doubt.


We don’t have AGI as of now, but it could spark anytime and its acceleration could be extremely fast. It could almost spark and yet never really get there, just get closer and closer and never quite hit the mark. But that is not something that will simply stop a massive disruption in most professions and livelihoods; LLMs could do that easily.


It's a good idea to think about some aspects of human nature. Most people are never content with what they have.

Once progress is made, the new tool becomes the old one we're used to, and we begin to build new norms.

The pain is always there; you just run faster.


Butlerian Jihad when?


> What we're seeing is the first instance, still very limited and imperfect, of AGI.

None of this is AGI. This is Eliza on steroids.


It seems disingenuous to say that this is Eliza on steroids. They operate in fundamentally different ways, no?

I definitely agree with the premise that the current state of the art isn't at all AGI, but it seems almost self-evident to me that LLMs are a key piece of the puzzle on the road to AGI. Eliza was never going to have that kind of trajectory, but LLMs I think you could certainly make that argument for.



