> There’s no need to ascribe any belief that they can evolve, modify themselves, or spontaneously develop intelligence.

But neural networks clearly evolve and are modified during training. Otherwise they would never get any better than a random collection of weights and biases, right?

Is the claim then that an artificial neural network can never be trained in such a way that it will exhibit intelligent behavior?

>> Do you claim that an artificial neural network with trillions of neurons can never be intelligent, no matter the structure?

> If, by structure, you mean some algorithm and memory layout in a modern computer I think this sounds like a reasonable claim.

Yes, that's what I mean.

Is your claim that no Turing machine can be intelligent?

>> Look, I realize that "GPT-4 is intelligent" is an extraordinary claim that requires extraordinary evidence.

> That’s the crux of it.

And I provided links to such evidence. Is there a rebuttal?

If we're saying that GPT-4 is not intelligent, there must be questions that intelligent humans can answer that GPT-4 can't, right?

What is the type of logical problem one can give GPT-4 that it cannot solve, but most humans will?




> Is the claim then that an artificial neural network can never be trained in such a way that it will exhibit intelligent behavior?

I think it’s unlikely that a NN can be trained to exhibit any kind of autonomous intelligence.

Science has good models and theories of what intelligence is and what constitutes consciousness, and these models continue to evolve based on what we find in nature.

I don’t doubt that we can train NNs, RNNs, and deep learning NNs for specific tasks, and that they can plausibly emulate or exceed human abilities at those tasks.

That we have these deep learning systems that can learn in both supervised and unsupervised settings is super cool. And again, it’s fully explainable maths that anyone with enough education and patience can understand.

I’m interested in seeing some of these algorithms formalized and maybe even adding automated theorem proving capabilities to them in the future.

But in none of these cases do I believe these systems are intelligent, conscious, or capable of autonomous thought like any organism or system we know of. They’re just programs we can execute on a computer that perform a particular task we designed them to perform.

Yes, it can generate some impressive pictures and text. It can be useful for all kinds of applications. But it’s not a living, breathing, thinking, autonomous organism. It’s a program that generates a bunch of numbers and strings.

But when popular media starts calling ChatGPT “intelligent,” we’re performing a mental leap that also absolves the people employing LLMs of responsibility for how they’re used.

ChatGPT isn’t going to take your job. Capitalists who don’t want to pay people to do work are going to lay off workers and not replace them, because the few workers that remain can do more of the work with ChatGPT.

Society isn’t threatened by ChatGPT becoming self-aware and deciding it hates humans. It cannot even decide such things. It is threatened by scammers who now have a tool that can generate lots of plausible-sounding social media accounts, file a fake credit card application, or socially engineer a call centre rep into divulging secrets.


> "it’s not a living, breathing, thinking, autonomous organism"

> "autonomous intelligence"

> "what constitutes consciousness"

> "autonomous thought"

In my mind, this is a list of different concepts.

GPT-4 is definitely not living, breathing, or autonomous. It doesn't take any actions on its own. It just responds to text.

Can we stay on just the topic of intelligence?

Let's take this narrow definition: "the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas".

> But in none of these cases do I believe these systems are intelligent

It should be possible to measure whether an entity is intelligent just by asking it questions, right?

Let's say we have an unknown entity at the other end of a web interface. We want to decide where it falls on a scale between stochastic parrot and an intelligent being.

What questions about logical reasoning and problem solving can we ask it to decide that?

And where has GPT-4 failed in that regard?



