
> The AI can not have a goal unless we somehow program that into it.

I am pretty sure that's not how modern AI works. We don't tell it what to do, we give it a shitload of training data and let it figure out the rules on its own.

> If we don't, then the question is why would it choose any one goal over any other?

Just because we don't know the answer to this question yet doesn't mean we should assume the answer is "it won't".



Modern AI works by maximizing the correctness score of an answer. That's the goal.

It does not maximize its chances of survival. It does not maximize the count of its offspring. Just the correctness score.
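To make that concrete, here's a minimal sketch (toy numbers, not any particular model) of what the "correctness score" is during training: a cross-entropy loss on next-token prediction. Nothing about survival or offspring appears anywhere in the objective.

    # Minimal sketch: the "correctness score" is just a loss on predicting
    # the next token. Training nudges parameters to shrink this number.
    import numpy as np

    def cross_entropy(predicted_probs, true_next_token):
        # Higher probability on the token humans actually wrote = lower loss.
        return -np.log(predicted_probs[true_next_token])

    # Toy vocabulary and a fake model output for the context "the cat sat on the"
    vocab = {"mat": 0, "moon": 1, "dog": 2}
    predicted_probs = np.array([0.7, 0.2, 0.1])  # model's guess over the vocab

    loss = cross_entropy(predicted_probs, vocab["mat"])
    print(f"loss = {loss:.3f}")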

We have taught these systems that "human-like" responses are correct. That's why you feel like you're talking to an intelligent being: the models are good at maximizing the "human-likeness" of their responses.

But under the hood it's a Markov chain. A very sophisticated Markov chain, with lots of bling. Sure, when talking to investors it's the second coming of sliced bread. But come on.
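Rough sketch of why the Markov-chain framing fits: generation conditions only on the current context window (the "state") and samples the next token, then repeats. The lookup table here is a toy stand-in for the neural network, nothing more.

    # Sketch of autoregressive sampling: each step depends only on the
    # current fixed-size window of tokens (a high-order Markov property).
    import random

    transitions = {
        ("the", "cat"): ["sat", "slept"],
        ("cat", "sat"): ["on"],
        ("sat", "on"): ["the"],
        ("on", "the"): ["mat"],
    }

    def generate(context, steps=4):
        tokens = list(context)
        for _ in range(steps):
            state = tuple(tokens[-2:])                # the window is the "state"
            candidates = transitions.get(state)
            if not candidates:
                break
            tokens.append(random.choice(candidates))  # sample the next token
        return " ".join(tokens)

    print(generate(["the", "cat"]))  # e.g. "the cat sat on the mat"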


> Modern AI works by maximizing the correctness score of an answer. That's the goal.

Right. But whose goal? I would say that is the goal of the programmers who program the AI. The AI program itself doesn't have a "goal" it would be trying to reach. It just reacts based on its Markov chain.

The current chatbot AI is reactive, not pro-active. It reacts to what you type.


The correctness score is maximized by faithfully imitating humans. Humans do have goals.


They are not imitating humans in general. They are imitating the statistical average of many human-written texts. That is not the same thing as imitating the goals of humans.

By imitating speech, the AI may look like it has some goal-oriented behavior, but it only looks that way. And that is precisely the goal of its programmers: to make it look like the AI has goals.

It would be possible to have a different type of AI which actually decides on its own goals and then infers the best actions to take to reach those goals. Such an AI would have goals, yes. But language models do not. They are not scored based on whether they reached any specific goal in any specific interaction. They have no specific goals.
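Purely illustrative contrast, with every function name a hypothetical stand-in rather than a real API: a language model maps a prompt to a reply and is scored per response, whereas a goal-directed agent would pick a goal and choose actions because they move the world toward it.

    # Illustrative only; language_model, pick_goal, plan, and execute are
    # hypothetical stand-ins, not real APIs.

    def reactive_chatbot(prompt, language_model):
        # A language model just maps input to output; it is scored per response.
        return language_model(prompt)

    def goal_directed_agent(world_state, pick_goal, plan, execute):
        # A hypothetical goal-driven system: it chooses a goal, then selects
        # actions because they move the world toward that goal.
        goal = pick_goal(world_state)
        for action in plan(world_state, goal):
            world_state = execute(world_state, action)
        return world_state

    print(reactive_chatbot("hello", lambda p: f"echo: {p}"))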

The only goal (of the programmers who wrote the AI) is to fool the humans into thinking they are interacting with some entity which has goals and intelligence.


It figures out "rules" within a guided set of parameters. So yes, it is given direction: the training process constructs a type of feedback on the task it is given.



