
I mean, it is magical, in the sense that we are not sure how and why it works.



I'm not sure how it works, but I'm sure it doesn't think. Not till it can choose its own loss function.


Are you sure that you are thinking? Can you choose your loss function?


It's not like people can arbitrarily choose their own loss function; our drives, needs, and desires are what they are, and you don't get to just redefine what makes you happy (otherwise clinical depression would not be a thing). They change over time and can be affected by various factors (things like heroin or brain injury can adjust your loss function), but none of that is within our conscious control. So I would not use that as a distinguishing factor between us and machines.
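
To make that concrete: in a typical training setup, the loss function is fixed by whoever writes the training loop, and the model only ever adjusts its weights to reduce it. A minimal sketch (PyTorch here, purely for illustration):

    import torch
    import torch.nn as nn

    # The loss function is chosen by the programmer before training starts;
    # the model never gets to pick or redefine it.
    model = nn.Linear(10, 1)
    loss_fn = nn.MSELoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

    x, y = torch.randn(32, 10), torch.randn(32, 1)
    for _ in range(100):
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)   # objective imposed from the outside
        loss.backward()
        optimizer.step()              # weights move only to reduce the given loss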


Sure, it doesn't think, just as submarines don't swim, as EWD said.


People always reach for these analogies. "Planes don't fly like birds." "Submarines don't swim like fish."

Backpropagation has zero creativity. It's an elaborate mechanical parrot, and nothing more. It can never relate to you on a personal level, because it never experiences the world. It has no conception of what the world is.

At least a dog gets hungry.

Not persuaded? Try https://news.ycombinator.com/item?id=23346972


> Backpropagation has zero creativity. It's an elaborate mechanical parrot, and nothing more. It can never relate to you on a personal level, because it never experiences the world. It has no conception of what the world is.

The problem is: it's not really clear how much creativity we have, and how much of it is better explained by highly constrained randomized search and optimization (a toy sketch of what I mean is at the end of this comment).

> It can never relate to you on a personal level

Well, sure. Even if/once we reach AGI, it's going to be a highly alien creature.

> because it never experiences the world.

Hard to put this on a rigorous footing.

> It has no conception of what the world is.

It has imperfect models of the world it is presented with. So do we!

> At least a dog gets hungry.

I don't think "gets hungry" is a very meaningful way to put this. But, yes: higher living beings act with agency in their environment and have mechanisms to seek novelty in those interactions, whereas most deep learning AIs we build don't; they follow rigid interaction steps and form no memory of the interaction. I don't view these as impossible barriers to leap over.
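
The toy sketch I promised above, of constrained randomized search (nothing to do with how any real model is trained, just an illustration): random mutations, kept only when they don't hurt a fixed objective, already produce outputs nobody explicitly typed in.

    import random

    TARGET = "hello world"
    ALPHABET = "abcdefghijklmnopqrstuvwxyz "

    def score(s):
        # fixed objective: number of characters matching the target
        return sum(a == b for a, b in zip(s, TARGET))

    current = "".join(random.choice(ALPHABET) for _ in TARGET)
    while score(current) < len(TARGET):
        i = random.randrange(len(TARGET))
        mutated = current[:i] + random.choice(ALPHABET) + current[i + 1:]
        if score(mutated) >= score(current):   # constraint: never get worse
            current = mutated
    print(current)   # reaches "hello world" without anyone writing the steps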


I agree GPT isn't grounded and that it is a problem, but that's a weird point to argue against AlphaCode. AlphaCode is grounded by actual code execution: its coding experience is no less real than people's.

AlphaGo is grounded because it experienced Go, and has a very good conception of what Go is. I similarly expect OpenAI's formal math effort to succeed. Doing math (e.g. choosing a problem and posing a conjecture) benefits from real world experience, but proving a theorem really doesn't. Writing a proof does, but it's a separate problem.

I think software engineering requires real world experience, but competitive programming probably doesn't.
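
To make "grounded by actual code execution" concrete: the feedback is that candidate programs get run against the problem's example tests and filtered on the result. A toy sketch of that filtering step (the candidate programs and tests here are made up, just to show the loop):

    import io
    import contextlib

    # hypothetical generated candidates for "print twice the input"
    candidates = [
        "print(int(input()) * 2)",
        "print(int(input()) + 2)",
    ]
    tests = [("3", "6"), ("10", "20")]   # (stdin, expected stdout) pairs

    def passes(program, stdin, expected):
        out = io.StringIO()
        with contextlib.redirect_stdout(out):
            exec(program, {"input": lambda: stdin})   # toy sandboxing only
        return out.getvalue().strip() == expected

    survivors = [p for p in candidates
                 if all(passes(p, i, o) for i, o in tests)]
    print(survivors)   # only the doubling program survives the executed tests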



