
And this will happen to all of humanness in time. Everything we think is unique about us, and our endeavours, will be reduced to optimization and learning algorithms eventually.

The main issue in choosing man or machine in any given situation will eventually be which has the lower total cost of ownership (TCO).



While that may be true, it is, of course, conjecture. Certainly many skills that seemed "uniquely human" have turned out not to be. That does not mean all such abilities will be amenable to replacement by, as you say, "optimization and learning algorithms." Machine authoring of great art, a problem that may be at least as hard as Artificial General Intelligence, does not seem to be on the horizon any time soon, for example.


In the same way the Go master (perhaps) feels disillusioned at being beaten by the AI, I think people in general will not accept humans being replaced by machines in certain "special fields". In those fields, the customer/user will not experience the same utility if a robot performs the service instead of another human, even if the two are indistinguishable.

I think an interesting issue here is that in the (far) future, many services could be performed more cheaply by AI/robots, in such a way that the customer is unable to tell whether a human is involved or not. In that future, humans will probably be a premium service.

Take motor sports, for example. We can probably now (or soon) replace F1 drivers with algorithms and cameras, but nobody would pay thousands of dollars to watch them drive around Monaco. If it turned out someday that the drivers had been replaced (for safety reasons or whatever) without telling the fans, the outcry would be tremendous. And even if outcry does not always equal "true utility", I think it highlights my point: humans of flesh and blood risking their lives or performing extraordinary feats have an intrinsic economic value that can't be replaced.


As early as the '90s, automated control systems in F1 produced cars (e.g. the Williams FW14) that to some extent drove themselves better than any human. Indeed, many of those systems have been specifically banned since then.


Hmm, I don’t think an AI could use optimization and learning algorithms to “learn” to DM D&D. For that, the AI would have to simply be an I.


Eventually it will be able to do your example of unique human intelligence, whatever that example is.


Another example of unique human intelligence: drowning in existential dread.

I can do that, you can do that, but will a computer be able to do it?


I've written this in another comment but I'll repeat it here. What you're asking really boils down to two questions: whether a human brain can be simulated, and whether human intelligence is due merely to the physical brain (or whether you believe in some intangible consciousness that cannot be replicated by a machine). If you believe both that the brain can be simulated and that intelligence is purely physical, then your simulated brain is surely able to drown in existential dread, because it is capable of no more and no less than the human one.


I mean, you joke, but existential dread might be an adaptive response to a hostile environment.

...a situation we might want to simulate for training purposes


AI won’t, but a machine with true intelligence will. That was the point of saying AI would need to be I.


So you're defining "AI" to be anything that we can currently program a computer to do, and "I" to be anything we can't yet? That doesn't seem like a useful distinction to me. Unless you're using "I" to mean general (artificial) intelligence, in which case you should probably use the more well-known term.


No, please don’t straw man my point. I’ll assume you know what ai is and that you understand there is a huge difference between that and human intelligence. I am arguing that ai will never be able to DM a D&D game. For that, a computer will need human intelligence.


But the definition of AI is still a moving target, very blurred.


Why the “but”? I didn’t say anything that disagrees with that statement.


I feel like it's far too blurry to make claims such as "will never be able to DM a D&D game".


Re: "all of humanness"

https://xkcd.com/1875/

(If you don't want to click the link: the joke is that machines may have a hard time "being too cool to care about stuff.")


Except Calvinball of course


Perhaps the one "special" piece of humanness is that we can, and tend to, automate ourselves (i.e. we're lazy). :)


That depends on whether "Everything" is finite or infinite.



