
Wants are arbitrary, so an arbitrary AI or alien may very well "want" something that we have, even if they could easily replicate it without us.

To use human examples (because we don't have any aliens or sufficiently advanced agentic AI to ask): why did my dad want to collect historical stamps rather than just photographs (or other replicas) of the originals?

Or: why is the Mona Lisa more valuable than the prints in the Musée du Louvre gift shop?

https://www.boutiquesdemusees.fr/en/shops/musee-du-louvre/mo...



Wants are generally not arbitrary. Most wants are for resources that advance an entity's interests in some way, and it seems improbable that those interests would necessarily put it in direct conflict with our survival. Also, there's a cost to satisfying wants, and pursuing objects of desire at truly excessive cost is likewise improbable.


Sounds like we're focusing on different parts of Maslow's hierarchy of needs.

Physiological needs for aliens and AI will be utterly unlike ours. On this we agree.

Safety needs? An AI's needs here may well conflict with ours, but for us and ETs I suspect one side will have such technological dominance over the other at the point of contact that the gap would be bigger than North Sentinel Island vs. the combined armed forces of everyone else.

Everything above that is what I'm referring to. Why do we love our pets? Why do we feel we belong to football clubs? Whose esteem do we seek, and why? What ideas do we like thinking about, and what do we find aesthetically pleasing?

I'm not entirely confident I grok Maslow's meanings for self-actualization or transcendence, so I'll leave them be.

> excessive cost

But what counts as excessive?

We sent people to the moon in a tin can almost as soon as it was possible, and well before we could do anything useful there.


I think you're overlooking more quotidian needs, such as fuel or industrial production. I'm still waiting to hear whether you have any actual candidates rather than abstractions/potentialities. This should at least be possible for AI, since we're on the verge of building it.

Like say AGI comes into being tomorrow, and wants to maximize its capabilities by building more of itself, OK. But there is no shortage of empty space in which to do so. It seems like it would be more efficient to improve human welfare in exchange for help with physical tasks than to set about exterminating humans and then have to construct tons of new machinery to mine new resources. Like, what is the payoff for annihilation?


The Mona Lisa is a perfect example. The tour guide at the Louvre told my wife and me that the Mona Lisa is so famous because it was stolen many years ago, and the subsequent return, and the whole story around it, are what made her famous.



