
I've thought about this a lot. I'm no philosopher or AI researcher, so I'm just spitballing... but if I were to try my hand at it, I think I'd like to start from "principles" and let systems evolve, or at least be discoverable, over time.

Principles would be things like self-preservation, food, shelter, procreation, and communication and memory, all seen through a risk-reward prism. Maybe establishing what is "known" vs. what is "unknown" is a key component here too, though not in such a binary way.

"Memory" can mean many things, but if you codify it as a function of some type of subject performing some type of action leading to some outcome with some ascribed "risk-reward" profile compared to the value obtained from empirical testing that spans from very negative to very positive, it seems both wide encompassing and generally useful, both to the individual and to the collective.

From there you derive the need to connect with others, disputes over resources, the need to take risks, explore the unknown, share what we've learned, refine risk-rewards, etc. You can also guide the civilization to discover certain technologies, inventions, or locations you've defined ex ante as their godlike DM, which is a bit like cheating because it puts their development "on rails", but it also makes it more useful, interesting, and relatable.
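
The "on rails" part could be as simple as a prerequisite graph the DM checks discoveries against. A sketch, with an entirely made-up tree:

    # Discovery -> prerequisites, defined ex ante by the godlike DM.
    TECH_TREE = {
        "fire": set(),
        "mining": set(),
        "pottery": {"fire"},
        "bronze": {"fire", "mining"},
    }

    def discoverable(known: set[str]) -> set[str]:
        """Technologies the civilization is allowed to stumble onto next."""
        return {tech for tech, prereqs in TECH_TREE.items()
                if tech not in known and prereqs <= known}

The agents still have to take the risks and do the exploring; the DM just decides what the unknown contains.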

It sounds computationally prohibitive, but the game doesn't need to play out in real time anyway...

I just think that you can describe a lot of the human condition in terms of "life", "liberty", "love/connection" and "greed".

Looking at the video in the repo, I don't like how this throws "cultures", "memes", and "religion" into the mix instead of letting them emerge from the need to communicate and to share the belief systems that grow out of our collective memories. It seems like a distinction without a difference for the purposes of analyzing this. Also, "taxes are high!" without the underlying "I don't have enough resources to get by" feels too much like a Mechanical Turk.




Evolve is another beast... but for the "I'd like to start from 'principles' and let systems evolve or at least be discoverable over time" part, hunt up a copy of "The Society of Mind" by Minsky, who was both a philosopher and an AI researcher and wrote about that idea.

https://en.wikipedia.org/wiki/Society_of_Mind

> The work, which first appeared in 1986, was the first comprehensive description of Minsky's "society of mind" theory, which he began developing in the early 1970s. It is composed of 270 self-contained essays which are divided into 30 general chapters. The book was also made into a CD-ROM version.

> In the process of explaining the society of mind, Minsky introduces a wide range of ideas and concepts. He develops theories about how processes such as language, memory, and learning work, and also covers concepts such as consciousness, the sense of self, and free will; because of this, many view The Society of Mind as a work of philosophy.

> The book was not written to prove anything specific about AI or cognitive science, and does not reference physical brain structures. Instead, it is a collection of ideas about how the mind and thinking work on the conceptual level.

It's very approachable for a layperson in that part of the field of AI.


Wow, you are maybe the first person I've seen cite Minsky on HN, which is surprising since he's arguably the most influential AI researcher of all time, short of maybe Turing or Pearl. To add to the endorsement: the cover of the book is downright gorgeous, in a retro-computing way.

https://d28hgpri8am2if.cloudfront.net/book_images/cvr9780671...


I've tangentially mentioned it before, though I don't think directly (it has influenced my theory of humor).

Mentions of it show up occasionally, though it seems to be more of a trickle than an avalanche. Much more so back when AI alignment was more in the news. https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...

Part of it, I suspect, is that it is a book book from the 80s that never really made the transition to digital. The people who are familiar with it are the ones who bought computer books in the late 80s and early 90s.

Similarly, "A Pattern Language" being a book from the time past that is accessible for a lay person in the field - though more in a tangental way. "A Pattern Language: Towns, Buildings, Construction" was the influence behind "Design Patterns: Elements of Reusable Object-Oriented Software" - though I believe the problem with Design Patterns is that it was seen more as a prescriptive rather than descriptive guide. Reading "A Pattern Language" can help understand what the GoF were trying to accomplish. ... And as an aside, and I also believe that it has some good advice for the setup of home offices and workplaces.

As much as I love the convenience of modern online book shopping and the amount of information available when searching, I feel the experience of browsing books in a book store, spotting something that looks interesting, and buying and reading it has largely been lost over the past decades.


Many of these projects are an inch deep on intelligence and miles deep on the current technology. Some fields will see tremendous benefits, but as far as artificial intelligence goes, we're not there yet. I'm thinking gaming will benefit a lot from these.


You mean we're not there in simulating an actual human brain? Sure. But we're seeing AI work like a human well enough to be useful; isn't that the point?


Not if we're pretending it is in any way intelligent. Other than that, I'm all in for new utility coming out of it. But I do see a lot of tangents off the technology with claims to something it is not, and I have no problem calling that out. Why do you mind? Just ignore me if I'm holding your enthusiasm back; there are plenty of sources to provide that for you.


> Not if we're pretending it is in any way intelligent.

We have been shifting the definition of what it means to be intelligent every 3 months, following the advances of LLMs...


There's also this:

https://en.m.wikipedia.org/wiki/Closed-world_assumption

I wonder: once LLMs exceed humans beyond some substantial threshold, will that crack the simulation, allowing us to get back in the game again?


Crack what simulation, exactly? You can get back into the game right now, armed with tools such as LLMs, ML, and so on.


Culture, memetics, consciousness, etc.

Indeed, but simply using them is not enough.


So what? I'm not disputing that the imitation of intelligence is good and gets better every 3 months or so. But that doesn't mean anything, even if it gets to 99.9%. It is not real intelligence, and it is quite limited in what it does. If LLMs solve logic problems or chemistry problems, it is not because they made a leap in understanding but because they were trained on a zillion examples. Give them a similar problem and they will try to shoehorn in an answer without understanding where it fails. Am I saying this is useless? No. What I'm saying is that the current approach to intelligence is missing some key ingredients. I'm actually surprised so many get fooled by the hype and are ready to declare a winner. Human intelligence, with its major flaws, is still king of the hill.


How do you distinguish between the real thing and a perfect simulation of the real thing?

You seem to be engaged in faith-based reasoning at this point. If you were born in a sensory deprivation chamber you also would have no inner world, and you wouldn't have anything at all to say about solving chemistry problems.

> I'm actually surprised so many get fooled by the hype and are ready to declare a winner.

Find me one person who actually says something like this. "AGI is here!" hype-lords exist only as a rhetorical device for the peanut gallery to ridicule.


It's the approach that matters. When it gets to 99.9 percent, it's good enough to be dangerous. At that point it would be hard to tell the difference, but not impossible. As soon as a new type of problem comes along, it will bork on you and need retraining. It'll be a game of catch-up, albeit a very inefficient one. I'm sure we will find a more efficient method eventually, but the point still stands: what we have isn't it.

I'll shut up when I see leaps in reasoning without specific training on every possible variation of the problem set.


I'll shut up when I see humans get 99.9% on anything. This seems an awful lot like non-meat-brain prejudice, where standards that humans do not live up to at all are imposed on other things before they are worthy of consideration.


I'd settle for an "I don't know" instead of confidently made-up nonsense.

People who say "I don't know" I can respect. People who confidently make up nonsense don't get many responsibilities from me.


That's actually good. The more voices the better; that will make for a more vibrant discussion.


Memory is really interesting. For example, if you play 100,000 rounds of 5x5 Tic-Tac-Toe, do you really need to remember game 51247, or do you recognize and remember a winning pattern? In reinforcement learning, you would revise the policy based on each win. How would that work for genAI?
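
For reference, the RL version of "remember the pattern, not the game" is just a value table updated after every result; the individual games are thrown away. A minimal tabular sketch in Python (the state encoding and parameters are placeholders):

    import random
    from collections import defaultdict

    # Learned value of each (state, move) pair, accumulated across games.
    # Game 51247 itself is never stored, only its effect on these numbers.
    Q = defaultdict(float)

    def update(episode, reward, alpha=0.1):
        """After a game, nudge every (state, move) we played toward the result."""
        for state, move in episode:
            Q[(state, move)] += alpha * (reward - Q[(state, move)])

    def pick_move(state, legal_moves, epsilon=0.1):
        """Mostly exploit learned values, occasionally explore."""
        if random.random() < epsilon:
            return random.choice(legal_moves)
        return max(legal_moves, key=lambda m: Q[(state, m)])

For genAI that cheap per-win revision has no obvious equivalent short of fine-tuning or carrying past games in the context, which I take to be the question.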


So a modernized version of Spore.


Basically what we all wished Spore had been ;-)


Huh, so the video actually works? It just shows «No video with supported format and MIME type found.» for me...

Yeah, memes and genes are both memory, though at different timescales.


It works on some browsers. I'm normally on Firefox but had to dust off Safari to watch it. Crazy I still have to do this in 2024...



