Hacker News

I don't know why I expected this to be about Self, that one language with prototypes.

https://selflanguage.org



It's kind of related, isn't it?

> “Romantic poetry was unruly, dynamic, alive and forever changing, they believed, and should not be corseted by metric patterns because it was a ‘living organism.’” According to Friedrich Schlegel, “it should forever be becoming, never perfected.”

The idea of having prototypes and no classes rhymes with forever becoming, whereas classes are aligned with reasoning.
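A small sketch of that contrast, using JavaScript (where Self's prototype model survives): there is no class to perfect up front, only living objects that other objects delegate to and that can keep changing shape at runtime. The names here are illustrative, not from any real codebase.

```javascript
// A Self-style prototype: just an object, not a class declaration.
const poem = {
  lines: [],
  recite() { return this.lines.join("\n"); }
};

// A new object "becomes" from the prototype via delegation,
// rather than being instantiated from a fixed blueprint.
const sonnet = Object.create(poem);
sonnet.lines = ["Shall I compare thee", "to a summer's day?"];

// The prototype itself can keep changing after the fact, and every
// delegating object picks the change up immediately --
// "forever becoming, never perfected".
poem.lineCount = function () { return this.lines.length; };

console.log(sonnet.recite());
console.log(sonnet.lineCount()); // resolved through the prototype chain
```

The point of the sketch is that `lineCount` did not exist when `sonnet` was created, yet `sonnet` gains it anyway; a class-based design would fix the shape of instances at definition time.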

>The Enlightenment’s emphasis on reason rather than feeling, Schiller claimed, had led to the excesses of the French Revolution.

The French Revolution being driven by class consciousness.

The following sentence is strange, though:

>“Utility is the great idol of our time, to which all powers pay homage.” Beauty, on the other hand, leads to ethical principles, and art, as its vessel, makes us better people as well as wiser ones.

When beauty is truth, how is it distinct from reasoning?

>He was a key figure in the Jena thinkers’ attempt to fuse the scientific and the emotional, art and nature; he preceded Keats in judging truth to be beauty, beauty truth.

I would love to see an approach to build AI programmatically. Instead of training NNs, why not use the analytical foundations of the philosophers to create it? A language like Self, or its relative JavaScript, could be the tool to create a self in a machine.


>I would love to see an approach to build AI programmatically.

But they've been trying to do that since (at least) the 1950s! And indeed moving the goalposts further and further as our own ideology of "what is intelligence" evolves. The evolution of programming languages and programmed systems goes hand in hand with that.

Until we get to the present situation where someone invented a program so generic and all-encompassing (a language model, ffs!) that it turns out all "meaning" and "reasoning" is just a moderately illusory effect induced by correlations between morphological tokens: to a large language model, things that "rhyme" (more like "appear in proximity within a corpus of text" - but that's a criterion of similar complexity to "sound alike or have similar morphological origin") are the things that make sense. And a lot of people can't tell the difference.

> Instead of training NNs, why not use the analytical foundations of the philosophers to create it? A language like Self, or its relative JavaScript, could be the tool to create a self in a machine.

You could imagine human language as akin to a cellular automaton of virtually infinite complexity, serving as the substrate for all thought; programming languages, and indeed any formal language, such as mathematics and the subsets of it used in different sciences, are abstractions over that infinite field of possible patterns: they hide the messy details and provide idioms of varying (but not infinite) complexity for achieving particular goals (such as making webpages dance). While a neural network could be said to "see everything simultaneously", in JavaScript you can't even be sure what the keyword "this" refers to from context - you have to trust the docs or look under the hood. Vastly different scopes aside, AFAIK you can't beat the notion of "selfhood" into either a program or an NN, because neither actually has to fend for itself in order to keep thinking.
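The point about "this" can be shown in a few lines. This is a minimal sketch (the names are made up for illustration) of how `this` in JavaScript is resolved at the call site, not where the function was written:

```javascript
const machine = {
  name: "machine",
  whoAmI() { return this && this.name; }
};

console.log(machine.whoAmI()); // "machine": called through the object

const detached = machine.whoAmI; // the very same function object...
// ...but called bare, it loses its receiver: "this" becomes undefined in
// strict mode, or the global object otherwise -- you can't tell from the
// source line alone which it will be.
console.log(detached());

// And any caller can inject an arbitrary "this" after the fact.
console.log(detached.call({ name: "ghost" })); // "ghost"
```

So the same function yields different answers depending on how it is invoked, which is exactly the "you gotta look under the hood" problem.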

>When beauty is truth, how is it distinct from reasoning?

One of those is a subset of the other, and for many people the need for "intelligence" and "reasoning" never went any further than knowing how to operate the right machine. (There exist unimaginably many beautiful and useful things that each one of us fails to grasp, because we're busy parroting the opinions of whoever reached the opinion-parroting machine first.) But it inevitably turns out that when beauty does not figure into the equation, there remains very little in this world worth reasoning about - other than ensuring adequate caloric intake and that sort of thing.

It would be interesting to see what GPT has to say to your question, though. Anyone feel like asking it? I dropped my barge pole in the swamp.


>moving the goalposts further and further as our own ideology of "what is intelligence" evolves

We have moved the goalpost for intelligence, but not for the self. There are also programs that model the units of the human brain, but I am not aware of programs that model the self.


Here's where things get slightly creepy. I addressed the concept of "self" in a couple other walls of text over the past few days and, sure enough, I get this thread on my front page.

In short, do we even know where that goalpost is? The "self" is kind of a linguistic phantom: we talk about it as if we know what it is, we even sometimes attribute "selfhood" to non-human animals, inanimate objects with complex behavior, the words of a long-dead author - yet I still don't know of a technology that lets you experience anyone else's "self" but your own. Maybe with Neuralink-type brain-to-brain stuff we could convince ourselves we are experiencing another person's perceptions - but how can we ever be certain that we are experiencing their perception of selfhood in the same way that they do?

In the present day, the related question of how consciousness (with all its bells and whistles, including qualia and selfhood) arises from brain activity is only seriously engaged with by some fringe theorists, with predictably unsatisfactory results; while mainstream authors just handwave the whole thing away. Thinkers fundamental to our cultural tradition, like Plato and Descartes, pondered these matters in two completely different ages, and came to the same conclusion that this is somehow beyond the knowable, and indeed if you poke too much at it you end up having to reconstruct your cognition from first principles.

This is why I posit the "school of hard knocks" theory for the "hard problem of consciousness": for a thing to have a self, it has to fend for itself. It's how we've been producing "selves" for millennia without being able to model them. But this still has very low explanatory power (beyond giving someone a hard knock when they ask a hard question), so I'm not really planning to make any YouTube videos about it.

Personally, I'm partial to Julian Jaynes' yarn, but it's still an outside view - a history of the cultural concept of consciousness, but still not of consciousness itself. One interesting thing that one may derive from it is that the ancient pagan gods were "China brain" consciousnesses running as background processes on the brains of entire nations, and the founding fathers of monotheistic religions perpetuated the greatest "white hat" hacks in history. (Also the JavaScript ecosystem may be conscious in a "China brain" sense, and laughing at us.)

I suspect that, if "the self" is not just a word, some neural network may end up containing an accidental model of "selfhood" itself, and not just of the usage of the word "self", and we would still be incapable of recognizing such a model when we see it.

If you have any ideas about how you would even model a thing that contains all your perceptions, and is not observable from the outside, I'm eager to hear them. Maybe you see something I don't.


I am confident that the scientific process will lead to systems that will contain perception and self.

The interesting thing about philosophy is that it perceives perception from inside.

I would use philosophical texts as design documents and turn them into code. There was a shift in mathematical algorithms from texts to formulas. It made reasoning much easier. Likewise, I think reasoning about philosophies will be easier when they are formalized.

Once philosophy becomes code, it can be combined with the signal processing code and code that models the brain. Having an idea of what to look for, it could be easier to discover self than to wait for selves that fend for themselves.


Glad you're still here for us to have this conversation! I've considered the same experiment and would love to see a demo of what you think this would look like in practice.

>The interesting thing about philosophy is that it perceives perception from inside.

Isn't that also the futile thing about it, though? It can reflect on reflection, ad infinitum - while being subject to the same external forces and constraints as other, more linear human activities: e.g. to do philosophy one needs to find an academic institution, a wealthy patron, or a circle of like-minded folks who would publish it for future outsiders like us to appreciate; one needs to avoid retaliation for disrupting the discourses of power; etc.

>Once philosophy becomes code, it can be combined with the signal processing code and code that models the brain. Having an idea of what to look for, it could be easier to discover self than to wait for selves that fend for themselves.

Have you considered that an organism as simple as a bacterium might possess perception and experience? It would know no restraint or reflection, only one or two overwhelmingly pure emotions depending on whether it's feeding, being fed upon, dividing, transferring genes, etc. As evolution layers more complex behaviors on top of this "primordial spark of consciousness", the internal experience of the organism would become more complex until we get to the present state of affairs.

Of course, current science doesn't agree with the idea of consciousness without a nervous system - although it doesn't convincingly explain their relation, either. (Favorite crack: how exactly have we confirmed that the brain is not just a big antenna for some transmission we haven't been able to observe yet, and that conscious experience doesn't originate somewhere entirely outside the physical, on a client-server basis?)

But I think the connectome of something like a nematode or fruit fly has been mapped. So maybe one could start looking for that "proto-self" in a recording of the activity of such a simulated connectome over time?

Also, I've read a couple of sci-fi writers who try to address the technical details of simulated consciousness for nerd cred; they just hand-wave away the discrete nature of the simulation, positing continuity of consciousness even when running at <1 FPS. If one could somehow identify "consciousness" in a simulated nervous system, it would be possible to verify that experimentally.

How would we be able to identify a particular feedback loop between organism and environment as "consciousness" or "self-experience" though?

>I would use philosophical texts as design documents and turn them into code.

The main obstacle I see is that philosophical texts are linguistic in nature. If you can find a base text that is "dry" enough, you could write a program that applies the conditionals described in the source text - but how would acting according to those conditionals work, especially if the program has no intrinsic goals of its own, like self-preservation? (I've seen works of analytic philosophy already structured as paragraphs of bullet points, so that could be a start; but then you might as well start with the penal code of a small country - legal thought is also a form of philosophy, and it's one of the few practical applications of theories of selfhood that we see today.)
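To make "applying the conditionals of a source text" concrete, here is a hypothetical sketch in which each clause of a text becomes an explicit rule the program can evaluate. The clauses below are invented for illustration and not taken from any real penal code or philosophical work:

```javascript
// Each clause of the (imaginary) source text becomes one rule:
// a predicate over a situation, plus the verdict the clause prescribes.
const rules = [
  { clause: "acts under duress are not culpable",
    applies: s => s.underDuress === true,
    verdict: "not culpable" },
  { clause: "intentional harm is culpable",
    applies: s => s.intentional === true && s.harmCaused === true,
    verdict: "culpable" },
];

function judge(situation) {
  // First matching clause wins; rule ordering encodes the text's structure.
  const rule = rules.find(r => r.applies(situation));
  return rule
    ? { verdict: rule.verdict, because: rule.clause }
    : { verdict: "undetermined", because: "no clause applies" };
}

console.log(judge({ underDuress: true, intentional: true, harmCaused: true }));
```

Even this toy version surfaces the hard part: the rules only classify situations handed to them; nothing in the program wants anything, so "acting according to" the verdicts is still missing.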

For me, philosophical texts are more like the fossilized byproducts of someone's consciousness, rather than blueprints for it. I'm interested in things that could disprove that view.


It doesn't matter how fossilized philosophical texts are. By turning them into code, their structure becomes alive. Starting with a penal code is an interesting idea worth trying, although it could be a dead end, since it is all about setting limits on the self.

I think that consciousness will reveal itself in the not so distant future. There are already implants for blind people. More and more parts of the brain will be replaced, which will reveal where consciousness is situated. I like to think that fruit flies are also conscious, so it could also be possible to enhance the brains of flies. However, I expect that it's easier to enhance human minds and let humans communicate their experience than it is to find the consciousness patterns in flies.

>to do philosophy one needs to find an academic institution, wealthy patron, or circle of like-minded folks,

All you need is a blog. But I don't think that engaging in the current style of philosophy is time well spent, because written language could be at its limit. Philosophy holds the ideas of people who have been thinking for several thousand years. They were very keen on being right. That could be a solid foundation to build on. The bones of birds are not very helpful for designing planes, but they still offer the idea of wings.



