Here's where things get slightly creepy. I addressed the concept of "self" in a couple other walls of text over the past few days and, sure enough, I get this thread on my front page.
In short, do we even know where that goalpost is? The "self" is kind of a linguistic phantom: we talk about it as if we know what it is, we even sometimes attribute "selfhood" to non-human animals, inanimate objects with complex behavior, the words of a long-dead author - yet I still don't know of a technology that lets you experience anyone else's "self" but your own. Maybe with Neuralink-type brain-to-brain stuff we could convince ourselves we are experiencing another person's perceptions - but how can we ever be certain that we are experiencing their perception of selfhood in the same way that they do?
In the present day, the related question of how consciousness (with all its bells and whistles, including qualia and selfhood) arises from brain activity is only seriously engaged with by some fringe theorists, with predictably unsatisfactory results; while mainstream authors just handwave the whole thing away. Thinkers fundamental to our cultural tradition, like Plato and Descartes, pondered these matters in two completely different ages, and came to the same conclusion that this is somehow beyond the knowable, and indeed if you poke too much at it you end up having to reconstruct your cognition from first principles.
This is why I posit the "school of hard knocks" answer to the "hard problem of consciousness": for a thing to have a self, it has to fend for itself. It's how we've been producing "selves" for millennia without being able to model them. But this still has very low explanatory power (beyond giving someone a hard knock when they ask a hard question), so I'm not really planning to make any YouTube videos about it.
Personally, I'm partial to Julian Jaynes' yarn, but it's still an outside view - a history of the cultural concept of consciousness, not of consciousness itself. One interesting thing that one may derive from it is that the ancient pagan gods were "China brain" consciousnesses running as background processes on the brains of entire nations, and the founding fathers of monotheistic religions perpetrated the greatest "white hat" hacks in history. (Also the JavaScript ecosystem may be conscious in a "China brain" sense, and laughing at us.)
I suspect that, if "the self" is not just a word, some neural network may end up containing an accidental model of "selfhood" itself, and not just of the usage of the word "self" - and we would still be incapable of recognizing such a model when we see it.
If you have any ideas about how you would even model a thing that contains all your perceptions, and is not observable from the outside, I'm eager to hear them. Maybe you see something I don't.
I am confident that the scientific process will lead to systems that will contain perception and self.
The interesting thing about philosophy is that it perceives perception from inside.
I would use philosophical texts as design documents and turn them into code. There was a shift in mathematical algorithms from texts to formulas. It made reasoning much easier. Likewise, I think reasoning about philosophies will be easier when they are formalized.
Once philosophy becomes code, it can be combined with the signal processing code and code that models the brain. Having an idea of what to look for, it could be easier to discover self than to wait for selves that fend for themselves.
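To make the "design document" idea concrete, here's a toy sketch of what encoding one philosophical claim as executable code might look like. Everything here (the `Agent` class, the `cogito` predicate) is a hypothetical illustration of the approach, not an established formalization of Descartes:

```python
# Toy sketch of "philosophy as design document": encode one claim from a
# text as an executable predicate, then check it against a model.
# The Agent structure and the predicate are invented for illustration.

from dataclasses import dataclass, field


@dataclass
class Agent:
    """Minimal agent: a bag of perceptions plus a model of itself."""
    perceptions: set = field(default_factory=set)
    self_model: set = field(default_factory=set)


def cogito(agent: Agent) -> bool:
    """Descartes, very loosely: an agent that represents its own act of
    perceiving counts as 'thinking', hence as existing-for-itself."""
    return "perceiving" in agent.self_model


a = Agent(perceptions={"light", "sound"}, self_model={"perceiving"})
print(cogito(a))  # True: the agent models its own perceiving
print(cogito(Agent(perceptions={"light"})))  # False: perceives, but blindly
```

The point of such an exercise wouldn't be that this *is* the cogito, but that once the claim is a predicate, you can run it against brain-model code and argue about counterexamples instead of prose.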
Glad you're still here for us to have this conversation! I've considered the same experiment and would love to see a demo of what you think this would look like in practice.
>The interesting thing about philosophy is that it perceives perception from inside.
Isn't that also the futile thing about it, though? It can reflect on reflection ad infinitum - while being subject to the same external forces and constraints as other, more linear human activities: e.g. to do philosophy one needs to find an academic institution, wealthy patron, or circle of like-minded folks, who would publish it for future outsiders like us to appreciate; one needs to avoid retaliation for disrupting the discourses of power; etc.
>Once philosophy becomes code, it can be combined with the signal processing code and code that models the brain. Having an idea of what to look for, it could be easier to discover self than to wait for selves that fend for themselves.
Have you considered that an organism as simple as a bacterium might possess perception and experience? It would know no restraint or reflection, only one or two overwhelmingly pure emotions depending on whether it's feeding, being fed upon, dividing, transferring genes, etc. As evolution layers more complex behaviors on top of this "primordial spark of consciousness", the internal experience of the organism would become more complex until we get to the present state of affairs.
Of course current science doesn't agree with the idea of consciousness without a nervous system - although it doesn't convincingly explain the relation between the two, either. (Favorite crack: how exactly have we confirmed that the brain is not just a big antenna for some transmission we haven't been able to observe yet, with conscious experience originating somewhere entirely outside the physical, on a client-server basis?)
But I think the connectome of something like a nematode or fruit fly has been mapped. So maybe one could start looking for that "proto-self" in a recording of the activity of such a simulated connectome over time?
Also, I've read a couple of sci-fi writers who try to address the technical details of simulated consciousness for nerd cred; they just hand-wave away the discrete nature of the simulation, positing continuity of consciousness even when running at <1 FPS. If one could somehow identify "consciousness" in a simulated nervous system, it would be possible to verify that experimentally.
How would we be able to identify a particular feedback loop between organism and environment as "consciousness" or "self-experience" though?
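To make the "recording" idea concrete, here's a cartoon of what such a dataset could look like: a few leaky-integrator units wired in a fixed loop, stepped in discrete time, with every state logged. The wiring and the neuron model are invented for illustration; the real mapped connectomes (C. elegans, fruit fly) are vastly richer, but the shape of the artifact - a time series of unit activations to mine for patterns - would be the same:

```python
# Toy "connectome recording": a hypothetical 3-neuron loop
# (sensor -> inter -> motor -> back to inter), stepped in discrete
# time. Not biologically faithful; just shows what the raw material
# for a "proto-self" search would look like.

import math

weights = {
    ("sensor", "inter"): 1.0,
    ("inter", "motor"): 0.8,
    ("motor", "inter"): -0.5,  # feedback loop between organism parts
}
neurons = ["sensor", "inter", "motor"]


def step(state, stimulus, leak=0.7):
    """One discrete tick: each unit leaks its old value and sums
    weighted inputs, squashed through tanh."""
    new = {}
    for n in neurons:
        inp = stimulus if n == "sensor" else 0.0
        inp += sum(w * state[src] for (src, dst), w in weights.items() if dst == n)
        new[n] = math.tanh(leak * state[n] + inp)
    return new


state = {n: 0.0 for n in neurons}
trace = []
for t in range(50):
    state = step(state, stimulus=1.0 if t < 5 else 0.0)  # brief stimulus, then silence
    trace.append(dict(state))  # the "recording" one would mine for structure

print(len(trace))  # 50 timesteps of full network state
```

Whatever "proto-self" criterion one proposed (some feedback motif, some self-predictive structure) would have to be a function over a trace like this - which also makes the sci-fi question testable: does the criterion still hold when you step it at <1 FPS?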
>I would use philosophical texts as design documents and turn them into code.
The main obstacle I see is that philosophical texts are linguistic in nature. If you can find a base text that is "dry" enough (I've seen works of analytic philosophy already structured as paragraphs of bullet points, so that could be a start; but then you might as well start with the penal code of a small country - legal thought is also a form of philosophy, and it's one of the few practical applications of theories of selfhood we see today), you could write a program that applies the conditionals described in the source text. But how would acting according to those conditionals work? Especially if the program has no intrinsic goals of its own, like self-preservation?
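For what "applying the conditionals" might mean mechanically, here's a minimal forward-chaining sketch. The rules and facts are invented stand-ins for hand-extracted statute paragraphs; nothing here is a real legal code:

```python
# Sketch of applying conditionals extracted from a hypothetical legal
# text. Each rule pairs a condition over (facts, prior findings) with a
# consequence; rules fire until nothing new can be derived.

rules = [
    # "Whoever takes property belonging to another commits theft."
    (lambda f, k: f.get("took_property") and not f.get("owns_it"), "theft"),
    # "Theft committed by night is aggravated."
    (lambda f, k: "theft" in k and f.get("at_night"), "aggravated_theft"),
]


def apply_rules(facts):
    """Forward-chain over the rule set until a fixed point."""
    findings = set()
    changed = True
    while changed:
        changed = False
        for cond, consequence in rules:
            if consequence not in findings and cond(facts, findings):
                findings.add(consequence)
                changed = True
    return findings


print(sorted(apply_rules({"took_property": True, "at_night": True})))
# ['aggravated_theft', 'theft']
```

Which also illustrates the objection: the engine only classifies situations; it has no stake in any of them. Without something like self-preservation driving it, "acting according to the conditionals" never enters the picture.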
For me, philosophical texts are more like the fossilized byproducts of someone's consciousness, rather than blueprints for it. I'm interested in things that could disprove that view.
It doesn't matter how fossilized philosophical texts are. By turning them into code, their structure becomes alive. Starting with a penal code is an interesting idea worth trying, although it could be a dead end, since it is all about setting limits on the self.
I think that consciousness will reveal itself in the not-so-distant future. There are already implants for blind people. More and more parts of the brain will be replaced, which will reveal where consciousness is situated. I like to think that fruit flies are also conscious, so it could also be possible to enhance the brains of flies. However, I expect that it's easier to enhance human minds and let humans communicate their experience than it is to find the consciousness patterns in flies.
>to do philosophy one needs to find an academic institution, wealthy patron, or circle of like-minded folks,
All you need is a blog. But I don't think that engaging in the current style of philosophy is time well spent because written language could be at its limit. Philosophy has the ideas of people who were thinking for several thousand years. They were very keen on being right. That could be a solid foundation to build on. The bones of birds are not very helpful to design planes but they still offer the idea of wings.