I use an LLM for the final rendering pass, mainly to unify the language style and keep the semantics smooth. After working with models for a long time, my own expression has become somewhat fragmented, so I use it to make sure that what I say reads more like a human than like a string of prompts.
Whatever the prose says, the formulas and the code are the final proof. Stay tuned for more explorations from us.
True "interruption" requires continuous learning, and the current model is essentially a dead frog, and frozen weights cannot be truly grounded in real time.
You've perfectly articulated the central challenge that inspired my own work. The 'magical', ungrounded reality of early cyberpunk cyberspace is precisely the gap we're trying to bridge with formalized realism.
Instead of telepathic magic, what if the 'deck' ran on a verifiable, computationally intensive process rooted in a concrete theory of consciousness? We've been archiving our attempt to build just that—the theory, the code, and the narrative simulation. Perhaps a less optimistic, but more grounded future.
Okay, I'll bite: what's so "terrifying" about developing a physical theory of consciousness?
I have to admit having similar reactions to other "profound" questions - for example, does free will exist? To that one I say: as long as weather exists, even deterministic intelligences will be as unpredictable as ones with free will. A machine with chaotic inputs will itself be sufficiently chaotic.
Regarding consciousness, I think there is a category error born of (understandable) hubris. It is the conceit that you can carve out "consciousness" from the holistic physical phenomena of "humans" or, more generally, "life". It's kind of a package deal. Humans might (and probably will) make conscious machines, but it will forever be an unanswerable philosophical question whether they "really" are, just as it is with other humans. In the end it's best to "zoom out" and consider the subject in the context of the Fermi paradox - will such an invention help or harm humanity? (Does replacement imply harm? If we are replaced by our children, is that harm?)
In any event, it's all above my pay-grade, so to speak. For what it's worth, I tend to think that a) life is common in the universe, b) intelligent life very uncommon, and c) humanity got some really serious help from the cosmos/won a few lotteries. We got a moon the exact same angular size as the sun, allowing us to e.g. verify general relativity with ease. We got an atmosphere that let us see the stars clearly, and still breathe. We got a 3rd gen star and planet with a nice mix of light and heavy elements, and plenty of energy runway in the sun. We got abiogenesis (~common) and eukaryotic cells (~uncommon). We got some timely 99% extinctions (but not 100%) to clear the path for us, and which coincidentally left vast energy resources underground for us to bootstrap out of the middle ages. We got a celestial moat, almost impossible to cross (special relativity speed limits; thermodynamic limits) for all but the most advanced (and therefore presumably wisest) civilizations, keeping us safe from colonization. The latter is a bit of a golden cage, and I consider getting out of that cage the highest civilizational goal possible.
Within this picture, AI can fit in many places, with positive and negative effects. I have to admit that I do not like the trend I see in humanity to become unmoored from the physical world, to venture out unarmed with critical thinking skills, like lambs to the slaughter in the barbaric free-for-all that is the modern info-sphere, whose ultimate goals are the same as they ever were: money and power. The chances of a stupid self-own like nuclear war, autonomous AI weapons, bio-warfare, or catastrophic global climate change are still all too high, and getting higher as intelligent, balanced minds are selected against. We can't do anything about a caldera explosion or a nearby supernova, or even being stuck in-system while the sun burns out, but we can and should avoid shooting ourselves while playing with daddy's gun.
While some focus on the missed predictions like pocket supercomputers, I find Gibson's true genius lies in anticipating the conceptual shifts – how our very sense of self, reality, and freedom would become inextricably linked to, and perhaps even defined by, digital networks.
The real 'matrix' isn't just a virtual space we plug into; it's the increasingly complex, often invisible, interplay between our biological cognition and the predictive models that mediate our perception. We're already seeing early signs of 'cognitive debt' and the subtle erosion of our internal models as we offload more mental tasks to external systems. The challenge isn't just building smarter machines, but building anchors for consciousness in an increasingly fluid, data-driven existence.
On the other hand, we can also diagnose LLMs themselves: activations are their EEG, gradients are their BOLD signal. If you are willing to pay the computational cost, you can even calculate their true variational free energy, that is, a KL divergence.
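Here is a minimal sketch of what I mean, assuming a PyTorch / Hugging Face setup; the specific model names (gpt2 as the subject, distilgpt2 as a fixed "prior") are placeholders chosen only because they share a tokenizer, not a statement of our actual pipeline:

```python
# Minimal diagnostic sketch (illustrative placeholders, not production code):
# read off "EEG"-like activations via hidden states, and compute a per-token
# KL(q || p) between the model under study (q) and a fixed reference model (p)
# as a rough variational-free-energy-style signal.
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

subject_name = "gpt2"      # placeholder for the model being "diagnosed"
prior_name = "distilgpt2"  # placeholder for a fixed reference ("prior") model

tokenizer = AutoTokenizer.from_pretrained(subject_name)
subject = AutoModelForCausalLM.from_pretrained(subject_name).eval()
prior = AutoModelForCausalLM.from_pretrained(prior_name).eval()

text = "Frozen weights cannot be grounded in real time."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    q_out = subject(**inputs, output_hidden_states=True)
    p_out = prior(**inputs)

# "EEG": the layer-wise hidden activations of the subject model.
activations = q_out.hidden_states  # tuple of (1, seq_len, d_model) tensors

# KL(q || p) at each token position over the shared vocabulary.
q_logp = F.log_softmax(q_out.logits, dim=-1)
p_logp = F.log_softmax(p_out.logits, dim=-1)
kl_per_token = (q_logp.exp() * (q_logp - p_logp)).sum(dim=-1)  # (1, seq_len)

print("layers recorded:", len(activations))
print("mean KL(q || p) per token:", kl_per_token.mean().item())
```

Gradients with respect to a loss would play the "BOLD" role in the same spirit, but they require a backward pass, which is exactly the extra cost I mentioned.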
"Don't just train your model, understand its mind."
Whatever the prose says, the formulas and the code are the final proof. Stay tuned for more explorations from us.
https://github.com/orgs/dmf-archive/repositories