I suppose the problem in multiplayer is that everyone has the same wall clock time, so you couldn't easily have consistent time dilation and related effects such as the twin paradox.
Except that, as far as I understand, one of the inspirations for the Turing machine was precisely to capture the computations a human computer could perform with (potentially a lot of) pen and paper.
Considering 1TB SanDisk SSDs (which have notoriously had high failure rates) were around $100 for a while, it's pretty impressive they got that capacity into this form factor. I wouldn't trust them for anything important, though.
Or a person could have the program critique their flashcards as they write them, or suggest new sorts of flashcards to create, without it doing the work for them by generating the cards automatically.
I think one should be somewhat skeptical of the claim that schizophrenia is completely absent in blind people; it might merely be more difficult to diagnose. And since the population of congenitally blind people is fairly small, any cases that do exist could escape notice. There was an informative post on Less Wrong about this: https://www.lesswrong.com/posts/z9Syf3pGffpvHwfr4/i-m-mildly...
It seems like it stems from a 2019 philosophy article written by Perry Zurn, titled "Busybody, Hunter, Dancer: Three Historical Modes of Curiosity."
Zurn does write "At their most basic level, a busybody is someone who is curious about other people's business," but develops the concept a bit further. Zurn says "The busybody's ideational sphere, for example, is characterized by quick associations, discrete pieces of information, and loose knowledge webs. They are interested in conceptual rarities: whatever lies outside of their knowledge grids."
Whereas the research article Zhou et al. (2024) states "Hunters build tight, constrained networks whereas busybodies build loose, broad networks." So it seems their conception of busybody roughly matches Zurn's description.
See the methods section (https://www.science.org/doi/10.1126/sciadv.adn3268#sec-4) for a description of how Zhou et al. (2024) aggregate graph-theoretic metrics to define the "busybody" and "hunter" styles of navigating Wikipedia.
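To make the graph-theoretic idea concrete, here's a toy Python/networkx sketch. To be clear, this is not their actual pipeline: the navigation_graph construction (edges between consecutively visited pages) and the example data are invented for this comment. It just shows the kind of metrics that separate "tight" from "loose" networks, namely clustering coefficient and characteristic path length.

    # Toy sketch, NOT Zhou et al.'s method: treat visited pages as
    # nodes, connect consecutive visits, measure network "tightness".
    import networkx as nx

    def navigation_graph(visits):
        """Undirected graph from an ordered list of visited page titles."""
        g = nx.Graph()
        g.add_nodes_from(visits)
        g.add_edges_from(zip(visits, visits[1:]))
        return g

    def tightness(graph):
        """High clustering + short paths ~ 'hunter'; the reverse ~ 'busybody'."""
        clustering = nx.average_clustering(graph)
        # Restrict to the largest component so path length is defined.
        core = graph.subgraph(max(nx.connected_components(graph), key=len))
        return clustering, nx.average_shortest_path_length(core)

    hunter = ["Graph", "Tree", "Forest", "Graph", "Cycle", "Tree"]
    busybody = ["Graph", "Opera", "Cheese", "Mars", "Haiku", "Chess"]
    print(tightness(navigation_graph(hunter)))    # tight: ~(0.83, 1.17)
    print(tightness(navigation_graph(busybody)))  # loose: (0.0, ~2.33)

A reader who keeps revisiting related topics produces a clustered graph with short paths; a reader who hops between unrelated topics produces a sparse chain.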
The human brain, the authors argue, in fact uses multiple networks when interpreting and producing language. These include:
- the language network, which delivers formal linguistic competence
- the multiple demand network, which provides reasoning ability
- the default network, which tracks narratives above the clause level
- the theory of mind network, which infers the mental state of another entity
This leads to their argument that a modular architecture would enhance an LLM's ability to be both formally and functionally competent. (While LLMs currently exhibit human-level formal linguistic competence, their functional competence--the ability to navigate the real world through language--has room for improvement.)
Transformer models, they note, have a degree of emergent modularity through "allowing different attention heads to attend to different input features."
I was wondering: is it possible to characterize the degree of emergent modularity in current systems?
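One crude way to probe it, and this is just a hypothetical sketch rather than an established metric: if heads are emergently modular, their attention distributions should be individually peaked (low entropy) and mutually dissimilar (low overlap). The head_entropy and head_overlap functions and the random toy input below are all made up for illustration; you'd substitute a real attention tensor from a trained model.

    import numpy as np

    def head_entropy(attn):
        """attn: (heads, queries, keys), each row summing to 1.
        Mean attention entropy per head; low = specialized."""
        ent = -(attn * np.log(attn + 1e-12)).sum(axis=-1)
        return ent.mean(axis=-1)

    def head_overlap(attn):
        """Mean pairwise cosine similarity between flattened head
        maps; low = heads attend to different things."""
        flat = attn.reshape(attn.shape[0], -1)
        flat = flat / np.linalg.norm(flat, axis=1, keepdims=True)
        sim = flat @ flat.T
        h = len(flat)
        return (sim.sum() - h) / (h * (h - 1))  # mean off-diagonal

    # Toy stand-in for a real model's attention: 4 heads, 5x5 softmax.
    rng = np.random.default_rng(0)
    logits = rng.normal(size=(4, 5, 5))
    attn = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)
    print(head_entropy(attn), head_overlap(attn))

The interpretability literature uses more sophisticated tools (probing, head ablation, clustering), but something along these lines would at least give a number to compare across models.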
One of the big limitations in LLMs is that they only have a single context window. People throw things that probably shouldn't be mixed together into the same context and hope for the best (e.g. system prompt, RAG context, user input, LLM output).
This is basically the difference between a Turing machine with one tape and one with multiple tapes. While in theory extra tapes don't make the Turing machine any more powerful, they save a whole lot of bookkeeping operations that are otherwise necessary to work around the limitations of a single tape.
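For illustration, here's the classic interleaving simulation in miniature (just a sketch; the InterleavedTapes class is invented for this comment). Cell i of tape k is stored at physical index i * num_tapes + k, so the machine still works, but every logical operation drags around head-position bookkeeping that a true multi-tape machine gets for free.

    # Sketch: simulating several tapes on one by interleaving cells.
    class InterleavedTapes:
        def __init__(self, num_tapes=2, blank=None):
            self.k = num_tapes
            self.cells = {}                  # physical index -> symbol
            self.heads = [0] * num_tapes     # logical head positions
            self.blank = blank

        def _phys(self, tape):
            # Bookkeeping: map a logical head to its physical cell.
            return self.heads[tape] * self.k + tape

        def read(self, tape):
            return self.cells.get(self._phys(tape), self.blank)

        def write(self, tape, symbol):
            self.cells[self._phys(tape)] = symbol

        def move(self, tape, delta):         # delta is -1 or +1
            # One logical step jumps k physical cells.
            self.heads[tape] += delta

    t = InterleavedTapes()
    t.write(0, "a"); t.move(0, 1); t.write(0, "b")
    t.write(1, "x")
    print(t.read(1))  # "x": the tapes never clobber each other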
Another limitation is the inability to seek, i.e. to move the head back and forth and rewrite old data in the context.
I’m not sure if this is exactly what you are referring to, but Anthropic has done a lot of interpretability work on Claude, which they’ve published along with the famous "Golden Gate Claude".^1
"We also find more abstract features—responding to things like bugs in computer code, discussions of gender bias in professions, and conversations about keeping secrets."
I am hopeful that advances in brain-computer interfaces will start to provide a partial answer to the question of "what's there" and why it's there. It seems to me that being able to augment one's own consciousness in a controlled, precise way would tremendously clarify the necessary ingredients for consciousness.
Neuroscience has already done lots of investigation into what's there and why. We know which structures in the brain do what (including "consciousness") at an increasingly fine level. We can observe all sorts of brain disorders and dysfunctions and their effects on consciousness. You can even take drugs yourself to alter the ingredients of consciousness.
I think people just don't like how boring the answer is.
No, I don't think you understand how fundamentally hard the question is. See the hard problem of consciousness[1]. When you think about gravity, you can imagine a universe where gravity is reversed. All of the physics seems mechanical, or probabilistic, or whatever. But "there's there there" is a completely different phenomenon, one that I think we will never have an answer for.
There doesn't seem to be a continuum: either something is there, or there isn't. You can be drunk, hallucinating, extremely dizzy, trapped in a vat, trapped inside another universe inside a vat, trapped as a figment of other beings' reality, but the fact that "there's there there" is binary. It is something that cannot be divided or peeked into. A kind of fundamental atomic property.
I understand the hard problem. There clearly is a continuum of "consciousness" from simple insects on up to primates & cetaceans, and the complexity of that consciousness is correlated with the structural complexity of the brain. Creating minds is just what brains do.
Anyway, I was specifically responding to the parent comment statements about the research needed, pointing out that we already have it.
I don't think the neural correlates of consciousness have been identified, so clearly there remains research to be done. I'm not in touch with the neuroscience literature, but acknowledging the hard problem of consciousness means accepting that there is a lot of work remaining. That said, I believe the hard problem is surmountable. In my mind the situation is similar to computer science before Turing's description of the Turing machine: imprecise notions abounded about what computation meant, and they needed to be clarified through a concrete model. My view is simply that finer control over conscious experience would aid understanding enormously. But you're right, I should probably skim the real research more.
Because the evidence strongly indicates so: we haven't found a single instance in the entire universe of a conscious entity that wasn't also a living being.
And 200 years ago only living beings could fly, but now we have machines that can fly higher and faster than any bird, even to Mars. Back then you could have said flight requires a living being, because that's all we'd ever seen.
A hundred years ago computers didn't exist, and now they've been beating us at chess for some time. That you haven't seen a conscious computer yet in no way proves that it's not possible in principle.
I never said that the evidence proves that conscious machines are impossible; I said it strongly suggests that they're impossible. Also, I disagree that airplanes can fly, for the same reason that I don't agree that boats can swim: the term "can" implies agency, and machines don't have that. All the technological progress of the last 200 years doesn't appear to have brought machines any closer to having such capability.