I think this is how we get ML models to come up with novel ideas. Diagonalize against all the ideas they’ve already tried and dismissed via self-argument, while keeping certain consistency constraints. (Obviously much easier said than done.)
Scaled up and spread out - this probably gets you pretty close to consciousness(?)
Conway's Game of Life, but instead of colored squares with rules, they're LLMs with some kind of weighting - all chattering back and forth with one another - bubbling up somehow to cause speech/action
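For fun, here's a minimal sketch of what that grid might look like. Everything here is hypothetical: the "agents" are stand-in weighted functions where real LLM calls would go, the weights and threshold are arbitrary, and "bubbling up" is modeled as a simple supermajority vote.

```python
import random

GRID = 8
random.seed(0)

# Each cell gets a random weight and a random starting "message" (0 or 1).
weights = [[random.random() for _ in range(GRID)] for _ in range(GRID)]
messages = [[random.choice([0, 1]) for _ in range(GRID)] for _ in range(GRID)]

def neighbors(r, c):
    # Yield the last messages of the 8 surrounding cells (wrapping at edges).
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr or dc:
                yield messages[(r + dr) % GRID][(c + dc) % GRID]

def agent(r, c):
    # Placeholder for an LLM: weigh the neighborhood consensus and emit 0 or 1.
    consensus = sum(neighbors(r, c)) / 8
    return 1 if consensus * weights[r][c] > 0.25 else 0

def tick():
    global messages
    # All cells update simultaneously from the previous grid, Life-style.
    messages = [[agent(r, c) for c in range(GRID)] for r in range(GRID)]
    # "Bubbling up": act only when a supermajority of cells agree.
    active = sum(map(sum, messages))
    return "speak" if active > 0.75 * GRID * GRID else "silent"

for step in range(5):
    print(step, tick())
```

Swap `agent` for an actual model call with a prompt built from the neighbors' messages and you have the chattering-grid idea in its crudest form.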
Decades ago I read The Society of Mind by Marvin Minsky. He pushed this sort of idea, that consciousness is composed of individual, competing processes. Worth a revisit!
These models obviously have limitations, but many of the critiques apply equally, or even more so, to people.
If people were tasked with giving one-shot, 10-second answers, written out in near-errorless grammar, the LLMs viewing our responses to prompts would spend a lot of time discussing our limitations and how to game us into better responses. Humor, and not at all humor.