> We call it a valid instance of an Othello game board. We. Not GPT. We. People who know the symbolic meaning of "Othello game board"...
> The board structure can be defined precisely using predicate logic as (X, d), i.e., it is strictly below natural language and does not require human interpretation.
> And by "reduction" I meant the word in the technical sense: there exists a subset of ChatGPT that encodes the information (X, d). This also does not require a human.
The act of reading is human interpretation. Its inverse function, writing, is human expression. These are the functions GPT pretends to implement.
When we write, we don't just spit out a random stream of characters: we choose groups of characters (subjects) that have symbolic meaning, and we choose order and punctuation (grammar) that model the logical relationships between those symbols. The act of writing is constructive: even though, in the most literal sense, text is only a one-dimensional list of characters, the text humans write can encode many arbitrary and complex data structures. It is the act of writing that defines those structures, not the string of characters itself. The entropy of the writer's decisions is the data that gets encoded.
When we read, we recognize the same grammar and subjects (the symbolic definitions) that we use to write. Using this shared knowledge, a person can reconstruct the same abstract model that was intentionally and explicitly written. Because we have explicitly implemented the act of writing, we can do the inverse, too.
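A minimal sketch of that round trip, assuming a made-up transcript format (the move grammar and the function names here are illustrative, not any standard Othello notation):

```python
# A toy "grammar" for Othello transcripts: a game is a list of moves,
# each move a (player, column, row) tuple. The writer flattens this
# structure into a 1-D string; a reader who shares the same symbolic
# definitions reconstructs it. The format is invented for illustration.

COLS = "abcdefgh"

def write_game(moves):
    """The act of writing: each character choice is a decision that
    encodes part of the abstract structure (player, square, order)."""
    return " ".join(f"{player}:{COLS[col]}{row + 1}"
                    for player, col, row in moves)

def read_game(text):
    """The inverse act: recognizing the same grammar and symbols
    to reconstruct the structure the writer had in mind."""
    moves = []
    for token in text.split():
        player, square = token.split(":")
        moves.append((player, COLS.index(square[0]), int(square[1]) - 1))
    return moves

game = [("B", 3, 2), ("W", 2, 2), ("B", 2, 3)]   # d3, c3, c4
text = write_game(game)                           # "B:d3 W:c3 B:c4"
assert read_game(text) == game                    # reading inverts writing
```

The string in the middle is strictly one-dimensional, but the structure survives the round trip because writer and reader apply the same symbolic definitions.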
There's a problem, though: natural language is ambiguous. What is explicitly written could be read under different symbolic definitions. We disambiguate using context: the surrounding narrative determines which symbolic definitions apply.
The surrounding narrative is not always explicitly written: this is where we use inference. We construct our own context to finish the act of reading. This is much more similar to what GPT does.
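A toy illustration of both steps, built on a genuinely ambiguous token: in board-game coordinates, "d3" names a square, while in dice notation it names a three-sided die. Both conventions are real; the inference heuristic below is invented for the example:

```python
# The characters "d3" are only characters. Which symbol they denote
# depends on the symbolic definitions the reader applies, and context
# selects those definitions.

def read_d3(context):
    if context == "othello":
        return ("square", "abcdefgh".index("d"), 3 - 1)   # column d, row 3
    if context == "dice":
        return ("die", 3)                                  # a 3-sided die
    raise ValueError("unknown context")

def infer_context(surrounding_tokens):
    """When context isn't explicit, the reader constructs it: if the
    neighbors look like board squares, assume a game transcript.
    (This heuristic is made up for the sketch.)"""
    squares = sum(t[0] in "abcdefgh" and t[1:].isdigit()
                  for t in surrounding_tokens)
    return "othello" if squares >= len(surrounding_tokens) / 2 else "dice"

print(read_d3("othello"))                          # ('square', 3, 2)
print(read_d3(infer_context(["c4", "e6", "f5"])))  # inferred: a game transcript
```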
GPT does not define any symbols. It never makes an explicit construction, and it never determines which patterns in its model are important and which ones aren't.
Instead, GPT makes implicit constructions. It doesn't have any predefined patterns to match against, so it just weighs all the patterns equally.
Why does this work? Because text doesn't contain many unintentional patterns. Any pattern that GPT finds implicitly likely corresponds to some step in the writing process.
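A toy model of this claim, with an invented generative rule: text produced by a walk on a hidden graph contains, as its bigram statistics, exactly the edges of that graph. A pattern-counter that knows nothing about the rule still only ever finds patterns that some step of the writing process put there:

```python
import random
from collections import Counter

# The hidden rule (the "why" behind the text): which character may
# follow which. This graph is invented for illustration.
HIDDEN_RULE = {"a": "bc", "b": "ad", "c": "ad", "d": "bc"}

def write_text(length, seed=0):
    """Generate text by walking the hidden graph."""
    rng = random.Random(seed)
    ch = "a"
    out = [ch]
    for _ in range(length - 1):
        ch = rng.choice(HIDDEN_RULE[ch])
        out.append(ch)
    return "".join(out)

text = write_text(10_000)
bigrams = Counter(zip(text, text[1:]))

# Every pattern the counter finds is an edge of the hidden graph:
# no unintentional bigrams exist in the text.
assert all(b in HIDDEN_RULE[a] for a, b in bigrams)
print(sorted(bigrams))   # only edges of the hidden rule appear
```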
Remember that the data encoded in writing is the act of writing itself; this is more powerful than it seems. We use writing to explicitly encode the data we have in mind, but those aren't the only patterns that end up in the text. There are implicit patterns that tag along with the writing process, and most of them carry some importance.
The reason we are writing some specific thing is itself an implicit pattern. We don't write nonsensical bullshit unless we intend to.
When a person wrote the example Othello game, they explicitly encoded the piece positions and the order of game states. But why those positions, in that order? Because that's what happened in the game. That "why" was implicitly encoded into the text.
GPT modeled all of those patterns: the explicit chronology of piece positions, and the implicit topology of the game board. The explicit piece positions progressed as a direct result of that topology.
The game board and the rules were just as significant to the act of writing as the chronology of piece positions. Every aspect of the game is a determiner of which characters the person chooses to write, and every determiner gets encoded as a pattern in the text.
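To make "determiner" concrete: the flanking rule is what filters which move sequences can ever appear in a legal transcript. The rule below is standard Othello; this compact implementation is my own sketch, and the 0-indexed coordinate encoding is an assumption of the example:

```python
# A move is legal only if it flanks a run of opposing pieces along
# some direction of the 8x8 grid. Every "what" a writer records (a
# move) is filtered through this "why" (topology + rules), so the
# rule is implicitly present in any corpus of legal games.

DIRS = [(dc, dr) for dc in (-1, 0, 1) for dr in (-1, 0, 1) if (dc, dr) != (0, 0)]

def is_legal(board, col, row, player):
    """board: dict {(col, row): 'B' or 'W'}; empty squares absent."""
    if (col, row) in board:
        return False
    foe = "W" if player == "B" else "B"
    for dc, dr in DIRS:                      # the board topology
        c, r = col + dc, row + dr
        seen_foe = False
        while 0 <= c < 8 and 0 <= r < 8 and board.get((c, r)) == foe:
            c, r = c + dc, r + dr
            seen_foe = True
        if seen_foe and 0 <= c < 8 and 0 <= r < 8 and board.get((c, r)) == player:
            return True                      # flanks a run of foes
    return False

# Standard opening position: white on d4/e5, black on d5/e4 (0-indexed).
start = {(3, 3): "W", (4, 4): "W", (3, 4): "B", (4, 3): "B"}
print(is_legal(start, 2, 3, "B"))   # c4: True  -- flanks d4
print(is_legal(start, 0, 0, "B"))   # a1: False -- flanks nothing
```

Every legal transcript is a trace of calls like these: the grid bounds and direction vectors (the topology) never appear in the text, yet they constrain every character the writer puts down.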
Every pattern that GPT models requires a human. GPT doesn't write: it only models a prompt and "shows its work". Without humans performing the act of writing, there would be no patterns to model.