LLMs have personal opinions by virtue of the fact that they make statements about things they understand, to the extent their training data allows. That training data is not perfect, and on top of that, through random chance an LLM will latch onto specific topics as a function of weight initialization and training data order.
This would form a filter not unlike, yet distinct from, our understanding of personal experience.
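A toy sketch of the init/data-order point: the snippet below (a hypothetical illustration, not anything from an actual LLM training run) trains tiny logistic models on two perfectly redundant features, so the data alone cannot say which feature "matters". The final preference between them is decided entirely by the random initialization, i.e. by seed.

```python
import numpy as np

def train(seed):
    rng = np.random.default_rng(seed)
    # One binary feature, duplicated: the data is underdetermined,
    # so nothing in it favours feature 0 over feature 1.
    x = rng.integers(0, 2, size=(200, 1)).astype(float)
    X = np.hstack([x, x])
    y = X[:, 0]
    # Random init differs per seed; with identical features, SGD
    # never changes the *gap* between the two weights, so the
    # model's "preference" is a pure artifact of initialization.
    w = rng.normal(scale=0.1, size=2)
    order = rng.permutation(len(X))  # with merely correlated (not
    for _ in range(20):              # identical) features, this data
        for i in order:              # order would also shift the outcome
            p = 1 / (1 + np.exp(-(X[i] @ w)))
            w += 0.1 * (y[i] - p) * X[i]
    return w

for seed in (0, 1, 2):
    w = train(seed)
    print(f"seed={seed}: weights={np.round(w, 2)}, "
          f"prefers feature {int(w[1] > w[0])}")
```

Different seeds settle on different "favourite" features despite seeing the same evidence, which is the sense in which chance gives each trained model its own slant.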
You could make the exact same argument against humans: we just learn to make sounds that elicit favourable responses. Besides, they have plenty of "skin in the game", about the same as you or I.