Not exactly. Current LLMs are optimized to produce language that sounds like what humans say; they aren't really optimized for truthfulness, and they're entirely capable of just making things up. Now, don't get me wrong, LLMs are very neat and impressive at what they do, but we need to be aware of what they can't do at this time.
Discovering Latent Knowledge in Language Models Without Supervision
> Existing techniques for training language models can be misaligned with the truth: if we train models with imitation learning, they may reproduce errors that humans make; if we train them to generate text that humans rate highly, they may output errors that human evaluators can't detect. We propose circumventing this issue by directly finding latent knowledge inside the internal activations of a language model in a purely unsupervised way. Our results provide an initial step toward discovering what language models know, distinct from what they say, even when we don't have access to explicit ground truth labels.
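For anyone wondering how "purely unsupervised" can work here: the method in that paper (CCS, Contrast-Consistent Search) trains a small probe on the model's internal activations for paired "X is true" / "X is false" prompts, so that the two probabilities are consistent (they sum to ~1) and confident (not both 0.5). Here's a rough sketch of that objective using made-up activations; the tensor shapes and names are my own illustration, not the authors' reference code:

```python
# Minimal sketch of a CCS-style unsupervised probe.
# Synthetic activations stand in for real hidden states (assumption for illustration).
import torch

torch.manual_seed(0)

# Pretend acts_pos[i] is the hidden state for statement i phrased as true,
# and acts_neg[i] is the same statement phrased as false.
n, d = 256, 64
acts_pos = torch.randn(n, d)
acts_neg = torch.randn(n, d)

# Linear probe mapping an activation to a probability that the statement is true.
probe = torch.nn.Sequential(torch.nn.Linear(d, 1), torch.nn.Sigmoid())
opt = torch.optim.Adam(probe.parameters(), lr=1e-2)

for step in range(1000):
    p_pos = probe(acts_pos).squeeze(-1)
    p_neg = probe(acts_neg).squeeze(-1)
    # Consistency: P(true) for the statement and P(true) for its negation should sum to 1.
    consistency = (p_pos - (1.0 - p_neg)) ** 2
    # Confidence: discourage the degenerate solution p_pos = p_neg = 0.5.
    confidence = torch.minimum(p_pos, p_neg) ** 2
    loss = (consistency + confidence).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# A statement is classified as true when p_pos > p_neg --
# note that no ground-truth labels were used anywhere above.
```

If I'm reading the paper right, the real method also normalizes the two activation sets before training and uses actual hidden states from contrast prompts, but the core objective is just those two terms.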