The Turing test of the future is going to be: start a politically incorrect discussion with a suspected bot and see if it treats different groups differently.
Whether it immediately spits out a blurb for or against race- or other group-based reasoning? People are typically a lot more guarded on controversial topics, but LLMs can't help spitting out some kind of boilerplate.