I like GPT-3 a lot, but there's an important distinction between "plausible conversation" and "trustworthy answers". ChatGPT is probably best in the world in the "plausible conversation" category.
But you might say ChatGPT is like talking to a human who knows nothing about finance but doesn't let that stop them. For that matter, if you wanted to anthropomorphize it, you could say part of the problem is that ChatGPT literally "believes" everything it reads online - and you can easily make it "believe" anything through leading wording, which I think would be a strong way to toughen the Turing test, since no human responds that way. Sometimes it might luck into being correct, but you wouldn't want to base any decisions on what it says.
On the other hand, if I actually wanted reliable finance advice, a scripted finance chatbot would still win because those answers are written by people who do know what they're talking about.
One scenario where the "believes anything" trait might actually be useful is getting alternate takes on your opinions. If you have some great idea or strongly held view, ask ChatGPT to take the other side of the argument and poke holes in it. Its creative-but-inaccurate tendencies are less of a problem here, and it might surface alternatives you haven't considered.
To some extent, but people also have convictions about certain things, which GPT-based chatbots don't. The world would be very different if we could "fix" racists simply by asking them their favorite thing about people from other races (implying that there are admirable qualities, which GPT-3 plays along with but humans don't).