Isn't that where ChatGPT would actually be worse overall? It's been shown to produce semi-accurate answers very often while sounding very confident about them.
When people require precise answers and it gives an almost-correct one, the general reaction seems to be amazement. I don't share that feeling. I hope nobody is using it for serious work without a human vetting the output.
OP mentions GPT in concert with WolframAlpha, which has already been implemented[0], showing a symbiosis between text generation and verified knowledge.