Doesn't it bother you that it sounds just as confident when it's making stuff up as when it's accurately summarizing? Do you just think you can tell when it's wrong?
I can easily tell whether the code ChatGPT writes for me is wrong by running it in an IDE. But even then, it's always closer to meeting my exact requirements than a Google search result.
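A minimal sketch of that verification workflow (the `slugify` function and its checks are hypothetical, standing in for whatever ChatGPT produced):

```python
import re

# Hypothetical function as a chatbot might generate it.
def slugify(title: str) -> str:
    # Lowercase, keep alphanumeric runs, join them with hyphens.
    words = re.findall(r"[a-z0-9]+", title.lower())
    return "-".join(words)

# Quick checks run in the IDE before trusting the output.
assert slugify("Hello, World!") == "hello-world"
assert slugify("  --Already   spaced-- ") == "already-spaced"
```

A type checker and linter catch one class of errors automatically; a few assertions like these cover the behavior the tools can't see.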