ChatGPT struggles with out-of-distribution problems, but it excels at problems that have already been solved somewhere on the internet or GitHub. By connecting different contexts, it can produce a ready solution in seconds, saving you the time and effort of piecing together answers from various sources. But when your problem can't be found on Google, even a simple one-liner or a single function, in my experience ChatGPT will often produce an incorrect solution. If you point out what's wrong, it will acknowledge the error and then provide another incorrect answer.
This is the expected behavior. After all, it's a language model trained to predict the next word (parts of words, actually).
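To make "predict the next word" concrete, here's a minimal sketch of that objective using a toy bigram counter. This is only an illustration of the idea: real GPT models use subword tokens and a transformer network trained at vastly larger scale, not frequency counts.

```python
from collections import Counter, defaultdict

# Toy next-word prediction: count which word follows which in a tiny
# corpus, then predict the most likely successor. GPT's objective is
# the same idea (maximize the probability of the next token), just
# with subword tokens and a neural network instead of counts.

corpus = "the cat sat on the mat the cat ate the fish".split()

successors = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    successors[current][nxt] += 1

def predict_next(token: str) -> str:
    """Return the most frequent word seen after `token` in the corpus."""
    counts = successors.get(token)
    if not counts:
        return "<unk>"  # never seen this word during "training"
    return counts.most_common(1)[0][0]

print(predict_next("the"))  # -> "cat" (seen twice after "the")
print(predict_next("cat"))  # -> "sat" (tied with "ate"; first one wins)
```

Nothing in this sketch "understands" anything; it just reproduces statistical regularities of its training text, which is exactly why answers to well-trodden problems come out right and novel ones often don't.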
What is unexpected is its ability to perform well on a multitude of tasks it was never explicitly trained for, like answering questions or writing code.
I suppose we can say we basically don't understand what the f* is going on with GPT-3's emergent abilities, but hey, if we can make it even better at those tasks, like they did with ChatGPT, sign me up.
It's not that the AI is too dumb; it's that my computer can now write code that would have taken me an hour to Google, check, and test. Now I ask, ask for corrections, test the answer, and voilà: my productivity just went through the roof.
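For the "test the answer" step, here's roughly what that workflow looks like. The `slugify` function below is a hypothetical example of something ChatGPT might produce, not output from a real session; the point is to pin down the behavior you care about with a few assertions before trusting it, and to paste any failing case back into the chat when asking for a correction.

```python
import re

# Hypothetical ChatGPT-generated helper: lowercase the text and
# collapse runs of non-alphanumeric characters into single hyphens.
def slugify(text: str) -> str:
    return re.sub(r"[^a-z0-9]+", "-", text.lower()).strip("-")

def test_slugify():
    # Quick sanity checks written by me, the human, before trusting it.
    assert slugify("Hello, World!") == "hello-world"
    assert slugify("  spaces  everywhere ") == "spaces-everywhere"
    assert slugify("already-a-slug") == "already-a-slug"

if __name__ == "__main__":
    test_slugify()
    print("all checks passed")
```

Writing three assertions takes a minute; it's still far faster than assembling and debugging the function from scattered Stack Overflow answers.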
So, my point is: don't believe (or be mad about) the hype from people who don't understand what a curious marvel we have in front of us; just see how you can use it.