Out of curiosity: if humans have trouble coming up with anything non-trivial, like regexes, why should something that has been trained on the output of humans do much better?
To me it feels like if 90% of the $TASK content out there were bad and people struggled with it, then AI-generated $TASK output would be similarly flawed, whether it concerns a programming language or something else.
As a silly example, consider how much bad legacy PHP code is out there and what the answers to some PHP questions might look like because of that.
But it's still possible to get answers to simple problems reasonably fast, or at least to get workable examples to test and iterate on, which can easily save some time.
Agree; the ChatGPT answer is not correct, as the assignment is to match a word that starts with `dog` and ends with `cat`. You can make `.*` non-greedy by adding `?` at the end, but it's not needed in this case, as the engine should backtrack. Something like this should work: `/\bdog[\w_-]*cat\b/` (assuming `_` and `-` should be allowed inside words). You can also specify word separators (`[^ ]` instead of `[\w_-]`) if that's easier to read.
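If you want to sanity-check it, here's a minimal sketch using Python's `re` module (the test strings are made up):

```python
import re

# Word boundary, literal "dog", any run of word chars / _ / -, literal "cat", word boundary.
pattern = re.compile(r"\bdog[\w_-]*cat\b")

tests = {
    "dogcat": True,          # starts with dog, ends with cat
    "dog-loves-cat": True,   # hyphens allowed inside the word
    "dogs and cats": False,  # "cat" is not part of the same word
    "hotdogcat": False,      # no word boundary before "dog"
}

for text, expected in tests.items():
    match = pattern.search(text)
    assert bool(match) == expected, (text, match)
    print(f"{text!r}: {'match' if match else 'no match'}")
```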
Yep. But it gave straight-up code rather than trying to persuade a natural-language LLM to write code.
The regex I was expecting would be
"\\b(dog.*)|(.*cat)\\b"
The key point is to ask the code model. ChatGPT appears to categorize the question and may then dispatch it to a code model. If you know you have a code question, asking the code model directly would likely be more productive and less expensive.
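A minimal sketch of what that might look like, assuming the pre-1.0 `openai` Python client and the Codex-era `code-davinci-002` model (the model name and its availability are assumptions, and it has since been deprecated):

```python
import os
import openai  # pre-1.0 client, e.g. pip install "openai<1.0"

openai.api_key = os.environ["OPENAI_API_KEY"]

# Ask the code model directly instead of routing through the chat model.
response = openai.Completion.create(
    model="code-davinci-002",  # Codex-era code model; name/availability assumed
    prompt="# A regex that matches a word starting with 'dog' and ending with 'cat':\n",
    max_tokens=64,
    temperature=0,
    stop=["\n\n"],
)
print(response["choices"][0]["text"].strip())
```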