Is the coding question representative of the work you do at your company, and of the work you expect candidates to have done? Or is it a toy, "write a function that does X" question? "ChatGPT solves easily" suggests to me that it's a toy question.
If it is a "toy" question, then I'm of two minds about it:
On one hand, I am used to solving higher level problems, so it might take me a few minutes just to realize you are asking a much simpler question. It also can feel just a tad insulting to be drilled on CompSci 101 questions.
On the other hand, I think candidates should be able to solve such questions, as long as the scope is clear. You need to filter somehow, and I've met people who could not do that.
> Is the coding question that is representative of the work you do at your company, and that you expect candidates to have done?
The solution is a while-loop with a couple of if-statements. I would hope an engineer would write code like this many times per day. Whenever they need to marshal a blob from A to B.
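The original question isn't shown, but here's a hypothetical sketch of the kind of answer described: a while-loop with a couple of if-statements that copies ("marshals") a blob from a source stream to a destination, chunk by chunk. The function name and chunk size are illustrative, not from the thread.

```python
import io

def marshal_blob(src, dst, chunk_size=4096):
    """Copy bytes from src to dst in chunks; return total bytes written.

    A toy example of everyday 'move a blob from A to B' code:
    one while-loop, a couple of if-statements.
    """
    total = 0
    while True:
        chunk = src.read(chunk_size)
        if not chunk:  # end of input
            break
        if not isinstance(chunk, bytes):
            raise TypeError("expected a binary stream")
        dst.write(chunk)
        total += len(chunk)
    return total

src = io.BytesIO(b"hello blob")
dst = io.BytesIO()
print(marshal_blob(src, dst))  # 10
```

Nothing algorithmically deep here, which is the point: it checks that a candidate can write an ordinary loop with correct termination and error handling.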
> It also can feel just a tad insulting to be drilled on CompSci 101 questions.
I wish I had this problem! In these rare cases, I just say, "Great job! This was to just double-check you could write code. You'd be surprised how often a candidate isn't able to solve this! Let's talk about your career. In what aspects would you like to grow next?"
I ask a "toy-ish" question for a phone screen since we have a higher level coding section on-site. I get LC is the standard, but I've always considered how easily someone can adapt to be as much a signal as anything.
It's one thing if we paint it as Leetcode and then ask for fizz-buzz, but when I start the interview off by saying "no algorithms involved, we're not even compiling, it's mostly a way for us to talk about <insert language>" and 15 minutes in you're still looking for a place to shoehorn in a hand-rolled hash map, it might just say something about your approach to engineering.
Many would argue that filling FAANG to the brim with people who actively seek complexity is exactly what has hurt their ability to innovate (and the fact that OpenAI is full of ex-FAANG doesn't suggest otherwise).
If it is a "toy" question, then I'm of two minds about it:
On one hand, I am used to solving higher level problems, so it might take me a few minutes just to realize you are asking a much simpler question. It also can feel just a tad insulting to be drilled on CompSci 101 questions.
Oh the other hand, I think candidates should be able to solve such questions, as long as the scope is clear. You need to filter somehow and I've met people who could not do that.