I agree. This reminds me of the argument that being able to use web search during a “coding interview” is cheating.
My stance is that if web search can render the difference between a competent and incompetent candidate undetectable, the problem is the interview task, not access to web search. (Not to mention problems with coding interviews in general.)
I’ll go out on a limb and say the same general principle applies here: If ChatGPT can pass a test, the test is measuring the wrong thing.
> if web search can render the difference between a competent and incompetent candidate undetectable, the problem is the interview task, not access to web search
;-)
My take is that the problem of distinguishing between competent and incompetent candidates in 20 minutes is hard (if not impossible), and interviewers may not be able to do so reliably.
Your take appears to be a generalization of my take along at least two axes:
1. You assert that it's hard, if not impossible, to generate valuable signal in general, whereas I am speaking only to the case where access to web search makes it hard, if not impossible, to generate valuable signal; and
2. I suspect you are also factoring in a very thorny problem: not just detecting candidates who attempt the interview in good faith but are incompetent at the given task, but also detecting candidates who game the system by memorizing solutions to popular tasks.