The conclusion this article offers is drawn from a badly flawed study from over a decade ago, which Google decided neither to publish nor to use the conclusions from.
A machine learning system was trained at Google as an attempt to do "scientific hiring." As their ground truth, the team used the performance reviews of people who had already been hired at Google. To optimistically give it the benefit of every doubt, this study says that Google put too much emphasis on success in programming competitions when it was making hiring decisions in the early 2000s.
To look at it more pessimistically, the possibility of correlations between the features casts doubt on even that conclusion.
"To optimistically give it the benefit of every doubt, this study says that Google put too much emphasis on success in programming competitions when it was making hiring decisions in the early 2000s."
It's kind of unbelievable that anyone is even bothering to say _anything_ in this thread without addressing this point. Of course people sometimes overvalue competitions, and if it's true that Google did, that would explain the effect regardless of whether competition winners are typically better or worse than other people who interview at Google.
Do I know it's true? Of course not. But it's such an obvious possibility that your first question should be "how did they measure or rule out that effect?". And if there's no answer, you go back to the drawing board and try to answer it.
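Here's a quick back-of-the-envelope simulation of that effect (the numbers are made up and have nothing to do with the actual study): if the hiring rule over-weights contest results relative to how much they actually predict performance, then among the people who get hired, contest success can look negatively correlated with performance reviews even though it's positively correlated in the overall applicant pool.

    # Illustrative only: hypothetical weights, not anything from Google's data.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 200_000

    skill = rng.normal(size=n)                   # general ability, drives performance
    contest = 0.3 * skill + rng.normal(size=n)   # contest results: weakly tied to ability

    # Hypothetical hiring rule that over-weights contest results
    hire_score = skill + 2.0 * contest
    hired = hire_score > np.quantile(hire_score, 0.95)   # top 5% get offers

    performance = skill + 0.1 * rng.normal(size=n)       # later performance reviews

    print("corr(contest, performance), all applicants:",
          np.corrcoef(contest, performance)[0, 1])        # positive
    print("corr(contest, performance), hires only:",
          np.corrcoef(contest[hired], performance[hired])[0, 1])  # negative

The point is just that conditioning on "was hired under a rule that over-weights X" can flip the sign of X's correlation with performance, so the headline tells you about the old hiring rule, not about contest winners in general.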
Ruberik ran Google Code Jam for several years. (Or maybe someone registered his handle on HN just to forge this comment, but that seems like a silly thing to do.)
EDIT: To be clear, Ruberik is NOT an impostor, and it's sad that someone is now going around flagging his comments.
> it's sad that someone is now going around flagging his comments.
No user flagged those comments. Some comments (e.g. by noob accounts from Tor IPs) are moderated by default because of past activity by trolls. These eventually get unflagged. We're going to open the unflagging part to the community soon, but that work isn't done yet.
I don't have any evidence of this other than my word, sorry. As kentonv points out, I was in a position to know: in the early days of Code Jam, we had to do some convincing of Googlers who had seen the "programming contestants don't perform well" headline.
Also, as others have pointed out, Peter Norvig mentions some of this in his talk.