
If used wisely, at-home coding tests should be far better than the average mess of a technical interview.

The problem is that in the hands of TrendCo, they can be used crudely to find the trendy hire who has the exact same ideas about engineering as the hiring manager. Because giving the test costs TrendCo nothing, they are happy to throw people at it until they randomly find someone who is exactly what they are looking for.

Anecdotally, two people I later befriended and I applied for the same role. We each spent 4 hours on the task; all three of us are strong coders, and each produced a solution that would be suitable for any startup, taking a different approach based on our styles. We were all rejected, because none of us hit upon the exact approach the company was hoping for but hadn't asked for. There was no opportunity to request that feedback, either. And it took them nearly a month to bother looking at my code (even after I called).

Since then, I won't take these kinds of tests unless I have reason to believe they are being given in good faith as a way to determine whether a programmer is capable, not as a way to find their perfect ideal of a programmer.



If you had asked at Matasano how we evaluated our work-sample tests, we'd have told you. In fact, the first contact any new candidate had with Matasano was a 45-minute phone call with a director+ (for a year and a half: me), during which we explained our process in excruciating detail and answered any questions that came up. If we got even a hint from a candidate that they might not know exactly what we were looking for, we'd get their mailing address and FedEx them a stack of free books; we gave them cheat sheets on what parts of those books to read, as well.

I have the strong impression that a typical SFBA tech hiring manager thinks it's crazy to give candidates cheat sheets for interviews, and that helping candidates with technical evaluations decreases rigor. THE EXACT OPPOSITE THING IS TRUE. Evaluating technical ability under unrealistic pressure and with unrealistic restrictions confounds the results of the evaluation.

Of course, you have to design tests that are valid predictors for candidates who have received tips and assistance and practice and resources. That sounds hard! But it isn't. The best predictor for a candidate's ability to do the kind of work you really do is, simply, the kind of work you really do. Not problems from a book, not trivia questions, not one algorithm poorly implemented on a shared coding site and a Skype call, not whatever bug is at the top of the bug tracker today, but a sample assignment given to a real member of your team, the same one, for every candidate, graded the same way.


I completely agree with all of that! If only more hiring managers were so thoughtful. Now I'll only do this kind of at-home work if I can talk to someone who will explain how their hiring process works, and I get the sort of good vibes that your post gives.



