
Here's a really simple test for whether a work-sample scheme is effective, or just a bullshit trend-chasing afterthought:

Does the work-sample test offset all or most of the unstructured tech evaluation the company would otherwise do?

If it does, that means they have a work-sample test rubric they believe in, and they're hiring confidently. If it doesn't, they don't believe in what they're doing, and the programming challenges they're assigning can thus reasonably be seen as more hoop-jumping.

In the latter case, it's up to you to decide whether a prospective job is worth extra hoop-jumping. Some teams are worth it!




I think that's fair. I've experienced both the former and the latter, but unfortunately most of my experiences fall into the latter case, where it's simply been hoop-jumping. Most of my friends (all about to graduate, so a good number of data points) are experiencing the same.

For example, one company gave a problem with five parts, the final part being to solve longest path on a weighted bipartite graph (which is quite a hard and time-consuming problem; see the sketch below). After that, the next step was a technical phone screen, then an on-site with 4-5 more interviews, most of them whiteboarding. It was basically hazing rather than an evaluation.
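
To give a sense of why that final part is so time-consuming: unlike shortest path, longest simple path is NP-hard in general, so the obvious approach is an exhaustive search over paths. A minimal sketch, purely illustrative and not the actual interview problem (the graph representation and names here are my own assumptions):

    # Exhaustive DFS for the longest simple path from a start node.
    # Exponential time in the worst case, which is the point: this is
    # a lot to ask in an interview take-home.
    def longest_path(graph, start):
        """graph maps each node to a list of (neighbor, weight) pairs."""
        best = 0

        def dfs(node, visited, total):
            nonlocal best
            best = max(best, total)
            for neighbor, weight in graph.get(node, []):
                if neighbor not in visited:
                    dfs(neighbor, visited | {neighbor}, total + weight)

        dfs(start, {start}, 0)
        return best

    # Tiny weighted bipartite example with parts {a, b} and {x, y}.
    example = {
        "a": [("x", 3), ("y", 1)],
        "b": [("x", 2), ("y", 5)],
        "x": [("a", 3), ("b", 2)],
        "y": [("a", 1), ("b", 5)],
    }
    print(longest_path(example, "a"))  # 10, via a -> x -> b -> y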

A counterexample is my last job, which had a take-home test that took about 6 hours, but that was the whole technical part of the process. Having been on the other side reviewing submissions, the problem absolutely gave us enough signal to evaluate candidates.

I totally get that there's a right way to do this, but as with most interviewing trends, companies seem to just be adding it as an extra step instead of revamping their process.


Does the job they're interviewing for involve finding longest paths on weighted bipartite graphs? Or is this just non-recursive Towers of Hanoi pretending to be a realistic work sample?


No, the position most definitely had absolutely nothing to do with longest path or combinatorial optimization.

Anyway, my larger point is that, in my experience interviewing, these tests are becoming much more common at US startups without companies removing or reducing the rest of their technical evaluation process, or really structuring the problems to give a good signal.

In an ideal world where companies do take-home tests right, I think it's a great solution. But what I've been seeing, more often than not, doesn't live up to that, which makes it hard to get behind.

I'm really curious what you've been seeing at Starfighter. Are partner companies still going on to do a full technical interview? Or does Starfighter largely replace their normal technical evaluation?

Ignoring the fun of the challenges themselves (which probably isn't entirely fair), the latter makes it very compelling for a candidate. The former does not.


Most of our partners have a somewhat abbreviated interview for our candidates, but everyone (as far as I know) still techs our candidates out.

I'm actually fine with that! We make no pretense of having designed a screening process that is appropriate for every job. What I'm less fine with is the fact that the norm, even for abbreviated tech-outs, is 7-8 hours of on-site whiteboard interviews.



