> I think they're right -- I, too, have observed a disconnect between how polished people sound when they talk about their work and what happens when they actually have to write code.

I've done a lot of work that people would probably find very interesting and useful. But I tend to choke on whiteboard code interviews because they're so high stakes. Any time spent thinking about the problem looks bad, so you have to talk a lot. But I can't really think and talk at the same time. So I end up talking rather than thinking and I do poorly.

Now obviously I'm going to push to move the status quo towards something that doesn't put me at a competitive disadvantage. So we both know that I'm biased.

But if people can talk about what they've done and answer any questions you have (about what they did and about programming in general), yet still screw up on the actual "coding" part, that might mean the part where you make them write code is more noise than signal.

The problem is that you never find out because if someone bombs the coding part you simply chuckle and say "well that person is clearly a liar, or something!" and they don't go any further in the hiring process. So they never get hired, and because they're never hired, you can't evaluate their work performance. Which might be excellent when they're not being actively scrutinized by multiple people all at the same time in a high stakes situation.

Unfortunately, to get some objective data on this you'd have to hire several people who talk about their projects well but don't do well on the coding part. An understandably impossible task, unless your client is a Google or a Microsoft and they know it's just a big experiment in hiring.

But until someone does that and reports back (and they won't because it'll be a competitive advantage) it's tough for me to swallow the "talks good but can't code so NOPE" that I tend to see bandied about.

Putting someone in a pressure cooker and then measuring their performance will only tell you how they perform in a pressure cooker. Which is usually quite distinct from what they're going to do day-to-day.




I think what you describe is a real problem, and unfortunately one that's really tough to get data on, for the reasons you give.

For what it's worth, when I observed a disconnect between how well people spoke about their projects and how well they coded, it was generally a situation where someone had perfected a pretty polished self-pitch rather than a situation where I drilled down deeply into what they had done, asked them what they'd have done differently if we varied certain constraints, etc. And when they fucked up on coding, it was on warmup problems you'd reasonably expect anyone with some experience to be able to do (e.g. explain why you might want to use a hash table over a linked list in certain scenarios, or reverse a string in place).
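
(For concreteness, the "reverse a string in place" warmup boils down to something like this. Just a rough Python sketch of one way to do it, not literally what any interviewer asked for; the function name is made up.)

    # Reverse a sequence of characters "in place" with two pointers.
    # Python strings are immutable, so this operates on a list of characters.
    def reverse_in_place(chars):
        left, right = 0, len(chars) - 1
        while left < right:
            chars[left], chars[right] = chars[right], chars[left]
            left += 1
            right -= 1
        return chars

    # "".join(reverse_in_place(list("interview"))) -> "weivretni"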

That said, one of the reasons I'm really psyched about interviewing.io (the thing I'm working on now) is that we're getting a lot of comparative interview data, i.e. where the same person gets interviewed a bunch of different ways. Excited to see if we can draw some good conclusions about what works and what doesn't.


Actually, the experiments that need to be conducted wouldn't be that hard to do. In fact I'd be surprised if research of this kind has not already been done.

The problem of poor performance under scrutiny, where there is pressure to perform well (as in a job interview), is well described as a form of social anxiety disorder (or social phobia), a common condition affecting ~10% of adults in the US (lifetime prevalence). Of course there's a range from mild to severe symptoms; nonetheless, it affects a significant population.

The implication is that in a not-so-pressured setting candidates might perform very differently. Furthermore, writing code and solving software problems are generally incremental processes, more akin to watching paint dry than to putting out fires, for all the externally visible action there is to see. A "whiteboard" exam likely isn't a good model of the real requirements of the job.

There's an enormous amount of research on testing methodology; testing is a huge industry. Ironically enough, it's one that is extremely reliant on software for analyzing test data in order to determine what makes a good test of knowledge or actual ability. Seems like there's a clue in there somewhere about how software enterprises could find out who is really good at creating software.
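
(For the curious, a crude version of the item analysis that industry runs looks something like the sketch below: for each question, compare how often the strongest and weakest test-takers got it right. A question that strong candidates miss about as often as weak ones is discriminating on something other than ability. This is just an illustrative Python sketch, not any particular vendor's methodology.)

    # Upper-lower discrimination index for one question: pass rate of the
    # top third of test-takers minus the pass rate of the bottom third.
    # responses: 0/1 correctness per candidate; totals: overall test scores.
    def item_discrimination(responses, totals):
        ranked = sorted(zip(totals, responses))
        third = max(1, len(ranked) // 3)
        low = [r for _, r in ranked[:third]]
        high = [r for _, r in ranked[-third:]]
        return sum(high) / len(high) - sum(low) / len(low)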


> we're getting a lot of comparative interview data, i.e. where the same person gets interviewed a bunch of different ways. Excited to see if we can draw some good conclusions about what works and what doesn't.

So I think that only works if you hire everyone, whether they interview well or not. Otherwise the process biases the results and they're no longer representative.

If you really wanted better information, you'd have to interview people who are already employees at a particular company and have outsiders (people who don't already know them) conduct the interviews. Then, when you're done, you can compare the simulated hire/no-hire results and the interviewers' recorded confidence in their evaluations against the performance evaluations of the interviewed employees.

So long as the outsiders conduct many different types of interviews (especially ones beyond what the company normally does), you might get a clearer view into what kind of interviewing works well and what doesn't.
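
(The comparison at the end would be simple arithmetic, e.g. correlating the outsiders' confidence scores with the employees' internal performance ratings. A Python sketch with made-up numbers; the variable names and figures are hypothetical.)

    # Pearson correlation between interviewer confidence and performance rating.
    def pearson(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sum((x - mx) ** 2 for x in xs) ** 0.5
        sy = sum((y - my) ** 2 for y in ys) ** 0.5
        return cov / (sx * sy)

    interview_confidence = [0.9, 0.4, 0.7, 0.2, 0.8]  # outsiders' hire confidence
    performance_rating = [3.1, 3.8, 4.0, 3.5, 4.2]    # existing internal reviews
    print(pearson(interview_confidence, performance_rating))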

I know some people who applied to and got hired by Google. Google seems painfully aware of how uncorrelated their interviewing process is with their hiring results. The hoops these guys jumped through are ones I never would. So even if I were talented enough to work at Google (I won't speculate here), they'll never actually be able to hire me unless they actively recruit me and don't make me run the gauntlet.

The whole problem is a really tough nut to crack. I suspect that all the pipelines are going to be biased one way or another. If I were in charge of hiring, I'd want to try and use several of them so as to not miss out on good candidates who are undervalued for whatever reason.

There's a lot of talent out there, despite everyone thinking that there's a talent shortage. The error actually lies in trying to have a one-size-fits-all solution to a problem that's definitely not uniform. Companies are failing to adapt to the human-ness of their "human resources" and it's costing them.


> The problem is that you never find out because if someone bombs the coding part you simply chuckle and say "well that person is clearly a liar, or something!" and they don't go any further in the hiring process.

This. I've been job-hunting lately and getting asked to do "technical challenges" and the like, which are useless to me because they're entirely asymmetric: they tell me nothing about the company except that it's following the latest fad in candidate screening.

When I pointed out in feedback to one of the testing companies a while back that they had no empirical basis for believing their evaluations didn't reject more qualified candidates in favor of less qualified ones, in terms of ability to do the actual job, I got a reply saying, "No, we have all kinds of empirical data! Our clients save $TIME in the hiring process!"

Which is great until you realize that perfectly competent engineers are being locked out of the hiring process by this nonsense. We saw a fad for this kind of testing in the mid-'90s, just as the dot-com boom was starting to roll, and it didn't end well. The few companies I interviewed at that used coding tests of one kind or another all failed quickly, although it did give me the opportunity to ask an incompetent hiring manager at one of them how I'd managed to get a PhD in physics while having "below average mathematical abilities" according to one of their tests (which I swear had been written by an innumerate).

HR people will simply assert that anyone who fails these kinds of tests is incompetent, and that anyone who complains about them is just expressing sour grapes at their own failure, but that all sidesteps the issue that there is no significant empirical validation of the quality of hires such tests produce.

Their only real use, from my point of view, is that if the "interview" process is heavy on "coding challenges" and the like, I'm a lot less likely to bother going through it, because it speaks to a company that has bought into policies that have no empirical basis and provide the least amount of information to job seekers, and I'm not all that interested in working in the kind of monoculture such processes produce.

For senior people, my favoured interview format is to spend most of the time on a few obscure features of their language of choice, and then have a free-form discussion about language design. Senior people who are any good care about languages, have thought about languages, and can have nuanced, intelligent discussions about languages and the trade-offs involved. It also acts as a good foundation for talking about other kinds of design issues. For junior people, some basic test of coding competency may be useful, but above the intermediate level such tests are very likely measuring the wrong thing, and either way we have no evidence.


I've known a couple of physics PhDs about whom I have a very low opinion re: their mathematical abilities.


And I know one who's excellent at the math but completely ignores what's physically going on. Once he has a differential equation written down, that's the law, full stop, even if it produces results that can't possibly be right re: physical reality. Drives me bananas.


Amen. I do fairly decently on a whiteboard given some practice, but I still feel stupid compared to when I'm sitting and coding on my own without massive time pressure.



