> I have, however, seen lots of junior interviewers ask idiotic questions and wash candidates out for bad reasons
Why are "juniors" interviewing candidates? Let alone juniors asking "idiotic questions", and having decision-making capacity?
> then turn around and claim the candidate ”failed at fizzbuzz”, just to cover their own ass
Coding questions are among the most quantifiable and objective questions there are. If you know of something even more objective, let me know because I'd love to try it.
They are repeatable and have well-defined success criteria. Ideally you have two interviewers in the same room, and it's very clear if the candidate did or did not solve the problem. You can't really fudge it, even as a single interviewer, unless you're willing to outright lie that the candidate didn't write a working solution when he did.
> I’ve never interviewed a senior person who couldn’t code at all. Not after the phone screen, anyway.
Yes, if you do a coding phone screen, then obviously you're screening out those who can't code. I'm sure you saw some impressive resumes fail that screen, though.
Note that "junior interviewers" are not necessarily junior, just junior as interviewers. They may be good coders, but they don't know what is reasonable to expect for the role, or how to test for it.
Furthermore coding questions stop being quantifiable and objective as soon as you add in, "What feedback do you give, when, and how?" Two interviewers asking the same question of equivalent candidates can get very different results based on how stressed they make the candidate, what hints they offer, what followup questions they ask, how well they telegraph whether the candidate is on the right path, and so on.
> Note that "junior interviewers" are not necessarily junior, just junior as interviewers.
Sure, but these are highly correlated. If you've been working in startups for a few years, you will have some experience in both coding and interviewing coders.
Even most of the successful large companies are pretty good at cultivating interviewing skills among their employees.
> Furthermore coding questions stop being quantifiable and objective as soon as you add in, "What feedback do you give, when, and how?"
I would moderate that statement to "they become less quantifiable and objective". They're still more objective than "tell me about the project you are most proud of" or (worst) "what is your biggest weakness?" (perfectionism, duh!).
You can also control many of these factors by standardizing interviewer behavior. For example, "no feedback of any kind until the candidate writes a working solution", or working off a predefined set of hints.
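Concretely, that could be as simple as a fixed hint ladder per question that every interviewer reads off verbatim. Here's a made-up sketch (the question name, timings, and hint text are all invented for illustration), just to show the shape of it:

```python
# A sketch of "a predefined set of hints": the interviewer reads hints off a
# fixed ladder at fixed times instead of improvising feedback on the spot.
# Everything here (question name, thresholds, hint text) is hypothetical.

HINT_LADDER = {
    "reverse_words": [
        (5,  "What happens if you split the string on whitespace first?"),
        (10, "Python can reverse a list with reversed() or slicing [::-1]."),
        (15, "Join the reversed list of words back together with ' '.join(...)."),
    ],
}

def hints_allowed(question: str, minutes_elapsed: int) -> list[str]:
    """All hints the interviewer is permitted to have given by this point."""
    return [hint for threshold, hint in HINT_LADDER[question]
            if minutes_elapsed >= threshold]

print(hints_allowed("reverse_words", 12))  # first two hints unlocked
```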
> Sure, but these are highly correlated. If you've been working in startups for a few years, you will have some experience in both coding and interviewing coders.
> Even most of the successful large companies are pretty good at cultivating interviewing skills among their employees.
I would put it the other way around.
My experience is that most startups just put you in front of people and tell you to figure it out. People who have done that for a bit wind up with really bad interview habits because they never get corrected. My experience with large companies (Google and Amazon) has them giving interview training. I learned a lot more about interviewing from that than the informal practice that I got in startups.
> My experience is that most startups just put you in front of people and tell you to figure it out.
Some do, sure. And that's the difference between a startup and a regular company right there: that "figure it out" bit.
You'll mess up and fumble and come up with crazy ideas, 95% of which are worse than what the established companies are doing.
But you'll learn a ton along the way.
It's messy and inefficient but that's how startups are.
I personally learned a ton from hacking interviews at startups and all that "figuring it out".
Also, guess what? All these sacred "FAANG interviews" that seem to have come down from the sky, etched on tablets? They were developed by Microsoft back when it was a startup, then evolved by various SV companies at their startup stage, too.
The next great idea in interviewing also won't come from the thousands of engineers obediently marching to the tune of the same principles everyone is using, but from some startup trying something crazy and ambitious and miraculously getting it to work.
I don’t know if you’ve noticed this, but the average age of programmers these days is just slightly above “right out of undergrad”.
If you wait for the senior people to interview everyone, you’re going to be waiting a while.
”Coding questions are among the most quantifiable and objective questions there are.”
Bullshit. Coding questions feel objective, and programmers love to pretend they’re objective, but like any other human evaluation process, they’re subjective. It’s not like you automatically get the job if you get the answer right.
Unless you’re verifying that your team is generating consistent, reproducible outcomes, I can pretty much guarantee that they’re not.
OK, I suppose lack of seniors is a legitimate constraint.
Still, interviewing is so important that I typically give it high priority. Even when I had only one other senior engineer, I made sure each candidate spent a long time with each of us.
Typically, I'd go in with a junior developer, and he'd go in with some other junior developer. I'd even see the same candidate twice rather than have two juniors interview him.
I've been on the other side of that kind of interview and I know how badly it can get messed up.
> Coding questions feel objective, and programmers love to pretend they’re objective, but like any other human evaluation process, they’re subjective.
I don't see a reason to swear, as this is a friendly discussion. Either way, we'll have to agree to disagree.
Coding questions ideally yield an answer that can compile and run and return correct output for at least some valid sets of input. That's about as objective as it gets.
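To make that concrete, here's roughly what I mean (a toy sketch; the `reverse_words` exercise and its test cases are invented for illustration): the candidate's function either matches the expected output on the given inputs or it doesn't.

```python
# Hypothetical exercise: "reverse the order of words in a string".
# The candidate writes reverse_words; pass/fail is just known inputs
# checked against expected outputs.

def reverse_words(s: str) -> str:  # the candidate's solution
    return " ".join(reversed(s.split()))

test_cases = [
    ("hello world", "world hello"),
    ("one", "one"),
    ("", ""),
]

for given, expected in test_cases:
    actual = reverse_words(given)
    status = "PASS" if actual == expected else "FAIL"
    print(f"{status}: reverse_words({given!r}) -> {actual!r} (expected {expected!r})")
```

Whether the code is elegant is a separate argument; whether it produces the right output for those inputs isn't.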
> Coding questions ideally yield an answer that can compile and run and return correct output for at least some valid sets of input. That's about as objective as it gets.
Except for coding style (incl. naming conventions), code organisation, test coverage, arch style, level of abstraction... plus a myriad of other things. I don't know anybody who treats the code in a coding question as a black box. It gets about as subjective as you can get when reviewing, with the (objectively) correct answer weighted the least.
Shit code that gets the right answer rates worse than code which ticks all the above subjective boxes but gets the answer wrong.
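To illustrate (a toy example, not taken from any real interview): both of these return the correct answer, and how each one "rates" comes down entirely to the reviewer's taste in naming, structure, and style.

```python
# Both functions sum the even numbers in a list and return the same,
# correct result; only the presentation differs.

def f(x):  # version A: works, but opaque names and needless indexing
    s = 0
    for i in range(len(x)):
        if x[i] % 2 == 0:
            s = s + x[i]
    return s

def sum_of_evens(numbers):  # version B: same behaviour, conventional style
    return sum(n for n in numbers if n % 2 == 0)

print(f([1, 2, 3, 4]))             # 6
print(sum_of_evens([1, 2, 3, 4]))  # 6
```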
This topic was pretty lively last time I commented about my interviewing practices. Coding is a creative effort -- yes. It's not objective.
But I see no other way to test if the person is competent than sitting them down with a problem to solve -- if only to have them explain what they do and don't understand, and what their process is.
This is a key point. I see a lot of criticism of the coding interview. But what's the viable alternative?
My goal is to figure out whether the candidate can do their job - write software - and whether I and the rest of my team would enjoy working closely with them on their daily tasks.
How else do I figure that out, if not by a coding exercise?
I’m not saying that you shouldn’t do a coding exercise. I’m saying that coding exercises are overrated and far too difficult.
Programmers today think that TopCoder questions are “FizzBuzz” tests. That’s idiotic. Joel Spolsky wrote that blog post to say that you should do an absolutely minimal check that someone can write code, then move on to more important things. It’s just bizarre how people have twisted that over the years.
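For reference, the entire FizzBuzz exercise, as it's usually stated, is about this much code; that's the scale of "minimal check" being talked about:

```python
# FizzBuzz as commonly stated: print 1..100, substituting "Fizz" for
# multiples of 3, "Buzz" for multiples of 5, and "FizzBuzz" for both.

for n in range(1, 101):
    if n % 15 == 0:
        print("FizzBuzz")
    elif n % 3 == 0:
        print("Fizz")
    elif n % 5 == 0:
        print("Buzz")
    else:
        print(n)
```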
But to answer your question: focus on what matters. Communication, writing skill, the ability to read code (this is easily 10x more important than algorithmic wizardry IRL), good habits (e.g. “does this person insist on putting all of their code in a single file?” - an actual thing I have seen!!), opinions that match your team, etc.
All of these things can be answered in interviews, but nobody cares to try.
Communication is obviously being tested in both phone screens and onsites. It's not like a candidate in a coding interview can simply start coding silently, finish, and then pass to the next stage because their code works. A big part of a coding interview is working with the candidate.
> the ability to read code
How do you test for that in isolation?
There are exercises that do that: you show the candidate a piece of code, and ask them to find the bug. They have all the worst issues that folks here are complaining about, amplified: stressful, awkward, etc. In my experience these types of exercises are also more random: i.e. depend more on luck. Sometimes great candidates fail them, and poor candidates get them right on a fluke. So they are overall less indicative.
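For what it's worth, those exercises usually look something like this (an invented example): hand the candidate a short function and ask why it sometimes returns the wrong answer.

```python
# Made-up "find the bug" exercise: return the index of target in a sorted
# list, or -1 if it's absent. The planted bug is the loop condition: with
# hi initialized to len(items) - 1 it needs to be `lo <= hi`, so as written
# the last remaining index is never examined.

def find_index(items, target):
    lo, hi = 0, len(items) - 1
    while lo < hi:                  # bug: should be `lo <= hi`
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid
        elif items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

print(find_index([7], 7))           # -1, but should be 0
print(find_index([1, 3, 5, 7], 7))  # -1, but should be 3
```

Even with something that small, whether a good candidate spots it in five minutes has a lot to do with nerves and luck, which is the randomness I'm describing.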
Ultimately, though, it's back to the first point: how do I test if the candidate can do their job, i.e. code?
A candidate may communicate really well, say all the right things in best-practice questions like the one you mentioned, write well in English, but still be unable to code their way out of a matchbox.
Again, nobody said “don’t test if a candidate can code”. You should do a minimal test, then move on to more important things, rather than beating the same horse.
”In my experience these types of exercises are also more random”
Well, “your experience” also says that you regularly encounter senior candidates who can’t code at all. I doubt the statistical validity of your experience.
> Joel Spolsky wrote that blog post to say that you should do an absolutely minimal check that someone can write code, then move on to more important things.
Interesting, and I won't look up the article again before writing this comment.
I remember it as saying he was having a hard time finding people who could code at all, and that FizzBuzz was weeding out a large percentage of applicants. I don't recall it being framed as a gateway to move on from afterwards.
> Coding questions feel objective, and programmers love to pretend they’re objective, but like any other human evaluation process, they’re subjective. It’s not like you automatically get the job if you get the answer right.
> Unless you’re verifying that your team is generating consistent, reproducible outcomes, I can pretty much guarantee that they’re not.
It seems as though there are a fair number of things like this, where people sort of play-act science but don't bother with the rigor. I guess it's a type of cargo culting.
> Coding questions are among the most quantifiable and objective questions there are.
Only in the most trivial cases (trivial in the problem statement, not necessarily the solution). If you want to assess engineering ability, specific coding questions are a small, tiny sample of the space, and generally the more difficult the problem the less useful it is.
Why are "juniors" interviewing candidates? Let alone juniors asking "idiotic questions", and having decision-making capacity?
> then turn around and claim the candidate ”failed at fizzbuzz”, just to cover their own ass
Coding questions are among the most quantifiable and objective questions there are. If you know of something even more objective, let me know because I'd love to try it.
They are repeatable and have well-defined success criteria. Ideally you have two interviewers in the same room, and it's very clear if the candidate did or did not solve the problem. You can't really fudge it, even as a single interviewer, unless you're willing to outright lie that the candidate didn't write a working solution when he did.
> I’ve never interviewed a senior person who couldn’t code at all. Not after the phone screen, anyway.
Yes, if you do a coding phone screen, then obviously you're screening out those who can't code. I'm sure you saw some impressive resumes fail that screen, though.