
Hey, my name is Aline. I'm one of the authors of Beyond Cracking the Coding Interview and the founder of interviewing.io.

I have also wondered about the arms race and how it weighs against market forces pushing the bar up, especially because, as founder of an interview prep platform, I am complicit in said arms race and don't feel great about it (I think the good of starting interviewing.io far outweighs the bad, but that's a story for another time).

So I looked at the data.

Between 2015 and the first half of 2022, I'd argue that "the bar" was about the same, even though a bunch of interview prep resources sprang up during that time (interviewing.io was founded in 2015, as were Leetcode, Pramp, and Triplebyte; HackerRank a few years earlier; the list goes on).

Then, the tech downturn happened in 2022, and all of a sudden the bar jumped... because for the first time companies didn't feel like there was an acute shortage of candidates.

Here's some of the data about the bar. At interviewing.io, after each interview, whether it's mock or real, the interviewer fills out a rubric. The rubric asks whether you'd move the candidate forward and also asks the interviewer to rate them on a scale of 1 to 4 on coding ability, problem solving ability, and communication.

We can look at what the average coding score was over time for passing interviews to see how much the bar has gone up.

Between 2016 and 2022, the bar grew a bit (from 3.3 to 3.4). Starting in 2022, it shot up to 3.7. Is this the be-all and end-all? Of course not. But to me this is compelling data that market forces >>> the interview prep industry.
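(For anyone curious, the aggregation behind numbers like these is straightforward. Here's a minimal sketch in Python with entirely made-up rubric rows; the real interviewing.io dataset and schema aren't public:)

```python
from collections import defaultdict

# Hypothetical rubric rows: (year, coding_score_1_to_4, would_move_forward)
rubric = [
    (2016, 3, True), (2016, 4, True), (2016, 2, False),
    (2022, 4, True), (2022, 4, True), (2022, 3, False),
]

# Average coding score per year, counting only passing interviews
totals = defaultdict(lambda: [0, 0])  # year -> [sum_of_scores, count]
for year, score, passed in rubric:
    if passed:
        totals[year][0] += score
        totals[year][1] += 1

bar = {year: s / n for year, (s, n) in totals.items()}
```

A rising value per year in `bar` is what "the bar went up" means in the numbers above.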



Hi Aline, thanks for the reply. That makes a lot of sense.

Even so, I do think it's worth considering what someone in your position can do to make things better. If an arms race is inevitable, can it at least be an arms race with positive externalities?

For example, if companies focused much more on security questions during interviews, that would create an incentive for devs to learn about security. Then we'd have more secure software as a positive externality -- in theory, at least.

If we could get companies to ask more questions about AI alignment, that could reduce risks from AI misalignment.

If we could get companies to ask more questions about optimizing the energy usage of apps and data centers, that could be good for the environment.

The pitch to hiring managers would be something like: The algorithms interview is not about algorithms per se. Algorithm knowledge is only somewhat useful on the job. Rather, the algorithms interview is about giving the candidate a chance to signal that they can master technical coding knowledge. It doesn't particularly matter what that technical coding knowledge is.

And, if you ask questions on topics besides the classic data structures and algorithms, that means you're measuring something different:

* You're measuring the candidate's passion to learn CS stuff that's not usually covered in interviews.

* You're measuring the candidate's ability to pick up something new on the fly.

* If you're transparent, and you publicize the topic(s) you interview for, you're measuring the candidate's passion for getting hired at your company in particular.

All of those measurements seem potentially more valuable than measuring how much time they had to study classic algorithms topics.


I love this comment, and as an author of this book, I don't disagree with a word of it.

A common misconception is that Gayle's original book put forth the "right" way to do interviews. Gayle neither invented nor encouraged the current interview structure. Gayle discusses the timeline in more depth in a Blind AMA thread you can find online. I think a lot of people are under the impression that books like these somehow steer the interview process toward this style of interview. At this point, all we are doing is looking at the process as it is TODAY and trying to provide transparency and equal information to everyone. We spend several chapters in this book talking about how broken the process is and making similar points to yours. But we can't write a book on an interview process that doesn't exist, and while Gayle's original book is well-circulated, neither she nor any of the rest of us authors has sway over how big tech companies conduct their hiring.

With that said, I think we are seeing companies start to incorporate other interviews precisely for the reasons you've mentioned. It isn't uncommon, especially for smaller tech companies, to have a DS&A interview but also include a system design interview, and maybe even a practical "build something simple like a tic-tac-toe game in front of me while I watch" kind of interview. I do believe things are getting better and more fair over time (remember, two decades ago Google was literally asking riddles in an attempt to screen people). I don't buy into the narrative that these interviews should go away entirely (and if they did, it would take at least a decade), because they are still a reasonably effective way to interview people at scale. Gergely Orosz of The Pragmatic Engineer actually had a great take on this here: https://x.com/GergelyOrosz/status/1891212829346435103


100%. I agree with everything you said. The interview is supposed to be about whether someone can master new stuff. I've heard good interviews described as "being smart together". Drilling on questions flies in the face of that.

Here's the problem. It's really hard to get companies to change.

A bit of a long-winded answer that I hope will come back around...

I don't feel great about my role in perpetuating the interview prep industrial complex. It wasn't my intention: to this day, interviewing.io is first and foremost a hiring company that's trying to make eng hiring efficient and fair (though because of the downturn we've been doing less hiring than we'd like). Our goal is to find the best performers, regardless of who they are and how they look on paper, and get them into any company they want.

So why do we do mock interviews?

Mock interviews were supposed to be a way to attract people to our platform, not an end in itself. Over time, though, mock interviews have become a larger and larger part of our business, and that’s part of the reason I wanted to write this book. I'm hopeful that changing the conversation around prep and empowering engineers to learn the material rather than memorize the material will make a dent in an industry that is, as of now, headed in the wrong direction.

The other thing we’re doing at interviewing.io is gathering a massive interview data set. Unlike other mock interview platforms, we don’t tell our interviewers what questions to ask or how to interview. Instead, we let each interviewer run their own process. This means that we end up with a lot of “interview biodiversity” on our platform.

I'm hopeful that, over time, we'll be able to use that data to do a retroactive analysis: look at users' outcomes and which mock interviews they passed and failed, figure out which types of interviews carry the most signal, and eventually come up with some conclusive best practices for surfacing good engineers. Because it can't be what our industry is doing now!

The other piece of the puzzle is having enough mindshare, when we do figure out the answers, to have people listen. We've already blogged a LOT about what companies can do to hire better. Here's our collection of posts specifically for employers and how they can hire better: https://interviewing.io/blog/category/for-employers-how-to-h...

The reality is that employers don't care. They won't change their behavior unless you're somehow making things 10X cheaper or faster. So that's what we want to do, both with respect to how companies hire (making it be about what people can do instead of the brands on their resumes) and with respect to how companies interview (hopefully moving away from toy algorithmic problems that people can drill on and memorize).



