
> The article mentions that the candidate who did best on this question was “the type who did high-school competitions.” I wonder if anyone stopped at any point to question whether that’s the ideal type of hire.

I think this is the most telling bit. Time after time, I've seen excellent competitive programmers end up being mediocre engineers. Almost none of them are bad, and it's also a very objective (at least on paper) process, which is probably why Google is happy to stick with it; but most of them also aren't amazing innovators.

I'm sure that university student who aced the interview turned out to be very smart: they went to a great university and can solve challenging puzzles. But I don't think they'll end up being the engineer that builds Google's competition to ChatGPT. I wonder whether a course correction, where Google starts looking for people who create really cool projects even at the risk that they can't solve coding challenges, would actually help.



> Almost none of them are bad, and it's also a very objective (at least on paper) process, which is probably why Google is happy to stick with it; but most of them also aren't amazing innovators.

Oh, wow. If I could get that kind of result ("almost none of them bad", and, I would add, "possibly great") from an interview, I would stick with that process.

I have also conducted interviews in a company with heavy focus on code tests, which obviously favors competitive programmers, and my anecdata was that I got very good results compared to the alternative (i.e. not doing that).


That said, that way of conducting interviews pretty often makes you feel like a jerk, and that is uncomfortable for both sides.

And I believe this is why this discussion keeps popping up: the process makes you discard a lot of "possibly great" candidates in exchange for "almost none of them bad".

It is the eternal precision vs recall tradeoff.
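To make the tradeoff concrete, here's a minimal sketch with invented numbers, treating "hire" as a positive prediction and "turns out to be a good engineer" as the positive class:

```python
# Hypothetical illustration of the precision/recall tradeoff in hiring.
# All counts are invented for the sake of the example.

def precision_recall(hired_good, hired_bad, rejected_good, rejected_bad):
    """'Hire' is the positive prediction; 'good engineer' is the positive class."""
    precision = hired_good / (hired_good + hired_bad)      # hires who are good
    recall = hired_good / (hired_good + rejected_good)     # good candidates hired
    return precision, recall

# A strict coding-test filter: almost nobody hired is bad (high precision),
# but many "possibly great" candidates are discarded (low recall).
p, r = precision_recall(hired_good=18, hired_bad=2,
                        rejected_good=30, rejected_bad=50)
print(f"strict filter: precision={p:.2f}, recall={r:.2f}")
# precision = 18/20 = 0.90, recall = 18/48 = 0.375
```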


>But I don't think they'll end up being the engineer that builds Google's competition to ChatGPT

OpenAI has a lot of strong competitive programmers. Search IOI or ICPC in their team update blog posts:

https://openai.com/blog/team-update

https://openai.com/blog/team-update-august

https://openai.com/blog/team-update-january


> I've seen excellent competitive programmers end up being mediocre engineers

I've seen the opposite. I know many extremely talented competitive programmers (IOI, CCO, USACO Platinum, Codeforces grandmasters, ICPC, and the list goes on) and they are all excellent engineers. It's not about having a wealth of knowledge about DSA (although this occasionally helps, depending on the type of work) or competitive programming itself, but rather the skills it teaches. They all know how to learn new things very quickly and thoroughly, and it goes without saying that their general problem-solving skills are incredible.

They've interned at companies like Snowflake, Google, Jump, Citadel, Jane Street, Waabi, Uber R&D, DataDog, SingleStore, etc. (all before or during their 3rd year, which is practically unheard of for "normal" students, especially during the tech recession) and have consistently received the highest performance grade (our co-op program requires employers to rate them on a scale from 1 to 7). There are also a few I don't know directly who are in fact working on things like Bard, or who have founded their own companies (TabNine was one of them, IIRC).


>> I wonder if anyone stopped at any point to question whether that’s the ideal type of hire.

> I think this is the most telling bit. Time after time

If I can rephrase the question: does anybody check whether the rating somebody receives in an interview is positively correlated with their performance rating in subsequent years?

The answer is yes: people do analyze that question, and yes, it is.


How could they perform this analysis? Does every engineer at Google excel? Is their revenue a permanent hockey stick? If so, the process is working. If not, it's not.

But if they have any poor engineers, their process clearly doesn't work conclusively, because those engineers passed it. And they have no control group, unless they've secretly been hiring a group of engineers without asking them these sorts of questions (or ignored the results for a group). I just don't see how Google could determine the process works without studying what happens when they don't follow it.


> The answer is yes: people do analyze that question, and yes, it is.

I'm not sure. The author answered this question with a "no". They admitted they don't analyze or check how actual performance on the job correlates with doing well on these kinds of interview questions. Maybe others do, but the author doesn't. I wonder if anyone at Google checks this.


You can wonder as long as you want, but I've told you the answer ...

In general, even people internally don't bother to use moma to check this, so you're not alone in your opinion.


I see, you mean "someone at Google, just not the author"? Because the author explicitly said he doesn't do follow ups and so he doesn't know if his question is a good predictor of actual performance. Maybe someone else at Google does the follow up, is this what you mean? Strange that the author doesn't mention this though.


> I see, you mean "someone at Google, just not the author"

Correct.

> Because the author explicitly said he doesn't do follow ups and so he doesn't know if his question is a good predictor of actual performance.

This would be difficult for a random engineer. People Ops has different access to performance/interview data, so they can do this analysis; your interviewer can't (unless you share your future ratings with them and they wrote down all of the ratings they gave).

You can do the analysis on yourself, though you'll have to jump through some hoops to get your own interview data. It's also a single data point, so unless you can convince a bunch of other people to do the same, it really isn't significant.

> Strange that the author doesn't mention this though.

I'll be honest: I think maybe 1k employees know that anybody has bothered to do this analysis. It would not surprise me if the author is completely unaware, given that most of memegen is as well.



