
At my old company, we never did take-home assignments or whiteboard interviews. We just sat candidates down and pair programmed with them for a couple of hours on whatever we were working on at the time. We paid them the same hourly rate we were making (but didn't charge the clients for it).

If the candidate obviously didn't know what they were doing, we ended it early. Usually you knew within 15 minutes.

One candidate hated the interview process, and walked out. One candidate, likely a future business leader, argued vociferously for a higher hourly rate during the interview, and later with the founder.

But I would describe our success rate for finding good programmers who were easy to work with as "extremely high", and our rate for false positives as "I can't remember ever having one in about 4 years of working there".

The overhead for the engineering team to run these was not zero, and not everybody enjoyed doing it. But some people did, so we had them do it more often. I have no idea why this isn't a more common practice, I would recommend it to anybody.




Can you go a bit more in depth about the pairing session? At most companies I've worked at, even if the person coming in was familiar with all the technologies we were using, I can't imagine them grokking, or even just "skipping over", the data model and system architecture/design. All non-trivial projects I've worked on had a "bad" codebase as a result of deadlines, pivots and misunderstandings. The code is always steadily improving, but never good enough that someone can just waltz in and get to hacking on it.


I can't answer for the parent, but I've done similar style interviews. Because the candidate is not familiar with your codebase, you must be the expert. Your job is to let them drive as much as possible, while you "unstick" them when they trip over what they can't possibly know. It is a collaboration, which is part of what makes this a good interview style IMO. Sometimes a candidate will offer an otherwise good solution, but it won't work well within your codebase -- in this case you explain why, and then you iterate.


Yeah, usually we would give a little bit of context, and then look at the sprint tickets in Jira, and take one. The candidate was not expected to be familiar with the codebase or product at all beforehand, and they weren't expected to become an expert on it during the course of the interview. But coming up to speed quickly is one of the skills we liked to see, since that was part of the job.

They should be able to 'hang', in the sense of being familiar with how we approach problem solving as programmers. They should ask questions when needed, and answer questions as they arose. The goal was to just figure out what level they were at in general, and what it would be like to work with them. They weren't expected to come up with the best possible solution, because usually there wasn't a single right solution.

We would ask them questions like "well, you saw the ticket, how do you think we should approach this?" or even just natural things like "this function we just wrote isn't working, what's going on here?". The point is that they are working on real problems — literally real problems — and not abstract CS exercises.

Since we mostly did pair programming every day, the other thing you'd get a sense of is how easy someone is to work with. Not everybody had experience pairing, which is to be expected. You found out quickly who took to it, and who wanted to be a lone wolf. We didn't rule someone out just because they weren't good at pairing, but we factored it in. Some people just did not communicate very well, which is a problem.

(This also gave them a good sense of what it would be like to work with us, and sometimes people decided not to take an offer because they didn't like the way we worked: good outcome for everybody)

They would usually do two rounds of this, both on the same day. Morning session, then an afternoon session. We offered to take them to lunch in between, which exposed them to more of the team, and let us talk to them in a more casual setting. Part of the interview would also be a private talk with one of the founders, and an HR person (in a different session).

The people the candidate paired with would provide input like: "I would recommend hiring this person" or not, but didn't make the decision themselves.

Here, I should admit that we did have a set of artificial problems we'd give candidates if we were hiring for a skillset that did not have a real project to test them on. We did both web and mobile contracts at this (consulting) company, so if we were hiring, e.g., an Android dev but didn't have an Android project for them to work on, we'd have them check out a repository and work on a fake ticket instead. Usually this was with a developer beside them, but I can remember times when the candidate had to do it solo.


This is great. If I had my own company I would do basically the same:

1. have a look at the candidate's github / stack overflow profile. If you like what you see, invite him/her for an interview;

2. have a couple of pair programming sessions as you described, with different team members (and don't let team members influence each other during the hiring decision);

3. invite the candidate for lunch and see how he/she treats others (basically an asshole test)


> 1. have a look at the candidate's github / stack overflow profile

Not everyone has a github or stackoverflow account!


> I have no idea why this isn't a more common practice

It used to just be “the practice”. The folks who don’t do this are trying to automate hiring.


> The folks who don’t do this are trying to automate hiring.

Or they are trying to emulate Google and similar companies without understanding why they do it.


I have 15+ years of experience; I funded a few high-level projects for corps and a few startups, including hiring my own teams. I always conducted my interviews by doing some real work together with the candidate, and I found it the best approach.

And quite recently I sat on the other end of the hiring table. I'm good at algorithmic challenges like LeetCode or HackerRank; I've always liked puzzles like this. And yet, I failed my last interview, purely because I got a bit nervous and did not manage to deliver a fully working solution in 15 minutes in front of the interviewer. And that after passing their HackerRank tests and answering their general system questions. I guess their approach was all or nothing. Whether that's good or bad, I don't know. But from hearing about their problems, I knew they were something I eat for breakfast :)


This is great for small companies, but large companies are interested in standardizing the interview process. Leetcode is great for that since the outcome is pretty binary.


Sure, great for metrics, terrible for actually doing the job.

Leetcode can never compete with or allow a candidate to demonstrate real multi-dimensional or multi-systems thinking. You may be able to write some bass ackwards arcane algorithm and prove you’re “31337” but can you make that work at scale with a data store that you have no information about? How’s it going to get the data to operate on in the first place? Will your data retrieval query cripple a prod database? Read from replica? How far behind can that data be without making your fancy algorithm useless? How will you parallelize that work set? In-request or background job? Multi-threaded or single thread? What about changing state in multi threaded contexts? Database indexes? Schema design? How will you handle failures in your algorithm? How will you handle malformed data? And most importantly, does what you’re doing benefit the user, or just your epeen?

Leetcode is bullshit. It’ll never let somebody show you they’re good at anything other than solving bullshit problems. And you’ll lose everybody who CAN work within the larger context by chasing candidates whose chief skill is solving bullshit problems.


Leetcode is a proxy IQ test. Anyone who can do leetcode can learn how database indexes work (hint: it uses Data Structures and Algorithms) but not the other way around.


I mean yes, it does filter for those that are willing to put up with some bullshit to get a job. But you haven't convinced me that the group that's unwilling to put up with this is unqualified.

There are many senior people or folks with families/etc that don't have the time or patience for this ridiculous cargo cult hazing ritual masquerading as a hiring technique.

I would say your assertion that "leetcode is a proxy for IQ" is a bold statement to make without some bold sources to back it up.

I've personally met a number of people that are great at these small scoped problems but suffer at broader systems engineering.


> the group that's unwilling to put up with this is unqualified

Companies that get a lot of applicants care more about false positives than false negatives. So they're fine not hiring a lot of qualified people, as long as there's a low chance of accidentally hiring an unqualified person. Apparently LeetCode-style problems work for that type of screening.


Companies can win by hiring exceptional people with deep experience. The cost of not hiring these people goes unquantified; I would never want to be in their position. Imagine a startup hiring that one person who will make a billion-dollar difference.


The startup isn't likely to be getting resumes on the scale that the big tech companies do and can afford to spend the time to look at each resume and maybe even do an interview with each candidate.

However, if you are getting resumes on the scale that the larger, well-known tech companies do, then individual attention cannot be paid to each resume. At that point, one looks more for a quick filter to try to remove the risky hires.

If you've got 1000 resumes and 80% of them would be bad hires, and an online assessment takes that down to 200 resumes of which only 50% would prove to be bad hires, that is a significantly improved pool to consider.
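
To put rough numbers on that tradeoff, here's a tiny back-of-the-envelope sketch in Python, using only the hypothetical figures above (none of this is real hiring data):

    # Hypothetical screening funnel, using the numbers from the comment above.
    total = 1000
    bad_rate_before = 0.80      # 80% of the raw pool would be bad hires
    pool_after = 200            # resumes left after the online assessment
    bad_rate_after = 0.50       # 50% of the screened pool would be bad hires

    good_before = total * (1 - bad_rate_before)       # 200 good candidates
    good_after = pool_after * (1 - bad_rate_after)    # 100 good candidates

    print(f"hit rate per interview: {1 - bad_rate_before:.0%} -> {1 - bad_rate_after:.0%}")
    print(f"good candidates lost to the filter: {good_before - good_after:.0f}")

The filter discards half of the good candidates (200 down to 100), but it raises the chance that any given interview is with a good candidate from 20% to 50%, which is exactly the preference for avoiding false positives over false negatives described upthread.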

Yes, it is possible that a great candidate was removed in that filter, but it's also possible that one still remains in that pool.

Startups may be cargo-culting big tech interview processes, but the process has value for big tech companies and any others that have more resumes than they can reasonably deal with.


Cost of living is driven up by FAANG, and startups use the same LeetCode hiring process as FAANG but pay less. The equity you get has a minuscule chance of becoming valuable.


No, that's not true, LC style problems are not bullshit. There's no rote-memorization required. Just basics of a language that you know and the very basic constructs.

95% of programming is data structures and algorithms because that's where all the performance gains are usually made. The other 5% is knowledge of the hardware and micro-optimisations like avoiding branch mispredicts and understanding the compiler.

Hashing in databases is very similar to how hash tables work, as is sharding.
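
To illustrate the parallel with a toy sketch (this is not how any particular database implements it): a hash table picks a bucket the same way a sharded store picks a node, by hashing the key and reducing it modulo the number of slots.

    import hashlib

    def slot_for(key: str, num_slots: int) -> int:
        """Hash the key and map it onto one of num_slots slots."""
        digest = hashlib.sha256(key.encode()).hexdigest()
        return int(digest, 16) % num_slots

    # Same function, two readings: a bucket index in a 16-bucket hash table,
    # or the shard that holds this row in a 4-node cluster.
    print(slot_for("user:42", 16))
    print(slot_for("user:42", 4))

(Real systems usually layer consistent hashing on top so that adding a node doesn't reshuffle every key, but the underlying idea is the same.)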


It is true: most LeetCode problems use the same underlying collection of approaches, and if you don't know them (by memorizing them), you probably won't solve the problem. There is some value in knowledge of those approaches (algo and DS stuff), but these toy problems are not a good way to evaluate how someone would use them in a real-life situation.

"95% of programming is data structures and algorithms" - it's just not. 95 percent of programming is gluing together libraries, refactoring code and reorganising what you have to solve new business requirements.

Your focus on performance (albeit important) above all else makes me question how much real world experience you have building large scale systems. (Unless you're a systems/embedded developer).

But this is the crux of the issue: using metrics that don't map well to what we're trying to optimize for. Perhaps you're trying to optimize for finding great performance-focused, low-level software devs? If so, that's different from my goal.


>can you make that work at scale with a data store that you have no information about? How’s it going to get the data to operate on in the first place? Will your data retrieval query cripple a prod database? Read from replica? How far behind can that data be without making your fancy algorithm useless? How will you parallelize that work set? In-request or background job? Multi-threaded or single thread? What about changing state in multi threaded contexts? Database indexes? Schema design? How will you handle failures in your algorithm? How will you handle malformed data? And most importantly, does what you’re doing benefit the user, or just your epeen?

Some of these questions are perfectly valid during LC coding rounds. Others would be more appropriate in a system design round.


Agreed, but I see the impulse to standardize as the source of a lot of the problems people have — by people, I mean both good engineers, and the companies who can't hire them.


Honest question, how would you handle hiring at FAANG scale without standard processes?

For reference we had an opening in my team and 1600 people applied.


This process still relied on screening, including technical screens. The biggest difference is that it substituted a day of whiteboard interviews with a day of pairing, which we found to produce better results, and candidates really liked too.


Apple used to have a per-team interview process (I don’t know if it has changed). While the interview experience can vary a lot, there are certain pockets/teams that are spectacularly brilliant and have the freedom to maintain the bar & conduct targeted hiring.


Having 1600 applicants for an SE role is not a unique problem, but it is definitely a very niche one to have...


Go on LinkedIn and message the people you think can do the job, based on prior work, experience, or degrees. Trash the rest.


sure, let's spend time solving Leetcode problems instead of understanding how HTTP works.


This!

> I have no idea why this isn't a more common practice

My maths teacher once said in class during lecture, "Kids, when you venture into the outside world, you will realise 'common-sense is not a common thing!'".


The goal of LeetCode interviews is to get into "who is the smartest" mode, so the cost of hiring can be driven down. Those who conduct the interviews are unaware of this. The process you describe would result in paying people more, which is not what FAANGs and those imitating them want.


This is exactly the right way to do it. Not only do you find out if the candidate can program, but how well they can pick up unfamiliar code, tools, etc, can they communicate, can they work with others, etc, etc, etc.

Leetcode bears little resemblance to everyday programming. Of course it's sometimes necessary to understand big-O, and that various data structures and algorithms exist, and which ones you might apply to certain problems, but coding them from scratch, from memory, within a fixed time? Pretty much never.


> but coding them from scratch, from memory, within a fixed time? Pretty much never

This is definitely true, but having to do it quickly and on-the-spot is one way of measuring just how comfortable you are with it. You might never have to do it in an hour, but being able to is a proxy, however weak and indirect, for your competence, and guess what, I only have an hour to talk to you.

I love and hate leetcode interviews. I enjoy the challenge, and I hate being put on the spot and knowing that poor performance on one specific problem might result in an unwarranted thumbs down. I sincerely hope any interviewer whose chair I'm sitting in treats them the way I do, which is that it isn't important that you come up with the optimal solution; it's important that you can think through the problem and come up with any solution, however inefficient, and then honestly evaluate this solution (including big O) and point out how it could be improved. You probably don't have time to implement that improvement, but you've demonstrated competence in both thinking and coding along the way.
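
To make that concrete, here's the kind of iteration I mean, using the classic two-sum problem purely as a stand-in (not a question from any particular interview): get a working brute-force version down, state its complexity, then talk through the improvement even if you only have time to sketch it.

    # Brute force: check every pair. O(n^2) time, O(1) extra space.
    def two_sum_naive(nums, target):
        for i in range(len(nums)):
            for j in range(i + 1, len(nums)):
                if nums[i] + nums[j] == target:
                    return (i, j)
        return None

    # The improvement to talk through: trade space for time with a hash map.
    # One pass, looking up each value's complement. O(n) time, O(n) space.
    def two_sum_hashed(nums, target):
        seen = {}                              # value -> index
        for i, x in enumerate(nums):
            if target - x in seen:
                return (seen[target - x], i)
            seen[x] = i
        return None

    assert two_sum_naive([2, 7, 11, 15], 9) == (0, 1)
    assert two_sum_hashed([2, 7, 11, 15], 9) == (0, 1)

Either version passes; the point is being able to say out loud why the second one is better and what it costs.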

Edit to add: When I'm preparing to interview, I practice leetcode-style problems, not so much to memorize all the questions and their answers as to practice my quickness for thinking and coding on that type of thing. It feels kinda silly because, as you point out, that's pretty much not a part of the job at all, but it's very, very helpful to exercise your mental muscles for that sort of thing so you don't stumble over the simple mechanics of the problem. It feels absolutely great when you get a question you've never heard before but you've put yourself in the mental state to solve it and you kill it. And like it or not, I've personally found that people who enjoy this type of challenge are often the exact types of coders I want to work with.


I don't agree it's a good indication of competence; leetcode skills are almost orthogonal to the skills required in general programming.

It might be different if you're being hired to develop a new video codec, or distributed consensus algorithm or whatever, but for the typical large web property or enterprise database, leetcode is all but irrelevant.

Code cleanliness, code communication, defensive programming, properly considering edge cases, flexibility, API and UI design, how other people interface with your code, and so on: all these things matter far more than advanced algorithms, which will rarely be required.


> I don't agree it's a good indication of competence

my statement was that it's "a proxy, however weak and indirect, for your competence"


If this interview process is so golden (based on your metrics of success), why isn’t everyone doing it?


Not OP but here are a couple reasons:

* It doesn’t scale well to huge organisations.

* It’s not standardised or “repeatable”, in the sense that you can’t generate datasets from this that let you correlate interviews with subsequent outcomes and improve hiring metrics company-wide

* It requires expertise in pair programming, which most companies don’t have

Basically, OP’s hiring process is an excellent example of “doing things that don’t scale” to gain an asymmetric advantage as a small organisation.

They don’t need to derive metrics, they can invest the time, they have a team who are skilled at pair programming - so they can have this hiring process and other people can’t.


Brilliant.



