
I've done code interviews with hundreds of candidates recently. The difference between those who are using LLMs effectively and those who are not is stark. I honestly think engineers who think like OP are going to get left behind. Take a weekend to work on getting your head around this by building a personal project (or learning a new language).

A few things to note:

a) Use the "Projects" feature in Claude web. The context makes a significant amount of difference in the output. Curate what it has in the context; prune out old versions of files and replace them. This is annoying UX, yes, but it'll give you results.

b) Use the project prompt to customize the response. E.g. I usually tell it not to give me redundant code that I already have. (Claude can otherwise be overly helpful and go on long riffs spitting out related code, quickly burning through your usage credits.) A rough API equivalent of (a) and (b) is sketched after this list.

c) If the initial result doesn't work, give it feedback and tell it what's broken (build messages, descriptions of behavior, etc).

d) It's not perfect. Don't give up if you don't get perfection.
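
(If you'd rather script this workflow against the API instead of the web UI, here's a minimal sketch using the Anthropic TypeScript SDK; the file paths, model alias, and prompt wording are placeholder assumptions, not anything prescribed above:)

    import Anthropic from '@anthropic-ai/sdk';
    import { readFileSync } from 'node:fs';

    const client = new Anthropic(); // reads ANTHROPIC_API_KEY from the environment

    // (a) Curated context: only the current versions of the files that matter.
    const projectFiles = ['src/App.tsx', 'src/api.ts']; // hypothetical paths
    const context = projectFiles
      .map((p) => `// ${p}\n${readFileSync(p, 'utf8')}`)
      .join('\n\n');

    // (b) Project prompt: suppress redundant echoes of code you already have.
    const system =
      'You are helping on an existing codebase. Current files follow.\n' +
      'Only output new or changed code; never repeat code I already have.\n\n' +
      context;

    const msg = await client.messages.create({
      model: 'claude-3-5-sonnet-latest', // assumption; use whatever model you're on
      max_tokens: 2048,
      system,
      messages: [{ role: 'user', content: 'Add pagination to the results list.' }],
    });

    console.log(msg.content);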



Hundreds of candidates? That's a significant number, if it's not an exaggeration. What are the stark differences you've seen? Did you ask candidates about their use of language models?


Yes. I do async video interviews in round 1 of my interview process in order to narrow the candidate funnel. Candidates get a question at the start of the interview, with a series of things to work through in their own IDE while sharing their screen. I review all recordings (though I will skip around, and if candidates don't get very far I won't spend a lot of time watching at 1x speed.) The question as laid out encourages them to use all of the tools they usually rely on while coding (including google, stackoverflow, LLMs, ...).

Candidates who use LLMs generally get through 4 or 5 steps in the interview question. Candidates who don't are usually still on step 2 by the end of the interview (with rare exceptions), without their code quality being significantly better.

(I end up in 1:1 interviews with perhaps 10-15% of candidates who take round 1).


So you’re not _interviewing_ them, you’re having them complete expensive work-sample tests. And your evaluation metric is “completes lots of steps in a small time box.”


Seems more like trying to find the most proficient LLM users than anything else. I've never done interviews, but I imagine I'd be hard-pressed to skip candidates solely because they aren't using LLMs.

Each to their own and maybe their method works out, but it does seem whack.


The thing is, when you're doing frontend, a human programmer can't write 4,000 lines of React code in 1 hour. A properly configured LLM system can.

This is why I wouldn't hire a person who doesn't know how to do this.


What are you doing where 4000 lines of LLM-generated code per hour is a net positive? Sounds like a tech-debt machine to me.


UIs in React are very verbose. I'm not saying this is running 24/7.
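
(To illustrate the verbosity claim: even a throwaway controlled form eats a couple dozen lines before any real logic shows up. A generic example, not from anyone's actual project:)

    import { useState } from 'react';

    // A trivial controlled input + submit: ~20 lines before any business logic.
    export function SearchBox({ onSearch }: { onSearch: (q: string) => void }) {
      const [query, setQuery] = useState('');
      return (
        <form
          onSubmit={(e) => {
            e.preventDefault();
            onSearch(query);
          }}
        >
          <input
            value={query}
            onChange={(e) => setQuery(e.target.value)}
            placeholder="Search..."
          />
          <button type="submit">Go</button>
        </form>
      );
    }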


Is the question actually difficult, though? If you ask for some standard task, then of course those who are leaning heavily on LLMs will do well, as that's exactly where they work best. That doesn't tell you anything about the performance of those candidates in situations where the LLM won't help them.

I suppose, if you are specifically looking for coders to perform routine tasks, then you'll get what you need.

Of course, you could argue that ~90% of a programmer's work day is performing standard tasks, and even brilliant programmers who don't use LLMs will lose so much productivity that they are not worth hiring... Counterpoint: IMO, the amount of code you bash out in a given time bears no relation to your usefulness as a programmer. In fact, producing lots of code is often a problem.


No, I'm not doing leetcode or algorithm questions - it's basically "build a [tiny] product to specs", in a series of steps. I'm evaluating candidates on their process, their effectiveness, their communication (I ask for narration), and their attention to detail. I do review code afterwards. And, bear in mind that this is only round 1 - once I talk with the ones who do well, I'll go deep on a number of topics to understand how well rounded they are.

I think it's a reasonably balanced interview process. Take home tests are useless now that LLMs exist. Code interviews are very time consuming on the hiring side. I'm a firm believer that hiring without some sort of evaluation of practical competence is a very bad idea - as is often discussed on here, the fizzbuzz problem is real.


> it's basically "build a [tiny] product to specs", in a series of steps

That seems like exactly what the person you're replying to is saying - that sounds like basic, standard product-engineering stuff, but simpler, like any of a million examples out there that an LLM has seen a million times. "Here's a problem LLMs are good at, wow, the people using the LLMs do best at it." Tautology.

So it's great for finding people who can use an LLM to do tiny product things.

In the same way, take-homes had all the same limitations. More power to you if those are the people you're looking for, though.

But it also sounds like a process that most people with better options are gonna pass on most of the time. (Also as with take-homes.)


> "Here's a problem LLMs are good at, wow, the people using the LLMs do best at it."

Yes, product engineering, the thing that 90% of developers do most of their time.


But what haven't LLMs looked at?


LLMs have looked at everything that exists so far. If all you're creating is yet another "Uber for Dogs", LLMs will do fine.

But if you want to create a paradigm shift, you need to do something new. Something LLMs don't yet know about.


> I ask for narration

That's a mistake. There are plenty of people who are not good multitaskers and cannot effectively think AND talk at the same time, myself included.

IMHO, for programming, monotaskers tend to do better than multitaskers.


Haven't coded for a couple of years (but have been a dev for two decades) and haven't used LLMs myself for coding (not against this), so I'm really just curious: wouldn't you want to know if a dev can solve and understand problems themselves?

Because it seems like tougher real-world technical problems (where there are tons of dependencies on other systems, on top of the technical and business requirements) require the dev to understand how things work, and if you rely on an LLM, you may not build enough of an understanding of what's going on to solve problems like that...

... Although I could see how devs who are more focused on application development, where knowing the business domain is their key skill, wouldn't need as strong an understanding of the technical side (no judgement here, I've been in this role myself at times).


> Haven't coded for a couple of years (but have been a dev for two decades) and haven't used LLMs myself for coding (not against this), so I'm really just curious: wouldn't you want to know if a dev can solve and understand problems themselves?

Yes, definitely, though I lean more on the 1:1 interviews for that. I understand the resistance to this from developers, but there's a lot of repetition across the industry in product engineering, and so of course it can be significantly optimized with automation. But, you still need good human brains to make most architectural decisions, and to develop IP.


Ah, I see, round 1 is just the initial weeder, and on top of that, you'd like devs who use LLMs for automation. Sounds like a good balance :)


Are you concerned that near-future LLM price hikes, once prices reflect their real cost, might explode your costs or render your workforce ineffective?


If that's real, this person interviewed at least one candidate per day last year. Idk what kind of engineering role in what kind of org would even have you do that.


I suspect he doesn't do much engineering, which would explain why he's impressed by candidates who can quickly churn out small rote sample projects with AI. Anyone who actually writes software for a living knows that working on a large production code base has little in common with this.


When I've had an open req for my team at a California tech company I've had days where I would interview (remotely) 2-3 candidates in a single day, several days a week for several weeks straight. It's not impossible to interview 100 people in a few months at that rate.


So... do you really think you were doing it right?

Sorry this is a little harsh, but how do you get anywhere near 100 people before realizing the approach must be horribly flawed, and devising and implementing a better one? Surely it behooves you to not waste your employer's time, your own time, and the time of all those people you're interviewing (mostly pointlessly).


These were 25 minute phone screens for candidates that had been sourced by recruiting, a few years ago at a company that was going through hyper-growth (hiring hundreds of engineers in a year). Phone screening several dozen people for 2-4 eventual hires doesn't feel too inefficient to me.


There are companies whose product is high-quality mock interviews. I wouldn't be surprised by that number of interviews in just a year and it can easily be more than one candidate per day.

Edit: there are also recruitment agencies with ex-engineers that do coding interviews, too.


I'd add that the best results come with clear spec sheets, which you can create using Claude (web) or another model like ChatGPT or Grok. Telling them what you want and what tech you're using helps them create a technical description with clear segments and objectives, which in my experience works wonders in getting Claude Code on the right track, since it has full access to the entire context of your code base.
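
(For what it's worth, a spec sheet in that style might look something like this; the feature, stack, and endpoint are all invented for illustration:)

    Feature: saved searches
    Stack: React + TypeScript, Express API, Postgres

    1. Add a "Save search" button to the results header.
    2. POST /api/saved-searches persists { name, query } for the current user.
    3. A sidebar lists saved searches; clicking one re-runs the saved query.

    Out of scope: sharing, renaming, pagination.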


> The difference between those who are using LLMs effectively and those who are not is stark.

Same here. Most candidates I interviewed said they did not use AI for development work. And it showed. These guys were not well informed on modern tooling and frameworks. Many of them seemed stuck in/comfortable with their old way of doing things and resistant to learning anything new.

I even hired a couple of them, thinking that they could probably pick up these skills. That did not happen. I learned my lesson.


Isn't that more correlation than causation, though? The kind of person who's not keeping up with the current new tech hotness isn't going to be looking at AI or modern frameworks; conversely, the kind of person who's dabbling with AI is also likely to be looking at other leading-edge tech in their field. That seems more likely to be the cause of what you're seeing than the act of using AI/LLMs itself improving candidates' knowledge and framework awareness.



