Is it just me or is it somewhat pointless to be lamenting the fact that this may or may not be a terrible part of the hiring process? At the end of the day if you sit down for the interview and are given this question, you can either do your best to solve it or walk out of the interview and go to a company that you feel is 'more worth your time'. When you're playing by Google's rules, there's not much to be accomplished in the form of complaining.
My guess is that many hiring practices will change (and are changing) over time based on the internal work that people are doing to drive better initiatives, and less based on comments on HN.
Personally, I would like to see what people think about the actual solution to this particular problem...
> When you're playing by Google's rules, there's not much to be accomplished in the form of complaining.
The problem is that by now so many companies have been infected with the idea of using "Google's rules" as a playbook that it's going to take several years for the contagion to wash out of the system.
Meaning we need to keep pushing back on multiple fronts. Given the insidiousness of the problem (and the psychological cost it has imposed on a generation of engineers; not to mention the sheer time cost) -- public exposure, followed by heaping mounds of ridicule (you can call that "complaining" if you like) would seem to be not only a valid, but unfortunately necessary part of our suppression strategy.
Otherwise, the people who keep foisting these questions on us (as if they were a cool and nifty way to size up candidates) -- just aren't going to get it through their heads.
> Personally, I would like to see what people think about the actual solution to this particular problem...
I used to enjoy solving problems like these, during my high school and college years. Sometimes really, really enjoy it.
But now that I build real systems for a living (which generally involves much meatier problems to solve) -- and it's become part of the standard hazing process in far too many shops, for far too long -- I couldn't begin to care less.
One of the problems I have with these types of questions where the interviewer is going to 'help guide' the candidate is that there is way too much opportunity for the interviewer to stall the candidate, either intentionally or not, based on their own bias toward the candidate.
At the most extreme, these questions are a test for whether the candidate had the same CS professors as the interviewer.
> you can either do your best to solve it or walk out of the interview and go to a company that you feel is 'more worth your time'.
The problem is so many other companies have copied this practice because hey, Google asks brainteasers and gets the best candidates, so we should do it too!
They have, but we're definitely seeing a turn towards a better direction recently. The software engineering job market has gone through some incredibly fast changes in only the last 10-15 years, and we will likely never perfect the process as the landscape changes, but I guess the optimist in me believes that at least some companies are doing their best. It's just my view that when the first 10 comments on this post echo that this is a terrible practice, it makes it difficult to have a more technical conversation about the OP.
It isn't pointless. Criticizing leetcode style interviews produces an infinite supply of social media comments and upvotes. The conversation will never progress, but it lets people argue on the internet, which is a key value proposition of headlines today.
>On a more general note, anytime a problem admits a dynamic programming solution, there almost always is a graph based approach.
This concept was introduced to me back in my algorithms class and is pretty useful. For anyone looking for a longer explanation, page 167 of the textbook [0] has the nugget and some examples:
>Every dynamic program has an underlying dag (directed acyclic graph) structure: think of each node as representing a subproblem, and each edge as a precedence constraint on the order in which the subproblems can be tackled.
Draw a graph with vertices labeled 0 through N, then add edges i->(i+1) and i->(i+2). The answer is the number of paths from 0 to N.
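To make that concrete, here is a minimal sketch in Python of the path-counting construction for the staircase version (the function name is just for illustration):

```python
def count_paths(n):
    # paths[i] = number of distinct paths from vertex 0 to vertex i
    # in the DAG with edges i -> i+1 and i -> i+2.
    paths = [0] * (n + 1)
    paths[0] = 1  # the empty path
    for i in range(1, n + 1):
        paths[i] = paths[i - 1] + (paths[i - 2] if i >= 2 else 0)
    return paths[n]
```

It's the Fibonacci recurrence in disguise: each vertex is a subproblem, each edge a dependency, exactly as the textbook quote above describes.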
This is still pretty specific to counting paths in the same way the original knight problem is, though.
The other comment is talking about how you represent each state for the recursive function as a vertex, then connect it to its dependencies (basically taking the recursion tree, but merging identical calls).
This problem was used at my work years ago. We now have a hiring guild dedicated to making a better interview process. Multiple people said they were originally in the guild to prevent this problem from being used haha.
Random question but why is it called memoization instead of just caching? Isn't it the same thing? For some reason I never ran across the word memoization when I was doing CS a number of years ago.
Memoization is specifically caching the results of a referentially transparent function in software; caching is more general. You have CPU caches, browser caches, even entire servers can be caches of common web requests, etc. So using the word "memoization" is more specific and leads to less confusion.
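In Python, for instance, memoizing a pure function is a one-liner with the standard library (a sketch using the classic Fibonacci example):

```python
from functools import lru_cache

@lru_cache(maxsize=None)  # memoize: results are cached, keyed on the arguments
def fib(n):
    # fib is referentially transparent: the same input always produces the
    # same output, so caching the result can never change the program's behaviour.
    return n if n < 2 else fib(n - 1) + fib(n - 2)
```

Without the cache this recursion is exponential; with it, each `fib(k)` is computed exactly once.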
I was surprised none of the solutions exploited the symmetry of the graph. There are 4 sets of numbers you can be on in a path of length 2+, and it only matters which set you are in: {0}, {1, 3, 7, 9}, {2, 8} and {4, 6}. 5 should be special cased since only paths of length 1 are possible.
That doesn't give you a big-O speedup, but it should be a 2x performance improvement for the algorithm that is linear in the number of hops.
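A sketch of that symmetry-reduced count, assuming the standard 3x4 keypad (the function and variable names here are mine):

```python
def count_from(start, hops):
    # Track one count per symmetry class instead of one per key:
    # a = corners {1,3,7,9}, b = {2,8}, c = {4,6}, d = {0}.
    if start == 5:
        return 1 if hops == 0 else 0  # 5 has no knight moves off it
    a = b = c = d = 1  # with 0 hops, exactly one "number" from each key
    for _ in range(hops):
        # Each corner reaches one key in {2,8} and one in {4,6};
        # 2 and 8 each reach two corners; 4 and 6 reach two corners and 0;
        # 0 reaches 4 and 6.
        a, b, c, d = b + c, 2 * a, 2 * a + d, 2 * c
    return {1: a, 3: a, 7: a, 9: a, 2: b, 8: b, 4: c, 6: c, 0: d}[start]
```

Four state variables instead of ten, so still linear in the number of hops but with a constant-factor win.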
Why? Note that the point of the problem isn't if you can solve it. It is how you approach it. A problem like this gives you a lot of signal.
* Can the person program at all?
* Do they consider the performance of possible solutions?
* Are they good at explaining their thought process?
* Do they consider edge cases?
* Do they write tests?
I'm not saying that it should be the only interview, or it is the most important. But it clearly provides differentiation between candidates. I'm also not saying it is the perfect way to get this differentiation, but I think it is far from "terrible".
> * Can the person program at all. * Do they consider the performance of possible solutions. * Are they good at explaining their thought process. * Do they consider edge cases? * Do they write tests?
All good things to look for. The problem is, day-to-day you don't have to do all of those things in 45 minutes while several people at the table watch you. Personally I'm a fan of a short programming exercise (perhaps even this problem) that you can take home and walk through with the team at a later time. I know those are detested by some, but I think they're fair (assuming the task is not something outrageous; something that can be knocked out in an evening perhaps) and give a much better idea of what an employee is capable of.
> Can the person program at all? Do they consider the performance of possible solutions? Are they good at explaining their thought process? Do they consider edge cases? Do they write tests?
That's what you'd like to think it's testing for.
In reality, it's testing for: "Here's a hoop - would you like to jump through it for me please? BTW not only will stumbling or even hesitating pretty much disqualify you - we love to dish after hours about those who fail to make the cut."
The point of leetcode problems isn't to find out if you're a good software engineer. It's to find out if you're willing to spend hundreds of hours practicing tedious bullshit to get a job at Google. If you can solve this in the interview you can become a good software engineer.
Memorizing known solutions to known problems does not guarantee good problem solving skills.
Furthermore, you’re now selecting for people that are highly willing to “spend hundreds of hours doing tedious bullshit.” If you were a CEO, is this the type of people you’d want to stack your organization with?
I think this hits on why these types of problems are potentially bad. You could be the world's worst software engineer, but if you'd taken a graph theory course recently, you'd pattern match this, say "sure, can I use Matlab?" and be done in under 10 minutes.
Or you could have tons of experience deploying robust production systems but have never happened to learn / need to use dynamic programming, in which case coming up with it in 45 minutes during an interview is not going to happen.
The junior and the senior aren't competing for the same job though, this just checks that you still remember the theory you learned in college. And no, just because you are senior doesn't mean you get a free pass on forgetting everything you learned in college. I haven't practiced problems like this in over 5 years and I still ace them.
> this just checks that you still remember the theory you learned in college.
Sure, but not everyone went to college or studied computer science / math. This seems like one of those problems that's really just testing a very specific type of preexisting knowledge.
You'll get full score even if you don't know the matrix solution; just doing a memoized recursive one is enough. However, if you had that specific knowledge, it still tests whether you have the skills to apply esoteric knowledge correctly, which is also very valuable.
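For reference, the memoized recursive version is short; a sketch assuming the standard 3x4 keypad (the move table is spelled out rather than derived):

```python
from functools import lru_cache

# Keys reachable by a knight move from each key on a 3x4 phone keypad.
NEIGHBORS = {
    0: (4, 6), 1: (6, 8), 2: (7, 9), 3: (4, 8), 4: (0, 3, 9),
    5: (),     6: (0, 1, 7), 7: (2, 6), 8: (1, 3), 9: (2, 4),
}

@lru_cache(maxsize=None)
def count_sequences(key, hops):
    # Number of dialable numbers starting at `key` with `hops` moves left.
    if hops == 0:
        return 1
    return sum(count_sequences(nxt, hops - 1) for nxt in NEIGHBORS[key])
```

Each (key, hops) pair is computed once, so this is O(hops) with a constant factor of ten keys.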
Anyway, Google no longer has this question. Once a question gets known outside, Google bans it from interviews. Sometimes people ask banned questions anyway, but in general they don't, and it is taken into account. I remember this being popular as a question 4 years ago; popular questions tend to leak quickly and get banned though.
I assume they all immediately handed in their letters of resignation and/or hired the candidate in as a principal engineer.
Sarcasm aside, the very fact that they ban questions demonstrates how useless these types of questions really are.
If you ban it because people might know it due to its popularity then what are you testing for? I thought it was for whether a candidate could solve the problem, but it appears to be to check which percentile of a special 'knowledge' club they belong to...
If you are an experienced programmer, how long does it take you to learn the handful or so of algorithms/concepts you need for these kinds of problems? It'd probably take 20-30 hours of learning, which is a reasonable expectation to ace interviews. And it's not like you necessarily have to come up with a solution all by yourself: if you communicate your thinking well, the interviewer will probably help you, and if you have an otherwise impressive résumé it shouldn't pose a problem at all.
What is your point? Companies want to hire smart people, which is why they have these interviews. This applies no matter what level people are at. "I forgot" isn't a valid defense. All the people in these threads who say things like "I knew this but forgot" are really cringy.
The point may be that many (if they're truly senior, not someone with 3-5 years in the industry) will not have used these theories in 10+ years. The real-world application is questionable.
And my point is that I'd rather hire someone who didn't forget these things. When you learn something well you don't forget it; it is like riding a bike. If you learned how to ride a bike for 10 yards and then stopped and never did it again, you would forget how to do it. Saying "I passed the bike riding test when I was 10, I don't need to do it again" isn't a good defense.
I mean, why did you even go to college if you intended to just forget everything afterwards instead of learning the things properly for life? I'd rather not hire people who just do the work necessary to pass and get the degree and don't want to learn more.
I'd say it shows that for most day-to-day work in the software industry, a certain percentage of the CS material we learn is simply unnecessary. No one has ever asked me to re-implement well-known algorithms that are found in well-established libraries. There are, of course, people who do this, and they do need such a background. But they are likely outliers.
If someone just pattern matched it I would give them a backup question. The point of these questions isn't seeing if they can solve it, it is seeing how they think about problems.
I'm also not worried if they make a good solution, or even get a solution at all. But almost all candidates can start making progress, come up with some possible solutions and tell me about those solutions.
If you then diagonalize that connection matrix, you can get constant time (though you might have to make a special case for starting at the `5` key):
"Diagonalization can be used to efficiently compute the powers of a matrix..." [1]
Diagonalizing the matrix reduces the matrix power to a handful of scalar powers, but computing the power of a scalar exactly is still an O(log n) operation.
Also given that this is a discrete problem moving to floating points like you'd likely have to do when you diagonalize could easily lead to errors in the final answer.
I tried but Wolfram Alpha doesn't take inputs that long hah. In any case even if it works you will likely get irrational terms which will make it only theoretically nice but not practically computable, like the closed-form Fibonacci formula.
Algorithm questions are the tech version of word problems. What they tell you is how good someone is at solving these types of problems. Solve a lot of these problems if you want to work at Google.
The point of these problems is to find out how committed an applicant is. If you can ace them the interviewer knows that you have enough critical thinking skills to succeed and enough commitment skills to learn whatever you need.
I would stop at "level 3" in this blog post intentionally.
Writing an unrolled dynamic programming solution, as opposed to a simple cache, is extremely error-prone mental gymnastics in my experience. The initialization procedure and the indexing are especially susceptible. Moreover, the resulting code is usually barely readable.
I wish people would stop expecting it. In practice, the memoization approach is sufficient in 99% of cases, and a 3% chance of making an error in unrolled code that causes user data to be misplaced is a much worse option.
I have a more general comment/question regarding this form of interview. Is this some kind of hidden age discrimination? Now I just might not be the target audience for those interviews, or exactly one of those they mean to filter out, but after more than two decades of experience in the field (in various roles, but my first paid job as a programmer was in '92) I feel uncomfortable with the kind of brain teasers I find on leetfree.org. Some are outright silly (e.g. "wiggle-sort"), some might be valid in some niche but are hardly commonplace (e.g. sparse matrix multiplication -- I haven't had the need to multiply matrices since leaving university, but I haven't done any 3D graphics or robot control since then either).
If the companies which employ such interviews are aware of the distance between those puzzles and the day-to-day labor of a software engineer, then I suppose it's OK. But seeing that the interviewer himself was a direct hire from university with no outside professional experience makes me wonder whether they are creating their own ivory tower. In the case of Google this seems to work out for them (they decidedly wanted to do things differently from the already-established enterprises, and having seen those, I say more power to them), but will it for others?
There is probably some of that; no process is perfect... If nothing else, at least it's more objective than a more bias-prone personality-fit approach. But it's also possible to practice -- even 20+ years after university -- if one wants to interview at Google or another company that uses those kinds of interviews. Reading a few blogs like this and watching some live interviews on a site like interviewing.io will get you back up to speed on graph walks or sparse matrix multiplication in no time.
On the subject of Google Interview challenges and keypads, I wrote about a Google keypad challenge here: https://umaar.com/blog/my-code-exercise-submissions-part-1/ - it's all about outputting keypad directions such as "right, right, ⏎" based on the desired input.
Unpopular opinion here: These type of questions are horrible at filtering out bad software engineers. Employers should be focusing on principles of software engineering rather than random math problems. Most new grad will write code like they were solving leetcode problems. It is sad that we need to teach them standard tools, protocols, concurrency, software engineering practices right out of college.
So here's the thing. And I know I'll get knocked to r/iamverysmart for this. I timed myself. It took less than 10 minutes from reading the problem to coding up a working dynamic programming solution (no peeking at the rest of the article). A big chunk of that was (unsuccessfully) trying to come up with a closed-form solution.
I didn't get the log(n) solution until I found out it existed at the bottom, but once the author mentioned its existence, it took less than 5 minutes to figure out. I didn't code that up, though.
I'm probably an above-average programmer, but I'm not a 99.99% outlier. I interviewed at early Google. The questions were tough for me (much more so than this one).
Things which jumped out at me: "The better the candidate, the fewer hints I tend to have to give, but I have yet to see a candidate who required no input from me at all."
I read this as "We're scraping the bottom of the barrel for candidates. Our candidates are nothing like those who applied to Google circa 2000"
And: "I didn’t even know it existed until one of my colleagues came back to his desk with a shocked look on his face and announced he had just interviewed the best candidate he’d ever seen."
I read this as: "Our employees are dumb. In using this interview question for years, none of us ever noticed the obvious."
What's going on there? I mean I can see messing up an interview question like this under the stress of an interview (I literally confused linked lists and arrays in probably my worst interview ever). But no candidate? Ever? And no employee? Come on.
There's something deeply wrong at Google. It's been deeply wrong for a few years. I hope someone fixes it.
Note that you need to pass at least 4 out of 5 interviews, each asking about 2 questions. But yeah, you don't have to be a genius to get into Google nowadays. What you need is the creativity to solve problems you haven't seen before (which is why Google bans questions once they get out), basic algorithms skills and the programming skills to implement it at a reasonable pace.
Yeah. I'm aware of that. It's a bunch of straightforward questions, a pretty high bar for how many you need to answer, and so it's pretty random how many of them you mess up, so it's kind of like a die roll to get a job. I had a really bad interview at a peer company once, and I'm well aware how it goes.
But I kind of miss the old Google where you DID need to be a genius. I interviewed with Google probably around the year 2000, and there was a genuinely hard ball packing problem. I'm kind of curious what happened. Anyone who could answer that question would find this one trivial -- and not just the dynamic programming solution, but the O(log n) one which apparently no one at Google (or even applying to Google) noticed.
What I hate about Google is that everyone there still thinks they're a genius. In 2000, it felt okay, since for the most part, they actually were. Today, it's kind of obnoxious.
> But I kind of miss the old Google where you DID need to be a genius. I interviewed with Google probably around the year 2000, and there was a genuinely hard ball packing problem. I'm kind of curious what happened. Anyone who could answer that question would find this one trivial -- and not just the dynamic programming solution, but the O(log n) one which apparently no one at Google (or even applying to Google) noticed.
Fwiw this is very much untrue. While this person may not have known the optimal solution, it's well documented within google.
> What I hate about Google is that everyone there still thinks they're a genius. In 2000, it felt okay, since for the most part, they actually were. Today, it's kind of obnoxious.
News to me ;)
There are certainly places where I think Google is a world-leader. But that doesn't require or imply that everyone be a super genius.
Early Google was incredibly arrogant and aggressive around only hiring people they thought were brilliant. The model was small teams of super-elite people. They took some flack for hiring for high caliber people into even relatively menial positions (e.g. random office admins). Even for things like the cafeteria, they wanted a world-class chef.
On the whole, it actually worked very well. Everyone wanted to work there because you were surrounded by top people.
Early Google: Phenomenal ability to build effective, high-quality products. Everyone wants to work there. Stainless reputation with the public.
Current Google: Limited ability to build effective, high-quality products (but pretty good at sustaining the successful products they have). Mixed reputation as an employer. Mixed reputation with the public.
You can extrapolate from there. I'm concerned about the second derivative.
I think the second piece there is employee growth. In 2001, Google had 300 employees. In 2004, it had 3,000. By 2011, it had 32,000. Today, it's over 100,000. You can't quite fit an exponent to it, but it's pretty close. If revenues don't keep growing exponentially....
I'm not implying it's failing, by any means, but it's gone to the same place as any other large corporation, and that's not a strong position from which to maintain pole position.
Likewise, I was surprised at how trivial this seemed.
My first solution was matrix multiplication -- I remember distinctly in undergraduate discrete mathematics learning that matrix exponentiation solved the hops-on-graph problem. I did not think to convert to binary to make logarithmic time, however.
The "convert to binary" is a standard algorithm for modular exponentiation. Anyone who has taken a cryptography course will know it.
It's not the fastest algorithm (by big-O; it probably is in practice), though. You can do a matrix decomposition, exponentiate the eigenvalues, and convert back. Speed will depend on how you do the exponentiation, and that gets into a pile of optimized numerical methods.
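For the exact integer answer, plain exponentiation by squaring on the adjacency matrix avoids floating point entirely. A dependency-free sketch, assuming the standard 3x4 keypad (names are mine):

```python
# Knight-move adjacency on the phone keypad.
MOVES = {0: (4, 6), 1: (6, 8), 2: (7, 9), 3: (4, 8), 4: (0, 3, 9),
         5: (), 6: (0, 1, 7), 7: (2, 6), 8: (1, 3), 9: (2, 4)}
A = [[1 if j in MOVES[i] else 0 for j in range(10)] for i in range(10)]

def mat_mul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_pow(M, p):
    # Exponentiation by squaring: O(log p) matrix multiplications.
    n = len(M)
    result = [[int(i == j) for j in range(n)] for i in range(n)]  # identity
    while p:
        if p & 1:            # this binary digit of the exponent is set
            result = mat_mul(result, M)
        M = mat_mul(M, M)
        p >>= 1
    return result

def count_dialings(start, hops):
    # Entry (start, j) of A^hops counts length-hops walks from start to j.
    return sum(mat_pow(A, hops)[start])
```

Python's arbitrary-precision integers keep this exact for any hop count, which is where the float-error concern about diagonalization disappears.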
Typical brain-teaser interview question that will have absolutely no correlation to on-the-job performance but makes the interviewer feel really smart. I thought Google had gotten away from these.
This isn't a brain teaser, they actually correlate with on-the-job performance. When I worked at Google I looked up their internal reports on the subject and they showed that the score people got on these algorithm interviews correlated pretty well with their performance ratings after they got hired. In other words the group of people who got hired even though they had low scores performed significantly worse than the people who got hired with excellent scores.
General mental ability is regularly affirmed as one of the most predictive factors for successful hires, it's just that it has a questionable legal history.
I've been doing this for a long time, and I've never seen any kind of correlation between how well someone solves silly algorithm puzzles in a 45-minute window and any kind of real-world performance. Also, there's no apparent a priori reason why this would be the case. It might be that the way performance rating is done at Google is constructed so as to be correlated with silly brain-teaser performance.
Maybe for Google-type/scale problems, but I'd say that the engineers I've worked with and hired for "normal" software jobs (e.g. here's a boring business domain, make it CRUD, make an API, make a UI for it) - there is almost an inverse correlation between being a CS/algorithm genius and being happy/successful at these everyday roles.
From observation, many super sharp CS people very frequently want to write systems from scratch, get bored, then move on. It's really hard to pull them back to use off-the-shelf tech, don't over optimize, etc. Many of the best folks I've worked with in these roles are not CS majors at all (EE, ECE, etc) and this algo screening would filter them out.
I wouldn't be surprised if the people Google rejects because they lacked technical skills are much better than the people Google rejects because they lacked soft skills. Accumulate that for every company paying more than your company and the people with great algorithm skills left will likely be social misfits in some way.
> It's really hard to pull them back to use off-the-shelf tech, don't over optimize, etc.
There is an argument to be made here about career and skill growth. I left my previous job which was basically business-logic-to-CRUD-in-a-complex-domain simply because I stopped growing there. The moment you stop growing in software industry is the moment your career dies, at least that's my perception at this time given my personal experiences.
This is my feeling as well, I think it's the difference between a computer scientist and an engineer. You just need to know who you are and what roles you prefer.
Personally, I don't think correlation is good enough. I never doubted that people who are good with a single given abstract problem are correlated with people who are good candidates. So yes, it non-arbitrarily identifies good candidates, but it also arbitrarily eliminates a significant chunk of good candidates.
Yeah, it eliminates a lot of good candidates. If you can't match Google on benefits then you should not copy their hiring strategy and get their rejects, instead try to get the diamonds their rough process misses by doing something different.
Eliminating good candidates is not in and of itself a bad thing if it a) decreases your risk of a bad hire, and b) you want to optimize to decrease the risk of a bad hire at the expense of eliminating some potentially great ones.
How do they control for the bias that a person with a high or low score may just be perceived to do better or worse, or may be given better or fewer opportunities, etc., if their scores are known to the managers? In other words, it's a correlation, but is it explanatory?
Why would managers get employees' scores on random interview questions? Why would that affect project assignments months or years later? Why would we assume folks at Google in charge of creating effective interview processes wouldn't be capable of the most basic statistical analysis, by making sure they had a large enough sample, enough performance reviews for each employee, etc?
This isn't a brainteaser. It's a programming problem.
Brainteasers are like "how many ping pong balls fit in a 747" or "you wake up an inch tall in a blender, the blades start spinning in a minute. How do you survive."
The problem itself appears to be nonsensical, with no real-world value, but as long as it is a well-defined problem with a concrete solution it should be fine. Ideally the interviewer should be looking for how you approach the problem and what you do to obtain an answer (whether they do or not is a separate topic).
Granted, the 747 question also technically does have a concrete solution I guess, but I would end up asking things like what's the volume of a 747 and the dimensions of a ping pong ball and then point out there will be gaps as we fill the 747 with balls so it's not simply dividing the volumes, write a program based on these observations, give a disclaimer that it will most likely be inaccurate, and hope that is good enough to pass.
But that blender question... uh yeah... I'd probably spend too much time asking clarifying questions to actually come up with something plausible.
The blender question relies on knowing that muscle strength scales with the square of linear size while body mass scales with the cube. This is why insects, for example, can be so strong relative to their weight. So if we scaled you down you would be strong enough to just jump out.
Google does still use questions like this, it's just that they're relevant to programming/system design. So a question might be "How many servers are needed to run Gmail?"
The reason the 747 question is bad is because it has little to do with programming and a lot to do with how much you happen to know about planes and ping pong balls and how to estimate volumes of irregularly shaped objects, none of which have much if any correlation with programming ability.
They don't (for SWEs), or at least you aren't supposed to. I think for PMs there may be questions more in this realm, but for SWEs, your questions should all be programming problems.
You're right that candidates can get system design interviews, but "how many servers does it take to run gmail" is not anything like a system design interview. "How would you architect gmail" is. System design questions by their nature don't really have a correct answer.
Estimating how much traffic needs to be handled and what kind of resources it would take to handle it are absolutely part of both of those questions, which in reality are the same interview problem because a System Design interview isn't a single question but rather a long series of guided questions that cover architecture, scaling, and a lot more.
It might be an OK problem if the interviewee is familiar with 747s and ping pong balls, and will be interacting with physical objects as part of their job. I have a terrible time with physical measurements and area and space, but thankfully it's not relevant to my work.
Not hard, but still disadvantageous to people who haven't seen it before as that's one more thing they have to learn and keep in mind during a stressful interview.
Sorry, what is a square? I come from a society that exclusively uses a polar coordinate system and have no such concept of your cartesian-biased constructs. (/s)
While I get this sentiment, and agree with it, I would like to also point out that there are almost no clear cut and dry problems in the real world. One needs to be able to know how to ask questions in order to get a good enough grasp to begin formulating some kind of solution.
When you are taking an SAT question, there is no opportunity to ask 'can you tell me how tennis is scored?' That is not the case in some/most interviews.
I have a worse example of this -- I've seen a programming question going around that involves tournament brackets (specifically how they're seeded). I'm not particularly into sports, but if you were, you would have a huge leg up on this problem because you'd already have an intuitive sense of how brackets work and are constructed, that I wouldn't.
Swipes like "This reads like satire" break the site guideline against calling names. This comment would be just fine without it. Please edit them out in the future.
Edit: we've had to ask you this more than once before. Worse, you've been doing it in other places too, e.g. https://news.ycombinator.com/item?id=24683603. We ban accounts that do this. I don't want to ban you, so would you please review the guidelines and stick to them?
Banned means that an account's posts are killed, and only users with 'showdead' turned on in their profile can see them (with the exception of posts that enough users vouch for as good, which certainly hasn't happened in this case). If you turn 'showdead' on, you're committing to see the worst of what the internet has to offer. If you don't want to see the bottom of the barrel, turn the setting off and you won't.
We don't delete posts outright except when the user who posted it asks us to. So yes, banned accounts can continue to post, except the posts go into [dead] status immediately, which removes them from public view. Only the tiny minority of users who've signed up for 'showdead' see them.
That balance is the best we can hope for on a site that allows anyone to create an account. If we stopped banned users from posting altogether, they'd just make a new account, and post the same stuff except now they wouldn't be banned anymore. Edit: In other words, it's an optimization problem and this seems to be the optimum, even though it's not entirely satisfactory.
Whoa can you please not break the site guidelines like this, regardless of how wrong or un-smart someone is or you feel they are? Taking nasty shots at other users is the fastest way to poison the commons, and entirely unnecessary.
Edit: we've already had to ask you repeatedly not to do this. Continuing to do it will get you banned here. I don't want to ban you, so would you please review the guidelines and stick to them, regardless of what other commenters do?
I think the parent comment just worded it poorly, but is not necessarily wrong.
They were probably just trying to say that a knight's move looks like a straight line of 3 cells in one direction (which includes the starting position of that piece) and then 1 more cell in a perpendicular direction.
Given how primitive and easy to understand the movement constraints in this particular case are, this seems like an unnecessary concern. Note that I am not dismissing the rest of your post.
> A person who has never seen chess will spend more of the interview just trying to figure out what is meant by the Knight's move.
I mean he explicitly spells it out as part of the problem. I highly doubt in an interview they'll just plop down a term like "knight's move" and refuse to clarify what is meant by it, if necessary. It's not exactly the sort of thing that takes living a life of privilege to understand.
The difference is on the margin. If typical candidates who reach the optimal solution reach it in the last 5 minutes, and if it takes 5 minutes to discuss with a non-chess-playing candidate how a knight moves, then you have severely disadvantaged that candidate.
Isn't this somewhat dependent on the outlook of the interviewer? I know interviewers will greatly vary, but is it not more important to consider what my path to the solution was, rather than what the actual answer is?
I would be more inclined towards a candidate that knew nothing of a problem but was able to explore a way to the answer vs. a candidate that knew the answer simply because of hours of rehearsal