>It sounds like Asana wants to determine if people can do this kind of thing.
Same for Facebook/Google/Apple/Microsoft/Amazon/etc that also use whiteboard interviews. Those types of companies value seeing candidates' thought process more than the final code of work-at-home test questions. There's a signal there in whiteboard interviews that can't be replicated by at-home tests. Whether that unique signal is valuable or worth the cost is up for debate (as the ongoing disagreements about hiring practices show).
I am not disputing that it is possible to design serious, rigorous evaluations that somehow hinge on candidates not using their normal tools to answer programming questions.
What I dispute is that anyone really does that.
It is awfully easy to rationalize any hazing ritual with this sort of six-degrees-of-concurrency-and-formal-methods logic. But there's a simple observation to make about all these questions: they do not mimic the experience of doing the work the interviewer is evaluating candidates for.
The more a question diverges from the experience of doing the job, the more process and framework and structure and rigor is needed to extract value from it. But really: most of these questions are the same dumb fucking programming questions, just with different variants of the interviewer repeatedly whacking the candidate over the head with a paper towel roll while they try to answer.
I think one thing often missed in these discussions is that the interviews that work best for AmaGooBookSoft are not necessarily the same as the interviews that work best for smaller companies.
I've interviewed at startups that have some of the best work-sample testing that I've ever seen - but I could immediately tell these systems wouldn't work well at larger companies.
There are entire companies devoted to helping people prepare for AmaGooBookSoft interviews. If any of these big companies standardized on a single interview for all candidates, it would immediately turn into something like standardized testing in high school, where students study for the test instead of learning the underlying material.
I guess my advice is that if you're a startup, take advantage of this! You can do things that don't scale. Interview in ways larger companies can't. Don't assume that the way these companies interview is the best for you.
That's what I'm disagreeing with - some small but important parts of my work are, in fact, "reason through this code cause tools can't help you."
As another example, I ask people to set up numerical/statistical algos without worrying about specific numpy details. I trust people to translate a log likelihood formula to Python, but I want to see them actually derive that formula. This isn't some hazing ritual - it's important to actually understand the statistics to avoid getting stuff wrong.
Again, the tools won't help you - if you set things up wrong, you just get a bunch of false positives.
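To make that concrete, here's the flavor of exercise I mean, as a toy sketch (the Poisson model and every name here are my own illustration, not an actual interview question). The candidate derives the log likelihood and its maximizer by hand, then translates it to numpy; the derivation is the part no tool will check for you:

    import numpy as np
    from scipy.special import gammaln

    # Model: counts x_1..x_n drawn iid from Poisson(lam).
    # Likelihood:     L(lam)  = prod_i lam^x_i * exp(-lam) / x_i!
    # Log likelihood: ll(lam) = log(lam) * sum(x) - n * lam - sum(log(x_i!))
    # Setting d(ll)/d(lam) = sum(x)/lam - n = 0 gives the MLE lam_hat = mean(x).

    def poisson_log_likelihood(lam, x):
        # gammaln(x + 1) == log(x!), which avoids overflow for large counts
        return np.log(lam) * x.sum() - lam * len(x) - gammaln(x + 1).sum()

    x = np.random.default_rng(0).poisson(lam=4.2, size=1000)
    lam_hat = x.mean()  # the MLE derived above
    print(poisson_log_likelihood(lam_hat, x))

If you skip the derivation and grab a formula that merely looks right, everything still runs and still returns numbers; the mistake only surfaces later, as exactly the kind of false positives I mentioned.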
I understand this isn't very important for the average dev writing CRUD or ETL apps. I'm just pointing out a particular use case where their constraints are reasonable.
I don't think anyone is arguing that you shouldn't ask candidates to prove domain-specific knowledge as part of the hiring process. The issue is that hiring typically relies on proxies for the skills the job actually needs, and you should design those proxies to be as close to the job as possible.
So if deriving a log likelihood formula in a high-pressure, time-limited setting is a good proxy for the kind of work the candidate will be doing, by all means do it that way.
If, on the other hand, they are more likely to be asked on the job to go away for some time, think about the questions that need answering in a data set, design an algorithm to suss out those answers, and then persuade their colleagues that it is correct, you should design your interview process to align with that.
It sounds like they are trying to do exactly what you suggest:
>We’ll ask you to solve some coding questions in a language and text editor of your choice. Feel free to bring your own laptop, or we’ll be happy to provide one. Notably, we ask candidates not to compile or run their code during this exercise, and not to refer to online resources. Our goal is not to simulate day-to-day software development — where we read docs and write lots of tests! — but rather to see how you reason about your code and input cases. For that same reason, we won’t ding you for superficial syntax errors or misremembered function names. After leaving you to work through the questions on your own, we’ll sit down together and talk through your solutions (including any ideas you didn’t have time to commit to code).
In any case, all I'm disputing is the idea that formal reasoning about algorithms (without stressing function names and compilation) is a useless skill. We have tools that make it unnecessary for many entry level devs, but that doesn't mean everyone can ignore it.
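As one concrete illustration of the skill I mean (my own toy example, not one of their questions): convincing yourself, without running anything, that a binary search variant terminates and handles the boundary cases. Running a couple of tests tends to paper over exactly this kind of reasoning:

    def first_geq(xs, target):
        # Returns the index of the first element >= target in sorted xs
        # (len(xs) if there is none).
        # Invariant: xs[i] < target for every i < lo, and xs[i] >= target
        # for every i >= hi.
        # Termination: floor division guarantees lo <= mid < hi, so both
        # lo = mid + 1 and hi = mid strictly shrink the search interval.
        lo, hi = 0, len(xs)
        while lo < hi:
            mid = (lo + hi) // 2
            if xs[mid] < target:
                lo = mid + 1
            else:
                hi = mid
        return lo

    assert first_geq([1, 3, 3, 7], 3) == 1
    assert first_geq([1, 3, 3, 7], 8) == 4
    assert first_geq([], 5) == 0

Swap lo = mid + 1 for lo = mid and the code still runs, still passes some inputs, and loops forever on others; spotting that on paper is the reasoning being tested.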
Everyone has heard of interviews that pointlessly refuse candidates the opportunity to consult Internet references, like every working programmer does every time they code anything ever. But your process goes a step further: you won't even allow candidates to compile their code. How can this possibly be helpful to your process?
To me it sounds exactly like they want you to demonstrate some semi-formal reasoning about algorithms, and you think this is not a realistic representation of work. What do you think they mean, and what do you oppose?