Someone is getting their ego boost. What a waste of time and resources in creating this. Surprised to see it come up on the feed. Unfortunate, just like the ship stuck in the Suez Canal. Hope all the egos are boosted enough to float away.
Most hiring interviews revolve around the interviewer's knowledge of a particular domain, language, or framework, since it is easier for them to evaluate answers in those areas given the limited time window they have. If, according to this matrix, the interviewee's level of expertise in each row differs from the interviewer's, then the candidate is bound to appear to fail. I am not implying that this is a bad tool for hiring, just that it does not give a definite impression of the candidate.
Over the six years I have worked as a software developer, my work has been split mostly between Python and Ruby, solving problems in different domains based on what my job required. However, when interviewing at Ruby or Python dev companies, a lot of the questions asked are very specific to language-based design patterns/concepts, which I end up being unable to answer because I don't know the exact vocabulary or because I have never encountered them. Does that make me a bad hire? Maybe. Maybe not. Does it mean the interviewer is asking the wrong questions? No, they are asking questions about what they have worked on and what is possible for them to evaluate.
After realizing this while interviewing candidates myself, I allow candidates to search for concepts they are unaware of and demonstrate what they understand, or how they would use the concept, given 5-10 minutes of going through any documentation they find (or that I can provide). Because (1) there is a limited amount of information a person can store and reliably retrieve from memory, and (2) just because someone is not aware of a design pattern/concept does not mean they cannot adopt it.
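To make this concrete, here is a hypothetical version of the exercise (the pattern, names, and numbers are mine, purely for illustration): a candidate who has never heard the term "strategy pattern" reads a short doc page for a few minutes, then sketches how they'd apply it in Python.

    # Hypothetical interview sketch: the candidate just read about the
    # "strategy pattern" and applies it to a toy pricing problem.
    from typing import Callable

    # Each "strategy" is an interchangeable pricing rule.
    def regular_price(amount: float) -> float:
        return amount

    def member_discount(amount: float) -> float:
        return amount * 0.9  # 10% off; an arbitrary example rule

    def checkout(amount: float, pricing: Callable[[float], float]) -> float:
        # checkout doesn't need to know which rule it was handed.
        return pricing(amount)

    print(checkout(100.0, regular_price))    # -> 100.0
    print(checkout(100.0, member_discount))  # -> 90.0

If a candidate can get that far from the docs alone, it tells you far more than whether they could recite the pattern's name from memory.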
Articles like this remind me that there's an entire world of people who have to work day-to-day in environments much more hostile to creativity and fun than I've ever had to endure.
This feels like something I'd come across in an archeological dig, something that tells us about our past, and how far we've come since then.
ITT: How to hire formulaic robots and reduce an entire profession to paint-by-numbers. This is really no different from current hiring, though, in that you force people to learn arcane algorithm skills they will likely never use at any software company, and you end up with entire books written on "cracking the interview" rather than on being a good engineer. The best companies I've ever seen actually looked at the work I had done closely enough to assess what kind of skills I had. But hiring is about being lazy. Good luck finding many companies that will actually treat you like an adult these days.
First of all, the third column here is highly editorialised. If you look at anything related to languages, he always lists Erlang, Oz, Prolog -- I wonder why? I won't go into a diatribe on this-or-that nonsense, but it would be fairer to talk about languages by their capabilities, i.e. either include more languages like OCaml (which is my hill to die on for "high-society" languages) or just say "has pattern-matching and concurrency primitives".
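For what it's worth, "has pattern-matching" no longer even implies an exotic language. Here's a minimal sketch in Python 3.10+ (the Shape types are made up for illustration) of the structural pattern matching that used to be an Erlang/OCaml calling card:

    # Structural pattern matching in Python 3.10+, destructuring values
    # by shape much like a `case` expression in Erlang or OCaml.
    from dataclasses import dataclass

    @dataclass
    class Circle:
        radius: float

    @dataclass
    class Rect:
        width: float
        height: float

    def area(shape) -> float:
        match shape:
            case Circle(radius=r):
                return 3.14159 * r * r
            case Rect(width=w, height=h):
                return w * h
            case _:
                raise TypeError(f"unknown shape: {shape!r}")

    print(area(Circle(radius=2.0)))           # -> 12.56636
    print(area(Rect(width=3.0, height=4.0)))  # -> 12.0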
As for the Systems Programming section, does he really think that knowing how a compiler works is a systems-related topic? And why is that level 1? Shouldn't the first level be understanding things like system calls, filesystems, drivers, kernel operation, processes, etc.? Once again, I think this matrix was constructed with little consultation from domain experts in that area.
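For contrast, here's roughly what I'd expect at a first rung (a minimal sketch; it assumes a Linux box, and the path is just an example): knowing that high-level I/O bottoms out in system calls. Python's os module wraps the raw syscalls directly:

    # High-level I/O bottoms out in syscalls: os.open/os.read/os.close
    # are thin wrappers over open(2), read(2), and close(2).
    import os

    fd = os.open("/etc/hostname", os.O_RDONLY)  # open(2): get a file descriptor
    try:
        data = os.read(fd, 4096)                # read(2): copy bytes from the kernel
    finally:
        os.close(fd)                            # close(2): release the descriptor

    print(data.decode(errors="replace"))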
I agree for the most part with the criteria for programming itself, but I need to nitpick the claims about IDE use. Why is "has written a custom macro" a tier-3 requirement for IDE proficiency? Wouldn't it be more reasonable to say "has written a plugin"? And what does this really have to do with programmer competency? A beginner could write a macro or plugin for VS Code, for example. So I don't think it's a good marker.
I guess, like most discussions on programming, it is about 60:40 in terms of objectivity. The remaining 40% is largely grounded in the author's personal experience, and you can tell he had trouble deciding what top-tier programmers would be capable of, preferring complexity-increasing exercises over complexity-reducing ones. For me, the latter is the sign of an expert.
> As for the Systems Programming section, does he really think that knowing how a compiler works is a systems-related topic? And why is that level 1? Shouldn't the first level be understanding things like system calls, filesystems, drivers, kernel operation?
I think the idea is that most people start exploring the world of programming from the higher layers and work their way down. People use software, then they write software with tools (languages), then they try to understand the tools, then they try to understand what the output of the tools means at a yet deeper level, going down the rabbit hole all the way (to physics? but I guess the line is drawn somewhere earlier).
This may be wrong, but it's obvious from the rest of the matrix that it is probably just based on how the author and maybe their friends gained "proficiency".
None of this is generally applicable. I do think the matrix is still valuable, though. It provides a view of how some people may evaluate competence (even if misguided wrt ordering/levelling) and provides a set of goals that are not bad per se. Being at the highest levels mentioned there probably does, in fact, translate to a high degree of experience and competence.
I would go further and say that it's useful not only for recruiters but also for hires, to understand what general skill sets they can expand over a period of time.
Blockchain dev, Android dev, and AI/ML dev are domain specific, but each of those maps onto at least 3-4 rows of the general PCM. There will be a domain-specific competency matrix on top of the PCM.
A lot of the dismissive complaints boil down to "this is objectifying programmers", and yes, I get it. But now that I'm on the other side (VP, but still coding) and actively recruiting, I've realized that some things I took for granted in myself and my good engineers are still missing from a large number of the candidates I've interviewed so far.
I usually ask the candidate to do a live coding test over Zoom: "Here's what you need to code; please share your screen while you're doing it so I can see how you work. You're free to Google, of course, because that's what we all do at work. The guy who wrote the quiz did it in 5 minutes, I did it in 10, and I'll give you 20 just in case. Please start now."
Then I watch them code. After that, we discuss their approach. But usually the second step never happens, because as soon as the timer stops and I see their code, the code (and how they wrote it) tells me everything.
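For a sense of scale, here is a hypothetical quiz of roughly that size (the parent doesn't say what theirs is): count word frequencies and return the top N, ties broken alphabetically.

    # Hypothetical 20-minute exercise: top-N word frequencies,
    # ties broken alphabetically.
    from collections import Counter

    def top_words(text: str, n: int = 3) -> list[tuple[str, int]]:
        counts = Counter(text.lower().split())
        # Sort by descending count, then alphabetically for stable ties.
        return sorted(counts.items(), key=lambda kv: (-kv[1], kv[0]))[:n]

    print(top_words("the cat sat on the mat the cat", n=2))
    # -> [('the', 3), ('cat', 2)]

What you learn from watching isn't just the final code: do they test as they go, do they name things sensibly, do they reach for the standard library or reinvent it.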
That's why programming has declined from Software Engineering to "coding". I'm sure there are similar scoring tables in Civil Engineering, and engineers there don't whine because they don't know X or Y principle.
Some examples:
Being asked the following questions in an interview: ‘You are going to build the bridge that you were given as a case study earlier this morning. What are the ground investigation steps that you need to undertake? What sort of tests will you do?’
At the end of an interview, being given some beam sketches and asked to draw the shear force diagram and bending moment diagram (a worked version of the simplest case follows these examples).
Being asked to draw a strip foundation and build up to how you'd construct a suspended slab.
Being asked what factors you would need to take into account if asked to site a railway station.
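For readers outside civil engineering, the beam question above has a textbook simplest case (my worked example, not the commenter's): a simply supported beam of span $L$ with a single point load $P$ at midspan.

    % Reactions, by symmetry:
    \[ R_A = R_B = \tfrac{P}{2} \]
    % Shear force diagram: constant on each side of the load.
    \[ V(x) = \begin{cases} +\tfrac{P}{2}, & 0 \le x < \tfrac{L}{2} \\ -\tfrac{P}{2}, & \tfrac{L}{2} < x \le L \end{cases} \]
    % Bending moment diagram: triangular, peaking under the load.
    \[ M(x) = \begin{cases} \tfrac{P}{2}\,x, & 0 \le x \le \tfrac{L}{2} \\ \tfrac{P}{2}\,(L - x), & \tfrac{L}{2} \le x \le L \end{cases} \qquad M_{\max} = \frac{PL}{4} \]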
How do you compress months of product development into a Zoom session? Answer: you don't.
That's why they don't have you build a bridge over Zoom, just like nobody asks you to build a complete app over Zoom. They just ask the engineer basic engineering questions to make sure the guy is not a dud.
The IDE section is a bit dated. As a hiring manager, my belief is that IDEs are good enough (and even have collaboration features that are useful in the work-from-home epoch) that it's not unreasonable to never have used vi/vim/emacs.
If you're doing server administration, not being able to use vi is unacceptable; but these days everything is deployed in containers, and logging into raw Linux systems is not really a thing as much (your company may beg to differ, and that's fine).
Is nano really That Bad? It's always been good enough for my needs, and in the cases where it's not, I generally hop into Sublime or VS Code, make the changes I want, then paste the result into my terminal.
The place nano is best is in environments that aren't otherwise set up for working in, e.g. a fresh Linux install or some ephemeral server/container. If you can work remotely instead, e.g. with VS Code Remote, great.
I'd suggest that if you know a few vi keybindings, vi is better than nano; e.g. navigating through a document is quicker in vi than in nano.
It's not that nano is bad, but in server administration you cannot be sure what exists on the other end, and due to security rules you can't always install what you want. There will, however, always be vi. Nano might not be there when you need it (I have learned this the hard way).
It isn't, but frequently the default editor is set to vim. That said, I would question the judgement of anyone who judges you based on your favored editor.
In interviews I ask candidates what their favourite editor is. Not to judge the choice, but to try to gauge if they care enough about the tools they use to have looked at different ones and taken a position.
I feel there is a strong correlation between overall competency and whether someone cares about their tools, and their passion for their editor is one way of getting hints of that.
There needs to be a -1, where someone is just faking it; and maybe a -1(') where someone is really good at leetcode but terrible when it comes to architecting simple systems, writing tests, or being a good team player.
Note that -1(') devs might still be a good fit for a place like Google, where shaving 10ms off a good chunk of requests worldwide is worth tons of money, and there are plenty of other teams that can make up for that dev's shortcomings by building protective infra.
Most of these are OK for roughly scoring developers. You might want to weight the importance by position and calibrate against the other candidates/employees in the org.
However, one criterion I disagree with is years of experience. I suspect (based on personal experience) that the variance of the other categories with respect to years of experience is a lot higher than one would think. So there's no real utility to including it.
Are you taking credit for this? I have seen this Matrix before, posted by the original author, and you should credit him.
In any case, I agree with the majority here that this tool is akin to standardized testing, which at long last is being seen for what it is: a bad tool for measuring objective quality.
Makes everything sound so mechanical and boring. Do they want normal people or some bot-like creature they can just assign numbers in a matrix? Depressing.
You don’t want to hire boring bot-like people, but you also don’t want a totally subjective hiring system full of bias. I think having an objective scoring of some sort is a good reference even if it’s not the final decision.
Here’s the thing: there is precious little if any empirical evidence suggesting that what is objectively measurable (e.g. solutions to textbook CS problems) has utility as an indicator of engineering competence. There are exceptions of course, for example in roles requiring a person implement such things as for a library, but these aren’t the rule.