Personally - I don't see a lot of value in this list (I read all 50 items and watched the video and I regret wasting the time).
There are certainly some valuable concepts - but the presentation is fairly incoherent (including the video of the talk he gives) and not particularly helpful.
Many of these items are utterly unrelated: some are fairly specific to gaming (profiling/memory layout/timing), while others are basic professional expectations ("I can schedule my time well"). In neither case is there real advice for people who actually struggle in these areas.
My item number 51 would be... "I can articulate my point in a more compelling manner than an incoherent collection of 50 things I don't like"
I think it has a "so what?" problem. It sounds nice, but then... so what? I failed to understand what exactly the author wanted to convince people to do. Expectations - for whom? Say someone meets every one of these expectations but then fails to actually ship something that people want; the list is then as useless as not knowing it at all.
> I failed to understand what exactly the author wanted to convince people to do.
He’s writing the list as part of his imaginary shower argument with his boss and coworkers: they are all inferior, he’s perfect, and all the problems they are facing are easily attributed to the 50 things he feels he does that they are failing to live up to.
Does it? Almost every day we get articles on every existing medium regarding the importance of communication. Even the tech circles in 4chan parrot it. Yes, the place known for anti-social recluses parrots it, too.
And it's showing. There is so much emphasis on communication that people have started equating quantity with quality. "Just talk more, that's what great communication is about" wouldn't be an observation too far from the truth.
Far too little is invested into researching what makes quality communication. It's just the same old rational arguments against rational arguments, day in, day out. For a field entirely about managing and sharing information, be it to machines or to people, that's pathetic.
If you only compare the sermon to the hymns, it does seem a little redundant. But if I look around my workplace, clear, frequent communication is absolutely a huge need.
Again, this is purely anecdotal and at best a rational argument. Our field is already loaded with rational arguments, and most of our "this is clearly better" claims fail to reveal any improvement in case studies.
If people are so certain it is better and believe it is important, let them stake money on it and do research. Until then, the cat is both dead and alive.
Not just that, but also "am I in the habit of doing things that are conducive to situational awareness" which is perhaps the even more important bit! You can have all the situational awareness in the world now, but when it really matters is when the context changes.
No disagreement, but I'd suggest that situational awareness is an ongoing thing, so if you don't notice when context changes, you never really had situational awareness.
One funny thing about situational awareness is that when you lose it, you don't always know, because, well, you've lost your SA.
Put differently, something you consistently find when you postmortem incidents where people have lost their SA is that they thought at the time they had full SA. They only notice they've lost it when things are clearly going downhill.
So it's important to regularly re-ground yourself with reality even if you think you don't need it. That's a critical component of maintaining SA.
I found some value in this list (read it, but didn't watch) but I agree that 50 unorganized bullet points is too much. I think they could break it down into 6 or 7 larger themes: communication, being a good citizen, using a systematic development process, investing in developer tools, understanding real world end-user requirements, technical requirements, and data flows. All good advice, maybe not the best delivery.
I find it ironic that the list focused so much on communication, and yet it left me confused and lost (i.e., it failed to communicate to me what the list is about).
> 1. I can articulate precisely what problem I am trying to solve.
> 6. I have a Plan B in case my solution to my current problem doesn’t work.
> 9. I can clearly articulate unknowns and risks associated with my current problem.
These rules imply one of three things about the author:
* The author only encounters problems that have been fully solved before
* The manager gives zero weight or value to discovery
* The manager expects the whole project to have been fully specified before starting
Yikes, that's toxic! I think it's important that engineers have the mindset of understanding the problem, but that means that figuring out the problem is part of the work! Which then means that engineers should definitely have periods where they don't understand the problem, where they don't have a Plan B yet, and don't know what the unknowns are, because they can't know what the solution will be until they've started the work.
This is a strange interpretation. It does not matter whether you are working on the simplest web page or a brand new problem that no one has solved before: it is absolutely necessary that you clearly articulate the problem you are trying to solve. Even if you’re a scientist working in a purely exploratory, theoretical setting, you still have to be able to articulate your problem. You may refine that as you go, but if you sit down to work on something and realize you can’t articulate the problem, the first step is to stop and focus on defining the problem better.
Point 1 is a prerequisite to points 6 and 9. But once you’re ready to start actually building a solution, you should absolutely have an understanding of the risks and a Plan B in case your solution fails.
Nowhere in the article does it imply that you can fully understand a problem without putting in the work. The difference between juniors and seniors that the article is highlighting is that juniors will jump into implementation without knowing these things, while seniors will take time to do some research and figure these things out first. Usually, if you’re senior, you should also have the experience to do so quickly, but that’s not always possible.
It’s reasonable to have tickets that are “is this API interesting to us?” or “how do clients use this new system?” It’s not a “problem”, and there aren’t definite goals. The task is open-ended, and it’s ultimately up to the developer’s judgement to decide that they are done.
To define terms, “the problem” is the issue faced by the client or the customer. The task isn’t the problem. A team should tackle tens or hundreds of tasks while still learning to understand the full problem. This is the motivation behind iterative processes like Scrum.
The author’s message is that if you don’t fulfill all 50 requirements _at all times_, you don’t even deserve to be a software developer (not even a junior). I disagree. I think I get the message that these rules are trying to send, but I think they are, as a whole, unreasonable in a professional, business environment. Everyone should be a product engineer, but being product-focused necessitates speculative activities that exist for no other reason than figuring out the boundaries of the problem being solved. And those activities need to be repeated as the problem is being solved, to evaluate whether the problem is correctly understood. Someone, whether that’s a junior or a senior, needs to be spending time doing things that aren’t articulable so that everyone else can articulate exactly why they are working on their current tasks.
As I become more senior, I’m starting to appreciate all of the tasks that don’t go onto the sprint board and aren’t articulable. They exist because I need to know enough about Product’s or Account Management’s job to communicate with them, and I won’t know what exactly “complete” is until I’ve reached completion on those tasks. That’s part of the job, and I reject any list of rules that leaves no space for those activities.
Quoting a researcher: "If you knew what you were doing, it would not be research." (Daniel Lemire, while talking about research grant proposals.)
> 1. I can articulate precisely what problem I am trying to solve.
You can read this in multiple ways, but I'm reading it as: "I'm not participating in the process of figuring out which problems we can solve. I will only ask for clarification of your intent, never try something on my own."
Periods when an engineer doesn't understand the problem should be spent on analysis of the problem domain. "Right now I am defining the problem domain" is a perfectly valid way to work an "I don't understand the problem" task. During that period, probably zero code will be written.
> That author only encounters problems that have been fully solved before
He doesn't; otherwise there would be no talk about "Plan B" and risks. When you are actively writing project code, you should know that a solution is possible. Having a plan doesn't mean the problem has been fully solved before.
You may have a POC which doesn't end in a resolution, but it should be clear what the POC is for, and failure should be an acknowledged possible outcome.
Number 1 should not be controversial for the vast majority of IT. If you cannot articulate what problem you're trying to solve, you won't know when you're done, how to approach it, or what its value is. Everything from scope creep to underfunding to business misalignment are classic dangers of not having #1. (Note it does not read "I know exactly the specific solution and implementation details". It just says "understand what the heck you're trying to accomplish", which I would expand to "understand and agree".)
#2 I view tactically and/or operationally. It's good, when choosing a tool, approach, algorithm, vendor, etc., to know what some alternatives may be. Presumably, you consciously or subconsciously did that work anyway during the solutioning, POC, or brainstorming phase.
#3 is basic project management. Articulating risks is crucial. Ideally the developer can do it, but if not, for the love of all that is unholy, somebody should :). And you should be able to articulate some key unknowns - e.g., "I won't know the performance of this yet-to-be-developed application until we get to performance testing."
"No one has ever solved this problem before and I'm not sure where to start" seems like an important unknown/risk to let your manager/lead know about. Likewise, the backup plan can just be "we scrap that feature" or "we solve a much simpler problem". The point is to be deliberate about what you're doing and communicate potential setbacks.
Exactly. The hardest part about software is deciding how to apply the tools we have, and not knowing the answer until we start developing and uncover the true problems we must solve. A list of small tasks to complete is solvable by AI.
The problem to be solved can be "I have been given a single sentence specification and I have no idea what the boss wants".
The plan B can be "The boss cannot tell me what they want in a succinct way, and/or I am not confident what they want, so I will make various suggestions and get them to choose an option."
The unknown risks - all developers should intuitively know these unless they are very new. Stuff like "this will work, but no effort is going into how to scale it up later", or "this requires digging up some old code no one has touched for ages, which could blow up the time estimate if that code is hard to understand or refactor".
We have a problem where nothing can ever be mild criticism or the offering of a different perspective. Just because others don't see the world your way doesn't mean they are toxic.
I think this is a serious problem and should be reflected upon. Sometimes pushing your perspective starts with considering other perspectives with good intentions. Maybe then people will listen to you.
And that's precisely why 'spikes' or 'POCs' exist. It's way more common to not have all the answers if you are working on anything remotely interesting. There should be some time to explore solutions. In even more interesting cases, solving the unknowns is the entire project.
It seems that the author has never encountered the "research" side of R&D.
Where does the article imply that seniors don’t do research? How exactly do you expect that anyone can know these things without doing research? The point of the article is that juniors jump into the work without doing the research to figure these things out, while seniors take the time to figure these things out. Sure, maybe they can do that quickly based on experience, but I don’t think the article implies that seniors don’t do research.
Having a plan B for things is a bit silly sometimes. If you need to parse some data, your plan A is to write a parser, and your plan B is...to write a parser in a slightly different way? Pair program as you write the parser? That's not really plan B, it's just redoing plan A if you mess up.
Plan Bs make sense for external dependencies, but for internal work they imply your current solution has feasible alternatives, which is often nonsensical.
1) Are you solving the problem that someone (your customer / manager / stakeholder) wants you to solve right now?
what makes you think the thing you're working on is the priority, have you checked with someone or got convincing data
2) Are you taking longer than expected?
if yes, why? What's a practical solution now and for the future so everyone remains happy? If no, why do you think so?
3) Are the people in charge of your future happy with what you've delivered?
How do you know and what's the reason if they're unhappy?
4) If there's a disconnect on any of these points between you and the stakeholders, what is a practical solution that you can implement quickly?
5) Are you knowledgeable enough that people can rely on you to solve their problems in the most practical manner?
> Say it’s Wednesday, you have a project due on Friday, and you get some new task dropped on your lap. You think “I’ll do the new thing now, and make up the time for the original task by Friday”… mistake! Communicate about the conflict on Wednesday. Your product manager will help manage the timing and risk.
My product manager was fired a while ago and no one has replaced him formally yet. A C-level guy is micromanaging my team's work now and he sucks at it. I really miss being able to push back on these conflicts.
Assuming (yeah, I know...) the new task is critical, it's usually enough to go 'We can do that, but it'll push X out. Is that ok?' And if it's not ok, reasonable management will either pick a priority or get you some help to get them both done on time.
If you're often getting feedback that you 'need to get them both done by Friday.' you might want to evaluate if your management has their act together.
> If you're often getting feedback that you 'need to get them both done by Friday.'
My response is always that I’ll try, and we’ll see which one ends up half-finished on Friday. This either gets them to pick one, or assume everything will be fine until Friday, when it blows up in their faces.
I don’t think I’ve ever had someone (even the manager) blame it on me for some reason.
This guy in particular has collected some serious credibility. In the past he's worked at Insomniac Games. He is also credited with popularizing Data-Oriented Design and has delivered at least one talk that went viral; the video thumbnail of him in his red flower shirt has become a meme for a no-bullshit, requirements-focused engineering approach. Go check him out on YouTube.
You wrote: <<This guy in particular has collected some serious credibility.>>
I'm confused. The blog post is written by Adam Johnson. Are you referring to Mike Acton or Adam Johnson?
I never heard of Adam Johnson before this HN post. From his books, he appears to be an expert in Django. Yes, I agree about Mike Acton and his ideas around Data-Oriented Design. It sounds like a very interesting approach to programming in a resource-constrained environment.
I'm referring to Mike Acton (as you already figured out). As I understand it, the author is only summing up a talk by Acton to written form.
> It sounds like a very interesting approach to programming in a resource constrained environment.
It's more of an approach that is maintainable and straightforward, and only coincidentally (well, not really...) also fast to execute.
His expectations around producing documentation go only as far as "Think about what documentation or data users need to understand and use your solution."
That seems like rather a low bar, compared to items like "I can articulate how all the data I use is laid out in memory."
I'd prefer to live in a world where a professional software engineer was expected to write documentation, and expected to be competent at it.
> I can articulate how all the data I use is laid out in memory.
That's not a high bar, it's an arbitrary hoop. It'd be like saying "I always know which processor cache my variables are sitting in". In modern languages it may be literally impossible to look at a block of code and know what's sitting in the heap vs. on the stack, and the heap is often broken into many different components only fully understood by the compiler/interpreter/VM writers. We want to abdicate responsibility of this kind of memory management to the interpreter, just like we want to abdicate responsibility for handling processor cache levels. If you can articulate how all the data you use is laid out in memory all the time, you are majorly micro-managing the runtime.
Not sure if you're familiar with Mike Acton (as mentioned in the article). One of his key points of focus is data-oriented design, and that when designing software, ignoring the architectural realities of the hardware is ignoring one of your responsibilities as an engineer to deliver performant software.
Now, it's possible to argue that writing performant software is not important. The prevailing modern sentiment definitely seems to be "the compiler/interpreter takes care of that". But given his track record of delivering highly performant software, and the trend in computing towards sluggishness, I'm trending more and more towards his camp than the "don't micro-manage the runtime" camp (which is starting to feel more and more like a thinly-veiled "I don't want to have to think about it").
I don't think it's thinly veiled at all. Some people might be deluding themselves into thinking they aren't losing something by abstracting away the intricacies of the runtime but I imagine most are actively thinking they are okay with this tradeoff. Instead they are allowed to focus more on the domain problems and let their users tell them when it gets to be too slow. Whether it's the right trade off probably depends on what their goals are.
> Now, it's possible to argue that writing performant software is not important.
The quote I'm arguing against is "I can articulate how all the data I use is laid out in memory." Indeed, writing performant code is not important, most of the time. It is critically important a small amount of the time (actual percentages heavily dependent on the type of software), and yes, in those times, understanding the architectural realities of the hardware is somewhere on the list of things you need to understand to do so, just below a solid understanding of complexity analysis, a wide knowledge of useful data structures, proper design of queries and use of indexes (if relevant), etc. A good software engineer does not say "I always know exactly how all my data is laid out in memory", they say "I know when and how to care about that, and the rest of the times I ignore it." Just like they do with many, many other concerns. Anything else is just premature optimization. The most important problem solving skill by far is knowing what you can safely ignore.
> We want to abdicate responsibility of this kind of memory management to the interpreter, just like we want to abdicate responsibility for handling processor cache levels. If you can articulate how all the data you use is laid out in memory all the time, you are majorly micro-managing the runtime.
Unfortunately, in practice, it's not possible to be oblivious of the layout of data in memory if we want to write fast code. The CPU/memory speed disparity graph [1] shows the new reality for programmers: the slowest part of a program is bringing data from RAM into the CPU registers. Fortunately, modern CPUs have very fast caches that help amortize this cost and it's the responsibility of the programmer to organize data to take advantage of that fast hardware—the compiler cannot do it. That's why two functions, with the same algorithmic complexity, and which compute the same result, can have an order of magnitude of difference in performance between them [2]. The famed sufficiently smart compiler that can do those transformations does not yet exist as far as I know.
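To make that point concrete, here is a minimal sketch (my own illustration, not the benchmark from [2]): both functions below do the same O(rows·cols) work on the same buffer, but one walks memory in storage order while the other strides across it, so their cache behavior, and typically their runtime, differ dramatically for large matrices.

```cpp
#include <cstddef>
#include <vector>

// Same data, same result, same algorithmic complexity; only the
// memory access pattern differs.

// Walks the buffer in storage order: each cache line is used fully
// before moving on, so the hardware prefetcher keeps the CPU fed.
long long sum_row_major(const std::vector<long long>& m,
                        std::size_t rows, std::size_t cols) {
    long long total = 0;
    for (std::size_t r = 0; r < rows; ++r)
        for (std::size_t c = 0; c < cols; ++c)
            total += m[r * cols + c];
    return total;
}

// Strides `cols` elements between accesses: for large matrices nearly
// every access touches a new cache line, which can cost an order of
// magnitude in wall-clock time despite identical complexity.
long long sum_col_major(const std::vector<long long>& m,
                        std::size_t rows, std::size_t cols) {
    long long total = 0;
    for (std::size_t c = 0; c < cols; ++c)
        for (std::size_t r = 0; r < rows; ++r)
            total += m[r * cols + c];
    return total;
}
```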
If the slowest part of your program is waiting for RAM to get into your CPU registers, then you have one of the most blazingly fast programs ever written. Grats! The vast majority of software has much, much, much lower-hanging fruit than that: things like O(n^2) algorithms where O(n) exists, SELECT N+1 issues, missing database indexes, missing easy wins with caching, repeated work, threads blocking each other, loading more data than necessary, throwing and then suppressing exceptions everywhere, bad netcode, doing work serially that could be parallelized, etc. In this software, fixing these issues will be 10x easier and result in 10x larger speedups than worrying about organizing your data to get loaded from RAM into CPU caches more quickly. That makes this advice counterproductive for most programmers to hear.
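A sketch of that first bullet, since it is the fruit most codebases actually leave on the tree (a hypothetical example, not from anyone's codebase in this thread):

```cpp
#include <cstddef>
#include <string>
#include <unordered_set>
#include <vector>

// O(n^2): rescans the already-seen prefix for every element.
bool has_duplicate_quadratic(const std::vector<std::string>& xs) {
    for (std::size_t i = 0; i < xs.size(); ++i)
        for (std::size_t j = 0; j < i; ++j)
            if (xs[i] == xs[j]) return true;
    return false;
}

// O(n) expected: one hash lookup/insert per element. On large inputs,
// a change like this dwarfs any cache-layout tuning.
bool has_duplicate_linear(const std::vector<std::string>& xs) {
    std::unordered_set<std::string> seen;
    for (const auto& x : xs)
        if (!seen.insert(x).second)  // insert failing means a duplicate
            return true;
    return false;
}
```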
You want to (I do too!), but the abstraction eventually leaks (you get unexpected behavior in long-running processes because of gc/libc/kernel issues, or you need very specific control performance-wise, etc.), and you end up having to know about it.
Knowing how your data is structured in memory is a trivial exercise for anyone working on a game engine like the author of this talk. You don't even need to think about it. The layout of every data structure used by the engine is known and chances are you have implemented a bunch of them yourself.
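For anyone outside that world, the canonical illustration is array-of-structs vs struct-of-arrays (a textbook data-oriented-design sketch, not code from any particular engine):

```cpp
#include <cstddef>
#include <vector>

// Array-of-structs: each particle's fields sit together, so a loop
// that only needs positions still drags velocities and lifetimes
// through the cache with every element it touches.
struct Particle {
    float px, py, pz;
    float vx, vy, vz;
    float lifetime;
};

// Struct-of-arrays: each field is its own contiguous array. A pass
// that integrates positions touches only the bytes it needs, and the
// layout of every piece of data is knowable from the declaration.
struct Particles {
    std::vector<float> px, py, pz;
    std::vector<float> vx, vy, vz;
    std::vector<float> lifetime;
};

void integrate_positions(Particles& p, float dt) {
    for (std::size_t i = 0; i < p.px.size(); ++i) {
        p.px[i] += p.vx[i] * dt;
        p.py[i] += p.vy[i] * dt;
        p.pz[i] += p.vz[i] * dt;
    }
}
```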
I really love these points as a North Star, but these would be mostly aspirational for teams I've been on.
I work on a team now where we get huge scopes of work, and a couple of people will tear that work down and would answer these questions as they do so. However, most teams I've been on deal with far more interrupts than the team I'm on now does. Those interrupts are lamented but justified by the business, and they definitely impact an engineer's dedication to a given project. Interrupt-driven work is a sort of split-brain problem that I think gets in the way of answering these kinds of questions, because it removes the time an engineer would otherwise spend entrenching themselves enough to know the answers and the problem space.
It sounds like you don't think these expectations are unreasonable in a healthy environment. Is that what you're getting at? I believe it's possible to hit 50/50, and I think I personally do (with the caveat that I don't do all of these things explicitly, every time, on every project), BUT I'm also on a high performing team in a healthy work environment.
> with the caveat that I don't do all of these things explicitly, every time, on every project
which is totally fine, most of those things don't need to be done explicitly. (as such they take less time than many commenters seem to think they would.)
being able to articulate something doesn't mean that you've taken the time to do so. just that you've generally thought about your task enough before starting it that if your boss asks you that question, you can produce a coherent answer in a timely fashion.
I think the expectations are fine. In a sense, I actually judge the engineer less than Mike does according to these criteria. If you can answer these questions, I think it says more about how your team organizes and accounts for work than about the diligence and knowledge of a single engineer.
I didn't have time to go over all 50 but I do like #3, "I have confirmed that someone else can articulate what problem I am trying to solve."
Too often I see devs go down some deep rabbit hole to solve a problem no user has ever had, and I wish they'd have talked to me first about it.
But the author of this article could have generalized a lot here, many of these could be summed up by the (admittedly less interesting) "I know how to communicate clearly and do so frequently with my team."
Items 1-11 (+ a few others) are essentially, "show me the ROI (in $) of the work that you are doing". That is fair but ROI driven organizations tend to have very little appetite for risk (or tend to be command and control driven). Sometimes however you can't connect the dots looking forward; you can only connect them looking backwards. So, you have to trust that the dots will somehow connect in your future. You have to trust in something - your gut, destiny, life, karma, whatever (I might have heard someone else say this once, not sure...).
I view these less as actionable items, and more as character hallmarks. No you don't need to actively be making sure you're doing All The Things by iterating through this every day. Yes these are things that should be captured by your company's best practices, and your own professional integrity.
Expecting an engineer to do all of this even when they are at odds with rest of the execution machinery is simply not practical. It leads to heroic behaviours and burnout.
I hope this guy has a complementary list of 50 things for other roles.
You only implement the Plan B if Plan A fails. Plan B doesn’t have to be elaborate. It can be as simple as, “I’m going to look into using this third-party service to solve 99% of this problem, but if it isn’t a good fit or it’s too expensive, I think we can put together a home-grown 80% solution in a couple of sprints.”
Well, one case where it can make a lot of sense is when Plan B is a quadratic/exponential solution that can be written in one day, while Plan A is a much faster solution that is complex to write or research.
Which, if any of the list of 50 do you think is unreasonable? I don't always explicitly do all of these things all the time on every project, but I'm certainly _capable_ of doing so. For example, working in backend web dev, I generally don't go into it thinking explicitly about, say, memory usage, as long as my monitoring tools are not telling me there's a problem. Of course, there are exceptions, which is why I said "generally" (for instance, if I know I'm making a query to the DB that's going to yield 100k results, I'll definitely be thinking about memory usage), but I also don't just YOLO it and see if things crash.
None of them. But all of them combined. And they never are all part of the same job description; except perhaps in an organization small enough to not have job descriptions.
I disagree. I think the list of 50 is an entirely reasonable set of expectations for a senior (or at least staff+ level) engineer, with the caveat in my previous comment that not all of these things need to be done explicitly, every time, on every project. Like another commenter said, being able to articulate something isn't the same as explicitly doing it.
I also realized I'd put a small asterisk on "I have recently profiled the performance of my system," as well. I would expect that an engineer would do at least some minimal profiling of their code in order to make sure it's not too slow in itself and doesn't excessively slow down any calling code, but for the most part, with a running production system, it's generally safe to assume that performance is good enough unless you've been shown or told otherwise. And I think that all still fits into the spirit of the expectation as well.
At the end of the day, the people signing paychecks care more about their priorities than they do about these 50 points. But they can be useful to keep in the back of your mind when making decisions that aren't governed by anyone else.
> I can articulate precisely what problem I am trying to solve.
It's appropriate to put this one first because it shows up in so many of the items that follow.
Easier said than done when there's a deadline pistol pointing at your head. You gotta get stuff done. Show progress. Writing any kind of specification is just a slippery slope to Waterfall, geez. There's no time for any of that namby-pamby talking-to-and-watching-users BS. They don't know what they want anyway! Real developers ship!!
"I can articulate precisely what problem I'm trying to solve" is not just on that person. It is a function of their collaborators – product/program/engineering managers and other engineers around them.
If you are in a culture where it is acceptable to send tasks at each other without much context, and with harsh deadlines, and with a perf management system that rewards execution under such conditions, then you will get precisely the opposite of someone who can articulate what problem they are trying to solve.
In fact you will get a culture where asking questions for deeper 'why' understanding will be seen as disruptive.
Excess time pressure is behind so many problems. It's the thing I'm always most concerned to manage when I start a new gig. That said, shipping early and often is one of the best ways to mitigate excess time pressure, because it helps build trust between stakeholders and the team. The longer the release cycle, the more likely it is that business stakeholders will get antsy and start doing harmful things.
> I am not actively avoiding any uncomfortable (professional) conversations.
> If there’s something wrong at work, don’t put off talking about it.
> I am not actively avoiding any (professional) conflicts.
> If you’ve noticed something is going wrong, whether technically or communication wise, get those conflicts out in the open. Letting them stew never helps.
This requires a prerequisite of trust which I think is more of a rarity than commonplace. People will speak up about stuff, but not all stuff.
I guess that's inspired by the famous "plan to throw one away (because you will, anyhow)", from Fred Brooks if I remember correctly.
IMHO it makes sense if Plan B is much simpler, but not as efficient as Plan A (remember, this is a game developer; everything is about efficiency). You build Plan B as a prototype, and you can still fall back on it (and iterate on it) if Plan A fails.
If Plan B is more complicated than Plan A, doing it first sounds... dumb.
If the point was "write a prototype/less efficient version prior to the more time consuming but better quality solution" then maybe they should have written that.
The connotation with plan B is you do plan A, then if it doesn't work, you do plan B. If you are supposed to implement plan B first then really plan B is plan A.
I would add: can explain the level of complexity being introduced and how it will be managed.
More complexity raises the cost to support and develop new features in a non-linear way.
If you do features A, B, and then C; feature C might seem to be the most difficult. But if you do A, C, and then B; feature B might seem to be the most difficult. If you manage complexity poorly in features A, B, and C; then perhaps feature D is too expensive to ever accomplish -- not because it's inherently difficult, but because it's difficult given the complexity already present due to A, B, and C.
To me, this is the fundamental challenge of software engineering. Other kinds of engineers must deal with complexity as well, but it's somewhat more contained due to physical constraints. Software complexity is unrestrained.
This is good. It’s been a while since I found an article about software development worth bookmarking.
I think anyone should be expected to write a simple bug report, with reproduction steps and expected versus actual result. There are a lot of people out there who don’t do this simple, necessary task well!
If you are truly working on something innovative, chances are that Plan B is already up and running. It is your competitor's solution that you are trying to replace with a better one.
For example, my new data management system can also do many traditional relational database table operations. If you want to do fast analytics or queries, then it can do it better than the competition. For basic stuff, other RDBMS solutions will surely get the job done; but if you want to build a pivot table against values in a 10M row relational table then this is the tool you want: https://www.youtube.com/watch?v=2ScBd-71OLQ
> 39. I never use the phrase “future proof” when referring to my work.
> Future-proofing is “100% a fool’s errand”. “You can’t pre-solve problems you have no information of.”
You can't pre-solve problems you are unaware of but in most software engineering two problems are very predictable:
1. The system will have to handle more load in the future.
2. Someone will eventually want to change the business logic.
It is wise to write software in a way that you are well positioned to resolve those problems as they arise. When I use the word future-proofing in my day to day work, this is what I am referring to.
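In other words, you can't pre-solve the unknown change, but you can keep the likely-to-change rule behind a single seam. A minimal sketch of that distinction (illustrative names of my own, not from the article or talk):

```cpp
// "Future-proofing" in the defensible sense: the business rule most
// likely to change is isolated behind one interface, so the eventual
// change lands in a single place. We are not predicting what the new
// rule will be, only where it will go.
struct DiscountPolicy {
    virtual ~DiscountPolicy() = default;
    virtual double apply(double price) const = 0;
};

struct FlatTenPercent final : DiscountPolicy {
    double apply(double price) const override { return price * 0.90; }
};

// Callers depend only on the seam; swapping in next quarter's policy
// does not touch them.
double checkout_total(double price, const DiscountPolicy& policy) {
    return policy.apply(price);
}
```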
I mean yeah this is great on paper, and applicable for bigger projects, but for day to day stuff there is no way I’m gonna pull another engineer in to run through these steps.
This kinda stuff always reminds me of someone making a PR for something like a React component that lets you choose a time zone. It can be a 100-line simple thing, or it can be an 800-line monster with separate files for TypeScript types, thorough tests, etc. I prefer the former, but I feel the author would expect the latter.
On one hand, well yes, everything mostly makes sense.
On the other, I can easily imagine the reaction from most project managers if they'd asked me to put together an estimate for fully achieving all of these on any given project. Some of them would involve sitting in on business strategy meetings, which is a pretty far cry from the level of involvement I generally get! It's sometimes hard to even get hold of representative hardware to test on.
> I can articulate why my problem is important to solve.
At this point in the list we start to diverge from what engineers/developers are allowed to know in the modern enterprise. Here is where we start to get push back from the Program/Product managers, Scrum Masters et al. To proceed further down the list is to remove the added communication channels between engineering and the business.
All good as far as it goes, but not very common in the industry. Poor management and no mentorship, and this is what you'll get.
Now what are everyone's expectations of Mike Acton? If someone sends me an email like this before I'm about to start on a job -- I just know he must be feeling some pressure.
This is useful for a whole lot more than SE. I've started teaching a Research Methods course this semester, and I think I might hit the students with this list just for giggles.
It would make my life so much easier as a product manager if my engineers felt comfortable questioning themselves and me about the value of what they are doing.
This is a great resource, thank you! It gives me a lot of actionable, concrete issues to work on. Maybe I'm just very early in my career, or it has to do with the subfield I am in (bioinformatics, machine learning), but I should have been fired 20 times over by these standards.
The reality is that in a healthy workplace, you could whiff on 25-30 of these and not get fired, because a healthy workplace gives employees guidance and room to grow. Eventually with enough time in that place you'd be at 40+. Of course, some of them (like professionalism) are basically non-negotiable. But nobody starts at 100%.
Let's be honest here, a green junior or intern could easily whiff on 45+ of these things and not get fired, as long as they're in a healthy workplace and they have a growth mindset. I literally tell interns at my company that I basically expect them to know fuck all, and that their opportunities at the company are mostly limited by their work ethic, their willingness to ask questions, and their desire to accept and incorporate feedback.
> I have recently profiled memory usage of my system.
If you are building CRUD apps, please don't get out the profiler unless you have a performance problem. Even then, if it's CRUD, you're probably not going to look at memory usage first.