Ask HN: When have you taken a decision in code outside your domain of expertise?
145 points by mprev on May 5, 2019 | 99 comments
I'm writing a book about the role of software developers in the global economy.

One of the book's themes is that developers hold a strange kind of power: we get to make decisions in code that affect end-users but only other developers (and sometimes not even then) can really hold that code to account before it goes into production. Seemingly mundane decisions in code can have profound consequences.

I'm gathering stories from people who've had to take decisions like this and especially where it was in a domain for which they had no experience.

I'd love to hear from people on HN who have stories to share. I'm also interested in hearing from people who dispute that this is even a thing.




This happens all the time I've found, particularly in "agile" processes. You are coding away implementing something and realise there is some corner-case or edge-condition that was not considered in the original design and/or UX spec. Stuff that only becomes obvious once you are staring at the code you've just written and are thinking "What should we do if this is null?"

So you unilaterally implement something to handle that condition.

Especially in agile projects with tight deadlines and the idea of continual refactoring etc, rather than block further work on that feature while you wait for the product owner/business analysts/UX team/etc to come up with an answer and get back to you, you check-in your "best guess" implementation and move on, with a TODO or bug left open to revisit it.
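In practice, the check-in often looks something like this (a made-up sketch in Python; the ticket number and names are hypothetical):

    # Hypothetical "best guess" for a case the spec never covered:
    # what to do when an order has no shipping address.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Order:
        billing_address: str
        shipping_address: Optional[str] = None

    def label_address(order: Order) -> str:
        if order.shipping_address is None:
            # TODO(PROJ-1234): UX spec is silent here. Best guess: fall
            # back to the billing address rather than blocking the feature.
            return order.billing_address
        return order.shipping_address

    print(label_address(Order(billing_address="1 Main St")))  # -> 1 Main St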

A lot of the time (maybe 75%+ in my experience) the developer's instinct tends to hang around as the final solution, even if the developer is not an expert in the context of the users of the application they are writing. That kind of dual expertise is rare in my experience - generally developers are developers, and not likely to coincidentally be experts in the subject area of the application's use cases unless it is some niche area. People writing software for surgery robots are probably unlikely to be expert surgeons too, I would imagine? Not impossible, but I'd want people to be an expert in surgery robot programming, or an expert in surgery, and not a half-arsed kinda-ok-done-a-bit-before level of skill in either area!!


I had an interesting case a few years back... I was supposed to follow the spec exactly. However, in all the UI/UX diagrams my team and I were given, there was no “close window” or “back” button. This included things like pop-up notifications. I went ahead and implemented them anyway, then sent the screens to the design team to ask if I should remove them (it took under an hour of work). I was later reprimanded for not following the spec, BUT they kept my implementation and design.

This is one experience that sticks out because I was reprimanded. However, I’d say on every project I’ve been on, at least 10% of the final product was designed & implemented by engineers on the spot.

Beyond UI design decisions, engineering can have a major impact on UX. Every single function can make or break the experience. That’s why we, as engineers, wield a lot of power.


I've dealt with a designer before that said very literally and in no uncertain terms that there was no need for me to be involved in the high level feature design because he only needed to talk to the frontend engineer "actually implementing it" -- deeply ironic, given this feature was 95% core backend engineering, and so was in my wheelhouse. And so I ended up silently building the feature the way I suggested, and although he hemmed and hawed, he (and the rest of the team) later on conceded I was right.

FWIW, I view this more as a failure of product management than anything. This should never have become an argument or adversarial interaction between engineering and design, and its occurrence was to me a sign that things had procedurally failed quite a bit earlier. I'm happy I was able to save the project and get it to a successful launch (and certainly took the commensurate amount of credit), but I do wish some designers and product folks had more of a sense of awareness of how frequently this happens.


I think the most interesting things to create are often the work of groups of people who all have very different areas of expertise. Everybody's working together towards the same final product, but everyone has their own mental model of how that process works, and no one person can really see the full picture (like the story about the blind men and the elephant).

I've worked in different sorts of crossfunctional teams over the years at different companies in different industries- sometimes the other people are designers, or scientists, or people from other engineering disciplines who aren't down in the weeds like I am on the software side of things. Or sometimes they're the customers/clients/whatever, and are making specifications in a technical vacuum.

I think it's 100% pure human nature to always assume that one's own piece is the most complicated or meaningful part of a project, whereas other people's roles are more straightforward implementation details. I don't think it's from selfishness or conceit, rather it's plain old cognitive bias that we're all susceptible to. You know much more about all the stuff that needs to be done in your part of the system, and are painfully aware of the risks and challenges. You generally don't know nearly as much about the tasks other people are doing in other layers or fields, and from the outside looking in, it seems like everything is always more simple and straightforward than it actually is.

Even people like managers or designers who generally aren't concerned with minute technical details are susceptible to this type of cognitive bias. They play much more creative roles, in which they're literally deciding what the final thing is going to be, but with little or no guidance for how it should be done. That's up to the worker elves who have to assemble the component parts into the desired gizmo. Since the worker elves' reason for employment is assembling parts into gizmos, the people playing more purely creative roles always face a moral hazard of assuming that problems assembling their visions are due to failings of the worker elves themselves, rather than inherent weaknesses in the vision.


I kinda left off my conclusions from all of that: obviously you don't want a situation where (like in the designer vs web developer example), designers are empowered to dictate designs which would naturally result in poor implementations, and you don't want a situation where designers are subject to the whims of what the engineers do and don't feel like doing, and do or don't know how to do.

So how do you avoid those nonconstructive relationships? IMO/IME, you either need your specialists to have at least some intuition for what the other specialists do, or, failing that, you need to make sure that one group isn't politically empowered over any of the other ones. No one should be dictating terms to anyone else, everyone should have some ability to push back on things they disagree with, but at the same time if there's a clear consensus on something, one lone holdout can't stop the train. Pushing back on pusher-backers has to be possible too, when appropriate and necessary.


I don't think it's a bad thing to have specialists that have intuition about what other specialists do; in fact, that's a core responsibility of being a senior individual contributor. But (and I covered this in my original comment) I think that this mediating labor is the role of product management, and there is a reason why product management is 100% a full-time job that cannot be skimped on. I never understood why it needed to exist when I first started in my career, or I thought it was mostly a status-signaling BS role. However, the only way to mitigate that natural tendency of individuals to assume "one's own piece is the most complicated or meaningful part of a project" is for a technical product manager to actually own the ramifications of all design and engineering decisions that are made and to orient the resultant product properly to the long-term strategic goals of the company. There is no substitute. Engineering and design direction are full-time roles in and of themselves, and so too is product.


I would've gone against them: either you reprimand me and we take out the feature, or we keep it and then you thank me.


At that point it's best to find a new employer. It sounds like there was some weird ego game being played.


Exactly my thinking; totally irresponsible behavior on the part of the design team.


On small things like this it’s easy to just gaslight people into thinking they told you it was ok. You can just explain it was such a small detail they probably don’t remember you brought it up and they said it was ok. If they say no, just double down and keep insisting and as long as there are no records of your conversations you will win.


Lol, the problem with gaslighting is not that it doesn’t make things easier in some contexts, but that it is an actual act of aggression.


For a more cynical take on why GP's advice is bad: it also breaks down trust in communication. If I catch you trying to gaslight me once, I have to run all of your future statements through a more strict truthiness parser in my head, and I'll place less trust in statements you make that are actually true. People who perceive themselves as smarter than the average bear can sometimes believe that they possess the mental prowess to lie without being caught, and this is often true, but the one time it isn't true it can have devastating effects on interpersonal relationships and communication.

I don't even judge people negatively for all lies; if you lie about a detail of your personal life while in a work setting, there could be totally understandable privacy reasons for that where a non-answer would have a different effect than a hard "no." But if you lie about prior decision-making processes, that breaks down trust in the exact domain where we need trust. It poisons the well, and that isn't worth a little less friction in one specific decision-making process.


Trust goes both ways: if you are going to reprimand me in the future for implementing a needed feature, but then go ahead and use it anyway and even take credit for it, then I will gaslight you the next time the same problem comes up.


I don't know all the details of your specific situation, but from what I've read it does seem that your superior was in the wrong. As others have noted, this is a situation where it might be clever to look for a new employer. That being said, I don't think passive-aggressive lies and manipulation are a smart response. I also don't think that the actions of your superior break down communication in the same way that gaslighting does. Reprimanding somebody while still eventually using their idea can be recognized as an asshole move whether or not it affects "trust." Being an asshole isn't a great trait in a work environment, or life in general for that matter, but I know plenty of assholes who I still probabilistically trust not to lie to me.

Finally, and I'm not saying this is the case in your specific negative experience, I can imagine scenarios where some light reprimand might be useful even if the idea is ultimately accepted, like, "hey, that was a good idea, but it affects something above your paygrade and we really want you to go through this process[0] when making these sorts of decisions in the future." I don't think that applies to your back buttons, but I can imagine scenarios where accepting an idea while lightly reprimanding a failure to communicate during the decision-making process could make sense.

Edit: I can see how the "and even take credit" part breaks down trust in a similar way. I still don't think further trust-destruction is a clever response. I would leave, call them out on the lie, or (if not in a position to risk unemployment by pissing off a superior) bite my tongue and try to rise through the ranks.

Other edit: typo.

[0]: https://news.ycombinator.com/item?id=19832774


Retribution is a terrible concept to bring into your workplace relationships. Two wrongs don’t make a right, try to take the high road. Cliché but true.


Half of my current job as the EM of a team is making my manager think he came up with critical ideas and tech specs for the things we work on. This is a useful skill to manage up with in engineering organizations that have severe organizational and leadership problems.


Reminds me of the funny/painful pic of 'Profile updated successfully' inside an obvious error dialog.


If I was writing surgery robot software I very much would want a surgeon sitting next to me, or at the very least available for a chat on a reasonably short order. Collaboration is a force multiplier. Unavailability of domain knowledge is a major reason why so much domain-specific software sucks.


"Theoretically" agile means that you have an easy access to the customer to help you with decision making and changing requirements.

Unfortunately its VERY rare to meet that requirement for agile (multiple reasons).

For mission critical projects like the one you mention, I think, company should pay good money even to few surgeons to be available for the devs.

Problem often lies in management to understand that..


Yep, and that’s one of the reasons I feel I’m pretty much done with programming as a day job.


I'd want a surgeon who extensively used the previous version of the surgery robot. :)


I've never had a spec that would thoroughly describe how to handle every case. The ability to understand and interpolate vague business requirements is expected to scale with your level; if you take a "not my job" approach to such decision-making you are perpetually entry-level.


I've often seen the exact opposite of this: entry-level devs being all too happy to 'fill in the gaps' in a spec based on their own (very limited) understanding of the problem domain (not even being fully aware of where the spec ends and their interpretation begins), and more senior people recognizing when they don't actually know the answer and realizing that it's important enough to clarify with the domain expert. Basically Dunning-Kruger effect.


> agile projects with tight deadlines

If you're not deciding on the sensible deadline, you're not self-managing your project, you're not doing agile.


My power? Total access to the entire VISA and Mastercard databases of real-life citizens' credit cards/debit cards. The US company I wrote an entire solution for (not just a simple application) also had a part where processing payments was a requirement: read the user's CC, take his credentials, put it in the database, start the transaction pre-authorization process and later the finalization. So to protect the user data from prying eyes, I encrypted the CC data that was read by the magnetic card reader. But for the proof of concept I used a simple encryption scheme, which was never meant to be used in production. Countless mails were exchanged between me and the manager regarding the encryption scheme, to upgrade to a modern one, like once per month. Nevertheless this weak encryption entered production despite my many, countless by now, warnings. Eventually things fell apart between the upper management of the company and my manager, and a civil suit ensued. In the end, the FBI was involved too and I had to write an affidavit for them regarding this. I offered them all my mail exchanges, which proved that while I was not a US citizen, I had more privacy concerns than the usual US citizen and businessman. Dunno what happened in the end, as I exited the project around 2014, but looking at their site it seems that my code is still in production. Talking about why so many holes and security fails happen, I know first hand how "careful" the average US manager is with sensitive data.
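For the curious, the upgrade I kept asking for looks roughly like this (a minimal sketch with an authenticated cipher, not the actual production code; real card storage also involves key management and PCI DSS scope):

    # Minimal sketch: encrypting card data with AES-GCM via the Python
    # `cryptography` package. Illustrative only.
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    def encrypt_card(key: bytes, pan: str) -> bytes:
        nonce = os.urandom(12)  # unique nonce per encryption
        return nonce + AESGCM(key).encrypt(nonce, pan.encode(), None)

    def decrypt_card(key: bytes, blob: bytes) -> str:
        nonce, ciphertext = blob[:12], blob[12:]
        return AESGCM(key).decrypt(nonce, ciphertext, None).decode()

    key = AESGCM.generate_key(bit_length=256)  # in reality: from a key vault
    token = encrypt_card(key, "4111111111111111")
    assert decrypt_card(key, token) == "4111111111111111"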


How often does an average US manager deal with a civil suit and the FBI?


no idea. my experience was this one only in 10+ years of freelancing


The rule of thumb is: Describe the problem, ask for clarification, offer a default solution.

Most often, people just won't be interested in the problem, and will ignore you, hoping you and the problem go away, at which point your default solution wins and is documented. But every now and then, it'll set alarm bells ringing up and down the chain of command, and THAT is why you bring it up.

Of course it's also important to get a feel for what kinds of issues should be brought up, and what issues should just be quietly solved. This skill is half-technical, half-political, and comes with experience.


> and what issues should just be quietly solved

As a pedant, it sometimes makes me uncomfortable to make these decisions "quietly".

While I wholeheartedly agree that this is an important skill for all developers, it raises the question: Why should the dev be responsible for deciding when an issue needs to be brought up? Why should the dev be making these quiet decisions, is this not evidence of incomplete requirements?

There should be a protocol here instead of relying on the dev's subjective sense and opinion.

Generally speaking, I feel like this goes outside the bounds of the developer's responsibility. Not every dev has honed this skill; it could be dangerous. I would choose to err on the side of caution and apply your rule of thumb above in almost all situations.


Yes, it's not ideal, it could be dangerous, someone could launch nuclear missiles by mistake.

But the reality is that we live in an imperfect world, where imperfect things can and do happen all the time, and we have to deal with them. You can't have a rule for everything; when you do, nothing can get done because the rule makers can't anticipate everything, and often get the things they DID think about wrong (rule making is remarkably similar to program design, with the same drawbacks and limitations).

So our imperfect world demands that we exercise our own judgment in deciding what to do. If you don't trust your developer's judgment, you shouldn't put them in charge of things that can cause a lot of damage. The alternative is a rule for everything, which is guaranteed to collapse under its own bureaucratic weight.


> The alternative is a rule for everything, which is guaranteed to collapse under its own bureaucratic weight.

That's an important point that's too often missed. There's a cost to creating rules, maintaining them in a list of rules somewhere, and having people actually read the whole list and apply those rules to real situations, handling appeals processes when you discover something that the rules handle badly, etc. You need some rules, but they can't ultimately fix the need to trust people.


> Why should the dev be making these quiet decisions, is this not evidence of incomplete requirements?

If the requirements were a complete specification of what the program must do in every circumstance, you could directly compile them to create a working program. Requirements documents are not intended to specify everything—only enough that a competent person would get the important things right. Filling in the trivial details with reasonable behaviour is basically the programmer's job.


> is this not evidence of incomplete requirements?

Yes! But there will never be a complete set of requirements — indeed if you think the requirements are complete, you spent too much time on them and you aren’t looking at the problem carefully enough.

There’s a balance between calling for more complete requirements and being able to work with less complete requirements. The more you can correctly choose to do the latter, the more you “hone this skill”, the more effective you can often be.

(As a bonus, when you do call for more complete requirements, in my experience people will be more open to doing that work. They know you wouldn’t ask if you didn’t “really need it”.)


There are always unforeseen requirements, and some requirements are very loose because there is really no way to know whether something will work until it is tried.

If there were a protocol that required discussion about every little thing, you would never ship nor be productive.

So, companies rely on programmers' judgment about what to do when there is no clear resolution or something is not perfectly spelled out.


Sure, in a perfect world, programmers wouldn't have to make decisions outside their sandbox. But humans get hired to be more than robots, and the same goes for everyone, in every job, ever. That's why you hear cliches about "above and beyond", "extra mile", etc. etc. so much that you probably don't hear them anymore.

And if you are good at operating outside your sandbox when you need to, you should get a gold star (and think about if you want to look for more responsibility). If you're not, your manager should try to baby-proof that corner.

This is also no different than every other job, ever.


I think I'd rather have high-level specifications than ones with all of the detail in it.


It is indeed indicative of incomplete requirements. Unfortunately, the real world consists of projects with incomplete requirements.

You are going to need this skill as a developer since you’ll end up using it every day.


From my experience, it's more of an accident. For example, one senior backend developer was introducing a new feature, and to do that, he created a POC UI. It looked fine. At the time, the project had very little UI, and it was focused more on developers, so nobody would use the UI a lot. Little did he know the POC UI would not only ship, but become the de facto face of the product for the next 6 years: when other UI pages were added, they'd take his simpler design. After a while, pretty much all the UI followed his design.


Nothing more permanent than a temporary solution eh?


I would award it The Best UI for being so usable it spread like a weed. I trained juniors fresh out of college who had taught themselves perfect web UIs in Bootstrap. But when they used a custom design (based on Bootstrap), I would discover a handful of bugs in minutes. Some design changes seem fine to everybody else but cause your burdens to multiply.


Ah, good old path dependency.


So basically, the proof-of-concept UI proved its concept.


I'm a UI developer who specializes in designing and developing tooling for SaaS products.

My last company was in ad tech, and our UI was for setting up ad campaigns. A big campaign consisted of 1 campaign, 30 line items, 900 tactics, and 2,000 creative assets. We offered managed service, meaning the account managers were in house and I could observe them work.

When you're working with quantities like this, every UI choice is hit by a multiplier equivalent to the campaign size.

My favorite was a request for table sorting. Makes sense: users want to sort 2,000 creative assets, and sorting is something every UI should have. However, asking why they were sorting revealed that they were trying to identify "orphan creatives", assets which had no assigned tactic.

They'd open each creative in a new tab, and assign it a tactic. Also not a big deal, until you multiply that action by 2,000.

They'd also need to spot check the assets to ensure they weren't accidentally assigned to incorrect tactics, a feature that no one thought to request.

Ultimately, the request from our AMs and Product Management was: "Please add sorting to tables." What I ended up building took over a week, and took the shape of a nested folder browser that allowed bulk actions on multiple entries. All because of a sort request that was hiding an issue.
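To make the gap concrete (a toy sketch, not our actual schema; the data here is invented): the literal request was a sortable table, but the underlying job was closer to this query:

    # Toy sketch: the request was "sort the creatives table", but the real
    # task was finding creatives with no assigned tactic ("orphans").
    creatives = [
        {"id": 1, "name": "banner_a", "tactic_id": 10},
        {"id": 2, "name": "banner_b", "tactic_id": None},  # orphan
        {"id": 3, "name": "video_a", "tactic_id": 12},
    ]

    orphans = [c for c in creatives if c["tactic_id"] is None]
    print([c["name"] for c in orphans])  # -> ['banner_b']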

So what was the impact? We had lots of large campaigns, which could take up to two stressful days to set up. The new tool minimized errors and took at most an hour (thanks to some smart generator tools I added later on). We had 5 full time technical account managers who went from extremely stressed during Christmas to fairly calm. Errors decreased, resulting in better campaigns.

One thing remained, and that was the muscle memory senior account managers had developed for setting things up incrementally. I learned that when you build a tool people use for hours a day, every small task is important. The motions embed themselves in a user's brain. The mistakes or inefficiencies of a UI, seemingly insignificant during development, can become someone else's rote action, stressor, and even source of unhappiness.

I don't affect the global economy, but questioning a sort feature made five of my coworkers happier.


I can relate. These days I think any feature request should be followed by a discussion about what the client/user is actually trying to achieve. It's my experience that most users, unsurprisingly, have no idea about the possibilities and limitations of the apps/services they work with and will limit their imagination to the tools they already know - whether it's the idea of faster horses as mentioned by Henry Ford or the tables from your story.


Great story! It is incredible what kind of impact a competent developer can have if they are given the chance and if they understand the problems users are having. A little automation can eliminate the majority of users' repetitive and mundane work, leaving them time and energy to cope with the more important aspects of their jobs. Everyone wins, including the company's customers.


The 5 Whys... I apply it to the extreme... just ask why until people explain the actual issue.


I had to add a search engine to an app and I didn't know much about search engines. But whatever, it had to be done and I was the one to do it. So, you educate yourself. You read about all the options, pros/cons, and try to anticipate future needs. Switching search engines isn't quite as difficult as databases, but it's hard, so I knew it was a big decision to make. After learning about the subject, weighing the options, I made a decision. Looking back, it was still the best decision.

The moral of this story is that if you're tasked to do something outside your expertise, you make it your expertise.


>The moral of this story is that if you're tasked to do something outside your expertise, you make it your expertise.

I don't think the OP is talking about learning some search engine, library or programming language. With 'outside of your expertise', he meant outside of programming.

Say you are writing some medical software that has to diagnose patient based on some inputs. You, as a programmer, are not a doctor and you can't "make" it your expertise within the time constraints of the project.


> Say you are writing some medical software that has to diagnose patient based on some inputs. You, as a programmer, are not a doctor and you can't "make" it your expertise within the time constraints of the project.

Yes you can, and in fact you must. Any programmer that has to write medical software needs to understand the underlying medicine at play. Sure, medical professionals can provide guidance, but you can't trust them blindly. You need to understand their rationales and assumptions, and to do that, you need to understand the underlying material. A programmer developing a heart monitor needs to have a good understanding of cardiology.


Still, the rule is the same - you make it your expertise. You can't competently develop something unless you know how it will be used.

> Say you are writing some medical software that has to diagnose patient based on some inputs. You, as a programmer, are not a doctor and you can't "make" it your expertise within the time constraints of the project.

Either you need to partner with someone who knows the domain (a doctor) and discuss every detail with them, or you have lots of learning to do. :)


> Either you need to partner with someone who knows the domain (a doctor) and discuss every detail with them, or you have lots of learning to do. :)

Right, so your answer from before wasn't correct.

And sure, if you give me 8 years I can complete a medical degree and make the expertise my own, but in reality there is not a single customer on the planet who is willing to wait 8 years and pay millions for some programmers to become medical professionals.


>> Either you need to partner with someone who knows the domain (a doctor) and discuss every detail with them, or you have lots of learning to do. :)

> Right, so your answer from before wasn't correct.

No, it's still correct. Regardless of whether you learn something on your own or consult with others, you still need to become an expert in the subject matter. Using outside guidance can help accelerate that process, but you still need to build that expertise yourself.


Pretty much all of the code I write is outside of my domain of expertise; my training is as a biologist. I have a computational background only as a result of my own (lifelong) amateur interest. This is the case for most people in biology doing computational work, since there are not many good programs for integrating study of biology and computing (despite the fact that biology is now 100% dependent on computing and statistics to understand experimental results).

As a result I ended up taking on the task of creating software infrastructure to support biology work and fill in these holes. This means I'm creating applications from the ground up, handling every aspect of it - front end, back end, authorization, calculation, storage, deployment, etc. I have zero training in any of these things, which is sometimes harrowing. I make the best choices I can, but ultimately I think I'm pretty hampered by my limited understanding of the available methods. I.e., I can write CSS, but I don't know how to write a grid layout engine using flex. I have read some Bruce Schneier books, but I don't know how to design or audit login protocols. I at some point learned how to use a relational database but don't know all the fancy new map/reduce type datastores that are available that might be more appropriate to my work. Etc.

I suspect that if you look in any domain outside of computing, you'll find people like me who are writing code by making things work without much specific training.


When I write engines that implement business rules, I NEVER just "take the liberty". If it's not in the spec, then I either get clarification, or I throw an exception. In my opinion it's better for the program to fail.
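Concretely, something like this (a minimal made-up example, not code from a real engine):

    # Fail fast instead of guessing: the spec defines rules for two account
    # types only, so anything else is an error, not a silent default.
    def discount_rate(account_type: str) -> float:
        if account_type == "retail":
            return 0.05
        if account_type == "wholesale":
            return 0.12
        raise ValueError(f"No rule specified for account type: {account_type!r}")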


I see that all the time with internal systems. Let's say the devs didn't add an option to export data to a file. This omission can later on trigger other departments to create very expensive workarounds, but due to internal organization/politics it's almost impossible to change the original system to add an export option.

Especially with new stuff, often nobody in the organization has any expertise, so it's common that the devs take a first cut at the problem.

Another common thing is that even if the devs ask for clarification they get none, but the deadline still ticks, so you just take your best guess.


Probably not exactly what you are looking for, but the topic reminds me of the cookie expiry date.

About 15 years ago I learned about cookies and their expiry date. At the time it was totally up to you as a developer if you wanted to have a login that lasted 10 minutes or three years. While relevant for security, it was just a number you had to define. So it didn't feel like a big thing.

When I learned about concepts like 'remember me' I was a bit surprised, as in my world it was just about increasing the number for the cookie lifetime. In most cases that is not entirely true, as modern 'remember me' implementations are more complex (e.g. to support re-authentication for modification of data), but the core principle is still the same: using a long-lived cookie for authentication.
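Concretely, the whole difference came down to one attribute (a sketch using Python's standard library, no particular framework):

    # The only difference between a 10-minute login and a 3-year
    # "remember me" is this one number: the cookie's lifetime.
    from http import cookies

    c = cookies.SimpleCookie()
    c["session_id"] = "abc123"
    c["session_id"]["httponly"] = True
    c["session_id"]["max-age"] = 60 * 10  # 10-minute login
    # c["session_id"]["max-age"] = 60 * 60 * 24 * 365 * 3  # "remember me"

    print(c.output())  # the Set-Cookie header a server would send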

So what was just a simple number back then became a complex topic with legal implications nowadays.


That's a decision I have made without a spec many times.


Look at the series of articles titled "Falsehoods programmers believe about X". Those articles exist because someone at some point made an intuitive assumption about how the world works, which was convenient at the time but eventually ended up biting them in the backside.


As someone who works in capital markets and sees more and more power moving to the dev, I totally gel with this theme. Sounds like a really good idea for a book. Any way that I can follow along from home... a release/blog?


At my previous job I wrote a quotation wizard that customers could use to get quotes. They could select what options they wanted, what kind of maintenance contract and for how many work places and that kind of thing. In the end the quotation wizard would calculate how much this would cost.

For the bigger decisions I would consult the sales person, but I also made quite a few of the smaller decisions myself. It turned out that some of the things I had decided did not quite match how the sales process actually worked in practice, in particular regarding maintenance contracts. For instance, my quotation wizard would allow a maintenance contract to start at any date while in practice they always start at the beginning of the month.


> For instance, my quotation wizard would allow a maintenance contract to start at any date while in practice they always start at the beginning of the month.

I'd say you made the right call; in my experience, a statement like "in practice they always start at the beginning of the month" is, in practice, quickly followed by "except when they don't".


Yeah, in my opinion people often enforce business logic way too strictly. My rule of thumb is “If your boss told you to, would you?” If yes, the data model should support it.
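For instance (a hypothetical sketch of the same contract-date case): surface the convention as a warning in the workflow, but don't bake it into the data model as a hard constraint:

    # Hypothetical: warn about the usual convention, but let the data model
    # represent the exception if the business decides to make one.
    from datetime import date

    def start_date_warnings(start: date) -> list:
        if start.day != 1:
            return ["Maintenance contracts usually start on the 1st; proceed anyway?"]
        return []

    print(start_date_warnings(date(2019, 5, 15)))  # -> one warning
    print(start_date_warnings(date(2019, 6, 1)))   # -> []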


Not directly as a developer, but I've wrangled with this a few different times. The most impactful:

I was standing up an analytics function at a growing company. There were several internal systems that had been organically created to handle different parts of the company's internal processes, with little structural support in the form of project or program management. Each system was developed by a single, but different, developer. The developers knew their systems really well, but there was little formal documentation on anything.

As part of creating an analytics/BI function, I had to start digging into the data models of the different systems and ETL'ing the data into a data warehouse, and was the first one outside of the initial developer to really dig into the databases for their systems. Each developer used company terminology for their entities that made it relatively easy to intuit the data model. But each one had been left to interpret business needs and definitions themselves, and had done so differently. And neither one of them matched up to how the actual business users defined such things/processes, or presumed the software was defining them. Yet because they were using the same terms, each developer was presuming/projecting their implementation logic for a given process or entity onto the other system, without actually confirming.

I spent well over a year finding inconsistencies, getting them corrected, and getting system processes aligned with what the business actually expected to be happening. By the end of it, we had improved our production efficiency[1] by about 30-40%, which had previously been getting silently lost in the void as the two systems had subtly incompatible definitions of what was valid and what wasn't. Funny enough, the primary value that came out of the analytics and BI team I stood up wasn't the actual analytics work, but rather the operational discipline and system refactoring that was necessary as a prerequisite to support the analytics.

[1] What was being produced was digital products, not physical. And there wasn't previously any end-to-end analytics in place, so it was non-obvious that stuff was getting lost in the mix.


I work in the digital agency space and have had projects across a huge variety of domains (ecomm, finance, health care, retail) and different types of deliverables including web, mobile, kiosks, IoT, and VR.

Any time a developer on my team has the power to deliver something without checks and balances, it's a red flag. We always determine the expected behavior before writing code and always check that behavior before delivery by at least one non-coder, and usually more than one. A decision in code that affects the experience is always a bug, and it usually gets noticed before anyone sees it.

So, I'm honestly not familiar with the situation you're describing.


On the business logic side, speaking mainly from experience in enterprise software consulting (and some related research work): the domain is often complex enough, and the knowledge of how the business actually works (as opposed to what your TOGAF/Zachman diagrams tell you) is tacit and distributed enough, that the code you're writing is often the first time that everything in a particular process/subdomain has been made explicit. (This is especially true during 'digital transformation' at e.g. old manufacturing companies, where individual IT systems have been pretty nicely decoupled at a technical level and work together only via people systems.)

In these situations, certain 'core' parts of the code quickly become the only accurate spec. (Whether or not it's worth updating the spec docs is a management decision. But the test scripts will pass if the code is correct, and under sufficient time and money pressure etc..) Other developers then treat these 'core' parts of the code (usually some fairly high-level classes, but usually something more concrete than an interface) as the true documentation of the business requirements. If the company respects developers enough, this means that the developers that worked on those 'core' bits of code are also treated as domain experts in future business discussions.

On the purely technical side: sheesh, how much heat is generated by horrifyingly algorithmically inefficient or vastly I/O-wasteful or just redundant design (for instance religious/unnecessary use of immediate-mode GUI) -- stuff that quite possibly the IT managers don't care about at all (because of e.g. cheap horizontal scaling and inadequate measures of software project success)? The heat is bad for ecological reasons (locally at least), but also intrinsically (why are you destroying information, O Information Worker?? -- and again e.g. Toffoli gates fix this only locally). Based on code I've seen and, sadly, written (laziness, time pressure and all that) -- there must be many, many orders of magnitude of unnecessary heat/information-destruction happening because of purely technical decisions that on-the-ground developers (not even architects/designers, I mean the people that write the stuff that gets compiled/interpreted) make. @OP if you know some way of measuring this I'd love to hear more.


By immediate mode GUI do you mean the game dev variety or the Web browser DOM type i.e. React etc.?


Thanks. You describe pretty much one of the points I want the book to make: the code is the description of the solution and often you have to address problems in code that no one else has considered. I think that’s a hard concept for non-devs to grasp.


> I'm also interested in hearing from people who dispute that this is even a thing.

My intuition is that it's definitely a thing, but I appreciate that you're engaging with this question!

The first argument that I can think of on the other side is this:

Systems always effectively make decisions about how to handle every case, even if the rules about how to handle some cases are tacit, implicit, ambiguous, unacknowledged, disputed, or typically punted to some other system. Someone might be mad at a programmer for explicitly handling some situation that had previously not been addressed explicitly (or just skeptical or curious about whether the programmer did a good job), but the programmer's solution might not be worse or a more inappropriate exercise of power or judgment than whatever was happening before.

This ties in to a lot of other issues about formalizing procedures and interactions. You might want to look at James C. Scott's Seeing Like a State and perhaps Michael Polanyi's Personal Knowledge for examples of people who are skeptical about doing this -- but I'm sure there's another side there.


Almost every day, I guess? I mean, writing new code always involves seemingly subtle decisions which will affect users sooner or later.


That always happens. You work on something new, so it is outside your expertise. Then you learn and it becomes expertise. :)


This is an interesting premise, but .. in many economically-driven situations, a programmer is a team member, who then has a technical lead, who then has a project or product manager, who then answers to management via objectives. The details of the code are serious, but within a context.. because basically, economic activity can be very social, and also rule-based.

This makes the problem different.. instead of a coder directly writing an IF-THEN sort of decision, intended outcomes of code behavior are controlled .. BUT if the game rules are such that deception or more often, exerting control over others, is profitable, then a very powerful system is being built to execute a morally-ill process.

There are many, many divergent cases, however this sequence is very much at the core of quite a lot of economically-driven programming IMO.


I recommend you focus on the management of technology rather than the mythical powers of software developers


Working title “Over-Under: how the tech industry gambles by overpromising and underdelivering, sometimes with disastrous results.”


Perhaps in our modern environment it is quite rational to be more fundamentally against the notion of well delineated domains of expertise. All should be open to questioning, in particular from fresh perspectives honed in alternate experience. Everything should be questioned: perhaps not in every project, but at least in each generation.

It went as an unspoken, unquestionable assumption that telephony was the right model for data networking. - Van Jacobson

There are lots of "old and fundamental" ideas that are not good anymore, if they ever were. - Alan Kay (2016)

Living in the present: man, you're just out of it. - Alan Kay (2017)

... via http://github.com/globalcitizen/taoup


I can see benefits to getting input from outside your area of expertise, but what you’re describing feels like the attitude that led to Theranos.

How do you avoid ending up in a Dunning Kruger situation?


I'm pretty confident Theranos was partly a product of a company being founded by an attractive young woman surrounded by powerful men cooing at her instead of holding her accountable or giving her the kind of constructive criticism they would give a male founder.

Women get personally attacked and dismissed a lot while no one tells them "x needs to be done differently." They get inured to hearing everything "mansplained." They start tuning out the ugliness.

We don't have well developed good paradigms for how to do this effectively.


I think the "woman founder" bit might have impacted media coverage, and even investor decisions, but to make a gendered narrative out of the blatant fraud ignores Balwani's role in the affair. There was at least one high-ranking male who knew exactly what was going on and where the bodies were buried. To me this is a clear case of "fake it till you make it, and defraud investors in the mean time hoping you actually make it."


I'm a woman. A lot of men find me attractive, though my youth is long gone and it took a lot of my beauty with it when it left.

Men mostly either coo at me or treat me like an idiot who desperately needs a heavy heap of Mansplaining.

I've been on HN nearly a decade. I appear to be the only openly female member to have ever spent time on the leader board.

I am endlessly mocked and belittled for pointing out how differently I get treated from the guys on the leader board. People go out of their way to make it clear that expecting this to be a professional networking opportunity that enhances my career and bottom line is point-and-laugh worthy, never mind the overwhelming evidence that participation here routinely enhances the careers and bank balances of countless men.

So that's the lens through which I view the Theranos debacle, where the world imagined the company wasn't merely a billion-dollar unicorn but, instead, valued it at ten billion and called it a decacorn. Then, overnight, its valuation dropped to zero.

I don't think the same scenario would have flown for so very long and gotten so crazy out of hand with a charismatic man as the front person.

I desperately want good constructive feedback and mostly can't get it. I'm quite confident that Holmes was largely starved of honest and factual feedback about life, the universe, the company and how business is done.

The most recent article I read indicated she shacked up with one of the investors. Her public narrative was that she was completely celibate out of single-minded devotion to the company.

I have never seen an article that really questioned that. The media has bent over backwards to be respectful of this known fraud.

I'm still waiting to hear that the real secret of her success was sleeping with multiple powerful men who then backed the company in exchange. I think the one other time I said that on HN, it was downvoted.

Sexual politics. Can't give the obvious answer about how a college drop out with no business experience managed to "fake it" so long without ever making good on any of those empty promises, even after it has come out that she moved in with some old guy who invested millions in her company.


Theranos had (almost) all the boxes ticked: cool founder, noble goal, opposition to "medical mafia" - you name it.

Like a foam strawberry which looks like it's edible and smells like it's edible. Enough to fool my one year old daughter.


Regarding Theranos, I think we can all agree that the worst capitalist excesses are the unique products of an excess of unchecked greed and incompetence at multiple levels and have relatively little to do with personal efforts or domain expertise.

Dunning Kruger, IMHO, is a straw man / false dichotomy. Sure, none of us are perfect, but if you disable your focus by worrying unhealthily about perception by others and where you line up, or if you are ultimately some sort of imposter, then you've missed the boat by definition. The old story of 99% perspiration, enough personability to acquire funding and at least reasonably effectively manage others, and the resulting chain of stubborn achievement will get you almost anywhere... just come up for breath now and then and check you're not failing versus any competition, and/or if you're going blue ocean/greenfield and having doubts make sure people you trust can reassure you you're not completely insane, or the financials are secure enough to justify an onward march.


So, if you just have enough pluck and funding then you can do anything? Years, if not centuries, of learning and specialism be damned, right?


Yep, basically. You can hire that stuff when necessary.


If (relevant) decisions are made and make it into production, then it's a management failing. Specifically one of testing since at that point (at least) you should know what the system _should_ be doing.

Of course, I'm in management utopia la-la land here and, in practice, if nobody objects then it goes in and stays. The vast majority of the time this is fine and has no impact, but every now and then someone decides it's OK to only read the angle of attack from one sensor and ....


Hmm, if I am an expert in a domain, then there is no decision to be made - it is obvious what the right answer is to everything, so there is nothing to communicate to others. All decisions are outside my domain of expertise! The outcomes of these decisions are usually compromises based on the needs of a stakeholder, manager or the least stable third party.


Spoken like someone at peak experience..

Real experts have nothing but questions about their own assumptions.


"Real experts do $XYZ" where XYZ is your opinion.


While the OP may have inadvertently committed a "No True Scotsman!", I think he's nonetheless correct: I think experience often comes with a better appreciation of what's out there, what's still unknown to you. Before, you might have taken X somewhat for granted, but if you delve into and research X, you learn it's a complex interaction of A, B and C; now you have 3 things you're taking for granted / need to learn. A fair number of people have felt stupider after learning, which is of course the opposite of how it should be. Imagine having never seen the inside of a modern car hood, and you open it for the first time. Perhaps all you need to do is "change the battery" (simple, right? You've changed the battery in other things before) and upon seeing the inside of the hood for the first time, one might reasonably be overwhelmed by the amount of stuff crammed inside there.

Questioning your own assumptions, I think, falls out of repeatedly learning that often things are not simple, and the experience making your own mistakes and getting burned: it teaches you when to proceed (you don't want the project to get bogged down with "analysis paralysis"), but with caution and the knowledge that you made an assumption. (Whereas a less experienced person might not realize they made the assumption at all.)

I would also point to the Impostor Syndrome[1] as a sort-of evidence of that, though it's certainly possible for someone to be an expert and not feel that way.

[1]: https://en.wikipedia.org/wiki/Impostor_syndrome


I didn't say I was an expert.


I think I'm less interested in those decisions having an effect on the global economy and more interested in their concrete effects (a person dying because of a poorly thought-out condition, a food delivery person that gets paid less BECAUSE of a large tip, the decision NOT to mask passwords and its effects).

Maybe that _is_ what you mean by global economy, but that is probably more interesting.


> we get to make decisions in code that affect end-users but only other developers can really hold that code to account before it goes into production.

Honestly if that is in an important product then there is a serious management problem.

It's also another reason why devs usually specialize in an industry, e.g. healthcare, aeronautics, robotics.


I can't remember any time when I made a "business" decision without being the customer myself or asking the customer. Even when I was the domain expert.


Sometimes, maybe rarely, the software people are the only ones who have a different enough take on things to do something productive, even if they do not start out as experts.


What a funny question to ask around here. For true HNers, there is no such thing as “outside their domain of expertise”. Witness the discussions in this very thread.


Speaking from an end-user experience POV:

When I first switched to a mac from windows more than a decade ago, the instant-on feature was one of the most delightful experiences.

I have often wondered what not having that on Windows cost the world - in lost productivity and greenhouse gas emissions.


Every single damn morning when I have to decide on what clothes to wear.


Always buy multiple sets of clothes then wear them the whole month. This way you only need to make a decision on first of the month. :)



Every decision I make in code is outside my domain of expertise


I think a lot of those decisions are hard to see and you're going to have a hard time tracking them down from the horse's mouth. You'll have a better time looking at the symptoms and tracing back to the source.





