Hacker News | austin-cheney's comments

A good PM is challenging to define because the definition is highly subjective to company culture and the collective limitations of the supporting development team.

Let’s assume the goal is to hire a PM that will transform your shitty product into world class dominance and the company is not the primary limiting constraint.

A good PM will be ambitious about product quality. It’s the hill they are willing to kill their own career on. Before new features are added the existing features must work without error as widely as possible and must execute quickly. If that means going to war with the developers then so be it. Developers not willing to achieve the quality vision of the PM are a problem to be removed. Developers do not pay the bills, sales do.

Secondly, the existing features of the product need to achieve business results superior to the competition. This may require original technology solutions, but more often than not it will require building new business partnerships, or repairing existing ones. For example e-commerce solutions require superior inventory, better incentives, and cheaper prices.

A good PM sets ambitious numeric targets. This can include sales growth, execution speed, support costs, inventory quantity, latency, headcount size, and more. A PM will not control any of this but nonetheless knows their numbers and is willing to use this evidence to destroy all internal obstruction.

A good PM is fully aware of their product's current operating state. A product is never perfect and requires periodic downtime for maintenance. A good PM is willing to work with operations to establish necessary redundancies and schedules that minimize business disruptions.

There is more, but this is a start.


The most misunderstood statement in all of programming by a wide margin.

I really encourage people to read the Donald Knuth essay that features this sentiment. Pro tip: You can skip to the very end of the article to get to this sentiment without losing context.

Here ya go: https://dl.acm.org/doi/10.1145/356635.356640

Basically, don't spend unnecessary effort increasing performance in an unmeasured way before it's necessary, except for those 10% of situations where you know in advance that crucial performance is absolutely necessary. That is the sentiment. I have seen people take this to some bizarre alternate insanity of their own creation as a law to never measure anything, typically because the given developer cannot measure things.


> I have seen people take this to some bizarre alternate insanity of their own creation as a law to never measure anything, typically because the given developer cannot measure things.

Similar to the "code should be self documenting - ergo: We don't write any comments, ever"


It is incredible to me how many "developers", even "10 years senior developers", have no idea how to use a debugger and/or profiler. I've even met some that asked "what is a profiler?" I hope I'm not insulting anybody, but to me it's like going to an "experienced mechanic" and they don't know what a screwdriver is.

It’s because in most enterprise contexts:

1) Most bugs are integration bugs. Whereby multiple systems are glued together but there’s something about the API contract that the various developers in each system don’t understand.

2) Most performance issues are architectural. Unnecessary round trips, doing work synchronously, fetching too much data.

Debuggers and profilers don’t really help with those problems.

I personally know how to use those tools and I do for personal projects. It just doesn’t come up in my enterprise job.


If you don't have personal examples of using a profiler to diagnose an issue like "too many round trips" and identify where those round trips are coming from, then you've never inherited a complex performance problem before.

Doesn't really change the picture. If you don't know the basics of a car, then you absolutely shouldn't be driving in traffic either.

Yeah, but that analogy is sort of false. A better analogy (but then it would make you look absurd) would be "if you don't know how to take apart and re-assemble the engine of a vehicle you shouldn't be allowed to drive it on the road". You get a driver's license if you can remember a few common-sense facts and spend a bit of monitored time behind the wheel without doing anything absurdly illegal or injuring/killing somebody.

You don't use like Datadog or something at your enterprise job?

That is surprising. They have come up in every enterprise job I have had. Debuggers and profilers absolutely do help, although for distributed systems they are called something else.

I once interviewed at Microsoft. The hiring manager asked me how I would go about programming a break point if I were writing a debugger. I started to explain how I would have to swap out an instruction to put an INT 3 in the code and then replace it when the breakpoint would hit.

He stopped me and said he was just looking to see if I knew what an INT 3 was. He said few engineers he interviewed had any idea.


Did you get the job ... or were you overqualified?

I guess I was overqualified. Didn't get the job.

What is an INT 3?


The last time I interviewed (around 10 years ago) I was surprised when 9 of the 10 senior developers didn't know how many bits were in basic elementary types.

(Then, shortly afterward I also tried to find a new job, realized the entire industry had changed, and was fortunate enough to decide it wasn't worth the trouble.)


> 9 of the 10 senior developers didn't know how many bits were in basic elementary types

That's likely thanks to C, which goes to great pains to not specify the size of the basic types. For example, on 64 bit architectures, "long" is 32 bits on Windows and 64 bits nearly everywhere else.

The net result of that is I never use C "long", instead using "int" and "long long".

This mess is why D has 32 bit ints and 64 bit longs, whether it's a 32 bit machine or a 64 bit machine. The result was we haven't had porting problems with integer sizes.


It's substantially worse on the JVM. One's intuition from C just fails when you have to think about references vs primitives, and the overhead of those (with or without compressed OOPs).

I've met very few folks who understand the overheads involved, and how extreme the benefits can be from avoiding those.


Conversely I've met many folks who come into managed environments and piss away time trying to wrangle the managed system into how they think it should work, instead of accepting that clever people wrote it and guidelines when followed result in acceptable outcomes.

The sort of insane stuff I've seen on the dotnet repo where people are trying to tear apart the entire type system just because they think they've cracked some secret performance code.


>on the dotnet repo

You mean the .net compiler/runtime itself? I haven't looked at it, but isn't that the one place you'd expect to see weirdly low-level C# code?


In what way is it worse? The range of values they can contain is well-specified.

And you have a frame with an operand stack where you should be able to store at least a 32-bit value. `double` would just fill 2 adjacent slots.

And references are just pointers (possibly not using the whole of the value as an address, but as flags for e.g. the GC) pointing to objects, whose internal structure is implementation detail, but usually having a header and the fields (that can again be reference types).

Pretty standard stuff, heap allocating stuff is pretty common in C as well.

And unlike C, it will run the exact same way on every platform.


My favourite JVM trivia, although I openly admit I don't know if it's still true, is the fact that the size of a boolean is not defined.

If you ask a typical grad the size of a bool they will inevitably say one bit, but CPUs, RAM, etc. don't work like that; they typically expect WORD sized chunks of memory, meaning that the boolean of one bit becomes a WORD sized chunk, assuming that it hasn't been packed.


". While it represents one bit of information, it is typically implemented as 1 byte in arrays, and often 4 bytes (an int) or more as a standalone variable on the stack "

That's a reasonable answer. But, I meant they seemed to have little understanding or interest. I don't interview much, and I'm probably a poor interviewer. But, I guess I was expecting some discussion.

I ran into some comp sci graduates in the early 80's who did not know what a "register" was.

To be fair, though, I come up short on a lot of things comp sci graduates know.

It's why Andrei Alexandrescu and I made a good team. I was the engineer, and he the scientist. The yin and the yang, so to speak.


Oooh, saw Andrei's name pop up and remember his books on C++ back in the day.. ran into a systems engineer a while ago that, during a tech review, asked why some data size wasn't 1000 instead of 1024.. like err ??

Even more fun is pointers, especially when windows / macos were switching from 32-bits to 64-bits (in different ways).

Microsoft tried valiantly to make Win16 code portable to Win32, and Win32 to Win64. But it failed miserably, apparently because the programmers had never ported 16 bit C to 32 bit C, etc., and picked all the wrong abstractions.

> Even more fun is pointers, especially when windows / macos were switching from 32-bits to 64-bits (in different ways).

And yet even more of a fun time with porting pointer code was going from the various x86 memory models[0] to 32-bit. Depending on the program, the pain was either near, far, or huge... :-D

0 - https://en.wikipedia.org/wiki/X86_memory_models


Why did they design it like that? It must have seemed like a good idea at the time.

In ancient computing times, which is when C was birthed, the size of integers at the hardware level and their representation was much more diverse than it is today. The register bit-width was almost arbitrary, not the tidy powers of 2 that everyone is accustomed to today.

The integer representation wasn't always two's complement in the early days of computing, so you couldn't even assume that. C++ only required integer representations to be two's complement as of C++20, since the last architectures that don't work this way had effectively been dead for decades.

In that context, an 'int' was supposed to be the native word size of an integer on a given architecture. A long time ago, 'int' was an abstraction over the dozen different bit-widths used in real hardware. In that context, it was an aid to portability.


Was it possible to write a program taking into account this diversity, and have it work properly?

C is a portable language, in that programs will likely compile successfully on a different architecture. Unfortunately, that doesn't mean they will run properly, as the semantics are not portable.

So what’s the point of having portable syntax, but not portable semantics?

C certainly gives the illusion of portability. I recall a fellow who worked on DSP programming, where chars and shorts and ints and longs were all 32 bits. He said C was great because that would compile.

I suggested to him that he'd have a hard time finding any existing C code that ran correctly on it. After all, how are you going to write a byte to memory if you've only got 32 bit operations?

Anyhow, after 20 years of programming C, I took what I learned and applied it to D. The integral types are specified sizes, and 2's complement.

One might ask, what about 16 bit machines? Instead of trying to define how this would work in official D, I suggested a variant of D where the language rules were adapted to 16 bits. This is not objectively worse than what C does, and it works fine, and the advantage is there is no false pretense of portability.


I mean, as a senior developer, the number of bits in an "int" is "who the hell knows, because it has changed a bunch of times during my career, and that's what stdint.h is for." And let's not even talk about machines with 32-bit "char" types, which I actually had to program for once.

If the number of bits isn't actually included right in the type name, then be very sure you know what you're doing.

The senior engineer answer to "How many bits are there in an int?" is "No, stop, put that down before you put your eye out!" Which, to be fair, is the senior engineer answer to a lot of things.


How many bits are in an `int` in C? What do you mean "at least 16", that's ridiculous, nobody would write a language that leaves the number of bits in basic elementary types partially specified‽

It is a good idea - most of the time you don't care, and on slower systems a large int is harmful since the system can't handle that much and it costs performance - go to the faster system with larger ints when you need larger ints.

On the one hand, in today's world asking how many bits is in an int is exactly as answerable as "how long is a piece of string"

On the other, the right answer is 16 or 32. It's not the correct answer, strictly speaking, but it is the right one.


An 'int' is also 64 bits on some platforms.

It's the wrong question. How many bits is uint64 is a much better question, if we're at a place where that's relevant.

I had one tell me all ints are 16 bits, and then they said 0xffff is a 32-bit number.

Maybe I'm wrong but I suspect this might be partly due to the rise of Docker which makes attaching a debugger/profiler harder but also partly due to the existence of products like NewRelic which are like a hands-off version of a debugger and profiler.

I haven't used a debugger much at work for years because it's all Docker (I know it's possible but lots of hoops to jump through, plus my current job has everything in AWS i.e. no local dev).


On the other hand, I had to debug a PHP app in Docker using XDebug and it was mostly painless. Or, to be more precise, no more painful than debugging it on local Wamp/Xampp.

> "code should be self documenting

It should be to the greatest extent possible. Strive to write literate code before writing a comment. Comments should be how and why, not what.

> - ergo: We don't write any comments, ever"

Indeed this does not logically follow. Writing fluent, idiomatic code with real names for symbols and obvious control flow beats writing brain teasers riddled with comments that are necessary because of the difficulty in parsing a 15-line statement with triply-nested closures and single-letter variable names. There's a wide middle ground where comments are leveraged, not made out of necessity.


You misunderstood the GP - they were criticizing the way some programmers use "code should be self-documenting" as an excuse when they actually mean "I’m too lazy to write comments even when I really should". Just like "premature optimization is bad" may in fact mean something like "I never bothered to learn how to measure and reason about performance"

Updated my comment to refine my rhetorical intent. Thank you for the call-out.

At a minimum they should comment their GOTO’s

Laziness in moral clothing.

> Similar to the "code should be self documenting - ergo: We don't write any comments, ever"

My counterpoint: Code can be self-documenting, reality isn't. You can have a perfectly clear method that does something nobody will ever understand unless you have plenty of documentation about why that specific thing needs to be done, and why it can't be simpler. Like having special-casing for DST in Arizona, which no other state seems to need:

https://en.wikipedia.org/wiki/Time_in_the_United_States


This isn't a counterpoint, it's just additional (and barely relevant) information.

It's a counterpoint to the maxim, not the post I'm replying to.

Documenting it in a way that ensures it satisfies the example case would be preferred. You know, like with a test.

"Why is this person testing that Arizona does such bizarre things with time? Surely no actual state is like that! Such complexity! Take it out!"

Language conventions aside, I have rarely found comments to be accurate, and more often they have lied to me. AI makes this both worse and better.

I know it may be hard for me to understand the need for writing in English what is obvious (to me) in code. I also know I have read a stupid amount of code.

My rule is simple, if the comment repeats verbatim the name of a variable declaration or function name, it has to go. Anything else we can talk about.


Because it reads like permission not to think and for a group of supposed intellectuals we spend a lot of fucking time trying not to think.

Even 'grug brained' isn't about not thinking, it's about keeping capacity in reserve for when the shit hits the fan. Proper Grug Brain is fully compatible with Kernighan's Law.


(this is the correct answer, parent needs to understand this better)

In particular I've seen way too many people use this term as an excuse to write obviously poor performing code. That's not what Knuth said. He never said it's ok to write obviously bad code.

I'm still salty about that time a colleague suggested adding a 500 kb general purpose js library to a webapp that was already taking 12 seconds on initial load, in order to fix a tiny corner case, when we could have written our own micro utility in 20 lines. I had to spend so much time advocating to management for my choice to spend time writing that utility myself, because of that kind of garbage opinion that is way too acceptable in our industry today. The insufferable bastard kept saying I had to do measurements in order to make sure I wasn't prematurely optimizing. Guy adding 500 kb of js when you need 1 kb of it is obviously a horrible idea, especially when you're already way over the performance budget. Asshat. I'm still salty he got so much airtime for that shitty opinion of his and that I had to spend so much energy defending myself.


Reminds me of a codebase that was littered with SQL injection opportunities because doing it right would have been "premature optimization" since it was "just" a research spike and not customer facing. Guess what happened when it got promoted to a customer facing product?

Now that's a stupid argument. I'm with you. Removing SQL injection has little if anything to do with performance, so it is not an optimization. I guess we will get more of this with the vibe coding craze.

We'll see. It's easy enough to ask Claude to red team and attack the system given the codebase and see what holes it finds to patch up. It's good enough now to find blatantly obvious shit like an SQL injection.

tbf that's not their fault, as long as they were open about the flaws. Business should not have promoted it to a customer facing product. That's just org failure.

I disagree. If you merge code to main you immediately lose all control over how it will be used later. You shouldn't ever ship something you're not comfortable with, or unprepared to stake your professional reputation on. To do so is profoundly unethical. In a functioning engineering culture individuals who behave that way would be personally legally liable for that decision. Real professions--doctors, engineers, etc.--have a coherent concept of malpractice, and the legal teeth to back it up. We need that for software too, if we're actually engineers.

Profoundly unethical? Ok so wtf is this formatting in your comment. You DARE comment, online where people can see, where you start a new sentence with two dashes "--". What are you thinking? Where's the professionalism? Imagine someone took that sentence and put it on the front of the biggest magazine in the world. You'd LOOK LIKE A FOOL.

OR, perhaps it's the case that different contexts have different levels of effort. Running a spike can be an important way to promote new ideas across an org and show how things can be done differently. It can be a political tool that has positive impact, because there's a lot more to a business than simply writing good code. However, if your org is horrible then it can backfire in the way that was described. Maybe businesses are too aggressive and trample on dev, maybe dev doesn't have a spine, maybe nobody spoke up about what a fucking disaster it was going to be, maybe they did and nobody listened. Those are all organisational issues akin to an exploitable code base but embedded into the org instead of the code.

These issues are not the direct fault of the spike; they're the fault of the org, just like the idiot that took your poorly formatted comment and put it on the front page of Vogue.


Grammatical errors, formatting mistakes, or bad writing in general aren't something the magazine publisher can be held liable for, it may be embarrassing but it's not illegal or unethical. Publishing outright falsehoods about someone is though--we call that defamation. Knowingly shipping a broken, insecure system isn't all that different. Of course the people who came along later and chucked it into prod without actually reviewing it were also negligent, but that doesn't render the first guy blameless.

If it was only supposed to be a spike then it does render the first guy somewhat blameless. Especially if the org was made aware of the issues, which I imagine they were if someone had raised the issue of the exploits in the code base.

I mean, I could take a toddler's tricycle and try to take it onto the motorway. Can we blame the toy company for that? It has wheels, it goes forward, it's basically a car, right? In the same way a spike is basically something we can ship right now.


That is the gist of the left-pad story, isn't it?

This is a crucial detail that almost everyone misses when they are skimming the topic on the surface. The implication is that this statement/law is referenced more often to shut down architecture designs/discussions.

Even more so. I like the Rob Pike restatement of this principle; it really makes it crystal clear:

"You can't tell where a program is going to spend its time. Bottlenecks occur in surprising places, so don't try to second guess and put in a speed hack until you've proven that's where the bottleneck is."

More so, in my personal experience, I've seen speed hacks cause incorrect behavior on more than one occasion.


This is true but doesn't help.

Parent is talking about building software that is inherently non-performant due to abstractions or architecture with the wrong assumption that it can be optimized later if needed.

The analogy is trying to convert a garbage truck into a race car. A race car is built as a race car. You don't start building a garbage truck and then optimize it on the race course. There are obvious principles and understanding that first go into the building of a race car, assuming one is needed, and the optimization happens from that basis in testing on and off the track.


Ha! -- Allow me to introduce you to the US Diesel Truckin Nationals! Here are some dump trucks drag racing https://www.youtube.com/watch?v=aqxpOPeImkw

lol. TIL. But they're not racing Formula 1.

Knuth certainly writes better than Dijkstra, even if he lost the "goto" argument in the end.

> Basically, don't spend unnecessary effort increasing performance in an unmeasured way before its necessary, except for those 10% of situations where you know in advance that crucial performance is absolutely necessary. That is the sentiment.

Which is pretty close to just saying "don't do anything unless you have a good reason for doing it."


Computer scientist here. I love Donald Knuth, but he never maintained production systems :)

I’m being a bit provocative here, just to make two points:

a) Software development back in the day, especially when it comes to service, reach, security, etc., was completely different from today. Black Friday, millions of users, SLAs, 24-hour service... these didn’t exist back then.

b) Because of so many conditions — some mentioned in point (a) - prematurity ends when the code is live in production. End.


> except for those 10% of situations where you know in advance that crucial performance is absolutely necessary

Yeah like, NOT indexing any fields in a database, that'll become a problem very quickly. ;)


Those 10% situations will be identified by the business requirements. Everything else is an unmeasured assumption of priorities in conflict with the stated priorities. For the remaining 90% of situations high performance is always important but not as important as working software.

These days, what I see as "premature database optimization" is non-DBAs, without query plans, EXPLAINs and profiling, sprinkling lots of useless single column indexes that don't cover the actual columns used for joins and wheres, confusing the query planner and making the database MUCH slower and therefore more deadlock-prone instead.

My own personal law is:

When it comes to frameworks (any framework) any jargon not explicitly pointing to numbers always eventually reduces down to some highly personalized interpretation of easy.

It is more impactful than it sounds because it implicitly points to the distinction of ultimate goal: the selfish developer or the product they are developing. It is also important to point out that before software frameworks were a thing, the term framework just identified a defined set of overlapping abstract business principles to achieve a desired state. Software frameworks, on the other hand, provide a library to determine a design convention rather than the desired operating state.


Yii frameworks were full of that jargon.

I had a hard time learning the whole MVC concept


* superior written communication

* leadership

* data structures

* task/project management

* performance/measurements

* data transmission techniques

Honestly, if you really have to ask the question then none of this matters because it sounds like you are already delegating your career to AI which would make this list unapproachable.


I'm interested to know why you think data structures are important. AI is pretty good at reasoning out data structures problems.

An LLM does not "reason".

This should be clear by the fact that it can solve complex math problems without understanding how to count.


Forget what AI can and cannot do. What can you do?

If you are only doing data entry into an LLM without understanding how any of this actually works then what do I need you for? I can just promote the janitor at half the cost to do your job.


Yeah, even 2 years ago you could tell it to make a service with minimal instructions and it would usually guess the right data structure.

Often better than many developers I've worked with come up with.


Do the boring parts first. I mean this seriously. You need to build personal routines of accomplishment for the more administrative tasks. Some of that can be worked in parallel with other things.

You cannot teach life skills using games unless the game is explicitly designed to terminate and delete all play data the moment user persistence degrades, and the game cannot have a pause function.

So far the official answer from the US Secretary of State, Marco Rubio is:

Israel was going to attack, and US bases and US allies would be the primary counterattack targets, so the US preemptively bombed Iran.

https://theintercept.com/2026/03/03/rubio-trump-iran-israel-...

The real answer, which came out only last week, is that everybody in Trump's administration, with the possible exception of Hegseth, strongly advised against attacking Iran regardless of what Israel said. Trump chose to attack Iran believing regime change was probable, despite all guidance from his administration that it was not possible without a ground invasion.


Holy fuck, another young person give up on life thread.

Yes, as a software developer you will still need to learn to write software. The most important skill learned from this, speaking from 20 years experience writing open source and corporate software and 30 years military, is systems of organization. Call it architecture if you want. You won’t know how to put the Lego pieces together until you have done it yourself many different times.

AI can write software for you, as can a rookie, but only an experienced developer can determine what’s crap and how to do it better according to evidence.


Of course they do. They also want us to equally dump money into the likes of Kalshi and Polymarket.

So I guess when employers force AI use by their developers those developers progress towards worthlessness in that they will produce wrong code, not know the difference, not care about the resulting harm, and finally not even try to course correct if AI is removed.

This sounds like something I have seen before: jQuery, Angular, React.

What the article misses is the consequence of destroyed persistence. Once persistence is destroyed people tend to become paranoid, hostile, and irrationally defensive to maintain access to the tool, as in addiction withdrawal.

