On the Folly of Rewarding A, While Hoping for B (1975) [pdf] (web.mit.edu)
133 points by Someone on June 18, 2020 | 48 comments


I find this article... strange. Near the beginning, it talks about widespread mutiny in Vietnam, and attributes it to the reward system: "What did the man at the bottom want? To go home. And when did he get to go home? When his tour of duty was over! This was the case whether or not the war was won."

So, in other words, the war was lost because management did not properly align incentives for soldiers with desired outcomes?

I find the following article more persuasive:

https://libcom.org/history/1961-1973-gi-resistance-in-the-vi...

It identifies three important factors for the same phenomenon:

-- First, that new recruits discovered immediately that their recruiters had lied to them: "Guarantees of special training and choice assignments were simply swept away."

-- Secondly, for the many Black GIs, a growing consciousness of their own oppression within American society.

-- And, finally, the sheer futility and meaninglessness of the war itself: "a seemingly endless ground war against an often invisible enemy, with the mass of people often openly hostile, in support of a government both unpopular and corrupt."

In my opinion, the article overstates the importance of management. I work as a tenured professor at a research university -- another profession discussed in the article. Frankly, there are few rewards or disincentives for behavior of any sort -- at least not of the sort that come from my bosses. And yet I do a good job anyway, for the simple reason that I believe my job is worth doing well.


> I work as a tenured professor at a research university

Isn't the reward tenure itself? Earning it requires you to put in enough effort for your department to approve you. Behavior that would prevent approval is implicitly discouraged (certain views held, academic or not, or certain pedagogical approaches), as are failures to meet the requirements (publishing, funding brought in, student feedback, the favor of your peers), which vary by department and field.

> And yet I do a good job anyway, for the simple reason that I believe my job is worth doing well.

Once tenured, you've already been selected for the traits that are desired in the role and proven your ability to execute them well.


> Once tenured, you've already been selected for the traits that are desired in the role and proven your ability to execute them well.

I agree that the OP has a bit of self-selection bias. Similar to the “why don’t you go to college and get a real job? I did that, so can you” attitude that internalizes the success while ignoring external factors.


I don't think OP is saying "I got tenure, why can't you go through the same process" or anything. I think they're saying "I got tenure, and having been tenured I continue to do good work because of intrinsic motivation, not management-driven incentives; I think we should extend this level of autonomy to everyone and get rid of the barriers to having a tenure-style job."


I am a bit confused; what is your criticism?

My only real point is that my job performance is mostly independent from what my superiors incentivize or hope to see.

Imagining myself as a soldier in WWII, I would like to think that I would have been motivated less by whatever my commanders incentivized, and more by the prospect of beating Hitler. Whereas, in Vietnam, I can't see any motivating large-scale goal or purpose. I think that Art Hoppe summed up the prevailing sentiment in his 1971 newspaper column: "I don't give a damn anymore who wins the day."

That said, hindsight is 20-20, and my argument is totally hypothetical: I have never been in any situation even remotely similar to war.


It's certainly true that incentives matter a lot before tenure. Unfortunately, there are some tenured faculty that accomplish little.

I, and some of my colleagues, have pushed for there to be more incentives and disincentives, mostly without success. But we continue to work hard anyway.


> So, in other words, the war was lost because management did not properly align incentives for soldiers with desired outcomes?

The article doesn't posit any reasons why that war was lost, only that, because of the behavior actually rewarded, the goals pursued by the mutinous soldiers were not aligned with the administration's goal of winning.

It could be that the war was intractable regardless, but widespread mutiny hardly contributes to victory.


More recently, this concept was linked from Lesswrong/SlateStarCodex, https://www.lesswrong.com/posts/n86TNJS8k2Shd84kn/the-asshol... , which I would recommend skipping in favor of summaries given it wanders and you can only read the same word over again so many times.

The basic dynamic is: if you advertise a rule and then don't enforce it, you in effect reward people who transgress the rule and directly discourage people who obey it; then you wonder why you are "surrounded by assholes," when what you've done is set up a filter that only assholes manage to pass through. I think of this as a variant of the Hock Principle: "Simple, clear purpose and principles give rise to complex and intelligent behavior. Complex rules and regulations give rise to simple and stupid behavior."

More than a post-hoc explanation, it can predict a pattern: how large organizations and bureaucracies end up with clusters of terrible people at the top, how an individual may develop a pattern of being subject to abusive relationships, how some social programs sustain the very problem they are mandated to mitigate, how corruption spreads in institutions, how some people always seem to end up in terrible jobs, and even how discussion forums turn toxic.

It comes down to the maxim, "what you reward, you get." If you do not set boundaries and then enforce the boundaries you set, the only people who will pass through them will necessarily be the kind of people who don't respect you or your boundaries. Set boundaries for yourself and how you relate to the world and others, and be absolutely aware that when you choose to relate without them, you are rewarding people for not respecting you. Do the same in your companies, products, services, and teams. If you have ever worked with someone who was an asshole and wondered why and how they were able to succeed, it is because the people around the person let it work for them.


I would add to your statement

> If you advertise a rule and don't enforce it, you in effect reward people who transgress the rule... and then wonder why you are surrounded by jerks

that transgressing the rule needs to provide some sort of competitive advantage for the selection to work. Some people may transgress a dress code but I am doubtful that sort of thing will actively help them in the workplace.

It may be that a culture lacking enforcement of minor rules results in more people feeling free to transgress rules that actively benefit them, but that is a second-order effect.


> It may be that a culture lacking enforcement of minor rules results in more people feeling free to transgress rules that actively benefit them, but that is a second-order effect.

Not just that, but, for example, it will leave a bad taste in the abiding workers' mouths: "I come here every day and respect the dress code, and that lazy bastard does not?"

So don't make rules that take much effort to enforce, or leave more room for interpretation ("common sense") on the small things.

I don't know the answer, but a lot of minor things make one big bad thing.


> Not just that, but, for example, it will leave a bad taste in the abiding workers' mouths.

Or, in other words, there is a competitive advantage here: obeying the rule is a hassle. Following a dress code is more annoying than wearing whatever one is comfortable in. The abiding workers are rightfully angry, because they're paying a price, however minor, for apparently no reason.


IMHO your summary obscured the link's point as much as it clarified it, because when you said "if you advertise a rule, and then don't enforce it" the context I thought of was "Employee car park speed limit 10mph" and the rest of the post made no sense in that light.

The example given in the link - asking people to e-mail a slow-to-respond team instead of their fast-to-respond boss - makes the idea of a filter that means the boss only talks to rulebreakers a lot more logical.

I do agree the article uses far too many words to say this though...


Charles Munger says it really well: "Show me the incentive, and I'll show you the outcome."


Along the same lines, one of my biggest pet peeves is rewarding based on some Metric C, used not because it actually correlates with your goals, but because it is easy to measure and yields a concrete number. An example would be using lines of code written per day as a measure of developer productivity.


My current favourite Metric C:

"If you feel like showing off, average everything into everything else and call it the Gross Index of Total Enemy Morale. This won't fool anyone who knows the propaganda business, and you won't be able to do anything with or about it, but you can hang it on a month-by-month chart in the front office, where visitors can be impressed at getting in on a military secret. (Incidentally, if some smart enemy agent sees it and reports it back, enemy intelligence experts will go mad trying to figure out just how you got that figure. It's like the old joke that the average American is ten-elevenths White, 52% female, and always slightly pregnant.)"


Or of course Measure D, which just so happens to be high for yourself and people you like (or is easy to optimize for you / them).


Not to mention that once metrics become targets, they cease being useful as metrics, because they're apt to be gamed.

It seems you simply can't win when you need to guard against mindlessly selfish behaviour.



Protip: instead of saying which metrics you use, you could use an opaque evaluation system that seems arbitrary to its subjects.


Sure! And have them all report each other for thought crimes and maintain secret files on everyone.

Either that or let AI sort it all out.


I enjoyed this read. It reminds me of a few things I've observed recently:

* If you give a little, you will be subject to increased disapproval and if you don't care at all, you'll be subject to distant tut-tutting. Examples: the tech industry faces widespread condemnation for its gender ratio while hedge funds face little - only 1 in every 9 senior hedgies is female[0]. I'm not complaining about the gender ratio condemnation, merely commenting on the difference. Another example is that Bernie Sanders's speech was interrupted by Black Lives Matter protestors. Donald Trump's wasn't.[1]

* Internet commenters frequently remark on how they would pay for good news, yet they prefer reading free low-quality news sources over paid high-quality news sources. Similarly with clickbait vs. informative writing: they share the former, they comment on the former, and they enjoy the former. Effectively, they pay only for the former.

* Voters and plans (brought up in the article) are my favourite. You cannot attack someone who offers no plan, but every plan has flaws. Therefore, offering a plan is disincentivized while offering mere ideas is incentivized, by the very people who claim the opposite. I'm glad this was example 1 in the article.

* Internet commenters will frequently be unforgiving of corrected errors. They reward hiding errors and punish honest corrections; no apology is sufficient. Often this manifests as upvotes or likes for those who stick brutishly to their positions, while those who are honestly convinced are merely abandoned to indifference. Strictly, this is not only Internet commenters. The fact that the Red Cross's having to stop giving away free doughnuts resulted in animosity towards it is a lesson for future players: do not give anything away.

* YC's ill-fated attempt at crowd-sourcing a funding target was another amusing result. They intended to support something that they wouldn't have heard of, but they rewarded popularity, which resulted in their having to fund something they'd heard of a million times.

0: https://docs.preqin.com/reports/Preqin-Special-Report-Women-...

1: https://time.com/3989917/black-lives-matter-protest-bernie-s...


A related blog post calls this the Copenhagen interpretation of ethics (your ethical burden increases by interacting with the problem in any way, even positively): https://blog.jaibot.com/the-copenhagen-interpretation-of-eth...

There are also other analogies:

At work, if you touch something, you are now eternally responsible for maintaining it, be it code, process, or equipment. Changed the toner in the printer? Great, you can now do it all the time: "but you already know how to do it, it will just take a minute for you." The best tactic at work is to be really competent at the main job, pushing all your energy into furthering your skills, experience, and recognition there, while dodging all the crap work by appearing badly suited to it. You want your colleagues to say, "ah, he's a good guy, really competent, but this type of work is really not his kind; he'd forget it, mess it up, or something." Basically the goofy distracted-professor meme: great at focusing on the specialty we pay him for, but not someone you can task with everyday mundane shit tasks.

On a larger scale, big tribal feuds are often among groups with small differences, like Christian denominations and wars over dogmatic debates on the details.


I'm reminded of the classic Emo Phillips joke:

> Once I saw this guy on a bridge about to jump. I said, Don’t do it! He said, Nobody loves me. I said, God loves you. Do you believe in God?

> He said, Yes. I said, Are you a Christian or a Jew? He said, A Christian. I said, Me, too! Protestant or Catholic? He said, Protestant. I said, Me, too! What denomination? He said, Baptist. I said, Me, too! Northern Baptist or Southern Baptist? He said, Northern Baptist. I said, Me, too! Northern Conservative Baptist or Northern Liberal Baptist?

> He said, Northern Conservative Baptist. I said, Me, too! Northern Conservative Baptist Great Lakes Region, or Northern Conservative Baptist Eastern Region? He said, Northern Conservative Baptist Great Lakes Region. I said, Me, too!

> Northern Conservative Baptist Great Lakes Region Council of 1879, or Northern Conservative Baptist Great Lakes Region Council of 1912? He said, Northern Conservative Baptist Great Lakes Region Council of 1912. I said, Die, heathen! and I pushed him over.


> On a larger scale, big tribal feuds are often among groups with small differences, like Christian denominations and wars over dogmatic debates on the details.

Did you have a big Christian war in mind which was specifically about dogmatic differences, and not primarily about monarchs exerting their power over some group or another?


Some that come to mind

The Byzantine iconoclastic controversy (8th-9th centuries), which weakened the Empire.

Some bits of the ECW (the Bishops' Wars) were religious.

The Northern Crusades (much more savage than the eastern ones).

The Hussite Wars


I think the modern version is Islamic: you see many Sunni nations more willing to work with Israel than with Shia Iran.

Christianity is not an explicit part of any modern state outside of the Vatican that I'm aware of (with a small technical exception for Britain's monarchy).


Mount Athos has special autonomous status within the Greek Republic; the Orthodox equivalent of Vatican City for the Catholic world.


(Not the OP)

How about the Albigensian crusade?

https://en.wikipedia.org/wiki/Albigensian_Crusade

Also, possibly, sectarian violence in Ireland.


Ireland's feuds aren't sectarian, they're ethno-nationalist. We call them "Protestants" and "Catholics," but really what those names mean are "the group in Ulster and especially in Belfast, whose ancestors colonized the land for England, and a few who cooperated with them" and "the rest of the country whose ancestors lost their war and had to put up with being second class citizens oppressed by the British military and police well into the twentieth century."

And even if the British monarch is a figurehead today, the legacy of violence goes back to the Tudors.


>> The best tactic at work is to be really competent in the main job and push all energy into furthering the skills and experience and recognition in that and dodging all the crap work by appearing badly suited to them.

You can just say "no" and avoid all this. I figured that out as a junior programmer, when a middle manager asked me to create a graphic on a day when everyone else, including the whole design team, was at a hackathon (I was stuck working because I was a junior programmer).

It wasn't even a thing I had to think very much about. I was so annoyed to be asked to do work nobody hired me to do that there was no doubt in my mind about the correct response. I don't think it even crossed my mind to do the job but bodge it. That would mean I'd actually have to, you know, do the job.

So while I recognise the situation -- that once you pick up a task you're stuck with it forever, like it or not -- I disagree with your tactic. Doing work you don't want to do, and doing it badly, is just a waste of your time and should be avoided.


No good deed goes unpunished!


> Internet commenters frequently remark on how they would pay for good news...

This is a classic observation in the real world, and well known to marketers.

What people tell you they would buy and what they actually buy are two different things, and we observe that everywhere in the marketplace.

Actions speak louder than words. Measure actions.


This is precisely the problem I have with how one is supposed to approach looking for product-market fit in a startup: contact customers and ask "would you pay for this? how much? if not, what would make it so you'd pay for it?" I don't feel like that's a good way to find things for customers to buy, but it's the advice from the VCs.


Perhaps this can inspire https://www.pretotyping.org/


Is there an article somewhere describing the last point (YC crowd-sourcing)? Sounds like an interesting read, but I can't find anything with just these terms.


Side note: I like how you use the term "Internet commenters". It is a much better and more neutral term than most I've seen, like "XY-crowd", "hivemind", "the Internet", and so on.


[flagged]


Since you are right that I have no idea what you are talking about, could you explain what you are talking about?


I stumbled on this paper very early on in my career and it significantly shifted my attitudes toward work, politics, and society. In short: people follow incentives.

Of course, the suggestion of "altering the reward system" breaks down a bit in the presence of Goodhart's law, but it's still very common to see (especially in large or unusually bureaucratic organizations) policies which create incentives completely at odds with the stated goal or metric. One of my favorite examples is the "budget game" many companies play, where each department/division/etc. puts forward a proposed set of projects and budget; the corporate planners then allocate the overall corporate budget. Departments obviously "sand-bag" and ask for more than they think they need (knowing they won't get everything they ask for); there's often a "use it or lose it" policy too, which leads to what I can only describe as an orgy of spending at the end of the fiscal year. This can be great fun for vendors, consultants, and so on!


> people follow incentives

Maybe the less they care about the company's overall goals and vision, the more they follow KPIs and bonus incentives. When there seems to be no overall purpose and meaning anyway (except making rich people richer?), then why not game the system?

But there can also be intrinsic motivation: just "unselfishly" doing what seems good for the organization, if one likes its goal and reasons for existing.

Edit: You wrote: "significantly shifted my attitudes toward work, politics"

I wonder, how did this change how you thereafter did things at the workplace? (And politics somehow?)


It just made me more cognizant that people respond to incentives rather than intentions. It's not enough to propose new "corporate values" or state that a policy proposal must be passed because it "addresses" some problem - show me how the proposed change incentivizes the desired behavior. Are there perverse incentives? Principal-agent problems? And so on.

Make no mistake, I think people are "intrinsically motivated" too, at least to a point. I'd like to believe I am. But it's hard to deny that people (myself included) do things for some reason, some gain - even if that gain is something intangible like "a feeling that I'm doing good" or "perceived status in my organization".


> people respond to incentives rather than intentions

I like that way of phrasing it. I suppose it's good to keep in mind in ... all? parts of life. E.g. also if raising children?

> Are there perverse incentives? Principal-agent problems?

If I set up incentives someday in the future, I'd talk with the people involved and describe the overall goals and intentions. Then all of us together could try to figure out problems with the incentives.

(Of course, in some rare cases, some people might not want to do that in an honest way.)

This was interesting to read, "Principal-agent problems" was a new phrase to me: https://en.wikipedia.org/wiki/Principal%E2%80%93agent_proble...

> I think people are "intrinsically motivated" too [...] do things for some reason [...] intangible like "a feeling that I'm doing good"

Yes, totally agree


Totally happening in the corporate world: rewarding an employee mentality while hoping for a non-employee mentality.


There's a strange point that is made on page 771. Toward the bottom of the page, it says:

Such a conclusion would be wrong.²

The "conclusion" being that doctors want to minimize both false positives and false negatives.

The footnote is:

²In one study (4) of 14,867 films for signs of tuberculosis, 1,216 positive readings turned out to be clinically negative; only 24 negative readings proved clinically active, a ratio of 50 to 1.

Can someone explain to me why the ratio matters? Surely the ratio would also depend on the actual incidence of tuberculosis in the population, and would not be determined solely by doctors' choices, right?


The ratio matters because the second type of error is significantly worse; i.e., it is much worse to have tuberculosis and be told you do not have it. They are looking at Type I vs. Type II error.

I think they are saying three things: 1) real tests aren't actually trying to minimize both types of error, which is the opposite of the hypothesis "It might be natural to conclude that physicians seek to minimize both types of error"; 2) they are illustrating the difference between the types of error with a very dangerous disease (to give the reader an idea of the risk); and 3) they are demonstrating the trade-offs that occur: a Type I error is so much less risky that they are OK with 50x as much of it.


Cynically, if doctors are financially rewarded for performing treatments, doctors may prefer a test with significant false positives over one with minimal false positives. (If your knee jerks and you start saying things like 'doctors never allow financial incentives to change their medical decisions!', let me respond by saying there are many studies that say that they do, and also you can replace 'doctors' with 'health-adjacent corporations' if you like.)


If nobody at all has tuberculosis, then even a test heavily biased toward false negatives will still produce nothing but false positives.

In that situation, only two kinds of test will avoid false positives: (1) a perfectly reliable test, or (2) a broken test that never reports a positive.


This does look like a clear misinterpretation/misuse of statistics, and a blemish on the otherwise fine article.

Suppose we have a perfectly reliable test for tuberculosis, but we corrupt it into having equal false-positive and false-negative rates, as follows: we perform the perfect test, then flip a fair coin, and if it comes up "heads" we invert the test's outcome.

Then if we apply the test to a population in which the incidence of tuberculosis is low (which is actually the case), most of the false results will be false positives.

In other words, a high false positive rate in a screening test relative to a low false negative rate does not prove that the test is biased toward false positive identification.

Consider that if nobody at all has tuberculosis, the only kind of false result possible is a false positive.
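A quick simulation makes the base-rate point concrete. The numbers below are hypothetical, not taken from the study: we give a test the same per-case error rate for the sick and the healthy, run it on a low-prevalence population, and count the two kinds of error.

```python
import random

def screen(n, prevalence, error_rate, seed=0):
    """Simulate a screening test whose per-case error rate is the same
    for the sick and the healthy (a 'symmetric', unbiased test)."""
    rng = random.Random(seed)
    fp = fn = 0
    for _ in range(n):
        sick = rng.random() < prevalence
        wrong = rng.random() < error_rate
        positive = sick ^ wrong          # an error flips the true status
        if positive and not sick:
            fp += 1
        elif sick and not positive:
            fn += 1
    return fp, fn

# Hypothetical numbers: 1% prevalence, 5% symmetric error rate.
fp, fn = screen(n=100_000, prevalence=0.01, error_rate=0.05)
print(f"false positives: {fp}, false negatives: {fn}")
```

With these assumed numbers the expected counts are about 4,950 false positives versus about 50 false negatives, so a lopsided observed ratio by itself says little about which error the testers were trying harder to avoid.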


Surely making everyone come back into the office will boost productivity.



