I recall this happening as a high school kid. I couldn't tell from the arguments who was right or wrong, so I wrote a simulation, which was fairly easy, and proved her right. I found it interesting that writing a simple program in this case could confirm the right answer easily, whereas having a PhD in math was no guarantee that you would reason it out correctly.
I wrote a simulation as an exercise (not to convince myself, just to illuminate the logic). I recall becoming extra-convinced when I spotted an optimization that would have been paradoxical if the accepted solution hadn't been correct. I don't remember exactly what it was; I think it had something to do with collapsing nested if()s into a single if() based on p vs. (1 - p) for the original door choice.
edit: the more I think about it, the more I suspect that the actual optimization was not randomizing both the door choice and the location of the car, which then allowed simplifying the win logic. If door 0 is always considered the door with the car, then the whole loop body can be collapsed down to:
    if (rand() / (RAND_MAX / 3) == 0) noswitch_wins++;
No threat of a paradox as I was thinking before, but a fairly clear statement that the odds of winning by not switching are 1 in 3.
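To make the collapse concrete, here's a minimal C sketch of that idea (reconstructed from memory, and using rand() % 3 rather than the RAND_MAX division, so definitely not the original code):

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    int main(void) {
        const int TRIALS = 1000000;
        int noswitch_wins = 0;
        srand((unsigned)time(NULL));
        /* Door 0 is always the door with the car; only the contestant's
           pick is random. Not switching wins exactly when the random pick
           happens to land on door 0. */
        for (int i = 0; i < TRIALS; i++)
            if (rand() % 3 == 0) noswitch_wins++;
        printf("stay wins: %.3f, so switch wins: %.3f\n",
               (double)noswitch_wins / TRIALS,
               1.0 - (double)noswitch_wins / TRIALS);
        return 0;
    }

Under the standard rules the switcher wins exactly when the stayer loses, which is why the second number can simply be the complement of the first.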
This is actually a good example of a problem where simulation helps enormously in gaining insight. I once made a webpage [1] with both the logic and a simulation to sort out a dispute. (The debating parties each had a PhD in philosophy, implying they had completed some courses in logic – which apparently didn't help much with this specific problem.)
The key aspect, I think, is in writing the simulation oneself. At one point, I was attempting to explain the Monty Hall problem, and wrote a simulation out of frustration. It failed to convince the other person, who accused me of subverting the simulation such that it would give the answer that I wanted.
I'm wondering if gambling would be a good way to go about it. Start with $10 each, and alternate who plays the "host" of Monty Hall. One person is only allowed to switch, and another is only allowed to stay. For each correct answer, take $1 from the other person.
Hmm, maybe to sweeten the deal, the person who switches gets $2 for a win, but the person who stays gets $3. That way, it would show that even with 50% better payout, it doesn't overcome the 2x difference in odds.
Don't sweeten the deal. With the regular odds, after 12 games you will only be ahead ~61% of the time. You have to play a larger number of games to be relatively sure you win and the other accepts they must be wrong. The number of games increases by quite a bit if you sweeten the deal.
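If anyone wants to check numbers like that, a quick Monte Carlo over the betting scheme is easy to write. Here's a rough C sketch under one possible reading of the rules (alternating contestant, $1 changes hands on each correct guess, and "ahead" meaning a positive balance after N games); the exact percentage will of course depend on how you score it and on any sweetened payouts.

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    /* One session of n_games, alternating contestant. The switcher guesses
       right with probability 2/3, the stayer with probability 1/3; each
       correct guess moves $1 from the other player. Returns the switcher's
       final balance. */
    int play_session(int n_games) {
        int balance = 0;
        for (int g = 0; g < n_games; g++) {
            if (g % 2 == 0) {                       /* switcher is the contestant */
                if (rand() % 3 != 0) balance += 1;  /* switching wins 2/3 of the time */
            } else {                                /* stayer is the contestant */
                if (rand() % 3 == 0) balance -= 1;  /* staying wins 1/3 of the time */
            }
        }
        return balance;
    }

    int main(void) {
        const int SESSIONS = 100000, N_GAMES = 12;
        int ahead = 0;
        srand((unsigned)time(NULL));
        for (int s = 0; s < SESSIONS; s++)
            if (play_session(N_GAMES) > 0) ahead++;
        printf("switcher ahead after %d games in %.1f%% of sessions\n",
               N_GAMES, 100.0 * ahead / SESSIONS);
        return 0;
    }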
This is certainly true. If you're able to do the simulation on your own, you can also play with parameters, remodel the problem, etc.
Another problem which is best explained by simulation is the random distribution of wealth [1]. (I played a bit with this one, introducing further parameters like liminal damage, debt, asymmetric amounts depending on investors' trust, etc., to see how basic economic assumptions would reshape the model. Answer: only for the worse.)
I don't think I'm a sociopath, but I often use Monte Carlo or agent-based simulations to help me understand what happens, because writing and running the code is simpler (for me) than closed-form analytic solutions. If I then need to convince others, I guess I can do the closed-form solution (based on the prior intuition), but since I don't really care whether anyone agrees with me or not (I go develop 'engines' in my corner of "the system"), I guess their opinion is immaterial to my/group/company progress?
Do you think that agent-based computational simulations (say of micro-economics bubbling into creating emergent macro-economics) are ever used in debates?
And I have written a simulation which proves her wrong; it doesn't matter. I mean, when we deal with ambiguous questions like this, a simulation doesn't really help, since you had to add information not originally stated in order to write it. It just happens that most people make assumptions which align with this answer, but you could just as well make other assumptions and get another answer.
I recall this happening; in fact, I thought she was wrong when I read the original article.
After many years, I finally understood where I made the mistake. It's not stated explicitly in the wording of the problem, but once you point it out, nearly everybody changes their mind:
When Monty Hall opens a door, he has to pick one of the doors you didn't choose, and he knows what's behind it.
The way the problem was described, I didn't appreciate that.
It wasn't until one of the groups that wrote in to support vos Savant explicitly said, "since some of the students in the class were skeptical, we wrote a simulation of the problem, which output the probabilities you would expect if vos Savant was right," that it clicked. I inspected the source code and, voilà, instantly understood my false assumption about the problem description.
It's an open question whether the Monty Hall problem should explicitly state the implication of which door Monty opens. To me, the whole point is that it leaves one of the consequences unstated, and the reader is expected to make that indirect inference.
>Suppose you're on a game show, and you're given the choice of three doors: Behind one door is a car; behind the others, goats. You pick a door, say No. 1, and the host, who knows what's behind the doors, opens another door, say No. 3, which has a goat. He then says to you, "Do you want to pick door No. 2?" Is it to your advantage to switch your choice?
There's actually some ambiguity here (I'm sorry, I'm a really pedantic person). What it doesn't say is that he always picks a door with a goat. What it says is that in a single trial, he picked a door with a goat. I assumed that in half the trials, he'd pick the door without a goat. I see now that was a wrong (and dumb) assumption.
This is why, rather than reading text, I think people should describe these problems with code. Unambiguous code. With test cases. Then everybody can inspect the unambiguous code rather than having to parse human text.
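In that spirit, here's roughly what the rules-as-code might look like in C, with the two usually unstated assumptions (the host always opens a door, and it's never the player's door or the car) baked in as assertions. The structure and names are mine, not from any canonical statement of the problem.

    #include <assert.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    /* The host's rule, written down instead of implied: he always opens
       exactly one door, never the player's door, never the car. When both
       non-picked doors are goats he just opens the lower-numbered one,
       which doesn't change the stay/switch odds. */
    int host_opens(int car, int pick) {
        for (int d = 0; d < 3; d++)
            if (d != car && d != pick) return d;
        return -1;  /* unreachable: such a door always exists */
    }

    /* The door you end up with if you switch. */
    int switch_to(int pick, int opened) {
        for (int d = 0; d < 3; d++)
            if (d != pick && d != opened) return d;
        return -1;  /* unreachable */
    }

    int main(void) {
        const int TRIALS = 1000000;
        int stay_wins = 0, switch_wins = 0;
        srand((unsigned)time(NULL));
        for (int i = 0; i < TRIALS; i++) {
            int car = rand() % 3, pick = rand() % 3;
            int opened = host_opens(car, pick);
            /* the "test cases": the rules hold on every single trial */
            assert(opened != pick && opened != car);
            if (pick == car) stay_wins++;
            if (switch_to(pick, opened) == car) switch_wins++;
        }
        printf("stay: %.3f   switch: %.3f\n",
               (double)stay_wins / TRIALS, (double)switch_wins / TRIALS);
        return 0;
    }

The contested assumptions live entirely in host_opens(), so anyone who disagrees with them can edit one function and rerun.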
Since there's ambiguity, you should assign prior probabilities to the two types of behavior Monty can have. Then the overall probabilities are the average of the probabilities for each of the possible cases, weighted by this prior. So if you think there's a 1/2 chance Monty always reveals a goat (in which case switching wins 2/3 of the time) and a 1/2 chance Monty picks randomly (in which case switching wins 1/2 the time), then your overall chance of winning if you switch is 1/2×2/3 + 1/2×1/2 = 7/12. So you should still switch.
Indeed you should switch if you think there's any chance at all that Monty always opens doors with goats. (Unless of course you think that there's also a chance that Monty always reveals cars whenever he can, but that wouldn't make for very good TV.)
We are told that the host knows what's behind the other doors, which implies that the door he opens is deliberately chosen and is not random.
Additionally, to open the door with the prize would terminate the game, so to open the door with the goat is the only action that makes sense given what we know.
I agree, those are reasonable interpretations made by somebody who has done a close reading of the text.
Or you know, somebody could just write a computer program that described the rules unambiguously so everybody could inspect them and not have to make reasonable interpretations.
So, in retrospect that's obvious, but it's also not stated. I just assumed half the time he opened a prize door (I had watched the TV show a few times and... they did weird shit).
The letters quoted in the article are frustrating to read. It doesn't really matter that she was right, people shouldn't have sent those letters even if she was wrong. Especially in probability and statistics, but also in pretty much every area of knowledge, it is common for human beings to hold strong beliefs that are completely wrong or seriously misguided, and we all need to recognize that weakness in ourselves.
I think the world would be a much better place if we all dropped our egos and approached these sorts of disagreements with kindness and benefit of the doubt. Even if we're confident that we know better than the other person, we should approach it as a genuine attempt to understand and clear up the misconception as a way of improving the other person (and be willing to admit that the "misconception" may end up being true after all, as in a case like this). As far as I can think of, condescension is just some obnoxious human habit and doesn't actually provide any value.
In presentations/workshops I now find vanilla-flavoured Monty Hall to be of no real value - nearly everyone says "SWITCH!!" without thinking. So I offer this variant.
I have 10 cups, and under one is a prize. I get a victim - ahem - volunteer to select two, and place markers on them.
I then say clearly that I will remove 6 of the other cups and, to retain the mystery, will not reveal the prize. There are now four cups, the prize is under one, and two of them are the volunteer's original choice.
And now the offer: They may retain their original two, or they may surrender their two and select only one of the other, currently unselected two.
So:
* What would you do, and why?
* What interesting further variant can you think of?
There's a 20% chance that the volunteer was initially correct. In the 80% case, one of the two remaining cups has the prize beneath it, so choosing one of the remaining cups gets you a 40% chance of payout instead of your initial 20%. Switching doubles your chance of payout.
This should be straightforward to anyone who actually understands the Monty Hall problem.
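Here's a quick sanity check of those numbers, under my reading of the rules (the six removed cups never include the prize, and the switcher picks one of the two unmarked survivors at random):

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    int main(void) {
        const int TRIALS = 1000000;
        int keep_wins = 0, switch_wins = 0;
        srand((unsigned)time(NULL));
        for (int i = 0; i < TRIALS; i++) {
            int prize = rand() % 10;    /* cups 0..9; the volunteer marks cups 0 and 1 */
            /* The host removes 6 unmarked cups, never the prize, leaving the two
               marked cups plus two unmarked survivors. If the prize is unmarked,
               it is always among the survivors. */
            if (prize == 0 || prize == 1) {
                keep_wins++;            /* keeping both marked cups wins */
            } else {
                if (rand() % 2 == 0)    /* switcher picks one of the two survivors */
                    switch_wins++;
            }
        }
        printf("keep both: %.3f   switch to one: %.3f\n",
               (double)keep_wins / TRIALS, (double)switch_wins / TRIALS);
        return 0;
    }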
Personally, I find that completely uninformative. Others find it helpful and I won't deny their experience, but I find it no more convincing than proper arguments about the 3-door version.
My variant was not intended to assist explaining the original, it's a variant intended to make people think again. It often exposes non-understanding in those who will blindly and automatically say "Switch".
Here's another variant. 12 doors, I let you choose 3. I then open 4, leaving your chosen 3 and 5 others.
I let you keep your original choice, or give them up to choose only one of the others. Should you switch now?
I can't understand how you can quote my text and still ask that question. Seriously, given that I say I won't reveal the prize, how could anyone believe that I would be removing 6 cups at random?
> I then say clearly that I will remove 6 of the other cups. And, to retain the mystery, I will not reveal the prize.
It wasn't a criticism.
I just wanted to draw attention to the point that in the random game (i.e. the player marks two, you reveal 6 random unmarked cups, and if no prize is revealed the player decides whether to switch to one or keep two) it is the better strategy to never switch.
Do you think it would be an interesting variant to play the game like that?
> You say you don’t reveal the prize, but one can remove six cups at random without revealing their contents.
One can, but one cannot guarantee it.
Actually, one can. One can choose 6 at random from the ones that are neither selected nor hiding the prize.
My point is that my phrasing is, I believe, sufficient to allow the listener to determine that I will turn six cups without revealing the prize. In particular I say that I will remove 6 of the other cups and ... will not reveal the prize.
I honestly don't see how that can be interpreted in any way other than to say that I will deliberately not turn a cup that reveals the prize.
I think I should still switch, and for the same reason as in the original problem. The fact that you chose not to remove those two cups makes it more likely that one of them contains the prize.
My intuition would be that the evidence in favour of the 5, given by the fact you chose not to reveal them, doesn't come close to outweighing the 3-to-1 advantage granted by sticking.
Let me check with the actual calculations. If I stick I have a 3/12 chance of winning. If I switch and I originally did pick the winning cup then I always lose, and if I didn't then I win with probability 1/5. So my overall probability of winning if I switch is 3/12×0 + 9/12×1/5. So sticking gets me 1/4 and switching gets me 3/20. Sticking is better, but by a smaller margin than I expected.
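And here's a quick simulation of the same setup, in case my arithmetic slipped (a rough sketch; it shortcuts the host's choice of which goat doors to open, which is fine because the switcher picks uniformly among the five survivors either way):

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    int main(void) {
        const int TRIALS = 1000000;
        int stick_wins = 0, switch_wins = 0;
        srand((unsigned)time(NULL));
        for (int i = 0; i < TRIALS; i++) {
            int prize = rand() % 12;    /* doors 0..11; the player holds 0, 1, 2 */
            if (prize <= 2) {
                stick_wins++;           /* sticking keeps all three chosen doors */
            } else {
                /* The host opens 4 goat doors among the other 9, never the prize,
                   so the prize is one of the 5 unopened non-chosen doors, and a
                   uniform random switch hits it with probability 1/5. */
                if (rand() % 5 == 0) switch_wins++;
            }
        }
        printf("stick: %.4f (expect 0.25)   switch: %.4f (expect 0.15)\n",
               (double)stick_wins / TRIALS, (double)switch_wins / TRIALS);
        return 0;
    }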
You need spaces around the splats, otherwise they are taken as formatting. I assume you mean 1/2 * 1/3 + 1/2 * 2/3 = 1/6 + 2/6 = 3/6 = 1/2. You can also reformat that as:

    1/2 * 1/3 + 1/2 * 2/3 = 1/6 + 2/6 = 3/6 = 1/2
I wasn't explaining, I was just talking about the formatting which was originally wrong, and is now corrected.
And sure you can find it weird, lots of people do, but the reply by DoctorOetker was trying to point out reasoning which, when properly internalised, can make it feel less weird. Some people - possibly you included - never lose the sense of weirdness. I have.
But in truth, sometimes we never really understand things, we just get used to them. For me, maybe this is one of them.
I'm not annoyed at all and added the smiley to try and soften my post. I got the Monty puzzle straight away (brought to my attention by Cecil Adams when I followed the SDMB) but I find it interesting to understand why so many intelligent folk get it wrong.
As for understanding: the Stack Exchange question on whether it's a coincidence that the value of g is approximately pi squared is one of these things I bear in mind whenever I think I understand something. :)
Edit: and if I'm not mistaken, you once spent far more time than I would have the patience for explaining how this worked to a rather offensive young person. I admired your patience there. It was more than I had and had better results.
I think people get it wrong because they assume that with two options the odds are the same, and you can't know anything to make it otherwise. That's why it can help to explain about where the knowledge is.
One reason why g is close to pi^2 is related to the original idea to define the metre as being the pendulum length required to give a 1 second half-tick. If that's your definition of the metre then g is exactly pi^2. So in some sense it's not entirely coincidence. I should go and find the stack exchange discussion, but I don't have time just now.
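If it helps, the algebra is short: the small-angle pendulum period is T = 2*pi*sqrt(L/g), a 1-second half-tick means T = 2 s, and defining the metre as that pendulum's length (L = 1 m) forces g = 4*pi^2*L/T^2 = pi^2 m/s^2 -- exactly, at least within that idealised small-swing definition.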
And thank you for the compliment about my patience - I appreciate it.
You can distill the disconnect down to one question.
Does Monty know where the car is? (The original article says he does, but this often gets lost in the version people read.)
Suppose Monty doesn't, and he opens door #3 to reveal a goat only because it wasn't the door you picked (and it was merely luck the car wasn't there). In that case, it is genuinely a 50/50 shot between the remaining two.
Now if Monty did know where the car was, and he wouldn't have opened door #3 if the car had been there, then the 2/3 percentage to switch is intact.
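Here's a rough simulation of exactly that distinction (my own sketch, nothing canonical): the same game played against a host who knows, and against a host who opens a random non-picked door, counting only the runs where the random host happened to show a goat.

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    int main(void) {
        const int TRIALS = 1000000;
        int know_switch_wins = 0;
        int blind_switch_wins = 0, blind_goat_shown = 0;
        srand((unsigned)time(NULL));
        for (int i = 0; i < TRIALS; i++) {
            int car = rand() % 3, pick = rand() % 3;

            /* Knowing Monty: he always opens a non-picked goat door,
               so switching wins exactly when the first pick was wrong. */
            if (pick != car) know_switch_wins++;

            /* Ignorant Monty: he opens a random non-picked door; keep only
               the trials where he happened to reveal a goat. */
            int opened;
            do { opened = rand() % 3; } while (opened == pick);
            if (opened != car) {
                blind_goat_shown++;
                int other = 3 - pick - opened;    /* the remaining closed door */
                if (other == car) blind_switch_wins++;
            }
        }
        printf("knowing host:  switch wins %.3f\n",
               (double)know_switch_wins / TRIALS);
        printf("ignorant host: switch wins %.3f (given a goat was shown)\n",
               (double)blind_switch_wins / blind_goat_shown);
        return 0;
    }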
To many this seems like a minor (or incorrect) distinction but it's little assumptions (Monty knows) that underpin these gotcha questions. That's one reason why Google-style interview questions irritate me so much. In many of them, there's an implicit assumption that is necessary but never stated.
Many of the responses Marilyn got seemed to come from this form of irritation, even if some of the people writing in couldn't express a logical or justifiable basis for their irritation.
It's an interesting puzzle, but it's too easily rephrased like a con.
I didn't think what you say is true, but it is. Here's what misled me:
Go back to the 100-door version. You start out by picking 1 door, so it's 99/100 that the car is behind another door. If Monty just happens to open 98 of those doors and reveal all goats, then that 99% probability that you should switch to one of them combines with new knowledge of which one to switch to. It's exceedingly rare, but if it happened, then you should still switch.
The above logic is wrong. I simulated it, and you're correct (a sketch of that kind of simulation is below).
I'm kinda astounded. I do stats for a living, yet without writing out the math, my intuition misled me. I thought I had a framing of the problem that allowed me to use a quick shortcut in my thinking, and that framing was wrong. Two takeaways:
1) Probability is really hard to get the right intuition about. Reasoning by analogy/shortcut problem framing is dangerous. You have to write out the math.
2) The "100 doors" explanation for the usual monty hall problem is correct for subtler reasons than are immediately obvious. You could probably set up a counter monty hall problem to trick people where there are 100 doors and he just happens to show 98 goats.
You are right. To make this easier to understand, imagine replacing Monty Hall with a street peddler.
You are playing a game with a street peddler. There are 3 cards; two are duds and one is the prize. He lets you pick one card. Once you've picked a card, he flips one of the other two, revealing it as a dud, and gives you the option to switch. Do you switch?
A person who has heard the naive explanation of the Monty Hall problem would say yes. A smart person would say no. Why? Because it is not in the street peddler's interest to let you win; he has mouths to feed and needs the money! So the only reason he reveals another card and asks you to switch is that you picked the right one from the beginning, so staying in this case means a 100% chance of winning, and switching a 100% chance of losing.
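To make that concrete, here's a toy simulation of such a peddler (my own sketch of the "only offers a switch when the mark picked the prize" policy):

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    int main(void) {
        const int TRIALS = 1000000;
        int offers = 0, stay_wins = 0, switch_wins = 0;
        srand((unsigned)time(NULL));
        for (int i = 0; i < TRIALS; i++) {
            int prize = rand() % 3, pick = rand() % 3;
            /* Adversarial peddler: he only flips a dud and offers the switch
               when the mark has picked the prize card; otherwise he just
               takes the money and the game ends. */
            if (pick == prize) {
                offers++;
                stay_wins++;    /* staying keeps the prize card */
                /* switching always moves onto a dud, so switch_wins stays 0 */
            }
        }
        printf("offer made in %.1f%% of games; given an offer, stay wins %.3f, "
               "switch wins %.3f\n",
               100.0 * offers / TRIALS,
               (double)stay_wins / offers, (double)switch_wins / offers);
        return 0;
    }

The mechanics are identical to Monty Hall; the only thing that changed is the host's policy, and with it what the offer itself tells you.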
For the naive Monty Hall interpretation to be correct we need two things to be true: Monty Hall always opens a door, and the thing behind the door is always a goat. If either of those is not true then the popular explanation is wrong, and if you look around, most explanations of the problem leave those facts out.
For example, in this article they forgot to say that the game host always opens a door and that when he opens it, there is always a goat behind it. This shows that the author of the article (and almost everyone here at HN) doesn't understand Monty Hall. Instead they talk like non-mathematicians: "Why would the host not open a door sometimes?" or "Why would he open a door without a goat? It said he opened a door with a goat this one time!" etc. The problem with those objections is that the description only says what happened this one time, so you can't possibly write down the event tree without making a lot of extra assumptions about the game.
I always start explaining it like this. "There's a one in three chance you guessed right on the first try."
And that just doesn't change, even if Monty Hall opens a door.
So not only is there a two-in-three chance you guessed wrong on the first try; Monty is also helpfully offering you a chance to switch to the correct/winning choice in the second round.
But Monty's only being helpful if I guessed wrong. If it actually was behind door #1, Monty's trying to entice me into picking a goat, and basically being a dick. This is also assuming he could've just opened door #1 right away if it was the correct answer and obviated this whole mess.
Monty's free will (an unknown) underpins this entire argument- what choices does he have, and why does he even make me choose a door instead of giving me the car (and the goats) outright?
Maybe from a statistical point of view it's better to have Monty pull this shtick, but something that's helpful 2/3 of the time and harmful 1/3 of the time doesn't make the cutoff for helpful.
I feel like you're still missing something. The probabilities come out of a table. It's just math. There's no need to bring "free will" into the explanation.
No, it is you who are wrong. You can't write down a table if you don't have a proper explanation of the possible event chains, and in this case you don't. Instead, when people write down these event chains, they add information that isn't there and then think they've actually solved the problem. But all you really know is that this one time the host opened a door with a goat; you have no idea if the host always does this, since the problem statement doesn't actually say it outright. It might be a good assumption to make, but it isn't in the description, so it isn't a mathematically rigorous answer.
But once you do have a proper explanation of the possible event chains, free will still doesn't play a part of it. It's a game with rules. You seem to be keyed in on some aspect of the rules being unclear, but even with the rules perfectly clear or not clear at all, the probabilities are the same regardless of the psychological state of the contestant.
If Monty would open the door where the car is, the decision whether to switch wouldn't be much of a gamble and there wouldn't be much entertainment in this part of the game show. So of course Monty knows where the car is. There is nothing mysterious or unclear about that.
This is just a hard question that is hard to get right and there is no shame in getting it wrong if you have to come up with the answer from scratch, never having heard such a puzzle before. There is no shame in not 'getting' it either. There is shame in allowing not 'getting' it to determine your conclusions.
Your initial guess (which had 1/3 probability) would have had to be right to make it not correct (or not the right choice) to move over to the other door. So moving to the other side (essentially you are just moving over to the 2/3 "probability block"; you were originally on the 1/3 "probability block") inherently moves your odds over to 2/3. Staying only makes sense if you think you guessed correctly initially, and what are those odds? Well, those odds are 33%! And what are the odds of anything other than that initial guess (e.g. moving over)? Well, those odds would be 2/3!
This is the only way that I was able to wrap my brain around it. We actually did it by just doing an A B C guess three times in a row. My wife picked a letter. Then I picked a letter. It worked three times in a row, because I never initially picked the letter she did. So three times in a row, me moving over to the remaining letter (after the letter that neither of us picked had been eliminated; obviously we kind of got lucky in me not picking her letter in three tries) ended up, of course, being the letter she had picked. Moving over doesn't work only when you actually choose correctly initially, which would only be 1/3 of the time. So again, moving over makes your odds 2/3.
It's brilliantly simple really, yet extremely difficult to get to a method of actually understanding it, and this is the best way I've found.
Ignore the fact that it was the Monty Hall problem- everyone on HN has played that topic back and forth ad nauseam.
But look at the real story here: thousands of men going out of their way to write letters- not quick 5 minute emails, but paper letters with envelopes and stamps- to tell a woman she was wrong. If it had been a man writing the article, would it have gotten the same reaction? I doubt it.
In my view, this is the crux of the gender problems we see in tech today. Certainly not as strong (one hopes), but certainly the same weird psychological problem that so many of us seem to have to some degree: the need for men to tell women when they're wrong, but not to care when men are. 'GamerGate' and the various witch hunts around that topic are a great example. It's fine if a man sucks at his job, but if a woman does, and it's a role that society has labeled 'for men', suddenly it's an emotion-driven attack that must be defended vigorously.
We need to see this and watch for it if we're ever going to end it.
> But look at the real story here: thousands of men going out of their way to write letters- not quick 5 minute emails, but paper letters with envelopes and stamps- to tell a woman she was wrong. If it had been a man writing the article, would it have gotten the same reaction? I doubt it.
My experience with the internet is that people just can't help themselves to point out when somebody is wrong - man or woman. Sometimes going as far as to write what could be considered full essays complete with citations. It's just another anecdote, but I don't think it's at all obvious it's because of gender. People just like feeling smart.
Yes, but most people don't cite a man's gender or appearance as a reason for his wrongness. Just from the sampling in the article we have two good examples of criticisms a man will never see:
> Maybe women look at math problems differently than men.
We don't know the samples in the article are representative of the feedback. In fact, I would expect the authors to cherry pick the feedback for entertainment purposes.
I believe the OP is right, but providing uncontroversial evidence of that is hard. You need a thorough classification of the feedback in a number of sufficiently comparable cases, involving both men and women, to provide hard evidence. And even with hard evidence in hand the conclusions could be ignored; cf. climate change.
This isn't a debate that can be settled by rational evidence.
Have you ever seen a woman's inbox? This scientist got 10,000 pieces of hate mail. Going by what I know about the kinds of e-mails women receive on the daily, it's safe to assume a good 2000 of those messages were misogynistic.
It's just how the world works. We can all pretend like we don't have uncontroversial evidence and that we'll never know if her identity as a female really affected the response. That's pretty much the status quo. Or we can not be blind and see what's happening right in front of us.
My experience with the internet is that people just can't help themselves to point out when somebody is wrong
You can actually use this to your advantage. There are plenty of knowledgeable people who would rarely or never respond usefully to a request for assistance who are quite happy to spend time pointing out your errors (and flaws) if you post something incorrect.
I'm honestly not sure if this is a dark pattern. Is it wrong to take advantage of the negative behaviors of poorly socialized people?
I think that these sorts of situations tend to have a mix of factors, and that "people love giving lengthy explanations about how other people are wrong" and "people have an implicit bias against women, especially in areas like math" likely both were contributing factors to the backlash. It's probably impossible to know how much each one contributed, but I think it's likely that both were significant.
Ascribing this all to gender seems like too broad of a brush, and with very little evidence to back it up either.
There were likely a multitude of motivations for people to write in and correct her, but isn't it natural for us to want to take someone down a peg when they are advertised as "all that" (in this case - smartest person alive), and they seem obviously wrong about something?
This is a statistics problem that many people who have had statistics training have been wrong about.
Personally, I don't think the larger response was because of gender. Well, a couple clearly were, but there's always outliers in any group. I think the vast majority were due to her status... the smartest person in the world. That leaves people chomping at the bit to disprove it and correct any little mistake.
That exists in any field, sadly. Top draft picks are scrutinized every year in sports. Every presidential gaffe gets its own 'mightier than thou' correction article. Anecdotally, I knew a mechanic who couldn't wait to read ClickNClack to find any little mistake and claim superiority.
From the article I think I only saw one comment that involved gender - I think it’s pretty unfair to ascribe gender motivation to many of the responses (as embarrassing to read as they are).
Secondly, what is the implied norm that the author wants - you can’t disagree with someone of a different gender who has a higher IQ than you?
Yes, this was quintessentially sexist bullshit. It's so ubiquitous that there's a word for it -- mansplaining.
But it's not just about gender. Consider the racism faced by so many Black mathematicians and scientists. And Indians, such as Jagadish Chandra Bose. Even Srinivasa Ramanujan, before his genius was recognized in the UK.
But recall that Black men got human rights in the US before women did. And that corporations also got them before women did. So it's arguable that sexism runs deeper than racism.
On the other hand, I cringe at "smartest woman". What does that really mean? It's like describing some hugely multidimensional thing with a few numbers. All we know is that she did extremely well at whatever tests were used. And we also know that many such tests are culturally biased.
Anecdotally, I suspect this pattern extends all the way back to childhood. The tendency I've seen, repeatedly, is that misbehavior and horseplay is tolerated in boys, or even encouraged, whereas retribution is swift when little girls step out of line. I've found this tendency in myself, sadly. When my girls misbehave, take stupid risks, boss other kids around, tear around the house and cause general mayhem, I have to consciously skew my responses back to the mean. Not that I think the world would be a better place if we raised everyone up as misbehaving man-children, but at least I want to level the playing field somewhat for my own girls.
It wasn't just any woman, it was a threatening woman -- the "smartest" woman. They were so eager to prove her wrong, they didn't stop to even spend 5 minutes to brute force the problem. She seemed intuitively wrong, they couldn't even consider that the "world's smartest woman" saw something they, exalted PhDs, did not.
Indeed, and we still see the same behaviour in many male-oriented public forums, including this one. And of course the subsequent denial of their own sexist and misogynistic behaviour.
"It was nothing to do with her being a woman!" they will cry, ignoring the unconscious bias that led them to such an angrily disproportionate response in the first place.
Paper letters with envelopes and stamps were as ubiquitous in 1990 as email is today, and there have been plenty of articles in The Straight Dope where Cecil Adams - a fictional, but decidedly male identity - claims to have received letters telling him he's wrong "from every unemployed Ph.D. in North America, plus a few who aren’t unemployed."
Actually yes, "a man" would. This is the same damned groupthink ignorance anybody intelligent gets when dealing with the less intelligent. It's fundamentally not about getting the right answer, but preserving monkeysphere group cohesion and hierarchy.
In fact, I'd say the zeitgeist of casting everything in an identity politics narrative is in the same exact vein. It's a nice, simple, and wrong answer, but it does sort out who has committed to the group (ie religion).
I'm certainly willing to posit that women are quite often on the receiving end due to many men immediately writing them off as lower status (likely in an attempt to keep from sliding further down the scale themselves), but I'm not signing up to drink divisive kool-aid that ultimately obscures the problem.
Gamergate wasn't about telling a woman that she was wrong. It was gamers reacting to disdain and collusion from the journalists that were covering their industry.
The "Gamers are Dead" articles were authored by many different journalists, not just women. Perpetuating the narrative that it was an attack on women is exactly the type of behavior that keeps Gamergate alive.
Gamergate was most definitely about harassing women. "Ethics in games journalism" was the excuse for lengthy, focussed harassment campaigns against specific women who dared criticise videogames. If it really was about ethics in videogame journalism it wouldn't have started with a review about a free game.
Fully disagree. That review was done without the disclosure that the reviewer and reviewee were at the very least good friends. That is ethics in journalism 101.
The circling of the wagons by journalists defending this poor reporting, and actually attacking gamers, is what Gamergate was about.
Harassing women was just another game for these troublemakers. A reasonable person would have stopped and reconsidered their behaviour, but these were not reasonable people.
It was nothing to do with ethics anyway, that was just the excuse. How ethical is it to participate in a sustained campaign of harassment?
Death threats, rape threats, releasing targets' private information, threats of violence, stalking, and other such unsavoury behaviour is harassment though. And that's what Gamergate was all about.
Please do post proof of these things. You do realize all of this got investigated by the FBI and they didn't find a single credible threat. Stop listening to what the media tells you about this. That is the whole point of Gamergate.
So, technically, the World's Smartest Woman could be wrong (not in this case, because of the mathematical proof, but according to the title). Being smart doesn't mean you're always right, especially when there is no good answer with the available information.
But clearly, the responses show sexism at its finest, or worst. Partly, though, many of the men rejecting the argument probably honestly disagreed with the counterintuitive answer, but they seem to have rationalized it with their negative beliefs.
She was claiming something more fundamental (and less sophisticated) than that. Nigel Boston and Andrew Granville wrote a review [1] of her book that distills some of the problems with her argument. A sample quote:
> In fact, her central theme is that non-Euclidean geometry, and indeed any mathematics related to non-Euclidean geometry, is nonsense.
Well, that's not really similar, since in the first case a large portion of the respondents turned out to be mansplaining (referencing her sex, for example).
I don't think the situations are similar...
Also, her problem and solution are concise and easy to verify; I have yet to see a mechanized version of Wiles' proof. (I am willing to believe it when I see that, but until then I too have a hard time accepting the proof... but of course if an oracle put a gun to my temple and I had to guess, I'd be with Wiles...)
What I don't get is that in the first table, game 3 and game 5 no longer exist after he opens door 3. The auto can't be behind door 3, because he revealed there is a goat there. So yes, relative to the original choices, she is correct. But we have been presented with new information, so if you reset the problem based on the new information, don't we now have a choice between 2 doors, one with a car and one with a goat?
The information you have gained is not about the door you've chosen. It doesn't matter what door you choose, the host can always open a door to reveal a goat. So there is no information given about the door you've chosen, so the chances of that door containing the prize remain at 1/3.
But you have been given information about the door he didn't open, because he didn't open it. That's why it's possible for the odds of that door holding the prize to change.
And yes, we do now have a choice between two doors, one with a goat and one without. The error is in believing that these are equal choices.
I roll a die, and you can choose "1" or "not 1". You have two choices, but they have unequal chances of being correct. Similarly with the doors. Just because there are two choices, they may have different odds.
Odds aren't changing at any point. It's as simple as you were initially sitting in a 1/3 "probability block" (the way I like to think of it)... ; and then you shifted over to a 2/3 "probability block."
The probability on the two you didn't choose is 2/3. At that point you have no information as to which of those two doors holds the prize, so they each have probability 1/3.
Then the host opens one. The probability of the door opened holding the prize goes to 0. But the probability on the set of two is still 2/3, and so the probability of the door that is both unchosen and unopened goes to 2/3.
No. The host could be a robot, and the contestant could be a robot, and they could play the game a million times. The robot contestant who chose to switch would win 2/3 of the time, while the robot contestant who chose to stay would win 1/3 of the time. It's math, not psychology.
I'd be interested to know what it is about my reply that makes you think this is a psychology puzzle. In particular, it seems to be people's psychology that prevents them from understanding the mathematics underneath. People seem to assume that if there are two choices then they must be equally likely. That's a psychological thing, although to be honest, I don't understand it.
But the Monty Hall Problem as stated is about the probabilities, not about the psychology. Computing the probabilities is simple math, once you understand the situation. My explanation was to help the reader understand why the two choices given don't have equal probability.
Hm, then I do not understand it. "But you have been given information about the door he didn't open" was what made me comment.
If I chose a door before, then something happens that leads to only two doors being left, both those doors have the same probability so I could just choose the same door again.
aw3c2> Hm, then I do not understand it. "But you have been given information about the door he didn't open" was what made me comment.
aw3c2> If I chose a door before, then something happens that leads to only two doors being left, both those doors have the same probability so I could just choose the same door again.
The original said this:
CW> The information you have gained is not about the door you've chosen. It doesn't matter what door you choose, the host can always open a door to reveal a goat. So there is no information given about the door you've chosen, so the chances of that door containing the prize remain at 1/3.
CW> But you have been given information about the door he didn't open, because he didn't open it. That's why it's possible for the odds of that door holding the prize to change.
So let's recap what's going on. There are three doors. For the sake of concreteness let's call them A, B, and C. You choose one of them. For the sake of concreteness let's suppose you choose A.
So now there are two doors, B and C, remaining unchosen by you. Currently those two doors, B and C, each have probability 1/3 of having the prize. The door you chose, door A, has probability 1/3 of holding the prize.
Now the host opens a door, taking care to open a door that does not hold the prize. So the pair {B,C} still has total probability 2/3 of holding the prize, but you are being shown that one of them certainly does not. This doesn't affect the probability that your chosen door, door A, holds the prize -- the probability that the prize is behind door A is still 1/3.
The pair {B,C} still has total probability 2/3 of holding the prize. You're now given the choice of staying with A, or switching.
Quoting you again, you said:
aw3c2> something happens that leads to only two doors being left, both those doors have the same probability ...
That turns out not to be the case. Just because there are two doors they don't have to have equal probability of holding the prize, and in this case they don't. The probability that your door holds the prize has not changed and is still 1/3. The probability that the door neither chosen by you nor opened by the host holds the prize is now 2/3.
I think the second table is more clear than the first, since it shows that Monty Hall's choice depends on your choice. The main nuance to this problem is that Monty Hall is not picking randomly. There's always some unpicked door with a goat, and he'll pick that one to reveal. So if your original choice is door #1, then when deciding to stay or switch, you're really deciding whether to go with door #1 or with the better of doors #2 and #3 (since the worse of the two doors has already been shown to you).
That's how I got to understand it when I first heard about the problem in the early 90s. (Of course, I also wrote a simulation.)
The problem itself can be rephrased in an even simpler manner where Monty doesn't open any doors at all. Once you've picked a door, he then gives you the option of either staying with that door or changing to both of the other doors (where you get to keep the best prize behind them). Given that formulation, almost everybody would switch.
It's interesting that kids in many elementary schools were getting this right. Probably because they worked it through and did experiments to verify it. Something that was beyond people with many PhDs.
> In the proceeding months, vos Savant received more than 10,000 letters -- including a pair from the Deputy Director of the Center for Defense Information, and a Research Mathematical Statistician from the National Institutes of Health -- all of which contended that she was entirely incompetent
So the Deputy Director of the Center for Defense Information failed at a concise problem in an ideal setting, but somehow the world is supposed to believe in Mutually "Assured" Destruction in a messy, high-complexity, real-world setting?
Turns out people have evolved in an environment where it’s really adaptive to be able to manage messy ill-defined unquantifiable problems and not at all adaptive to solve quantitative puzzles where you have to shut up and calculate.