Maybe it's just me, but I find Kahneman frustrating to read and this article really demonstrates the problem: he defines a concept and then gives examples which don't meet the definition.
He defines intuition as thinking you know without knowing how you know. OK, great. Now the example: guessing someone's GPA based on the fact that they started reading early. Like most people, I guessed it would be on the high side. Kahneman calls this "intuition that is generated automatically with high confidence." And that's… just wrong.
I have very little confidence in my guess, and it certainly isn't generated automatically. It's based on a belief—which turns out to be wrong—that early reading has some relationship with college GPA.
So maybe in this example, he is using the dictionary definition of "intuition", which is basically non-conscious reasoning. He thinks this shows the problem with intuition, but again, that's just wrong.
I didn't get the answer wrong because I failed to reason consciously. It's because my belief about the relationship between early reading and GPA was wrong. If I take his advice and don't trust my intuition, and carefully and logically reason my way to the answer, guess what? I'm still wrong because my information is wrong.
His explanation of when to trust your intuition is similarly flawed. He basically says you can trust your intuition when you have good information. Well yeah. That's a generic statement that goes for almost any method of drawing conclusions about the world, and doesn't help us understand when intuition is useful or not useful.
I think at least some of the problem comes from the fact that Kahneman didn't write the article. It's stitched together from the speech it references and from his book, and marries them in a weird way. I find articles like this extremely thin because they synthesize points with very thin connective tissue. The original authors have hundreds of examples and data sets behind their claims, and Kahneman is no exception. He certainly didn't win the Nobel Prize in Economic Sciences for nothing -- he has serious data backing his research.
If I had to sum Thinking, Fast and Slow up in one sentence: human intuition shouldn't be trusted when it comes to statistics problems, because humans aren't naturally statistical thinkers.
All of the research in behavioral economics shows this pattern well. GPA distribution across a population is statistically quantifiable. Due to human cognitive biases, however, the question appears to have an obvious intuitive answer, and that answer is wrong.
The three specific qualities outlined as giving rise to expert intuition boil down to having a large body of evidence and a very tight feedback loop, which statistics problems don't have but things like emotion reading do. If you get your significant other's emotions wrong, you see the repercussions immediately. If you casually misjudge someone on the street based on a statistical hunch, you'll never be corrected.
But really, I think you should just read the book. This article doesn't even come close to doing behavioral economics justice.
I have read the book, and found it frustrating for the same reasons I mentioned. Take, for example, the core concept of System 1 vs. System 2. He says early on in the book that they aren't actually "systems", i.e. physical brain structures. But then he uses language like "System 1 does X and System 2 does Y," implying there's some grounding in neuroscience when there is in fact none. They are just a grab bag of functions that don't have much in common, but we're led to believe he's describing something fundamental about the brain.
The other thing that annoys me is the pessimism, verging on nihilism, of seeing human beings as fatally irrational. One response I found interesting is the book The Enigma of Reason, by two evolutionary psychologists who argue that reason evolved in humans specifically because we are social animals, and that the purpose of reason is to persuade others. That's why we are so much better at catching other people's poor reasoning than our own. The implication is that reason is designed for group settings. Showing that isolated individuals reason badly is like riding a bike on an ice skating rink and concluding that the bike sucks. We test human reasoning in conditions it's not designed for and then come away with an overly pessimistic view of humanity.
I've not read "Thinking Fast and Slow" but long ago I read "Heuristics and Biases"[0] and attended one of Kahneman's talks during his tour promoting his newest book.
I couldn't help but think that TFaS was the product not so much of a desire to share new insights he'd recently discovered, but rather of his seeing (or some publisher pointing out to him) how authors such as Pinker, Greene, Tyson, et al. were garnering considerable attention and revenue by translating sophisticated scientific topics developed by others for a lay audience. As if someone said to him: "Hey, you were involved in discovering some sophisticated and important topics, why don't you translate those for a lay audience?!"
I have great respect for Kahneman, and H&B is a wonderful collection. However, I find it peculiar that the entirety of TFaS seems, on the surface (again, I've not read it), strikingly similar to essay #22 in H&B, by Steven A. Sloman, titled "Two Systems of Reasoning". In it he uses Kahneman's and Tversky's work to argue for two competing strategies for arriving at the truth: associative vs. rule-based.
It's as if Kahneman is borrowing from Sloman who borrowed from Kahneman.
> But then he uses language like "System 1 does X and System 2 does Y," implying there's some grounding in neuroscience, when there is in fact none.
I think saying "System 1 does X and System 2 does Y" is fine even without grounding in neuroscience. It seems equivalent to saying, "The foreach loop will iterate over every item in the collection," when in fact, there are nothing but registers being decremented (or going further, it's merely the presence or absence of electrical circuit).
The view of people as irrational has huge implications for how we think about public policy, and even for the concept of democracy. It's a real threat to the Enlightenment-era form of classical liberalism. Unfortunately, given the success of the new model in fields as disparate as economics and hostage negotiation (Never Split the Difference, Chris Voss), it seems this is closer to reality.
1) First, the underlying study is likely asking both "What do you think?" and "How confident are you in your answer?" The conditions of the study are very different from the conditions of reading an article about Kahneman's work: when you read the article, you know you should have low confidence in your answers, because he's a master of finding places where our brains do screwy things.
2) He's defining 'good information' a little bit more precisely though - how do you know your information is "good?" Everyone will say they've got good information if you ask them, right?
Didn't read the article, just commenting on Kahneman. I also found his books very annoying. I was already familiar with most of the research he references and thought that on nearly every other page he slightly misrepresented someone's work. He doesn't just cherry pick, but writes like a Procrustes. If you are interested in pursuing the topic of heuristic thinking, Gerd Gigerenzer is equally interesting but more careful in his conclusions.
This makes total sense. What he's basically saying is that if you've collected a lot of evidence, you can quickly classify a new data point. (And if feedback is not immediate, or is very noisy, you're not collecting as much evidence as you think.)
If that sounds like a lesson for machine learning, it's because it is.
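Under that machine-learning reading, "evidence" and "feedback quality" are just sample size and label noise. A minimal, self-contained sketch (all numbers hypothetical) showing that noisy feedback hurts even an enormous sample:

```python
import random

def estimate_rate(true_rate, n_samples, noise, rng):
    """Estimate a base rate from n noisy binary observations.

    Each observation is flipped with probability `noise`,
    mimicking slow or unreliable feedback."""
    hits = 0
    for _ in range(n_samples):
        outcome = rng.random() < true_rate
        if rng.random() < noise:        # noisy feedback flips the label
            outcome = not outcome
        hits += outcome
    return hits / n_samples

rng = random.Random(0)
true_rate = 0.7
small_clean = abs(estimate_rate(true_rate, 20, 0.0, rng) - true_rate)
large_clean = abs(estimate_rate(true_rate, 20000, 0.0, rng) - true_rate)
large_noisy = abs(estimate_rate(true_rate, 20000, 0.3, rng) - true_rate)
print(small_clean, large_clean, large_noisy)
```

With 30% label noise, the error no longer shrinks with sample size: the estimate converges to the wrong number, which is the "you're not collecting as much evidence as you think" point in miniature.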
For the same reason, as a quantitative trader, I look at stuff that has a lot of repetitions historically. And over short time periods, because that both gives you faster feedback and more non-overlapping reps. Which addresses the point he makes about the financial markets.
The other takeaway is that you don't have nearly as much evidence as you think. I mean who wouldn't think that kids who learn to speak early have high GPAs?
Often I get into a discussion with people who pull out something similar, with no specific source. Everyone else is convinced that not only is the hypothesis true, but that there's so much evidence that I'm just being difficult when I ask for it.
There are a lot of traps like this in the social sciences. I'm not saying these are all wrong, just that your intuition is seductive if you don't ask for the evidence:
- "People descended from the cold regions are smarter, because they had to be smart to survive"
- "CEOs who have share options perform better, because they have better incentives"
- "Without patents nobody would bother to invent anything"
- "Reading to your kids is good for them"
Lastly, some things are unlearnable for one person. If you're going to learn, you'd need to aggregate somehow:
- How many times has someone sold their company? Probably not more than a handful. Even people who have n>0 are quite rare. So what does this say about how highly you should weight this person's experience?
- How should I coach the team for the big final? Again, how many finals are there in a year in any given sport? Can you learn any lesson at all that's separate from just increasing the n of the number of games coached (which probably is meaningful)?
My sister studied statistics. Somebody told her his story about having had an accident in a Mercedes and how he came out of it unscratched. From then on, he would only buy Mercedes cars. Her first question: how many accidents have you had in a BMW?
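Her question is about the missing denominator: one survived crash is almost no evidence. A toy sketch with made-up counts, using a Laplace-smoothed estimate so a single observation can't read as certainty:

```python
def survival_rate(survived, crashes):
    """Laplace-smoothed estimate, so one lucky crash can't read as 100%."""
    return (survived + 1) / (crashes + 2)

# His evidence: one Mercedes crash, walked away. For BMW: no data at all.
mercedes = survival_rate(survived=1, crashes=1)
bmw = survival_rate(survived=0, crashes=0)  # pure prior: 0.5
print(round(mercedes, 2), bmw)
```

Even with his anecdote taken at face value, the two smoothed estimates barely differ; a brand-loyalty decision built on it is resting on one data point.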
Like people's anecdotal perception of recent weather (an extremely small sample, for one thing) giving them the intuition to extrapolate large-scale climate trends.
"We never had storms like this 5 years ago, or when I was a kid. Things are getting really bad"
It's not today's weather that gives them that, it's the lifetime of weather experience. If you're talking to someone at least 40 years old then it meets the 3 conditions.
> If you're talking to someone at least 40 years old then it meets the 3 conditions.
If they were like 20,000 years old, they might have some wisdom after going through natural ice ages and other cyclical patterns that take more than several lifetimes to go through. 40 years? When talking about a planet billions of years old with long hot/cold stretches lasting thousands of years each? Polar shifts? I'd wager they don't have enough data gathered during their insignificant blip of a lifetime to make speculative global warming claims.
"The weather is different now than it was 30 years ago" is definitely something that can be judged with some level of confidence by a 40 year old, despite their possible cognitive biases that could confound this judgment. Saying "they can't tell how the weather is changing on a 20,000 year timescale" isn't relevant to that claim.
Your parent comment is talking about people who feel an intuition about climate trends. Climate trends happen over long time periods, which violates the immediate-feedback condition.
> How many times has someone sold their company? Probably not more than a handful. Even people who have n>0 are quite rare. So what does this say about how highly you should weight this person's experience?
All your data points don’t have to come from one person. If the same advice is being repeated by multiple people who have sold a business (e.g. find product market fit), then it becomes valid.
> Reading to your kids is good for them
Isn’t there actual science backing this claim up? Regardless, there’s a population of anecdotal evidence.
> The other takeaway is that you don't have nearly as much evidence as you think. I mean who wouldn't think that kids who learn to speak early have high GPAs?
I’m surprised that was your reaction. Mine was confusion because it seemed blatantly obvious that such a small signal was nowhere near enough information to make an accurate prediction.
Reading to one's kids certainly correlates with good outcomes, but as far as I'm aware it isn't causal. The problem is that as soon as a piece of parenting advice becomes well known, all conscientious parents start doing it disproportionately more, and it suddenly becomes correlated with good outcomes whether it was before or not. I believe that in this specific case, though, most of the correlation runs through the nature pathway rather than the nurture pathway.
But obviously reading together is a good in and of itself, regardless of its effects on life outcomes.
Even for that evidence would seem to be required. Patents should indeed act as an incentive to invest in an invention. But how many potential investments in inventions are dissuaded by the countervailing thought that someone must already have a patent on it? ... by the thought that getting a patent is too complicated? ... or too expensive? ... or that the invention is likely to lead to patent litigation?
Because it is the last game. In any other game you need to pace your players to ensure that they are rested enough by the next game to play well, but in this game you can throw that all away as they will be rested. In the final minutes having a player do something that will break their leg might win and there is no downside to future games (there are of course ethical concerns).
Your question and tone is falling into exactly the trap that OP is talking about. The point is not that these specific ideas are _wrong_, the point is that it's a lot more complicated than whether some specific thing is "right" or "wrong." Your intuition will generally make you feel good if you just let it completely dominate your belief system with the "easy" answer, like "of course reading to your kids is good for them." It's hard to challenge your intuition, but it's necessary to base it on real data/evidence if you want to develop your intuition to a point where it actually has utility in evaluating _new_ scenarios based on past experience.
The point of the comment you’re responding to is that we don’t have very much evidence even for things that make intuitive sense to us.
Parent is not saying reading to your kids isn’t good, he’s listing things that many people find intuitive even though they might only have a few (if any) data points on which to base their conclusion.
The "reading to your kids is good" statement is problematic because what is "good"? Basically, if reading to your kids isn't harming them, and they enjoy it, it's "good". Perhaps people assume that by "good" we mean your kid will have superior intelligence or emotional development or something, but that's a lot more specific than "good". If you challenge the phrase "reading to your kids is good" that vaguely, the only alternative is that reading to your kids is bad, and rejecting that hits the "intuitive" spot a lot more broadly. I'm going to guess that if I googled "reading to your kids is harmful" I wouldn't find much. (Googles.) Yup, all the hits are about research finding positive benefits of reading to kids.
> in order to develop expert intuition (1) some regularity in the world that someone can pick up and learn (2) a lot of practice, and (3) you have to know almost immediately whether you got it right or got it wrong
> “So, chess players certainly have it. Married people certainly have it [about each other]. People who pick stocks in the stock market do not have it. Because, the stock market is not sufficiently regular to support developing that kind of expert intuition,” he explained
Stock markets are a lot more than just "not sufficiently regular"; they actively remove all regularity from themselves (that being the byproduct of how markets calibrate prices of things—regularity is the signal that makes people money, and so it gets exploited away.)
I think the book "Superforecasting" (if I understood it) wants to say that 3 isn't necessary, which is why I wasn't as into it as I was into "Thinking, Fast and Slow".
These days, chess players get to find out after the game when they check the all knowing computer. Fifty years ago, even if you played at the highest and most scrutinized levels of chess, you might not find out your move was objectively a mistake for years.
My intuition tells me that these three conditions are correct. Should I trust it?
While these conditions intuitively make sense, the example that he used to support his conclusions feels very weak. If the only information I know about this person is she read fluently at a young age, I don't think I can make a good guess of her GPA in college at all. There's no "right intuition" in this case, because so many other variables are in play.
I think it’s important to establish different kinds of intuition. Intuition ≠ statistics. While I like Kahneman and his books are interesting, his arguments around intuition show a fundamental misunderstanding of intuition as I understand it and apply it.
If you try to use intuition for things that are better suited to statistical analysis, then yes, he is correct. But what about all the other kinds of intuition? The intuition of an author knowing how to put together a sentence. A painter intuiting colors. A businessperson intuiting a good hire.
So many times in business, we have to rely on intuition because we simply do not have the time or resources to make decisions mathematically.
For example, I work in an industry (marketing) that is increasingly dominated by a school of thought of A/B testing.
The typical approach is to create hundreds of combinations of different ads, copy, and creative, and then see which one converts best. I take an entirely different approach that I find much more successful and efficient, and that relies almost completely on intuition.
I simply create the ad that I “know” will do the best and run that one. Of course, if it doesn’t work, I make changes, and I do look at data and results as part of it, but the majority of the time I run my business on intuition. Could I squeeze out a few percent here or there? Probably. But the time and money it would cost to come up with statistically significant, data-driven decisions is far higher than having an ad up and running for an extra two weeks.
I created a campaign this week with one offer, one landing page, one version of an ad. The landing page converted at 77% on day one. No testing. No looking at data. Getting it right the first time is the culmination of all my experience and knowledge. That’s what people pay me for. I could spend weeks A/B testing, but intuition is what allows me to run my business successfully, and it’s the reason people hire me.
Re-reading the article, yes, you are correct. Although I still feel it’s important to distinguish different types of intuition. The intuition he is referring to here, with its three conditions, doesn’t apply well to my other examples, like a painter intuiting colors or an author picking one word or sentence over another. There’s no immediate feedback on those. With a book, you have to make all the decisions, publish it, and then people either like it or not. In the case of art, it seems to me there is a different kind of intuition at work. Ideas and creativity are subject to intuition, but not the kind the author is talking about here.
> doesn’t apply well when we are talking about my other examples, like a painter intuiting colors or an author picking one word/sentence over another. There’s no immediate feedback on those.
This is most definitely false. Painters constantly look at how the colors they pick work in the larger composition. Very very few painters could produce anything tolerable with the lights off.
Likewise, good authors are constantly re-reading and editing their own work. It's a truism in writing that "first drafts are shit". The experience of reading a sentence you wrote is very different from the experience of selecting the words, and you do get immediate feedback when you re-read.
Right, I suppose I was thinking of the fact that a painting’s success or lack of success is not the opinion of the artist, but rather the critical reception of the work. I’m sure everyone who writes a book or paints (for public consumption, anyway) thinks that on some level it is “good”. But an artist creating a painting that is new or innovative is by definition doing something no one has done before. Good art must be both original in some way AND “good” (subjectively). What drives an artist to create something new and combine elements in a novel way is a form of intuition, and my argument is that this form of intuition is separate from the type the author is referring to.
Where do an artist’s ideas come from? No one knows, really. But forming them into a concrete work is a type of intuition, which I think is quite different from what the author is talking about. Hence, how you define “intuition” matters. I believe there are several ways to interpret the concept, and it’s important to distinguish between the types.
1) How a painter learns to get the image onto the canvas that they have in their mind. This requires a ton of intuition. You're literally using your hand muscles, so it's not like you can get out pen and paper and calculate how many Newtons of force to apply to the brush. This intuition very much lines up with the three rules of the article.
2) How a painter learns how the audience will respond to their work. This definitely has a slower feedback loop, since they have to complete a work and show it to people. But otherwise, all three components are there. Almost every successful artist has a long history of making work and showing them to people. Art school is all about critiquing each other's work to train exactly this intuition.
Artist's ideas are just other ideas combined in yet-unseen ways. The originality (the reason they are unseen yet) is a side-effect of combinatorial explosion. Your intuition will generate novel configurations within a narrow domain, while someone else's will generate different novelties just by operating on different material and under different assumptions.
It's pretty easy to generate numbers that nobody has ever thought about before, for instance. It's easy because there are a lot of numbers.
Your examples meet the criteria perfectly. There is a stable, fixed palette of colours to choose from, a lot of practice on the part of the painter, and immediate feedback when the paint hits the canvas.
Re: your points about business and not having the time to deliberate on every choice that needs to be made. I agree, there isn’t time; your business would fail long before you finished. However, I think what Kahneman is saying is that the choices you make may or may not be backed by expert intuition. And if they’re not, you might be way off, like the people guessing the GPA; you’d do better (maybe not wildly so) by taking the base rate.
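The base-rate correction Kahneman describes can be sketched in a few lines. The numbers here are hypothetical, chosen only to illustrate how much of the gut answer regresses away when the evidence is weakly predictive:

```python
def regressive_estimate(baseline, intuitive_guess, correlation):
    """Kahneman-style correction: shrink an intuitive guess toward
    the base rate in proportion to how predictive the evidence is."""
    return baseline + correlation * (intuitive_guess - baseline)

# Hypothetical numbers: mean college GPA 3.0, gut guess 3.8, and
# early reading correlating with college GPA at perhaps r = 0.2.
print(regressive_estimate(3.0, 3.8, 0.2))  # most of the 3.8 regresses away
```

With zero correlation you get the base rate back unchanged; with perfect correlation you keep the gut answer. Everything in between is a blend.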
Beg to differ. Color on canvas versus the mental image is immediate feedback; color A against color B “looking good” is immediate; and judging the result of a stroke is one step back and a few seconds of introspection away (that introspection is a felt reaction, not a philosophical reflection that takes days).
In Kahneman’s book, using “System 1” (deferring to intuition rather than slow, rational analysis) is exactly that gut feeling you use constantly in art. Your other example, literature, might be better, as a book can span more than a year, and a lot of reflection goes into that.
"So many times in business, we have to rely on intuition because we simply do not have the time or resources to make decisions mathematically."
It can be really easy to forget if one gets on too much of a "science!" kick, but there are plenty of decisions we have to make in life without "science" backing them up; arguably the majority. It's a poor choice to then throw your hands up and say, "Well, if I can't use this most reliable decision mechanism, I just won't decide!", which is, both superficially amusingly and profoundly, a decision anyhow.
I know, it sounds obvious when I say it. And it's the sort of error only a particular sort of educated person could even make, so it's not the world's most common error. But it's definitely an error. And it's not always that blatant; one version you see with some frequency on HN is people insisting that we have to do our programming with more attention paid to science, when in fact there basically isn't any. Maybe that's a problem. Maybe that's something "we" should spend more money on solving. But in the meantime, it's not a reason to throw our hands up and give up; we just have to do our best, and I think the evidence supports the idea that we can learn a lot about our field and get better over the course of decades, even without a "scientifically-grounded course" for doing so, when all we have to go on is modestly-trained intuition (the sloppier, conventional kind, which definitely fails Kahneman's standards here).
About businesspeople hiring on intuition: from what I’ve learned from my friend in HR, that is the most classic mistake people who don’t work in HR make. As far as I understand, there is no evidence that hiring that way works. CVs and intelligence tests do have evidence behind them, though.
I wonder how those three conditions apply to intuition in software development. The only condition that seems to regularly be unmet is "immediate feedback."
They do apply reasonably well as long as you're solving similar problems. Even if programming language differs...
It is similar with mathematicians. A mathematician may smell a rat in a proof like a programmer a bad pattern in code, but you can only really trust their intuition in their specialty field.
E.g. web frontend programmer intuition won't do in backend, desktop UI, mobile, HPC, video game coding, embedded programming, real time applications or safety critical software.
This does not preclude polyglots and polymaths or generalists.
...which is why REPLs, fast test suites (1-5 seconds), etc. are so important. It's not just a quantitative thing, you get a qualitatively very very different process happening.
People have known this, er, intuitively for some time. Kahneman gives us a handle on why: fast feedback lets us develop an intuition for the software we are working on.
I think intuition goes downhill as your system increases in complexity. It's harder to develop deep experience or regularity in distributed systems than it is in your monolith that you know inside and out. Feedback can get less immediate, too.
So true. I spent the bulk of my early SW development career in distributed systems that were complex enough that they felt like they had a mind of their own. What I learned back then is that my intuition was worthless. Maybe that's just me but the key to success was always to set aside your expectation of what your code does and instead spend more time instrumenting and collecting data on what was actually going on. This is especially true for completely asynchronous systems like networks or systems that integrate multiple independent subsystems. Things just don't happen in the order you expected when timeouts, aborted operations and retries bubble up and down your OS and application stack.
After 30 years I consider myself pretty capable in debugging such systems but one thing I never use is "intuition" as defined in the beginning of the article.
Agreed. Also, in complex systems, there are emergent properties that are hard to predict, so any intuitions about system behavior from previous observations aren't always valid.
Intuition is a form of pattern-matching on implicit/tacit variables. It only works if there is a pattern to match.
As the system increases in complexity, you need more and more measurement/monitoring in order to gain intuition for how the system works. For example, watching Ganglia graphs of mapreduce clusters make it 10x easier to tune system parameters.
We do have a form of feedback in most languages: whether the code compiles / is syntactically correct. Expert intuition gets tougher for higher-level constructs, partly due to both the regularity issue (in a sufficiently fast-moving problem domain, what works one year may be suboptimal the next) and the slower feedback loop.
Basically, consider it a form of Bayesian learning. The difference is that with humans, if the feedback comes too late, we reject the signal, because we have trouble connecting actions to consequences.
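That failure of credit assignment can be simulated directly. In this hypothetical sketch, a beta-Bernoulli learner that credits each outcome to the action taken a few steps earlier ends up estimating every action as average, i.e. the signal is lost:

```python
import random

def learn(delay, rng, trials=20000):
    """Beta-Bernoulli estimate of each action's success rate.
    With delay > 0, feedback is credited to the action taken
    `delay` steps earlier -- the misattribution humans fall into."""
    true_rate = {"good": 0.8, "bad": 0.2}
    actions = list(true_rate)
    wins = {a: 1 for a in actions}     # Beta(1, 1) prior
    total = {a: 2 for a in actions}
    history = []
    for _ in range(trials):
        a = rng.choice(actions)
        history.append(a)
        reward = rng.random() < true_rate[a]
        # Credit goes to whatever we did `delay` steps ago.
        credited = history[-1 - delay] if len(history) > delay else a
        wins[credited] += reward
        total[credited] += 1
    return {a: wins[a] / total[a] for a in actions}

print(learn(delay=0, rng=random.Random(1)))  # recovers ~0.8 / ~0.2
print(learn(delay=5, rng=random.Random(1)))  # both drift toward ~0.5
```

With prompt feedback the learner separates the good action from the bad one; with delayed feedback the credited action is statistically unrelated to the reward, so both estimates collapse to the overall average.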
I can conceptually explain why common bad practices, like hard-coding variables or not using looping constructs for repetitive statements, are bad to the people who keep doing them. But I can’t give them quick feedback showing precisely how hard the code is to change, or even to understand, because they have developed intuition about their own codebase and are now mentally somewhere between Stockholm syndrome and defensiveness.
I think if you work on a stable piece of software with good test coverage and CI/CD process for a long time, you'd meet all 3 requirements and develop good intuition about that system.
How valuable that intuition is, and whether it is transferable to other systems, is another question.
As a software developer you tend to get very rapid feedback on whether your code is working compared to, say, engineers or scientists. Often you can test a change in code and see if it works almost instantly. Some languages force minute long compile cycles but even that is pretty fast compared to the months a scientist might have to work to test a guess. There might be more ceremony at the workplace but when you were learning to code you probably had immediate feedback.
Those are very reasonable points to explain the way correct subconscious prediction of success works. I wonder though whether this definition doesn't run the risk of creating a closed system of thought, i.e one where you can always refute opposing evidence from within the system without being aware that the system is making you incapable of seeing something else. Specifically, any genuine cases of intuition not due to Kahneman's reasoning (if such exist), would never be seen and always attributed to survivor bias (as long as they don't become a statistical anomaly).
It's probably difficult to build a case against this reasoning, as it seems quite logical - my first thought as a candidate for a counterexample was claims of "love at first sight". Sure, many of those might end in divorce but does that mean that all the successes are just due to being on the right side of statistics or is there something more going on?
The third criterion, immediate feedback, seems too stringent. In chess, feedback is often not immediate. If you ran a reinforcement learning algorithm on chess with no human-coded rewards, the only objective feedback would come at the end of the game (win, loss, or draw).
Certainly the more immediate the feedback the better though.
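For what it's worth, the standard workaround in reinforcement learning is Monte Carlo credit assignment: smear the terminal result back over every move, discounting earlier ones. A toy sketch (function name and discount value are my own illustration):

```python
def monte_carlo_credit(moves, outcome, discount=0.95):
    """Spread a terminal result (win = 1.0, loss = -1.0) back over
    every move, discounting moves further from the final evidence."""
    n = len(moves)
    return {move: outcome * discount ** (n - 1 - i)
            for i, move in enumerate(moves)}

# White's moves in a Scholar's Mate, credited from a win.
game = ["e4", "Bc4", "Qh5", "Qxf7#"]
credit = monte_carlo_credit(game, outcome=1.0)
print(credit)
```

This is noisy (good moves in lost games get blamed), which is exactly why terminal-only feedback makes expert intuition slow to build.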
I think it's more important that the input produces an immediate and readily observable change to the state of the world, not necessarily that you must have perfect information in the feedback as well that shows the exact utility of your previous action.
For example, in the stock market it's not even clear to me what the total scope of effects is, immediate or not, that any single action I take as a private investor will have on the market. Before I even can start to learn about the effectiveness of the changes that my actions produce, I am hamstrung by an inability to see an immediate, comprehensive effect to each action.
Sounds much like the conditions for successfully solving a problem with machine learning: enough data, immediate (as opposed to delayed) feedback, and enough structure in the data. The neural networks in our brains might be learning an embedding or two :)
This reminds me of one of my little sayings... "Common sense is just a first approximation of reality". So yeah, common sense is more likely right than not, but it is not reliable. If you need a reliable answer, you need to go deeper than common sense.
Of course, intuition is somewhat different, in that intuition can be wrong more often than right. See Hans Rosling's findings in Factfulness to grok that, and start to understand just how dumb you actually are.
"intuition is thinking that you know without knowing why you do."
That is not a precise definition at all, in fact it is an orthogonal concept. "Thinking that you know" may be happening during intuition, but to claim that that is what intuition actually IS is just wrong.
Intuition is the thought, not the thought about the thought.
A more precise definition might be something like "knowing -- rightly or wrongly -- without knowing how you know."
How many are peeved they didn't get Julie's GPA? How can I accept that it's a terrible answer with zero data provided? How much merit can I give to someone giving advice about investing when they gave me no data to indicate they're good at it?
Sorry to mislead you. I hadn't thought of a joke, but when I do think of a joke and put it in, my posts get a lot more upvotes.
It might be my natural humor, or it might be people reading HN like a laugh however corny. I don't get feedback on why people vote, and am not allowed to ask.
However, he added, people who pick stocks in the stock market do not have it.
“Because, the stock market is not sufficiently regular to support developing that kind of expert intuition,” he explained.
In the short term, definitely. In the long term, however, I disagree. Stock markets follow highly regular multi-year cycles. The problem seems to be that people forget this at the peak or trough of the moment.
Interestingly, this intersects with the third requirement:
And the third condition is immediate feedback. Kahneman said that “you have to know almost immediately whether you got it right or got it wrong.”
Given the long intervals over which stock market cycles operate, there's little chance to get immediate feedback.
So according to this analysis, most stock pickers can't gain intuition - but not for the reason stated in the article.
It takes someone with: (a) a really good memory; and (b) a very long time to play.
In one of his other books, DK discussed being hired by a major investment bank to research the decision-making processes used by its leading investment managers so they could train incoming bankers.
DK discovered that the successful outcomes of the senior bankers were random rather than based on skill or experience. His report was not well received.
This gross oversimplification and attempt to quantify intuition is kind of laughable.
Intuition is the effect of all the different types of knowledge you have coming together in one converging lighting-up of your nervous system.
This is basically ancient genetic knowledge, combined with all the previously recognized experience traits connected to the current experience, combined with whatever other information you are receiving.
Trying to estimate a student's GPA is not a good example of how intuition works.
Most of us trust the Nobel Prize to be a mostly-neutral arbiter of excellence in various fields. That means it is not the worst kind of appeal to authority, but rather the best - referencing a well-known and widely trusted source.
Can you give us a reason we shouldn't trust it in the general case, since your claim extends to the general, not the specific?
Daniel Kahneman: Your Intuition Is Wrong, Unless These 3 Conditions Are Met
At the World Business Forum, Kahneman explained when people can trust their intuitive judgment and when they shouldn't.
By Emily Zulz | November 16, 2018 at 01:18 PM
Daniel Kahneman. (Photo: Bloomberg)
Can intuition play a role in investing? According to Daniel Kahneman, a Nobel Memorial Prize-winning behavioral economist, no.
During a speech given at the World Business Forum in New York City, Kahneman explained when people can trust their intuitive judgment and when they should be wary of it.
“Intuition is defined as knowing without knowing how you know,” he explained. “That’s the wrong definition. Because by that definition, you cannot have the wrong intuition. It presupposes that we know, and there is really a prejudice in favor of intuition. We like intuitions to be right.”
According to Kahneman, a better definition — or a more precise one — would be that “intuition is thinking that you know without knowing why you do.” By this definition, the intuition could be right or it could be wrong, he added.
That's because, according to Kahneman, intuition can often be wrong. To show an example of this, Kahneman had the crowd guess the GPA of a college senior he called Julie. He told the crowd one fact about Julie — that she read fluently at a young age — and then asked them to judge how good of a student she had been.
From research, Kahneman, who wrote The New York Times bestseller “Thinking, Fast and Slow,” said that most people guess that Julie has around a 3.7 GPA.
“You might think that this is a good answer,” he said. “It’s a terrible answer. It’s an intuition, and it’s absolutely wrong. If you were to do it statistically, you would do it completely differently. Actually, the age that people read is very little information about what student they will be 20 years later.”
According to Kahneman, this is an example of an intuition that is generated automatically with high confidence, and it’s wrong statistically.
“In general, confidence is a very poor cue to accuracy. Because intuitions come to your mind with considerable confidence and there is no guarantee they’re right.”
There are certain times when intuition can be correct. For instance, Kahneman explained, chess players and married couples generally have accurate intuition.
“Intuitions of master chess players when they look at the board [and make a move], they’re accurate,” he said. “Everybody who’s been married could guess their wife’s or their husband’s mood by one word on the telephone. That’s an intuition and it’s generally very good, and very accurate.”
According to Kahneman, who’s studied when one can trust intuition and when one cannot, there are three conditions that need to be met in order to trust one’s intuition.
The first is that there has to be some regularity in the world that someone can pick up and learn.
“So, chess players certainly have it. Married people certainly have it,” Kahneman explained.
However, he added, people who pick stocks in the stock market do not have it.
“Because, the stock market is not sufficiently regular to support developing that kind of expert intuition,” he explained.
The second condition for accurate intuition is “a lot of practice,” according to Kahneman.
And the third condition is immediate feedback. Kahneman said that “you have to know almost immediately whether you got it right or got it wrong.”
When those three conditions are satisfied, people develop expert intuition.
“But unless those three conditions are satisfied, the mere fact that you have an idea and nothing else comes to mind and you feel a great deal of confidence — absolutely does not guarantee accuracy,” he added.
Seems pretty reasonable to me - he gives a definition of what most people probably think of as intuition and then goes on to improve that definition. A hostile interpretation could call it a strawman, but with the additional context I think it's a pretty good article.