> challenge my feedback in my area of professional certification and expertise by citing, with great confidence, "I asked ChatGPT and..."
This might be a variation on one I've seen a few times: I'm an expert in something, advising someone who instead wants to go with advice from a friend who isn't even involved.
(And who usually has little-to-no experience in the thing. But it's like calling up your nephew at Google to ask why your computer is slow. After the knowledgeable neighbor who looked at it already told them it's because they have very little RAM, and they're pushed into swap by this one program they installed. But the Google nephew hears "home PC slow", says they probably picked up a bunch of malware, and to reinstall Windows, and please stop calling during work hours.)
I think this can be a psychological quirk, or social dynamics pressure, or an inability to assess competence due to a lack of understanding of the field/subfield.
If I had to guess, I'd say the last one is probably the factor in being overruled by ChatGPT.
One end-run around that is to have some validation of your expertise in the decision-maker's mind, and it might be stupid. For example, in the minds of some decision-makers, if the person has some credential they value ("They went to MIT!", "They worked at Google!", "I'm paying out the nose for their consulting fee!") the decision-maker will put a lot more weight on that person. Maybe even more weight than they give the ChatGPT superintelligence they imagine. It's nice to be listened to, even if it's for the wrong reasons.
That’s the funny part: decision-makers never cause this issue for me. They, generally speaking, are deeply aware of what they don’t know and welcome my input.
It’s always the climbers: the middling-intellect crowd who play the petty dominance and status games in a large organization and believe their advancement is enabled by unwavering confidence. They keep rising, but only to a point; they generally end up at the top of the peon pyramid and never ascend to the very highest leadership roles. Unfortunately, I still have to deal with them.
If you have the misfortune to deal with that category of corporate specimen, additional reasons for why they might disregard good advice:
* Option A (the bad choice for the company) has some upside for them, such as something they can claim credit for or that increases their status, while option B (the best choice for the company) would be owned by one of their rivals.
* Loss of face, such as when the best option for the company would too clearly expose and reverse a mistake they made (and they and/or the org doesn't believe in acknowledging mistakes).
* (Speculating about some weirder ones) They are all about confidence projection, as you say, and further, they've come to believe their own BS. (Maybe this falls under the psych quirks I mentioned earlier.)
So the confidence in ChatGPT, or in anything else, might not be irrational or misinformed, but merely part of their internal sales act for selfish advantage.
Fortunately, I haven't run into these problems often, but I have enough experience to know they can happen. Today, I would recognize the BS quickly, move to confirm and correct it, and probably leave if too much of it went uncorrected.