A few days ago, it told me "well, Boost has a function for that". I was surprised that I hadn't found that myself.
It took me 10 minutes and opening the Git log of Boost ("maybe they removed it?") until I realized "well, it just made that up". The whole answer was consistent and convincing enough that I started searching, but it was just nonsense. It even provided a convincing amount of example code for its made-up function.
That experience was... insightful.
While we often say "If you need something in C++, Boost probably has it" and it's not untrue, ChatGPT seems to exercise that idea a little too much.
And a lot of highly linked forum questions and answers tend to be of the form "how do you do X in library Y?", "Use the Z function!" - so naturally ChatGPT loves to reproduce this popular pattern of communication.
> ChatGPT just matches the most statistically-likely reply based on a huge corpus of internet discussions, it doesn't actually have any ideas
Presumably you think humans have ideas, but you don't really have any evidence that humans aren't also producing the most statistically likely replies. Maybe we're just better at this game.
I'm astonished at how much worth people seem to give this bot. It's a bullshit generator, based on other people's bullshit. The bot does not know right or wrong. The bot does not know what command line utilities are. It just predicts what answer you want, based on answers already given before. Nothing more, nothing less.
Because people want to believe in the magical AI - they want something for nothing and have yet to grasp that not only are they unable to change the immutable laws of the universe (something will not come for nothing), but they are willfully blind to the very real price they are about to pay...
I guess the point is that it generates convincing and consistent texts. That's new and it's a building block for any futuristic AI that actually knows stuff: it also has to generate good text to communicate the knowledge.
Likewise, I spent 40 minutes looking for fictional command line arguments it recommended for Docker. When told the command line options did not exist, it directed me down a rabbit hole of prior versions that was a dead end. It really felt like an arrogant 8-year-old with its continued evasions of being flat out wrong.
The other day I saw someone who, by asking ChatGPT a series of questions, had it carefully explain why abacus-based computing was more efficient than GPU-based computing. It's not your Google replacement yet...