I'll believe that AI is anywhere near as smart as Albert Einstein in any domain whatsoever (let alone science-heavy ones, where the tiniest details can be critical to any assessment) when it stops making stuff up with the slightest provocation. Current 'AI' is nothing more than a toy, and treating it as super smart or "super intelligent" may even be outright dangerous. I'm way more comfortable with the "stochastic parrot" framing, since we all know that parrots shouldn't always be taken seriously.
Earlier today in a conversation about how AI ads all look the same, I described them as 'clouds of usually' and 'a stale aftertaste of many various things that weren't special'.
If you have a cloud of usually, there may be perfectly valid things to do with it: study it, use it for routine low-value tasks, make a web page, or follow a recipe. Mundane, ordinary things not worth fussing over.
This is not a path to Einstein. It's more relevant to ask whether having a compliant slave at their disposal, one that is not too bright but savvy about many menial tasks, will have deleterious effects on users. That might be a bad thing for people to get used to, and in that light the concerns about ethical treatment of AIs are salient.
> I'm way more comfortable with the "stochastic parrot" framing, since we all know that parrots shouldn't always be taken seriously.
First, comfort isn't a great gauge for truth.
Second, many of us have seen this metaphor and we're done with it, because it confuses more than it helps. For commentary, you could do worse than [1] and [2]. I think this comment from [2] by "dr_s" is spot on:
> There is no actual definition of stochastic parrot; it's just a derogatory
> label to downplay "something that, given a distribution to sample
> from and a prompt, performs a kind of Markov process to repeatedly predict
> the most probable next token".
>
> The thing that people like Gebru who love to sneer at AI don't seem to
> get (or willingly downplay in bad faith) is that such a class of functions
> also includes one that, if asked "write me down a proof of the Riemann
> hypothesis", says "sure, here it is" and then goes on to win a Fields
> Medal. There are no particular fundamental proven limits on how powerful
> such a function can be. I don't see why there should be.
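For concreteness, the loop dr_s is describing is roughly the following (a minimal sketch in Python; it assumes the Hugging Face transformers library, and the model choice and generation length are purely illustrative):

```python
# Greedy next-token loop: encode a prompt, then repeatedly append the
# most probable next token under the model's predicted distribution.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")    # illustrative model choice
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "write me down a proof of the Riemann hypothesis"
ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(50):                      # arbitrary cap on new tokens
        logits = model(ids).logits           # distribution over the vocabulary
        next_id = logits[0, -1].argmax()     # "most probable next token"
        ids = torch.cat([ids, next_id.view(1, 1)], dim=-1)

print(tokenizer.decode(ids[0]))
```

Nothing in that loop bounds how capable the underlying distribution can be; that is the point.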
I suggest this: instead of making the stochastic parrot argument, make a specific prediction about what level of capability is out of reach. Give your reasons, too. Make your writing public and see how you do. I agree with "dr_s" -- I'm not going to bet against the capabilities of transformer-based technologies, especially not ones with tool-calling as part of their design.
To go a step further, some counter-arguments take the following shape: "If a transformer of size X doesn't have capability C, wait until they get bigger." I get it: this argument can feel unsatisfying to the extent that it is open-ended, with no resolution criteria. (Nevertheless, increasing scale has indeed been shown to make many problems shallow!) So, if you want to play the game honestly, require specific, testable predictions. For example, ask a person to specify what size X' will yield capability C.