You’re dodging the question. Have you actually tried this thing? You can declare it whatever you like; it doesn’t care whether it’s a computational system, it just does things that are hard to describe as purely recombining what it’s already seen, and not for lack of trying.
I take issue with this; imo its output looks exactly like what you'd expect from a neural network that has been fed terabytes and terabytes of natural language and is recombining it. But either way, you're making the same mistake: looking at behavior and affirming the consequent (namely: it outputs smart-looking text, therefore it must be intelligent). Its behavior implies nothing about the underlying processes.
My argument is that the underlying processes don’t matter as long as the results are classified as the output of an intelligence - because that’s the only way I can judge it. What it is under the hood is… less important.
Oh btw you must’ve missed the post in which it was told it was a Linux shell and it mostly worked as one. Complete with recursively calling into a pretend API version of itself. I’m not calling that intelligence, but I’m not calling it regurgitation either.
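For context, that setup is trivially reproducible. Here's a minimal sketch, assuming the openai Python client; the prompt wording and model name are my own stand-ins, not the ones from the original post:

    # Minimal sketch of the "pretend Linux shell" setup, using the
    # openai Python client. The prompt is a paraphrase, not the exact
    # wording from the original post.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    PROMPT = (
        "Act as a Linux terminal. I will type commands and you will "
        "reply with exactly what the terminal would print, and nothing "
        "else. Do not explain anything. My first command is: pwd"
    )

    response = client.chat.completions.create(
        model="gpt-4o",  # stand-in; the original post used ChatGPT
        messages=[{"role": "user", "content": PROMPT}],
    )
    print(response.choices[0].message.content)  # e.g. "/home/user"

From there you can keep feeding it commands (mkdir, cat, even curl against a made-up URL) and watch it improvise a consistent filesystem it has never seen.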
> My argument is that the underlying processes don’t matter as long as the results are classified as the output of an intelligence - because that’s the only way I can judge it. What it is under the hood is… less important.
That view is called "behaviorism", and it's no longer taken particularly seriously, precisely because it's not very useful for understanding what is actually happening under the hood (which, as a curious species, we deem important). It's like not caring how electromagnetism works because the observation that "the north pole of this chunk of rock is attracted to the south pole of that one" is good enough.
We plainly don't know anywhere near enough about what's happening under the hood in our own heads to judge based on that, so in practice, "does it behave intelligently?" is the best test we actually have available.
> you're making the same mistake: looking at behavior and affirming the consequent (namely: it outputs smart-looking text, therefore it must be intelligent)
Why is that a mistake? What other means do we have of assessing intelligence?
I have tried it, and I felt like the thing it's not quite as good at is answering questions I didn't already know the answer to. Or rather, it couldn't explain things in other words or try to tackle the actual conceptual question I had; it would just repeat itself. I think that's a good tell of the lack of actual understanding.