>we haven't really begun to explore optimisation possibilities.
So you're questioning the above comment's argument based on a hand-wavy claim about completely speculative future possibilities?
As it stands, there's no disputing the human brain's energy efficiency for all the computing it does, in ways AI can't even begin to match. And that's not even to speak of the whole unknown territory of whatever it is that gives us consciousness.
It is indeed speculative to not just suggest that technology will improve, but to make specific claims about how and in what way, with no clear connection to any current development.
Also, there's nothing hand-wavy about pointing out (aside from all the vastly efficient parallelism and generalist computing the brain does on absurdly minimal power) that it also seems to be where our consciousness is housed.
You can go ahead and navel-gaze about "how do we know we're conscious? How do we know an LLM isn't?", but I certainly feel conscious, and so do you. We both demonstrably have self-directed agency, which indicates this widely and solidly accepted phenomenon, and that is very distinct from anything any LLM can demonstrably do, regardless of what AI bros like to claim about consciousness not being real.
Arguments like these remind me of the relativist fallback idiocy of asking "but what is a spoon?" whenever confronted with a hard counterargument to a completely speculative claim about X or Y.