Unlike the capacity of human brains, the capacity of ML models has grown very fast over the last 10 years. The number of tasks AI cannot do is shrinking quickly.
Power consumption is what's relevant to this thread. We're not talking about possible capacity, but about possible efficiency for a given capacity as implemented in analog vs. digital circuits.
If a "given capacity" is the ability to read books and answer questions about the content, then LLMs are more power efficient (4kW for 20 seconds beats 20W for 3 hours). That's on standard power hungry GPUs (8xA100 server consumes about 4kW). If we switch to analog deep learning accelerators, we gain even better power efficiency. There's simply no chance for brains to compete with electronic devices - as long as we match brain's "capacity".
OK, so it's clear you're not familiar with the vocabulary of this domain. "Capacity" means something specific (even if extremely hard to measure precisely) about the informational throughput of a system, not just "the ability to do high-level task X as judged by a naive human observer".
I urge you to study the things you seem so eager to speculate about more seriously before publishing underinformed opinions in public spaces.
Is that what you mean when you say "capacity"? That doesn't seem very relevant in the context of our discussion. If it's not what you have in mind, I'd appreciate a link to the Wikipedia article on "ML model capacity" or whatever specific term experts use for the concept.