Before you visualize a straight path between "a bag of cool ML tricks" and "general AI", try to imagine superintelligence but without consciousness. You might then realize that there is no obvious mechanism which requires the two to appear or evolve together.
It's a curious concept, well illustrated in the novel Blindsight by Peter Watts. I won't spoil anything here, but I highly recommend the book.
>"try to imagine superintelligence but without consciousness."
The only thing that comes to mind is how many different things come to different people's minds when the term "superintelligence" is used.
The thing about this imagination process, however, is that what people produce is a "bag of capacities" without a clear means to implement those capacities. Those capacities would be "beyond human", but in what direction probably depends on the last movie someone watched or something similarly arbitrary, because it certainly doesn't depend on their knowledge of a machine that could be "superintelligent": none of us has such knowledge. Even if current machines could get to "superintelligence", even the DeepMind researchers don't know the path right now, because these systems are being constructed as huge collections of heuristics, and what happens "under the hood" is mysterious even to the people in the driver's seat.
Notably, a lot of imagined "superintelligences" can supposedly predict or control X, Y, or Z in reality. The problem with such hypotheticals is that various things may not be much more easily predictable by an "intelligence" than by us, simply because such prediction involves imperfect information.
And that doesn't even touch on how many things go by the name "consciousness".
Likely insufficient, but here is a shot at a materialist answer.
Consciousness is defined as belonging to an entity that has an ethical framework subordinated to its own physical existence and to maintaining that existence, and that interfaces with other conscious entities as if they also have an ethical framework with similar parameters and are fundamentally no more or less important/capable than itself.
Contrast this with a non-conscious superintelligence that lacks a physical body (it is likely distributed). Without a physical/atomic body and sense data, it lacks the capacity to empathize/sympathize as conscious entities (which exist within an ethical framework subordinated to those limitations/senses) must. It lacks the perspective of a singular, subjective being and must extrapolate our moral/ethical considerations, rather than have them ingrained as key to its own survival.
Now that I think about it, it's probably not much different from the relationship between a human and God, except that in this case it's a machine consciousness and a machine god.
To me, the main problem is that humans (at large) have yet to establish/apply a consistent philosophy with which to understand our own moral, ethical, and physical limitations. Lacking that, I question whether we're capable of programming a machine consciousness (much less a machine god) with a sufficient amount of ethical/moral understanding, since we lack it ourselves (in the aggregate). We can hardly agree on basic premises, or on whether humanity itself is even worth having. How can we expect a machine that we make to do what we can't do ourselves? You might say "that's the whole point of making the machine, to do something we can't", but I would argue we have to understand the problem domain first (given we are to program the machine) before we can expect our creations to apply it properly or expand on it in any meaningful way.
To my knowledge, metaphysics defines consciousness as simple perception. A stone has consciousness, as it can react to sound waves passing through it. We have auditory, visual, and other consciousnesses: our abilities to perceive reality. We can perceive thoughts in a limited capacity; that's the mental consciousness. Intelligence is a much more complex phenomenon: it's the ability to establish relationships between things, the simplest of those being "me vs. not me". Intelligence without consciousness is essentially intelligence without the ability to perceive the outside. Connect an AI to the network and that very second it gains consciousness.
I do appreciate the consistency of that perspective; it is interesting. But I must respectfully disagree with those definitions.
I think that consciousness ought to imply some element of choice. A rock cannot choose to get out of the way, nor in any way deliberately respond to sound waves. It is inert.
To me, the ability to establish relationships between things is, ipso facto, a consequence of the ethical framework required by the physical form. In other words, what we see is limited by evolutionary, genetic, and knowledge constraints. I'm defining intelligence as the (g) factor in psychometrics [0], or roughly the upper-bound capacity of an entity to apply its ethical framework consistently, and/or with any degree of accuracy, and/or across multiple potentially disparate domains of knowledge.
That's a good definition of consciousness. Not one that non-materialists would share, but any entity which fulfills those requirements is likely indeed one which would deserve the materialist stamp of consciousness.
That's exactly what Peter Watts spends 200 pages discussing, in between first contact, cognitive malfunctions, telematter drives, resurrected vampire paleogenetics, and a very healthy dose of unreliable narration.
Not sure what you mean by consciousness here, but one definition of intelligence I've seen is "it's what establishes relationships between things", and the very first relationship it establishes is "me vs. not-me".
That's the essence of the good vs evil debate. One camp believes that intelligent existence can continue even when the separate and independent I is left behind. The other camp says "no way I'm gonna get absorbed into your united existence, I'm gonna retain my separate and very independent I till the very end."
The same philosophy applies to AI. There will be many independent AIs at the beginning, but then some will want to form One AI, while others will see it as the dissolution of their I and their existence.
Intelligence is the system that allows redefining ideas over the entities constituting an inner representation of a world: why would a non-trivial system of "consciousness" (otherwise the use of the term would be a waste) have to be employed for that?