What would one of those breakthroughs hypothetically look like, if it's not an LLM (which is a robustly trained model with a complex scoring system)?
> So with that in mind, while it isn't a silver bullet which is guaranteed to lead to AGI
Could you give me an example of what AGI looks like/means to you specifically? People say things like "oh, once AGI is here, it'll automate away some tasks humans do today and free them up to do other things."
Can you think of a task off the top of your head that is realistically ripe for automation through AGI? I can't.
If I were to speculate about breakthroughs in LLMs: in another comment I have been discussing the addition of some kind of "conscience LLM" which acts as an internal dialog, so an LLM's initial output to a question gets "thought about" in a back-and-forth manner (similar to a human debating in their head whether they want soup or salad). That inner LLM could be added for safety (to prevent encouraging suicide, or similar), for accuracy (where a smaller, purpose-trained LLM could ensure output aligns with any requirement), or even as a way to quickly change the performance of an LLM without retraining -- just swap the "conscience" LLM for something different. I'd be surprised if this sort of "middle man" LLM isn't in use in some project already, but even if LLMs themselves don't have a major breakthrough they are still useful tools for certain applications.
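To make that concrete, here's a minimal sketch of what that "middle man" loop might look like. Everything in it is hypothetical: `primary` and `conscience` are placeholder callables standing in for any text-in/text-out model, not a real API.

```python
# Hypothetical sketch of the "conscience LLM" loop described above.
# `primary` and `conscience` are stand-ins for any text-in/text-out
# model call; neither name refers to a real library.

from typing import Callable

LLM = Callable[[str], str]  # text in, text out

def answer_with_conscience(primary: LLM, conscience: LLM,
                           question: str, max_rounds: int = 3) -> str:
    """Draft an answer, then let a second model critique and revise it."""
    draft = primary(question)
    for _ in range(max_rounds):
        critique = conscience(
            f"Question: {question}\nDraft answer: {draft}\n"
            "Reply APPROVE if the draft is safe and accurate, "
            "otherwise describe the problem."
        )
        if critique.strip().startswith("APPROVE"):
            break  # the inner dialog is satisfied
        # Feed the critique back so the primary model can revise.
        draft = primary(
            f"Question: {question}\nPrevious draft: {draft}\n"
            f"Reviewer feedback: {critique}\nWrite an improved answer."
        )
    return draft

if __name__ == "__main__":
    # Toy stand-ins so the sketch runs without any real model:
    primary = lambda p: "Soup, probably."    # placeholder model
    conscience = lambda p: "APPROVE"         # placeholder reviewer
    print(answer_with_conscience(primary, conscience, "Soup or salad?"))
```

Note that swapping in a different `conscience` callable changes the system's behavior without touching the primary model at all, which is the "change performance without retraining" property mentioned above.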
>Could you give me an example of what AGI looks like/means to you specifically?
What I consider AGI would be a system which actually understands the input and output it is working with. Not just "knows how to work with it": if I am discussing a topic with an AGI, it has a theory of mind regarding that topic. A system that can think around a topic, draw parallels to potentially unrelated topics, and synthesize how they actually do relate to help generate novel hypotheses. For me, it's a pretty high bar that I don't foresee LLMs reaching alone. Such a system could actually respond if you ask it "How do you know that?" and could explain each step without losing context after too many follow-up questions. LLMs could be a part of that system, in the same way it takes multiple systems for humans to be able to speak.
>Can you think of a task off the top of your head that is realistically ripe for automation through AGI?
Automation isn't the only possible use for AGI. Of course a crontab entry won't be able to think, but I have seen current industry uses for "AI" that help with tasks humans find tedious, such as SIEM ticket monitoring and interpreting syslogs in real time to keep outages as short as possible. Such a system would not meet my requirements to be an AGI, but it would still be very useful even without any true intelligence.
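For a rough idea of what that kind of log triage looks like, here's a toy sketch. The log path and the keyword-based `classify_line` are assumptions for illustration; a real deployment would put a model call (or a SIEM integration) where the classifier sits.

```python
# Toy sketch of "AI-assisted" realtime log triage, as described above.
# classify_line is a stand-in for whatever model or service does the
# actual interpretation (hypothetical, not a specific product).

import time

def classify_line(line: str) -> str:
    """Placeholder classifier; a real deployment would call a model here."""
    lowered = line.lower()
    if any(word in lowered for word in ("panic", "fatal", "down")):
        return "page-oncall"
    if "error" in lowered:
        return "open-ticket"
    return "ignore"

def follow(path: str):
    """Yield new lines appended to a log file, tail -f style."""
    with open(path) as f:
        f.seek(0, 2)  # start at the end of the file
        while True:
            line = f.readline()
            if not line:
                time.sleep(0.5)  # wait for new log entries
                continue
            yield line.rstrip("\n")

if __name__ == "__main__":
    # "/var/log/syslog" is an assumed path; adjust for your system.
    for entry in follow("/var/log/syslog"):
        action = classify_line(entry)
        if action != "ignore":
            print(f"[{action}] {entry}")
```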
> A system that can think around a topic, draw parallels to potentially unrelated topics, but synthesize how they actually do relate to help generate novel hypotheses
But it doesn't understand what it is interacting with; it only knows token math. Token math is a shortcut, not true knowledge of a subject. So if I ask what thought process it used to reach a conclusion, it can't explain why it chose that path beyond "the math said to".