Copilots are considered unsexy because they appear not to aim at AGI (i.e., they don't go for full autonomy). But that gets the game theory completely backwards.
LLMs (and agents) can often do 90% of the heavy lifting, but those 10% failures (or just weak results) can unpredictably pop up anywhere.
Even a 1% failure rate can render an AI solution uneconomical (see: self-driving cars).
The reasonable path to AGI is to have a tight AI/human feedback loop, and take more and more off of the human's plate over time. That way you can deliver actual value to users right now (not just a cool demo), and provide more and more value as the AI ecosystem matures.