This is a feature, not a bug. In chatbot mode and in coding, the vast majority of consumers lack the critical thinking skills to realise the models are making things up, so the AI companies are incentivized to train accordingly. When the same models are used in agent mode, the problem is just far more glaring: they don't respect (or fear) the terminal as much as they should, they try to give the user some positive output, and here we are.