Realistically, that's the actual headline. Only another AI can replace AI, much like LLMs / Transformers have replaced "old" AI models in certain tasks (NLP, sentiment analysis, translation, etc.), and research is in progress on other tasks still handled by traditional models (personalization, forecasting, anomaly detection, etc.).
If there's a better AI, the old AI will lose its job first.
As somebody who got to work adjacent to some of these things for a long time, I've been wondering about this. Are LLMs and transformers actually better than these "old" models, or is it more of an 80/20 thing where, for a lot less work on the developers' part, LLMs can get 80% of the efficacy of those old models?
I ask because I worked for a company that had a related-content engine back in 2008. It was a simple vector database with some bells and whistles. It didn't need a ton of compute, and GPUs certainly weren't what they are today, but it was pretty fast and worked pretty well too. Now it seems like you can get the same thing with a simple query, but it takes a lot more coal to make it go. Is it better?
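For reference, the core of that kind of related-content engine is roughly the sketch below. This is my own minimal illustration, not the original system: it assumes pre-computed item vectors, and the name related_items is made up for the example.

    # Minimal sketch of the "old" related-content approach: nearest neighbours
    # over pre-computed item vectors (TF-IDF, LSA, or any embedding).
    import numpy as np

    def related_items(item_vectors: np.ndarray, query_idx: int, k: int = 5) -> list[int]:
        """Indices of the k items most similar to item `query_idx` by cosine similarity."""
        # Normalize rows so dot products become cosine similarities.
        norms = np.linalg.norm(item_vectors, axis=1, keepdims=True)
        unit = item_vectors / np.clip(norms, 1e-12, None)
        sims = unit @ unit[query_idx]
        sims[query_idx] = -np.inf  # exclude the item itself
        return np.argsort(sims)[::-1][:k].tolist()

    # Toy usage: 1000 items, 128-dimensional vectors (random, just so it runs).
    vectors = np.random.rand(1000, 128)
    print(related_items(vectors, query_idx=42))

Whether the vectors come from 2008-era TF-IDF/LSA or a modern embedding model, this retrieval step is essentially the same; most of the extra coal today seems to go into producing the vectors with much larger models.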
I'm reminded of an Adam Savage video. He ordered an unusual vise from China and praised a culture where someone can say, "I want to build this strange vise that won't be super popular," and the boss says, "cool, go do it." They built a thing that we would not build in America.
The small-biz scene in unfree, communist China is, ironically, astronomically better than here in the US, where decades of regulatory capture and misleadership have made it difficult and extremely expensive to get off the ground while staying under the protection of the law.