The issue with these AI systems is how incredibly well they perform in isolation, and how badly they crash and burn when they have to be integrated into a full tech stack (even when that stack was written by the same model).
The current generation of generative AI based on LLMs simply won't be able to properly learn to code large code bases, and won't make the correct evaluative choices about products. Without being able to reason and evaluate objectively, it won't be a good "developer" replacement. It's similar to asking an LLM to solve (complex) integrals: it will often end its answer with "solution proved by derivation", not because it has actually differentiated the result to check it (it appends the same claim to incorrect integrals), but because that's what its training data does.
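To make the integral example concrete: the verification the model claims is a one-line differentiation. A minimal sketch, with an integrand of my own choosing rather than one from any particular model transcript:

    \int x e^x \, dx = (x - 1) e^x + C
    \frac{d}{dx}\left[(x - 1) e^x + C\right] = e^x + (x - 1) e^x = x e^x

A model that closes with "solution proved by derivation" without producing that second line has pattern-matched the stock phrase, not performed the check.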