This scam is true for all AI technologies. It only "works" insofar as we interpret it as working. LLMs generate text. If the text answers our question, we say the LLM works. If it doesn't, we say it is "hallucinating".
I'm sorta beginning to think some LLM/AI stuff is the Wizard of Oz (a fake-it-before-you-make-it facade).
Like, why can an LLM create a nicely designed website for me, but asking it to make edits and changes to the design is a complete joke? A lot of the time it creates a brand new design (not what I asked for at all), and its attempts at editing it, LOL. It makes me think it does no design at all; rather, it just grabbed one from the ethers of the Internet and is acting like it created it.
It's not a scam, because it does make you code faster, even if you have to review everything and possibly correct some things (either manually or via instruction).
As far as hallucinations go, it is useful as long as its reliability is above a certain (high) percentage.
That's the point: nobody really believes there is an intelligence generating Google results. It is a best-effort engine. However, people believe that ChatGPT has some intelligent engine generating its results, which is incorrect. It is only generating statistically plausible results; whether they are true or false depends on what the person using it does with them. If it's poetry, for example, it is always "true". If it's how to find the cure for cancer, it will with very high probability be false. But if you're writing a novel about a scientist finding a cure for cancer, then that same response will be great.