With regard to the debate, I think it's good not to engage in too much black-and-white thinking. Science itself is a pretty muddy affair, and we still haven't grown beyond simplistic null hypothesis significance testing (NHST), even decades after its problematic implications became clear.
That's why it's so important to look at the macro implications, i.e., how does this shift costs? As another comment nicely put it, LLMs are empowering good science, but they may be empowering bad science an order of magnitude more.
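To make the NHST point concrete, here's a minimal sketch (assuming NumPy and SciPy; this toy simulation is mine, not from the comment above) of one of NHST's best-known failure modes, the multiple comparisons problem: test enough pure-noise hypotheses and you get "significant" results at roughly the rate of your alpha, with no real effect anywhere.

    # Toy simulation of the multiple comparisons problem under NHST.
    # Both groups are drawn from the SAME distribution, so the null
    # hypothesis is true in every single test.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    alpha = 0.05
    n_tests, n_samples = 1000, 30

    false_positives = 0
    for _ in range(n_tests):
        a = rng.normal(0, 1, n_samples)
        b = rng.normal(0, 1, n_samples)
        _, p = stats.ttest_ind(a, b)  # two-sample t-test p-value
        if p < alpha:
            false_positives += 1

    print(f"{false_positives}/{n_tests} 'significant' results, zero real effects")
    # Prints roughly 50/1000, i.e. about alpha, despite no true effect.

This is exactly where the cost-shifting worry bites: the cheaper it becomes to run analyses, the more spurious "findings" fall out for free unless the testing discipline improves to match.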
Having a design background, I agree completely. To explain why design matters in this case, we only need to look at ergonomic factors: literally the “economy of work.” That’s why I called out the “end to end” claim as a lie: it’s impossible to assert such things without thorough testing of the application and continued analysis of its effects on the whole supply chain.

Most of these AI byproducts will likely look laughable in the coming decades, much like the recurring weird-form-factor booms surrounding whatever device is in vogue; see the video linked in [1] for good examples of weird PC input devices from the 2000s. It takes considerable time for the most viable form factors to become established, and once that happens, the designs of the vast majority of products in a category converge on the most ergonomic (and economic) one.

What bothers me most is not the advent of novelty and experimentation, but the overconfidence and overpromising around what are, for most AI applications, merely untested product hypotheses. The negligible marginal cost of producing derivative work in software, fueled by highly accessible tooling and a lack of rigorous design and scientific training, is to blame. Never mind the hype cycle, which is natural and expected. Times like these are when we most need pragmatic skepticism.

I wonder whether AI developers care at all about doing the bare minimum of due diligence before launching their products. That seems to be rare in SWE in general.