Also, strong predictions about AI use a vague term because the tech often doesn't exist yet. There isn't a chatbot right now that I feel confident can out-perform me at systems design, but I'm pretty certain something that can is coming. Odds are also good that in 2-4 years there will be some new hotness to replace LLMs that is much more functional (maybe MLLMs, maybe called something else). We can start to predict and respond to its potential even though it doesn't exist yet; it just takes a little extrapolating. But it doesn't have a name yet.
Which is to agree - obviously if people are talking about "AI" they don't want to talk about something that exists right this second. If they did it'd be better to use a precise word.
Also the term 'LLM' is more about the mechanics of the thing than what the user gets. LLM is the technology, but some sort of automated artificial intelligence is what people are generally buying.
As an example, when people use ChatGPT and get an image back, most don't think "oh, so the LLM called out to a diffusion API?" - they just think "oh, ChatGPT can give me an image if I give it a prompt".
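For the curious, here's a rough sketch of what's happening behind that experience - every name below is hypothetical, it just shows the shape of a chat frontend quietly routing an image request to a separate diffusion model:

    # Hypothetical sketch: a chat frontend routing an image request to a
    # separate diffusion model. Every function below is made up for illustration.

    def looks_like_image_request(prompt: str) -> bool:
        # Crude stand-in for the model deciding to emit an image tool call.
        return any(word in prompt.lower() for word in ("draw", "image", "picture"))

    def call_diffusion_api(prompt: str) -> str:
        return "https://example.invalid/generated.png"  # placeholder image URL

    def call_llm(prompt: str) -> str:
        return "Text reply from the language model."  # placeholder text reply

    def handle_prompt(prompt: str) -> str:
        if looks_like_image_request(prompt):
            # The user never sees this hop to a different model.
            return f"[image: {call_diffusion_api(prompt)}]"
        return call_llm(prompt)

    print(handle_prompt("Draw me a cat wearing a hat"))

The user sees one product, even though two quite different models (plus a routing step) are doing the work.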
Although again, the term is entirely abused to the extent that washing machines can contain 'AI'. That said, just because a term is abused doesn't necessarily mean it's not useful - everything had "Cloud" in it 10 years ago, but that term was still useful enough to stick around.
Perhaps there is an issue that AI can mean lots of things, but I don't yet know of another term that encapsulates the last 5 years' advancements in automated intelligence, and what that technology is likely to become going forwards, which people will readily recognise. Perhaps we need a new word, but AI has stuck and there isn't a good alternative yet, so it's probably here to stay for a bit!
> Although again, the term is entirely abused to the extent that washing machines can contain 'AI'.
I remember when the exciting term in appliances was "fuzzy logic". As a technology it was just adding some sensors beyond simple timers and thermostats to control things like run time and temperatures of automated washers.
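For reference, a toy sketch of what that amounted to - the sensor names and thresholds below are invented, it just shows readings being blended into a run time instead of a fixed timer:

    # Toy illustration of a "fuzzy logic"-style washer controller: blend a
    # couple of sensor readings into a wash time rather than using a fixed timer.
    # All sensor names and thresholds are invented for the example.

    def membership(value: float, low: float, high: float) -> float:
        """Map a raw sensor reading onto a 0..1 degree (e.g. 'how dirty')."""
        if value <= low:
            return 0.0
        if value >= high:
            return 1.0
        return (value - low) / (high - low)

    def wash_minutes(turbidity: float, load_kg: float) -> float:
        dirty = membership(turbidity, low=5.0, high=50.0)  # water cloudiness sensor
        full = membership(load_kg, low=1.0, high=7.0)      # load weight sensor
        # Weighted blend of the fuzzy degrees, mapped onto a 20-80 minute range.
        return 20 + 60 * (0.7 * dirty + 0.3 * full)

    print(round(wash_minutes(turbidity=30.0, load_kg=4.0)))  # ~52 minutes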
> As an example, when people use ChatGPT and get an image back, most don't think "oh, so the LLM called out to a diffusion API?" - they just think "oh, ChatGPT can give me an image if I give it a prompt".
Note: your first part entirely skipped the process of obtaining the data for, and training, both of the models above, which is a crucial part at least on par with which component called which API.
I don’t think it’s unreasonable to expect people to build an intuition for it, though. It’s healthy when underlying processes are understood to at least a few layers of abstraction, especially in potentially problematic or morally grey areas.
As an analogy to your example, you could say that when people drink milk they usually don't think "oh, so this cow was forced to reproduce 123 times, with all her children taken away and murdered, so that she makes more milk"; they simply think "the cow gave this milk".
However, with milk as with ML tech, it is important to realize that 1) people do indeed learn the former and build the relevant intuitions, and 2) the industry relies on information asymmetry and mass ignorance of these matters (and we all know that information asymmetry is the #1 enemy of a free market working as designed).
I disagree; people are not mindless consumers. People are interested in what comes from where, and given the knowledge, most of them would make the ethically good choice when they can afford it.
What breaks this (and prevents the free market from working as intended) is the lack of said knowledge, i.e., information asymmetry. In the case of the milk example, I think it mostly comes down to two factors:
1) Lack of this awareness is financially beneficial to the respective industries. (No need for conspiracy theories, but they're sure as hell not going to run educational campaigns on these topics, or facilitate any such efforts in any way - not even by refraining from suing them. Further complicated by the fact that many of these industries are integral parts of many local economies, which makes it in the interest of the respective governments to follow suit.)
2) The facts can be so harsh that it can be difficult to internalise and accept reality.
Even so, many people do learn and internalise this (ever noticed the popularity of oat and almond milk in coffee shops, despite higher prices?), so I think it is not unreasonable to expect the same in certain ML-based industries.
This seems to be veering off into a specific ethical viewpoint about milk supply chains and production rather than an analogy for product vs process.
But personally I can't see how the popularity of oat and almond milk in independent coffee shops tells us that much about how people perceive the inner workings of ChatGPT.
There is a difference between descriptive and prescriptive statements. I don’t disagree that the status quo is that many people may not care, but I believe it is reasonable to expect them to care, just like they do in other domains (e.g., the milk analogy). The information asymmetry can (and maybe should) be fought back against.