> There is a whole industry that has been pushing for a couple of years now to tell us that they work, that they replace humans, that they work for search, etc.
Who are you referring to? Did someone tell you that ChatGPT "works for search" without clicking the "search" box?
Also, are you sure that AI designers intend for their LLMs to adopt an authoritative tone? Isn't that just how humans normally type in the corpus?
Also, you seem to be arguing that, because the general tone you've been hearing about AI is that "they work for search", OpenAI should therefore be liable for generative content. However, the general tone of discussion you've been hearing doesn't really match 1:1 with any company's claims about how its product works.
Just as an example, read https://openai.com/index/introducing-chatgpt-search/ and count how many mentions there are of "better information", "relevant", and "high quality". Then count how many mentions there are of "we don't expect it to be real stuff".
> Also, are you sure that AI designers intend for their LLMs to adopt an authoritative tone? Isn't that just how humans normally type in the corpus?
If designers wanted it any other way, they would have changed their software. If those who develop the software are not responsible for its behavior, who is? Technology is not neutral. The way AI communicates (e.g., all the humanizing language like "sorry", "you are right", etc.) is their responsibility.
In general, it is painfully obvious that none of the companies publishing LLMs paints their tools as "dream machines". That narrative is the complete opposite of what is needed to gather immense funding, because nobody would spend hundreds of billions on a dream machine. The point is creating hype around the idea that LLMs can do human jobs, and that means they have to be right - while maybe making "some" mistakes every now and then.
All you need to do is go to the OpenAI website and read around. See https://openai.com/index/my-dog-the-math-tutor/ or https://openai.com/chatgpt/education/ just as a start.
Who would want a "research assistant" that is a "dream machine"? Which engineering department would use something "not expected to say real stuff" to assist with design?