Disagree. They claimed it did useful things; many people (including ex-scientists like me who are not short-sighted) saw the examples and how easily it was made to produce inaccurate information. In science, accuracy matters a lot, and if you say you have an LLM that can summarize scientific articles, users can reasonably expect it to produce accurate information at a much higher rate.