Galactica did nothing wrong. They released a language model to the one community well equipped to handle the limitations of such models. It was advertised as experimental. I think that is far better than releasing a model to the public at large, which in aggregate is less able to understand that hallucinations can occur. They got trashed because (i) it came from Meta and (ii) it got targeted by (maybe well-meaning, yet...) short-sighted scientists on Twitter.

On the first point, big companies are scrutinised more and tend to be criticised more; it is unlikely that any big company could have released ChatGPT without a huge backlash. I'm not sure "move fast and break things" would still work for Meta today. OpenAI had a privileged position for that kind of moonshot.

Then there are the well-documented Twitter clashes between Yann LeCun and opponents, which I believe made Galactica the perfect target when it was advertised on Twitter by LeCun. It felt a bit ridiculous at the time, like a wave of identity politics reaching science: my enemy did that, so I'm going to work hard on trashing it regardless, and I'll double down on my opinions irrespective of new evidence.
> They released a language model to the one community well equipped to handle the limitations of such models. It was advertised as experimental. [...] Then there are the well-documented Twitter clashes between Yann LeCun and opponents, which I believe made Galactica the perfect target when it was advertised on Twitter by LeCun.
This is how it was advertised on Twitter by LeCun: "Type a text and http://galactica.ai will generate a paper with relevant references, formulas, and everything."
If he had wanted to advertise it as experimental and make its limitations clear, he could have written something like "Type a text and http://galactica.ai will generate something that looks like a paper, with made-up references, incorrect formulas, and anything."
Disagree. They claimed it did useful things; many people (including ex-scientists like me who are not short-sighted) saw the examples and how easily it was made to produce inaccurate information. In science, truthfulness matters a lot, and if you say you have an LLM that can summarize scientific articles, users can reasonably expect the LLM to produce accurate information at a much higher rate than it actually did.