I wouldn't categorize FAIR as failing. Their job is indeed fundamental research, and they are still a leading research lab, especially in perception and vision. See SAM 2, DINOv2, V-JEPA 2, etc. The "fair" (hah) comparisons for FAIR are not to DeepMind/OAI/Anthropic, but to other publishing research labs like Google Research and NVIDIA Research, and they are doing great by that metric. It does seem that, for whatever reason, FAIR resisted productization, unlike DeepMind, which is not necessarily a bad thing if you care about open research culture (see [1]). GenAI was supposed to be the "product lab" but failed for many reasons, including the ones you mentioned. Anyways, Meta does have a reputation problem that they are struggling to solve with $$ alone, but it's somewhat of a category error to deem it FAIR's fault when FAIR is not a product LLM lab. Also, Rob Fergus is a legit researcher; he published regularly with people like Ilya and Pushmeet (VP of DeepMind Research), he just didn't get famous :P
FAIR is failing. DINO and JEPA, at least, are irrelevant in this age. This is why GenAI exists. GenAI took the good people, the money, the resources, and the scope. Zuck entertains ideas until he doesn't. It's clear blue-sky research is going to be pushed even further into the background. For perception reasons you can't fire AI researchers or disband an AI research org, but it's clear which way this is headed.
As for your comparisons, well Google Research doesn’t exist anymore (to all intents and purposes) for similar reasons.
This is why GenAI exists. GenAI took the good people, the money, the resources, and the scope. Zuck entertains ideas until he doesn't. It's clear blue-sky research is going to be pushed even further into the background.
I agree with most of this, I just think we have different definitions of failure. FAIR has "failed" in the eyes of Meta leadership in that it has not converted into a "frontier AI lab" like DeepMind, and as a result it is being sidelined (much like Google Research, which I admit was a bad example). But the org was founded to pursue basic research, and I don't think it's a failure of the scientists at FAIR that management has failed to properly spin out GenAI. Of course, it sounds like your metric is "AI/LLM competitiveness," and we have no disagreement that FAIR is failing on that end (I just don't think it's the only important or right metric for judging FAIR).
* Normatively, I think it's good to have monopolistic big tech firms fund basic open research, both as a counterbalance to academia and because good basic research requires lots of compute these days. It feels shortsighted to reallocate all resources to LLM research.
* DINO and JEPA aren't particularly useful for language modeling, but they are still important for embodiment/robotics/3D, which indeed seems to be the "next big thing." Also, to their credit, FAIR is still doing interesting and useful work on LLM encoders [1], training dynamics [2], and multimodality [3], just not training frontier models.
** GenAI took the money, scope, and resources, but I'm not sure about the good people lol, that seems to be their problem.
Not affiliated with Meta or FAIR.
[1] https://docs.google.com/document/d/1aEdTE-B6CSPPeUWYD-IgNVQV...