Indeed, in my own area of research there was the mass resignation of the editorial board of the journal Machine Learning back in 2001 [1], which was one of the final nails in the coffin for publishing with any real restrictions in AI. Notable exceptions would be AAAI and, of course, regressive forces such as Google DeepMind that insist on publishing in closed journals like Nature, despite the rest of the field keeping its research public and open.
How do you know which papers to read? It seems like it would be overwhelming without some filter, and perhaps quality would be lower without editorial standards?
(I'm not advocating for a paywalled journal, but I'm wondering if a free journal designated as the premier one in the field would be useful.)
Lower quality without editorial standards? The amount of rubbish I see published with editorial "standards" is enormous (in my own field and others). Personally, I think quality is better judged by whether work gets used by others, and that researchers can be kept in check by encouraging authors to publish papers criticising and invalidating bad research.
As for how I know what to read: I talk to fellow researchers and students plenty, read abstracts, and am senior enough to sniff out rubbish rather quickly. If you want an example of an amazing, leading, free/open journal, look at the Transactions of the Association for Computational Linguistics [1]. But the entire literature of natural language processing is open these days [2]. For the wider area of research: NeurIPS (formerly NIPS) and ICLR are fully open. AAAI is not, but the quality of what is presented at AAAI tends to be worse than at the open venues anyway, and as I said earlier, no one of note publishes in closed journals other than Google DeepMind. It should be noted that we are very much a "conference-driven" field these days; I know plenty of other fields are not, but I am not fit to comment on their situation.
> The amount of rubbish I see published with editorial "standards" is enormous (in my own field and others).
Every human institution is flawed; that doesn't mean the alternative institution, or no institution at all, is better.
What I'm really wondering is: how can you keep up efficiently? And how do you maintain objective standards? The system you describe seems very prone to popularity contests, politics, and the ubiquitous Internet mob action. Critiques by others aren't really useful signals unless you critique the critiques carefully - and who has time?
I'm not saying you have no answer, I'm just trying to understand how it works.
> Critiques by others aren't really useful signals unless you critique the critiques carefully - and who has time?
Well, I take the time, and about a quarter of my most impactful papers have been such critiques. How do we encourage it? Well, ICLR (or was it NeurIPS?) ran a replication challenge a few years ago where, if you could replicate a paper, you got co-authorship. I'm not sure how much I love that strategy, but I am sure there are ways to create a sane "economy" around it.
As for whether we are better or worse off with the current state of my own field: I do not know. We end up in some sort of social-science-esque argument where we simply cannot prove it either way, as the experiment can only be run once and we have to argue on very shaky grounds (it also does not help that the field is exploding unlike pretty much any other field ever has, which comes with its own issues). I think I am keeping up and I think that I personally have a decently objective view, but I cannot prove that to you. What I can say is that there is not a single scientist around me who is not acutely aware of the problems with the previous and current systems. But given how "bottom up" we are, without big beasts like Elsevier around with a deep financial interest in enforcing the status quo, I believe we will arrive at solutions, and faster than we otherwise would. There will be pain, yes, but see for example TACL, ACL Rolling Review, and ICLR. These are all initiatives fielded by the community, and I would argue two have already been great successes while one is struggling but could still succeed.
The top scholars in a field will know which papers to read and which papers to cite. They'll talk with each other via email, chat groups, and at conferences. If the top scholars are in agreement, it's pretty hard for a journal to maintain its status. Outside of a discipline, laypeople can't tell. But if you're a top scholar inside the discipline, you'll be part of these conversations and you'll know - and so, in turn, will your colleagues and PhD students who are not yet top scholars.
Not contributing to journals they disagree with and moving their time and effort to journals they do agree with is a win.