Why don't scientists publish anonymously? We already have double-blind peer review. This seems like such an obvious idea that there must be some issue.
Authors can still get reputation, recognition, and compensation for their papers, without anyone knowing who wrote which paper, via public/private key pairs and a blockchain. Every time an author publishes a paper, they generate a new address and attach the public key to the paper. Judges send awards (NFTs) and compensation to that key without knowing who holds it, and if the same award type is given to multiple papers, an author can display it without anyone knowing which of those papers is theirs.
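For concreteness, here is a minimal sketch of the per-paper key idea in Python, using the third-party `ecdsa` package. The registry dict stands in for a chain, and names like `new_paper_identity` and `grant_award` are made up for illustration; they are not any existing platform's API.

```python
# Minimal sketch of per-paper pseudonymous keys, using the third-party
# `ecdsa` package (pip install ecdsa). Function names are illustrative,
# not part of any real publishing platform or chain API.
import hashlib
import os

from ecdsa import SigningKey, SECP256k1


def new_paper_identity():
    """Generate a fresh key pair for a single paper.

    The private key never leaves the author's machine; only the derived
    address is attached to the paper, so papers by the same author stay
    unlinkable (assuming keys are never reused).
    """
    sk = SigningKey.generate(curve=SECP256k1)
    vk = sk.get_verifying_key()
    address = hashlib.sha256(vk.to_string()).hexdigest()[:40]
    return sk, vk, address


def grant_award(award_registry, award_type, address):
    """A judge records an award against an address without knowing who holds it.

    On a real chain this would be an NFT minted to the address; here it is
    just an entry in an in-memory dict standing in for the ledger.
    """
    award_registry.setdefault(award_type, set()).add(address)


def prove_award(award_registry, award_type, sk, vk, address):
    """Author proves they hold an awarded key by signing a fresh challenge."""
    if address not in award_registry.get(award_type, set()):
        return False
    challenge = os.urandom(32)   # issued by the verifier in practice
    signature = sk.sign(challenge)
    return vk.verify(signature, challenge)


if __name__ == "__main__":
    registry = {}
    sk, vk, addr = new_paper_identity()           # done once per paper
    grant_award(registry, "best-paper-2025", addr)
    print(prove_award(registry, "best-paper-2025", sk, vk, addr))  # True
```

One caveat: the simple challenge-response in `prove_award` reveals which address (and therefore which paper) the prover controls. Displaying an award without pointing at a specific paper would need something stronger, such as a ring signature over all addresses that received that award type.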
With LLMs, even writing style can be erased (and as a side effect, the paper can be rendered in different formats for different audiences). Judges can use objective criteria, so they can't be bribed without others noticing; when the paper is an algorithm and the criterion is a formal proof, the "judge" can even be a smart contract (a toy version is sketched at the end). In practice I think that would cover only a small minority of papers, but even with criteria that aren't fully objective, it would be hard for a judge to promote an undeserving paper without drawing skeptics, because a wide audience can check whether a paper actually meets the stated criteria. Any other potential flaws?
1. It's a very small community, and even double-blind peer review is hard to keep blind. Think about it this way: what do you think two physicist colleagues talk about at a conference? How do you know who to talk to to collaborate on a problem? (Yes, people still discuss problems out loud, in person.)
2. Labs are specialized. You choose a lab to work at based on what it's working on. How are you going to choose where to spend your Ph.D. or postdoc if you don't know what each lab is working on and how productive it is?
3. We are all still human. We are wired to know the social systems around us. The whole thing would be a charade.
Ok, then scientists can form groups where they know each other, but publish anonymously outside those groups.
It doesn't solve all the issues, but it at least lets scientists be "activists" (really, just share their opinions like any other human) without it affecting their credibility. Even if they're doxxed, they can eventually regain anonymity, because other scientists with different views will eventually publish papers on the same subject, and the only way to tell who published what is by the content.
Right now, scientists can already share their opinions anonymously. This works well enough, except that they can only share them in person with people they trust, and if they get doxxed, they can't detach their old posts from the name on their papers.
Those may not be the solutions, but the problem certainly exists. I'm in academia, and even I'll admit it has a lot of nepotism. People who are famous or infamous get identified despite double-blind review (by stylometry and subject matter), and reviewers are biased for or against them. See also comments like https://news.ycombinator.com/item?id=45396377#45396617: "It’s basically impossible to make a career as a scientist these days without constantly promoting yourself and your work unfortunately". If attaching your name to a paper becomes taboo, perhaps self-promotion will matter less, and if results can be judged algorithmically, it definitely will.
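As for judging "algorithmically": here is a toy sketch of the simplest version, where the award criterion is a pre-committed public test suite and the "judge" is just a script anyone can rerun. Everything in it (the hash commitment, the benchmark format) is hypothetical; a real smart-contract judge would more likely verify a machine-checkable proof than execute submissions directly.

```python
# Toy sketch of "the judge is a program": an award is granted only if a
# submission passes a check that was committed to in public beforehand.
import hashlib
import json


def committed_digest(test_cases):
    """Hash of the test suite, published before submissions open, so the
    judge can't quietly swap in easier tests for a favored paper."""
    return hashlib.sha256(json.dumps(test_cases, sort_keys=True).encode()).hexdigest()


def judge(submission_fn, test_cases, published_digest, award_registry, address):
    """Grant the award iff the tests match the commitment and all of them pass.

    Because the inputs and the rule are public, anyone can rerun this and
    notice if an award was granted to a submission that didn't earn it.
    """
    if committed_digest(test_cases) != published_digest:
        raise ValueError("test suite does not match the published commitment")
    if all(submission_fn(case["input"]) == case["expected"] for case in test_cases):
        award_registry.setdefault("passes-benchmark", set()).add(address)
        return True
    return False


if __name__ == "__main__":
    tests = [{"input": 3, "expected": 9}, {"input": 5, "expected": 25}]
    digest = committed_digest(tests)   # published ahead of time
    registry = {}
    print(judge(lambda x: x * x, tests, digest, registry, "addr-of-paper"))  # True
```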