This is just back to the old problem: most research is irrelevant to anyone beyond a few researchers and largely inconsequential to the world. This means there will be no money to replicate most of it.
Anything truly critical will (eventually) go through some replication/control of sorts (but it can take a long time).
You can either shut down most research and place your bets on what to keep and replicate, or you cast a broad net and accept a lot of incorrect results mixed in.
If you go for the former, though, you run the risk of keeping the wrong things. You would need a way to quantify the direct and indirect costs of all the bad research and see whether that trades off favorably against a much smaller research surface. I'm not sure it does - empirical data matters much less for a lot of big decisions than people often make it out to.
> This is just back to the old problem: most research is irrelevant to anyone beyond a few researchers and largely inconsequential to the world. This means, there will be no money for replication for most things.
If it's inconsequential, then wouldn't that money be better spent on replications or on other research that is consequential? I'm not really clear on what you're suggesting - although maybe I wasn't clear on what I've been suggesting either.
Edit: to clarify, there are multiple ways to reorganize research. Consider an approach similar to physics, where there's an informal division between theoreticians and experimentalists. What if we had two different kinds of publications in the social sciences: one proposing and/or refining experimental designs to correct possible sources of bias, and another publishing the results of conducting experiments that have been proposed. The experimentalists simply read proposals and apply for grants to conduct experiments, and multiple groups can do so completely independently. Experimentalists must strictly adhere to the proposed experimental design; no deviations can be permitted (as is so common in social science when researchers find uninteresting results), since deviations break the reliability of the results. A proposal should probably undergo a few rounds of refinement before experimentalists feel confident conducting the experiment, but I think the overall approach could work.