bloppe's comments

> For years, I opposed Universal Basic Income, firmly and reflexively.

Bold to open your economic diatribe by discrediting your own economic reasoning.


Buy rugs?

Makes sense after a rug pull I guess? /s

If you continue to get mountains of slop applications after introducing an application fee, then at least you have a new revenue stream.

On the company side, you have a new revenue stream. On the ANPL side, you have another product you can securitize. Revenue generation and risk transfer, a win-win!

Does this count as news?

I agree that it's unconscionable to consign a centenarian to unwanted labor, but the article says this is not the case for Ginny.

Also, wages in the US have not stagnated at all. Wage growth in the poorest quartile has outpaced both inflation and wage growth in the other three quartiles.

Perhaps this is a bit of projection by the British Guardian.


They're sandboxed if you use Bazel. Not as much as the Nix people would like, but Bazel tests get read-only access to the host filesystem, except for /tmp.

Right, I should have said there are conventions and libraries you can use to limit the scope of tests, but that requires intention and diligence. Fundamentally, "go test" can run anything a normal Go program can.
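For what it's worth, a minimal sketch of one such convention using only the standard testing package (nothing here enforces isolation; the test just confines its writes to a per-test scratch directory):

    // Sketch: keep a test's filesystem writes inside t.TempDir(), a
    // fresh directory created per test and removed automatically.
    package example

    import (
        "os"
        "path/filepath"
        "testing"
    )

    func TestWritesStayInScratchDir(t *testing.T) {
        dir := t.TempDir() // unique, auto-cleaned scratch dir
        path := filepath.Join(dir, "out.txt")
        if err := os.WriteFile(path, []byte("hello"), 0o644); err != nil {
            t.Fatal(err)
        }
        // Nothing stops a test from writing elsewhere; that's the
        // "intention and diligence" part.
    }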

You could provide decently meaningful and targeted sandboxing using mount namespaces and an overlay FS, while retaining sudo privileges for what you need to do.
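A rough Go sketch of that idea, assuming Linux; the overlay mount itself is omitted, and mapping your uid to root inside a new user namespace stands in for "retaining sudo" without needing real root:

    // Sketch: run the test command in fresh mount + user namespaces.
    // Any overlayfs the child then mounts over the source tree stays
    // invisible to the host. Linux-only; error handling trimmed.
    package main

    import (
        "os"
        "os/exec"
        "syscall"
    )

    func main() {
        cmd := exec.Command("go", "test", "./...")
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        cmd.SysProcAttr = &syscall.SysProcAttr{
            // New mount namespace: the child's mounts stay private.
            // New user namespace: we appear as root inside, so the
            // child could set up an overlay without real sudo.
            Cloneflags: syscall.CLONE_NEWNS | syscall.CLONE_NEWUSER,
            UidMappings: []syscall.SysProcIDMap{
                {ContainerID: 0, HostID: os.Getuid(), Size: 1},
            },
            GidMappings: []syscall.SysProcIDMap{
                {ContainerID: 0, HostID: os.Getgid(), Size: 1},
            },
        }
        if err := cmd.Run(); err != nil {
            os.Exit(1)
        }
    }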

The website credits include roles for "decompilation" and "porting". So I guess it was decompiled from the original binary and ported to TS.

Ah, this clarifies the GX references I mentioned in another comment.

I wonder if there's a way to tax the frivolous submissions. There could be a submission fee that would be fully reimbursed iff the submission is actually accepted for publication. If you're confident in your paper, you can think of it as a deposit. If you're spamming journals, you're just going to pay for the wasted time.

Maybe you get reimbursed for half as long as there are no obvious hallucinations.


The journal that I'm an editor for is 'diamond open access', which means we charge no submission fees and no publication fees, and publish open access. This model is really important in allowing legitimate submissions from a wide range of contributors (e.g. PhD students in countries with low levels of science funding). Publishing in a traditional journal usually costs around $3000.

Those journals are really good for getting practice writing and submitting research papers, but they are sometimes already seen as less impactful because of the quality of the papers they accept. At least where I am, I don't think the advent of AI writing is going to affect how they are seen.

In the field of Programming Languages and Formal Methods, many of the top journals and conference proceedings are open access.

Who pays the operating expenses?

Welcome to the new world of fake stuff, I guess.

If the penalty for a crime is a fine, then that law exists only for the lower class.

In other words, such a structure would not dissuade bad actors with large financial incentives to push something through a process that grants validity to a hypothesis. A fine isn't going to stop tobacco companies from spamming submissions saying smoking doesn't cause lung cancer, or social media companies from spamming submissions claiming their products aren't detrimental to mental health.


> In other words, such a structure would not dissuade bad actors with large financial incentives to push something through a process that grants validity to a hypothesis.

That's not the right threat model. The existing peer review process is already weak against high-effort but conflicted research.

Instead, the threat model is one closer to that of spam: the submitting authors don't care about the content of their submission at all, but need X publications in high-impact outlets for their CV or grant application. Predatory journals exploit this as part of a pay-to-play problem, but the low reputation of those journals limits how desirable their impact factor is.

This threat model relies on frequent but low-quality submissions, and a submission fee would make taking multiple kicks at the can unviable.


I'm sure my crude idea has its shortcomings, but this feels superfluous. Deep-pocketed propagandists can do all sorts of things to pump their message whether a slop tax exists or not. There may or may not be existing countermeasures at journals for that. This just isn't really about that. It's about making sure that, in the process of spamming the journal, they also fund the review process, which would otherwise simply bleed time and money.

That would be tricky. I often submitted to multiple high-impact journals, going down the list until someone accepted the paper. You try to ballpark where you can go, but it can be worth aiming high. Maybe this isn't a problem and there should be payment for the effort to screen the paper, but then I would expect the reviewers to be paid for their time.

I mean, your methodology also sounds suspect. You're just going down a list until it sticks. You don't care where it ends up (within reason, I'm sure), just as long as it is accepted and published somewhere (again, within reason).

No different from applying to jobs. Much like companies, there are a variety of journals with varying levels of prestige or that fit your paper better/worse. You don't know in advance which journals will respond to your paper, which ones just received submissions similar to yours, etc.

Plus, the time from submission to acceptance/rejection can be long. For cutting-edge science, you can't really afford to wait to hear back before applying to another journal.

All this to say that spamming 1,000 journals with a submission is bad, but submitting to the journals in your field that are at least decent fits for your paper is good practice.


Scientists are incentivized to publish in as high-ranking a journal as possible. You’re always going to have at least a few journals where your paper is a good fit, so aiming for the most ambitious journal first just makes sense.

It's standard practice; nothing suspect about their approach. And you won't keep going lower and lower, because at some point you'll be tired of re-formatting, or a doctoral candidate's funding will be used up, or the topic will have "expired" (i.e. been overtaken by reality/competition).

This is effectively standard across the board.

Are you at all aware of how scientific publishing works?

You must have no idea how scientific publishing works. The typical acceptance rate for an OK/good journal is 10-20% (and it was like that even before LLMs). Also, it's a great idea to make the business of scientific publishing even more predatory: scientists already write articles for free, review for free, and then pay for publication; now they would also have to pay just to submit something, with a 90% chance of rejection. And think about what kind of incentives that would create.

Pay to publish journals already exist.

This is sorta the opposite of pay to publish. It's pay to be rejected.

I would think it would act more like a security deposit, and you'd get back 100%, no profit for the journal (at least in that respect).

I'm pretty sure the reviewers of those are still volunteers; the publisher is just making even more money!

I’d worry about creating a perverse incentive to farm rejected submissions. Similar to those renter application fee scams.

Pay to review is common in Econ and Finance.

Variation I thought of on pay-to-review:

Suppose you are an independent researcher writing a paper. Before submitting it to journals for review, you could hire a published author in that field to review it for you (independently of the journal), tell you whether it is submission-worthy, and help you improve it to the point where it is. If they wanted, they could be listed as a coauthor; if they don't want that, at least you'd acknowledge their assistance in the paper.

Because I think there are two types of people who might write AI slop papers: (1) people who just don't care and want to throw everything at the wall and see what sticks; (2) people who genuinely desire to seriously contribute to the field, but don't know what they are doing. Hiring an advisor could help the second group of people.

Of course, I don't know how willing people would be to be hired to do this. Someone who was senior in the field might be too busy, might cost too much, or might worry about damage to their own reputation. But there are so many unemployed and underemployed academics out there...


Better yet, make a "polymarket" for papers, where people can bet on which papers make it, and rely on "expertise arbitrage" to punish spam.

Doesn't stop the flood, i.e. the unfair asymmetry between the effort to produce and the effort to review.

Not if submissions require some small mandatory bet.

Now accepting money from slop companies to verify their slop as notslop

> There could be a submission fee that would be fully reimbursed if the submission is actually accepted for publication.

While well-intentioned, I think this is just gate-keeping. There are mountains of research that result in nothing interesting whatsoever (aside from learning about what doesn't work). And all of that is still valuable knowledge!


Sure, but now we can't even assume that such research is submitted in good faith anymore. There just seems to be no perfect solution.

Maybe something like a hierarchy (or DAG?) of trusted peers, where groups like universities certify the relevance and correctness of papers by attaching their name and a global reputation score to them. When a paper is found to be "undesirable" and doesn't pass a subsequent review, their reputation score deteriorates (with the penalty propagating along the whole review chain), in such a way that:

- The overall review model is distributed, hence scalable (everybody may play the certification game and build a reputation score while doing so).
- Trusted/established institutions have an incentive to keep their global reputation score high, and either apply a very high level of scrutiny to the review or delegate to very reputable peers.
- "Bad actors" are immediately punished and universally recognized as such.
- "Bad groups" (such as departments consistently spamming with low-quality research) become clearly identified as such within the greater organisation (the university), which can encourage a mindset of quality above quantity.
- "Good actors within a bad group" are not penalised either, because they could circumvent their "bad group" on the global review market by having reputable institutions (or intermediaries) certify their good work.

There are loopholes to consider, like a black market of reputation trading (I'll pay you generously to sacrifice a bit of your reputation to get this bad science published), but even that cannot pay off long-term in an open system where all transactions are visible.

Incidentally, I think this may be a rare case where a blockchain makes some sense?
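A toy Go sketch of just the penalty-propagation part; the names, starting scores, and decay factor are all made up for illustration:

    // Sketch: a certification chain where a retracted paper penalizes
    // every endorser along the chain, with the penalty decaying as it
    // propagates outward. Everything here is illustrative.
    package main

    import "fmt"

    type Certifier struct {
        Name       string
        Reputation float64
    }

    // Penalize docks each certifier in the chain, ordered from the
    // endorser closest to the paper outward, by a geometrically
    // decaying amount.
    func Penalize(chain []*Certifier, penalty, decay float64) {
        for _, c := range chain {
            c.Reputation -= penalty
            penalty *= decay
        }
    }

    func main() {
        reviewer := &Certifier{"reviewer-A", 1.0}
        dept := &Certifier{"dept-B", 1.0}
        univ := &Certifier{"university-C", 1.0}

        // A bad paper slipped through: the direct endorser is hit
        // hardest, the institutions behind them proportionally less.
        Penalize([]*Certifier{reviewer, dept, univ}, 0.2, 0.5)
        for _, c := range []*Certifier{reviewer, dept, univ} {
            fmt.Printf("%s: %.2f\n", c.Name, c.Reputation)
        }
    }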


You have some good ideas there; it's all about incentives and public reputation.

But it should also be fair. I once caught a team at a small Indian branch of a very large three-letter US corporation violating the "no double submission" rule of two conferences: they submitted the same paper to both, and both naturally landed in my reviewer inbox, for a topic I am one of the experts in.

But all the other employees should not be penalized for the violations of three researchers.


This idea looks very similar to journals! Each journal has a reputation; if they publish too much crap, the crap is not cited and the impact factor decreases. They also have an informal reputation, because the impact factor has its own problems.

Anyway, how will universities check the papers? Someone must read the preprints, like the current reviewers. Someone must check the incoming preprints, find reviewers, and make the final decision, like the current editors. ...


How would this work for independent researchers?

(no snark)


VMs are pretty heavyweight for running all the JavaScript on a modern page. A proper VM requires a dedicated kernel: Firecracker boots a whole ~40MB Linux kernel just to run a "function". A container doesn't have this baggage, but would never be considered secure enough for the web environment.
