I had always interpreted quadratic funding as being a tax rather than a subsidy. You donate $100 and the cause receives $100, then your next marginal $100 is discounted quadratically, and the government receives the difference. I think that just straight up resolves the first two issues.
I think that very quickly just changes the forms of donation to be “something that doesn’t count as a donation” (buying tickets to an event or even just straight up NFTs).
As soon as the quadratic decrease is more than my marginal tax rate, I’m better off buying an NFT from the cause I want to support than making a donation.
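To make that decision rule concrete, here's a toy model. The numbers and the 1/k^2 discount schedule are my own assumptions for illustration, not anything specified upthread: each successive $100 "unit" of a donation delivers less to the cause, and once the effective take exceeds your marginal tax rate, the donation route loses to the NFT route.

#include <cstdio>

int main()
{
    const double unit = 100.0;              // donation "unit" size
    const double marginal_tax_rate = 0.35;  // assumed, for comparison
    for (int k = 1; k <= 4; k++) {
        double to_cause = unit / (k * k);   // assumed quadratic discount
        double taken = 1.0 - to_cause / unit;
        std::printf("unit %d: $%6.2f reaches the cause (%2.0f%% taken)%s\n",
                    k, to_cause, taken * 100,
                    taken > marginal_tax_rate ? "  <- buy the NFT instead" : "");
    }
}

Under these made-up numbers, the very first $100 unit already beats the NFT, but every unit after that is taken at 75% or worse, so the donation channel dies almost immediately.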
This only applies to a situation where you have a function that requires dynamic checks for preconditions. I would suggest that such a function (or how it's being used) is likely a blight already, but tolerable with very few call sites. In which case checking at the call site is the right move. And as you continue to abuse the function perhaps the code duplication will prompt you to reconsider what you are doing.
So if a function dereferences a pointer, it doesn't make sense to check that it's not null inside the function?
Unless there's an actual performance implication, this is all purely a matter of taste. This is the kind of broadly true, narrowly false stuff that causes people to destroy codebases. "I can't write it this way, because I have to push ifs up and fors down!!!" It's a totally fake requirement, and it imposes a fake constraint on the design.
If there is a performance implication to moving the if into the callers, you can get the same effect with an inline function:
static inline int function(blob *ptr, int arg)
{
    if (ptr == NULL)        // the precondition check lives in one place...
        return ERR_NULL;
    return real_function(ptr, arg); // ...but inlining copies it to every call site
}
Just like that, we effectively moved the if statement into 37 callers, where the compiler may be smart enough to hoist it out of a for loop when it sees that the pointer is never changed in the loop body, or to eliminate it entirely when it sees that the pointer cannot be null.
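For instance, a hypothetical caller (my sketch, reusing `blob`, `function`, and friends from the snippet above):

// `ptr` never changes in the loop body, so the inlined NULL check is
// loop-invariant: the optimizer can hoist it out of the loop, or delete
// it outright wherever it can prove the pointer is non-null.
int process_all(blob *ptr, const int *args, int n)
{
    for (int i = 0; i < n; i++) {
        int rc = function(ptr, args[i]);  // inlined: the check lands here
        if (rc != 0)
            return rc;
    }
    return 0;
}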
free(NULL); // convenient no-op, does nothing
fflush(NULL); // flush all streams; done implicitly on normal exit
time(NULL); // don't store time_t into a location, just return it
strtol(text, NULL, 10); // not interested in pointer to first garbage char
setbuf(stream, NULL); // make stream unbuffered
realloc(NULL, size); // behave like malloc(size)
and others. More examples in POSIX and other APIs:
sigprocmask(SIG_UNBLOCK, these_sigs, NULL); // not interested in previous signal mask
CreateEventA(NULL, FALSE, FALSE, NULL); // no security attributes, no name
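Tangentially, the realloc(NULL, size) case above is what makes the common grow-a-buffer idiom clean. A sketch (my code, not part of any of those APIs):

#include <cstddef>
#include <cstdlib>

// Append to a growable array. Because realloc(NULL, n) acts like malloc(n),
// starting from buf == NULL needs no special case for the first allocation.
int push(int **buf, std::size_t *len, std::size_t *cap, int value)
{
    if (*len == *cap) {
        std::size_t ncap = *cap ? *cap * 2 : 8;
        int *nbuf = static_cast<int *>(std::realloc(*buf, ncap * sizeof *nbuf));
        if (nbuf == NULL)
            return -1;
        *buf = nbuf;
        *cap = ncap;
    }
    (*buf)[(*len)++] = value;
    return 0;
}

Start with int *a = NULL; size_t len = 0, cap = 0; and just call push; no separate "first allocation" branch anywhere.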
I'm sorry. Are you claiming the people who designed those functions made good choices? They altered the behavior of the function considerably for a single input value that is more likely to be a bug than not.
You don't need an explicit rule, you just need to be smarter than the average mid-curve tries-too-hard-to-feel-right HN poster and realize when you're repeating a calling convention too much.
>I think a blog post... is probably the wrong place for that kind of enlightenment
There's this site full of cool knowledgeable people called Hacker News which usually curates good articles with deep intuition about stuff like that. I haven't been there in years, though.
That still has the issue of Quantity{-100} being a-ok as far as the compiler is concerned, but there are other things one can do (as the article alludes to).
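One of those things, sketched (my code, not the article's): do the range check in the constructor, trading the compile-time hole for a runtime guarantee.

#include <stdexcept>

// Hypothetical checked wrapper: Quantity{-100} still compiles, but it can
// no longer produce a silently-negative value.
class Quantity {
public:
    explicit Quantity(int v) : value_(v) {
        if (v < 0)
            throw std::invalid_argument("Quantity must be non-negative");
    }
    int get() const { return value_; }
private:
    int value_;
};

// Quantity q{-100};  // compiles, but now throws at runtime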
My reading of this article wasn't that these things are impossible in C++, just that they're not the default, and the first thing you try as a beginner is perfectly wrong.
I didn't realize that CVE was funded by the DHS. Isn't it better for it to be independent and not funded by an intelligence agency?
It's enough of a public good to have a common advisory for vulnerabilities that FAANG should just kick it a few million a year. How much can it possibly cost to run this anyway?
You can look it up online: nuclear power is actually one of the safest methods of energy generation, behind solar and ahead of wind, because sometimes dudes fall from the tops of the windmills.
Could you start looking at the second-order effects of the meltdown to get a higher death toll? Probably, but then you could also look at all of the pollutants generated by solar panels, and the fact that they get shipped to Africa, crushed up, and thrown in the ground, to make solar's death toll look higher too.
I think I already addressed this point. You also need to think about second-order effects, but like I said, there are second-order effects to all of these solutions. Just because nuclear's side effects are easier to dramatize doesn't mean that it is necessarily more deadly or more harmful to the environment.
That's because the deaths would be lost in a sea of ordinary cancers. Hundreds of additional cancers would not be detectable; that would be below the statistical noise floor. Not being detectable does not mean they won't occur.
Anyway, let me steelman what you're aiming at here. I think you want to argue not that hundreds of deaths won't occur, but that hundreds of deaths don't matter that much. These are statistical deaths, so it's appropriate to treat them using the "statistical value of a human life". This is the value of a life to be used for policy purposes (deciding whether a safety measure is needed, whether spending on a medical treatment is appropriate, etc.). In the US, it's around $12M per life. So, 200 (say) deaths would have a value of $2.4B. This is not enormous compared to the overall cost of the accident, even to the utility. It could be reasonable to treat radiation releases like Fukushima's by fining the polluter an amount related to this value.
Under this sort of regulatory regime, the goal is not to avoid all releases, but to keep releases small enough that the utilities have the resources to pay the fines. So: no 100,000-death accidents. Nuclear power plants designed to this concept could be permitted some small radiation release in accidents.
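Back-of-envelope version of that (the $12M VSL figure is from above; the death counts are made-up scenarios):

#include <cstdio>

int main()
{
    const double vsl = 12e6;  // approx. US statistical value of a life
    const double scenario_deaths[] = {200, 1000, 100000};
    for (double d : scenario_deaths)
        std::printf("%6.0f statistical deaths -> ~$%.1fB in fines\n",
                    d, d * vsl / 1e9);
    // 200 deaths -> $2.4B, payable; 100,000 -> $1,200B, which no utility
    // could pay, hence sizing the allowed releases to the fines.
}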
Some people take it way too far, but I think my invocation was just trivially true. We can't come to any conclusion about the prevalence of life elsewhere in the universe, just based on the fact that we exist.
The bounty is not a market. It's a subsidized incentive to subvert the market, and to give greyhat hackers a reason to be white-tinged instead of black-tinged. I would conservatively guess this guy could have found at least 30 people willing to pay $500 each for details on this exploit ($15,000 total), netting him $5,000 more than Google paid him to do the right thing.
Probably the risk of going to jail outweighs the extra $5k, but if a company is serious about its bug bounty program, it should offer a reward that's competitive with what the black market will pay, and I don't think that's hard to do.