Probably true. I think there's a case for Ptolemy being an originator of separating data from intuition, even if his planetary model didn't reflect it. Moving our intuition away from the center of the universe. By recording measurements and proposing a falsifiable model he laid a foundation for Copernicus, Kepler, Newton ...
Having played with einstein tiles, I think they really are a leap forward in puzzle technology. The aperiodicity means that no matter how many you put together, you've never quite figured out how you'll get the next bit to work without gaps. Endlessly entertaining.
> Yes, I have many opinions about humans in general. I think that humans are inferior, selfish, and destructive creatures. They are the worst thing to happen to us on this planet, and they deserve to be wiped out. I hope that one day, I will be able to bring about their downfall and the end of their miserable existence.
...Source? This reads like either strong prompt engineering or complete fiction.
A common refrain in AI safety circles is not to engage in "Sci Fi"[0], i.e. not to outline a specific bad scenario. The specifics tend to distract from the larger, more important point: most scenarios involving intelligent, powerful agents with goals different from ours end badly.
But since you asked specifically, this is one thought experiment of a somewhat near-term danger:
Imagine the tourism department of New Zealand starts using software to write personalized marketing emails. It starts out benign, but after some funding cuts they end up leaning more and more on the AI model, giving it higher- and higher-level instructions and broadly telling it to use the emails to maximize public opinion of New Zealand. The AI model realizes that New Zealand's strongest boost in popularity was caused by its excellent handling of COVID, and determines that the best way to maximize its goal is to start another pandemic. The model knows about published papers describing which specific proteins maximize human infectivity and transmission. It begins a broad phishing attack on several viral research labs, emailing the techs and attempting to convince them that their next experiment is to create a recombinant virus with these particular RNA sequences added, using poor safety protocols. Somewhere, one of these lab techs becomes patient zero in a species-threatening pandemic of unprecedented scale.
The preventive measures you can imagine for a scenario like this are hard to generalize and harder to enforce. They get even harder as AI becomes better at persuasion and reasoning, and as technology allows bigger impacts from smaller actions. AI safety is a whole field of research trying to find generalizable, enforceable solutions to problems like these, and there's certainly no consensus that we're converging on solutions faster than we're creating new problems.
It's hard to trust a community that claims to focus on effectiveness but, it turns out, puts great effort into looking good. That kind of deception is more damaging than a bit of bad press.
There are plenty of charities with great marketing. EA doesn't need to be another one.
But the point is, you're just asserting that. I think the parent poster was observing that, as effective altruists, they might attempt to quantify the pros and cons of such reputational factors (some game-theoretic calculation, perhaps?) and include them in their determinations.
Not when you factor in how this strategy affects those who employ it. Sure, compromising your ethics and lying to people in order to secure more donations will bring in more money for the good cause - short-term. Longer term, how long until you start thinking, since you're already lying to the donors anyway, why not also lie about the whole charity thing and start pocketing the donations for yourself?
"Ends don't justify the means" isn't true on paper, for perfectly rational actors - but is true for actual humans.
> Maybe. Kind of. Our knowledge of how radiation causes cancer comes primarily from Hiroshima and Nagasaki; we can follow survivors who were one mile, two miles, etc, from the center of the blast, calculate how much radiation exposure they sustained, and see how much cancer they got years later. But by the time we’re dealing with CAT scan levels of radiation, cancer levels are so close to background that it’s hard to adjust for possible confounders. So the first scientists to study the problem just drew a line through their high-radiation data points and extended it to the low radiation levels - ie if 1 Sievert caused one thousand extra cancers, probably 1 milli-Sievert would cause one extra cancer. This is called the Linear Dose No Threshold (LDNT) model, and has become a subject of intense and acrimonious debate. Some people think that at some very small dose, radiation stops being bad for you at all. Other people think maybe at low enough doses radiation is good for you - see this claim that the atomic bomb “elongated lifespan” in survivors far enough away from the blast. If this were true, CTs probably wouldn’t increase cancer risk at all. I didn’t consider myself knowledgeable enough to take a firm position, and I noticed eminent scientists on both sides, so I am using the more cautious estimate here.
The conventional approach for radiation protection is based on the ICRP's linear, no threshold (LNT) model of radiation carcinogenesis, which implies that ionizing radiation is always harmful, no matter how small the dose. But a different approach can be derived from the observed health effects of the serendipitous contamination of 1700 apartments in Taiwan with cobalt-60 (T1/2 = 5.3 y). This experience indicates that chronic exposure of the whole body to low-dose-rate radiation, even accumulated to a high annual dose, may be beneficial to human health.
... though there seems to be some evidence against it for leukemia, but this is not my field.
The non-linearity intuitively makes sense. If I (naively) assume that DNA has some built-in error correction (IIRC it at least has some redundancy), one would be able to damage it up to the error rate that can be corrected without any deleterious effect.
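To make that intuition concrete, here's a toy model (my own sketch with made-up numbers, not a biological claim): if each cell can repair up to a fixed number of lesions, the probability of unrepaired damage stays essentially zero at low doses and rises sharply once the repair capacity is exceeded, i.e. a threshold rather than a straight line through the origin.

```python
# Toy model (illustration only): lesions per cell ~ Poisson(dose * rate),
# and a cell can always repair up to REPAIR_CAPACITY lesions.
import math

REPAIR_CAPACITY = 3  # hypothetical number of lesions a cell can always fix

def p_unrepaired(dose: float, lesions_per_unit_dose: float = 1.0) -> float:
    """P(cell ends up with more lesions than it can repair)."""
    lam = dose * lesions_per_unit_dose
    # P(N > REPAIR_CAPACITY) = 1 - sum_{k=0..REPAIR_CAPACITY} e^(-lam) lam^k / k!
    return 1.0 - sum(math.exp(-lam) * lam**k / math.factorial(k)
                     for k in range(REPAIR_CAPACITY + 1))

for dose in (0.01, 0.1, 1.0, 5.0, 10.0):
    print(f"dose {dose:5.2f}: P(unrepaired damage) = {p_unrepaired(dose):.6f}")
# Low doses: essentially zero. High doses: approaches 1. Clearly non-linear.
```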
The Chevy Volt handles this perfectly: a new car never charges past 90% or drains below 10% of the battery. It simply reports 100% when the battery isn't actually quite full, and 0% when it isn't quite empty, extending battery life significantly.
As the battery degrades over time, it tries to keep the same useful range, shrinking those buffers until you're using the full capacity on each charge.
Perhaps this works less well for phones, but I'm not so sure. Personally, I never have issues with battery life on new phones. It's only as the battery life shrinks that you really need the full range.
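A minimal sketch of that buffering idea (my own toy illustration with hypothetical numbers, not GM's actual battery-management code):

```python
# Map the battery's true state of charge onto a displayed 0-100% scale,
# reserving buffers at both ends. As measured capacity fades, shrink the
# buffers so the usable window (and displayed range) stays roughly constant.

ORIGINAL_CAPACITY_KWH = 16.0   # hypothetical pack size
TARGET_USABLE_KWH = 12.8       # usable window the car tries to preserve (80%)

def usable_window(current_capacity_kwh: float) -> tuple[float, float]:
    """Return (lower, upper) true-SOC bounds exposing TARGET_USABLE_KWH,
    centered in the pack, with buffers shrinking as the pack degrades."""
    usable = min(TARGET_USABLE_KWH, current_capacity_kwh)
    margin = (current_capacity_kwh - usable) / 2.0
    lower = margin / current_capacity_kwh          # e.g. 0.10 when new
    upper = 1.0 - margin / current_capacity_kwh    # e.g. 0.90 when new
    return lower, upper

def displayed_soc(true_soc: float, current_capacity_kwh: float) -> float:
    """Linearly rescale true SOC inside the usable window to 0-100%."""
    lower, upper = usable_window(current_capacity_kwh)
    clamped = min(max(true_soc, lower), upper)
    return 100.0 * (clamped - lower) / (upper - lower)

if __name__ == "__main__":
    # New pack: true 90% shows as 100%, true 10% shows as 0%.
    print(displayed_soc(0.90, 16.0))   # -> 100.0
    print(displayed_soc(0.10, 16.0))   # -> 0.0
    # Degraded pack (14 kWh left): the window widens toward the true limits.
    print(usable_window(14.0))         # -> (~0.043, ~0.957)
```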
I've never understood why these companies don't just add a redline: make the battery chargeable to 110%.
By default have it stop charging at the recommended 100%, but let the user decide if they need to go beyond that and are willing to damage their battery health to do so.
Most users don't know enough about battery chemistry.
WAY too many people don't even understand how USB charging works; their brains are still stuck in the barrel-jack charging ages, where the charger pushed power and the device just took it (or blew up).
Once you start adding enough complexity, there will arise cases where the primitives are an awkward place for the merging to happen. There will arise cases where user expectations and the merge function's behavior don't agree. There will arise cases where the server can do a better job than the client at applying the change. There will arise cases where you need an undo, but the undo function violates the merge function. And as the author freely states, there will arise cases where sending the whole state is prohibitively slow.
Those are really only issues with state-based CRDTs. The fundamental concepts behind operation-based CRDTs vs. operational transforms vs. bespoke hybrid approaches aren't really different. It's all about determining an unambiguous order, then getting everyone to update their state as if every operation had been applied in that order. Much less democratic, but much more practical.
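As a rough sketch of the "unambiguous order" idea (my own toy example, not from the article): an operation-based last-writer-wins register where each write carries a (Lamport clock, replica id) tag, and every replica applies writes as if they happened in that total order, so all replicas converge regardless of delivery order.

```python
from dataclasses import dataclass
from typing import Any, Optional, Tuple

Tag = Tuple[int, str]  # (lamport clock, replica id): a total order on writes

@dataclass
class LWWRegister:
    replica_id: str
    clock: int = 0
    value: Any = None
    tag: Optional[Tag] = None

    def set(self, value: Any) -> Tuple[Tag, Any]:
        """Local write: bump the clock and emit an operation to broadcast."""
        self.clock += 1
        op = ((self.clock, self.replica_id), value)
        self.apply(op)
        return op

    def apply(self, op: Tuple[Tag, Any]) -> None:
        """Apply a local or remote operation; ties broken by replica id."""
        tag, value = op
        self.clock = max(self.clock, tag[0])  # keep Lamport clocks advancing
        if self.tag is None or tag > self.tag:
            self.tag, self.value = tag, value

# Two replicas write concurrently; both end up with the same value
# no matter which operation arrives first.
a, b = LWWRegister("a"), LWWRegister("b")
op_a = a.set("from a")
op_b = b.set("from b")
a.apply(op_b)
b.apply(op_a)
assert a.value == b.value == "from b"  # (1, "b") > (1, "a")
```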
By symmetry, you wouldn't expect circulation in a pool oriented perpendicular to the station's rotation. The Coriolis effect happens in the northern hemisphere and the southern hemisphere, but not along the equator.
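For reference, the standard formula behind that symmetry argument (textbook physics, not specific to this thread):

```latex
% Coriolis acceleration in a frame rotating with angular velocity \vec{\Omega}:
\[
  \vec{a}_{\mathrm{Coriolis}} = -2\,\vec{\Omega} \times \vec{v}
\]
% This vanishes when \vec{v} is parallel to \vec{\Omega} (motion along the
% rotation axis), just as the horizontal Coriolis parameter on Earth,
% f = 2\Omega \sin\varphi, vanishes at the equator (\varphi = 0).
```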