"Deinococcus radiodurans is capable of withstanding an acute dose of 5,000 grays (Gy), or 500,000 rad, of ionizing radiation with almost no loss of viability, and an acute dose of 15,000 Gy with 37% viability.[14][15][16] A dose of 5,000 Gy is estimated to introduce several hundred double-strand breaks (DSBs) into the organism's DNA (~0.005 DSB/Gy/Mbp (haploid genome)). For comparison, a chest X-ray or Apollo mission involves about 1 mGy, 5 Gy can kill a human ...."
Some enterprising researchers must have considered engineering this microbe to produce useful products in space, but I don't travel in these circles anymore.
But it's just a side effect of being an extremophile with really good DNA repair machinery. The microbe isn't intentionally resistant to radiation. It's not adapted specifically to radiation environments.
I read about an effort to do this in the 1950s (IIRC it was in Pawpaw: In Search of America's Forgotten Fruit by Andrew Moore, but I could be wrong about that), and as I remember it, most of the irradiated seeds were either sterile or produced deformed offspring.
What's the difference between atomic gardening and regular selective breeding performed under the giant ball of ionizing radiation we have overhead half the day, other than the rate at which mutations occur? Plants with terrible nonviable mutations may be entirely sterile even if we like them; plants with viable but undesirable mutations we simply won't propagate into another generation. It's akin to modern GMO efforts with a shotgun instead of a scalpel, but it did work.
Plants also handle mutations differently, walling them off into burls and cavities and whatnot instead of letting them take over the entire plant the way cancer does in animals. You're unlikely to generate a Plants vs. Zombies scenario here.
Might as well say that beating grapes into pulp without also beating the consumers of grapes gives the juice an opportunity for a one-sided evolutionary advantage.
While it sure sounds straight out of some 50s horror movie, I have a feeling the consequences here are pretty insignificant. The mutant tomatoes I've harvested and eaten from my garden have been quite tasty. Any particular fears in mind?
People have been doing it all around the world for almost a century now, so effectively it is. Unsurprisingly for anyone whose understanding of science extends beyond cheap comic-book tropes, everything is fine.
Radiation isn't evil magic, mutations don't give superpowers. Both are natural phenomena, and they're not anything like they're portrayed in comic books.
Might as well worry about watering your plants. Plants are perfectly fine, they live and grow by nature magic, no need for humans to play god and add water to the mix, what could possibly go wrong?
This is what nature has been doing for billions of years: we have constant background radiation, radiation from the sun that still gets through, and let's not forget everybody's favorite, cosmic rays. The most energetic particle we've ever detected carried about as much kinetic energy as a baseball thrown at 100 km/h. I'd say this, on top of drastically changing environments, is the main fuel of the whole evolution of life on Earth.
You can't build a 100% radiation-shielded environment anywhere. Neutrinos just don't care much about obstacles (they interact very weakly with matter, but a small number still do; that's how we detect them).
On the scale that nature does it, the consumers of plants also evolve.
I can't believe what I'm being asked to argue here; it's "environmentalism" and "public health" and "anti big X" all rolled up into one. I'm on the other side of all those issues, so I wish you'd all get back in your lanes.
Honestly, the biggest "what could go wrong" is that vegetables stop producing the large, useful fruits we eat, which matters if we're trying to grow things for food.
However, much of the relevant regulation (IEC 61508, ISO 26262, DO-178C) requires that systems controlling machines in automotive, rail, or aerospace keep the probability of dangerous failure on the order of 10^-9 per operating hour or lower (over the expected lifespan).
Many critical control systems like this are formally verified and/or extremely well-tested and have redundancy in both software and hardware.
> My daily-driver laptop at home is a T420 from 2011 with a Core 2 Duo, SSD and 8GB RAM. Works fine still.
I am not sure I would be productive with that. Any Core 2 Duo is 10x slower single core and 20x slower multi-core than a current generation laptop CPU at this point.
The problem is software, though. I have an X200s with 4 GiB RAM from 2009. It was interesting to watch Firefox get slower and slower over the years. Granted, it's not only Firefox but also bloated websites which use loads and loads of JS to display what is, in the end, static content. Then again, it's not like JS didn't exist back then: XMLHttpRequest, the thing for dynamic website updates, had been added years prior.
So, yes, a lot of this comes down to software and a massive waste of cycles. I remember one bug in Electron/Atom where a blinking cursor caused something like 10% CPU load. They fixed it, but it says a lot about how broken the entire software stack was at that time, and it hasn't gotten better since.
I mean, think about this: I used 1280x1024 on a 20" screen back in the mid-'90s on (Unix!) machines vastly less powerful than even this X200s. The biggest difference: now you can move windows around visually, whereas back then you moved the outer frame to the new place and then the window got redrawn. And the formatting options in browsers are better, i.e. it is easier to design the layout you want. Plus there is no need for palette changes when switching windows anymore ("true color"). Overall productivity hasn't kept up with the increase in computing power, though. Do you think a machine with 100x the performance will give you 100x the productivity? With some exceptions, the weak link in the chain was, is, and will always be humans, and when there are delays, we are almost always talking about badly "optimized" software (aka bloat). That was an issue back then already and, unfortunately, it hasn't gotten better.
This depends heavily on the workflow. For working with text, listening to music, or even doing some light paint work, my museum-piece 75 MHz K5 running Windows 2000 is enough. For building a multi-platform Python package embedding a compiler, you really want lots of cores; at that point we are talking about a 20x+ difference between a Core 2 Duo and a modern part. For the modern-day web experience you want something in between.
I do development and DevOps on it. Sure there are some intense workloads that I probably couldn’t run, but it works just fine as my daily driver.
I also have a corporate/work laptop from Dell with 32GB RAM, 16 cores @ 4.x GHz etc. - a beast - but it runs Windows (+ antivirus, group policy crap etc.) and is slower in many aspects.
Sure I can compile a single file faster and spin up more pods/containers etc. on the Dell laptop, but I am usually not constrained on my T420.
I generally don’t spend much time waiting for my machine to finish things, compared to the time I spend e.g. writing text/code/whatever.
If you're thinking about 10 different things in quick succession, spontaneously flipping between them without control, your brain can't deliver sustained anticipatory reward for the one thing you actually should be working toward. The brain doesn't magically "know" what is important; presence in consciousness is what determines importance and reward allocation. Normal brains are able to fixate without tremendous effort.
My whole life I could barely sustain a conversation with someone because the moment they started speaking I'd reflexively begin thinking about something else. But when I tried Adderall I could actually have genuine conversations with people, hearing them and thinking about what they were saying and then responding, doing this repeatedly for many minutes. It felt like a superpower.
Some of the "flipping around" might be caused by an inability to discount the reward from the thing you're flipping to; it seems important/rewarding. It's not much different from someone refusing to work on an important thing because they can't stop thinking about this neat thing over here that feels cooler.
Just to connect to the subject at hand.
Hmmm, yes, it could be that people with ADHD (me, for example) can't feel the reward of social contact, so it's hard to listen if not genuinely interested.
Sometimes I try to consciously reward myself for something, and it works. For example, I consciously think about how finishing a little task would make me happier, and then I kind of force a feeling of reward toward the anticipation of having something done. And it works: I feel motivated to do little tasks. If I do this repeatedly and on bigger and bigger tasks, it reduces my ADHD symptoms... but then there's suddenly something else to do, and I do this while thinking about the things I should do and what I could do next, and suddenly I've started 10 things and how the fuck did this happen... where are my keys? :)
The "where are my keys" thing is the worst. I can get locked in a 5 minute loop trying to figure out the next step to leave the house. I'm busy thinking about a cool problem at work or the next step in a game, or an upcoming fun thing, and I cannot for the life of me focus on finding my keys. Or I'll go looking and get reminded I also need to do X before I leave, then start that instead, and restart the whole problem.
I used to have a little jingle I'd subvocally sing: "Badge, wallet, keys, notes, laptop, phone; Owen's fed, doors locked, leaving home". It was my checklist. It's why I'll throw my backpack in the car even if I'm going to meet friends: It just helps me put aside the preparation anxiety to just do the same thing every time.
Those routines are helpful. I also keep prepacked bags - a backpack for work, a gym bag with copies of keys for gym, etc etc. I even have two or three pre-stocked wallets for those bags or I'll get stuck in optimizing my credit card selection for an activity. My wife laughs, but she's the one who loses her phone, cannot find her wallet, and to this day has no idea where her car keys are.
Atomic Habits was helpful. It's about how to set up your subconscious so you don't have to painfully think through every decision all the time. Maybe it's made me less capable, maybe not. But going to the gym at 3 has become so routine that I find myself up in the kitchen before I realize why I left my office.
I'm guessing this is all pre-emergent dementia, honestly.
Yes, the utility behind that behavior is that the brain floods itself with dopamine when task completion feels imminent, in anticipation of the approaching reward. The flood of dopamine is the motivation; it does not imply increased dopamine reception, which is the reward.
That mechanism alone accounts for gambling addiction. Consider that slot machines are a game of random chance against fixed odds: every spin has the same probability of winning as the last, independent of history. Yet the longer a losing streak runs, the more the brain anticipates winning the next time, building dopamine-driven anticipation, even though the player is exactly as likely to keep losing on each iteration.
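That independence is easy to check with a quick simulation (the win probability, streak length, and trial count here are arbitrary illustrations, not real slot-machine odds):

```python
import random

def win_rate_after_streak(p_win=0.05, streak=10, trials=1_000_000, seed=42):
    """Estimate P(win) on the spin immediately following `streak`
    consecutive losses. Each spin is an independent Bernoulli trial,
    so the estimate should match the base rate p_win regardless of
    how long the losing streak was."""
    rng = random.Random(seed)
    losses = 0            # length of the current losing streak
    wins_after = 0        # wins observed right after a long streak
    spins_after = 0       # spins observed right after a long streak
    for _ in range(trials):
        win = rng.random() < p_win
        if losses >= streak:   # this spin follows a losing streak
            spins_after += 1
            wins_after += win
        losses = 0 if win else losses + 1
    return wins_after / spins_after

# The conditional win rate comes out at ~p_win: the machine has no memory.
```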
What's more interesting is that this addiction behavior can be flicked on or off almost instantly, like a light switch, with medication. Stranger still, medically induced gambling addiction (yes, that is a very real thing) affects women far more than men. I don't know whether the cause of this sex difference has been identified.
Can you elaborate on what medications impact gambling behaviors?
I have some addictive/compulsive behaviors that have been hard to shake. GLP-1 agonists look promising, but I'm not sure how to get a prescription since I'm not overweight.
My suggestion, and I am not a doctor or providing medical advice, is to make the addictive stimulus inconvenient. Each iteration must involve more steps, increasing the effort required, and each iteration must also take much longer to complete. This will re-balance the brain away from the previously established conditioning. The more painful, disconnected, or costly (in time, not money) an activity becomes, the more dedicated you must be to achieve it before addiction can set in.
That assumes there is a limit to the amount of effort addicts are willing to put into getting their next dose, which is easily disproved by the experience of caring for opioid addicts. Lesser drugs, even: once upon a time, a seemingly rational benzodiazepine addict got so frustrated with my attempts to get him off of it that he rose from his seat ready to punch me in the face.
There are addicts out there who would sell their own mother for a dose. And I'm not just saying that. One of my former neighbours turned into one of these guys. People wouldn't believe the stories if I told them.
There is always a limit. The realistic constraints of physical and social opportunities available to a given person are limitations irrespective of the person's quantity of motivation, which speaks to asset availability and social enabling. But none of this is relevant. The person to whom I replied is self-motivated to terminate their addictive patterns.
Just a small comment/opinion on the inscrutability of crypto:
Crypto relies on number theory and complexity-theoretic assumptions stronger than P ≠ NP (namely, that one-way/trapdoor functions exist).
I think it is opaque by the very nature of how it works (math).
Understanding finite fields or elliptic curves (integer groups really) made me able to grok a lot of crypto. It is often a form of the discrete-logarithm problem somehow.
That’s not really true. I don’t have a link handy (might try to find later), but I read a rather scathing critique of the state of crypto math by a mathematician. The short summary is that crypto math is overwhelmingly unnecessarily inelegant.
I did not make my point well enough. I don't mind that crypto is inscrutable, that's fine (and unavoidable). Plenty other tech that I use every day is inscrutable (eg TCP, or HyperLogLog, or database query planners, or unicode text rendering, etc). I mind that resources about how to use crypto in software applications are often inscrutable, all the way down to library design, for no good reason. I mean stuff like:
- I learned the other day here on HN that SHA-256 is vulnerable to a length-extension attack, so if you want 256 bits of SHA goodness, you should use SHA-512 and truncate it to 256 bits. This is terrible naming! Name the bad one "SHA-DoNotUse" if it's broken and this is known from the start. Why does it even exist?
- For the first decade or so of JWT library support, many verifiers happily accepted "alg: 'none'" payloads, letting attackers trivially bypass any actual verification. If you wanted JWT safely, you were supposed to know to tell the verifier to only accept the algorithms you were going to use when creating tokens.
- Hash algorithms have names such as "MD5", "SHA1", "bcrypt" and "argon2", ie meaningless character soup. I can't blame novice programmers for just using whatever hash algorithm is the default of their language's hash function, resulting in MD5-hashed passwords being super common until about a decade ago.
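The JWT point in the second bullet boils down to: before doing anything else, reject any token whose header names an algorithm you don't issue. A stdlib-only Python sketch of that allowlist idea (function names are mine, not from any JWT library; real code should use a maintained library and pass it an explicit algorithm list):

```python
import base64
import json

def jwt_header(token: str) -> dict:
    """Decode just the (unverified) JWT header segment."""
    header_b64 = token.split(".")[0]
    # JWTs use unpadded base64url; restore padding before decoding.
    header_b64 += "=" * (-len(header_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(header_b64))

ALLOWED_ALGS = {"HS256"}  # pin exactly what you create tokens with

def check_alg(token: str) -> None:
    alg = jwt_header(token).get("alg")
    if alg not in ALLOWED_ALGS:  # rejects "none", missing alg, surprises
        raise ValueError(f"disallowed JWT alg: {alg!r}")
```

This check alone is not verification, of course; it only closes the "alg: none" door before the signature is checked.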
Security resources and libraries for programmers should be focused on how a thing should be used, not on how it works. That's what this book gets right (and what its page on elliptic curves gets so wrong).
Or, for another example, my favourite bit of crypto library design is PHP's `password_hash()` function[0]. They added it to the language after the aforementioned decade of MD5-hashed passwords, and that fixed it in one fell swoop. `password_hash()` is great, because it's designed for a purpose, not for some arbitrary set of crypto properties. The purpose is hashing a password. To verify a hashed password, use its brother `password_verify()`. Easy peasy! It's expertly designed, it supports rehashing passwords when necessary, and you don't need to understand any crypto to use it! I don't understand why all other high level programming languages didn't immediately steal this design.
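For readers outside PHP, here is a rough Python sketch of the same design idea, using the stdlib's scrypt. This is not PHP's actual implementation (which uses bcrypt/argon2 and a self-describing hash format that enables rehashing); it only illustrates the purpose-built hash/verify pairing:

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> str:
    """Hash a password with a fresh random salt; store the result."""
    salt = os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt.hex() + "$" + digest.hex()

def verify_password(password: str, stored: str) -> bool:
    """Recompute the hash with the stored salt and compare in constant time."""
    salt_hex, digest_hex = stored.split("$")
    digest = hashlib.scrypt(password.encode(),
                            salt=bytes.fromhex(salt_hex), n=2**14, r=8, p=1)
    return hmac.compare_digest(digest.hex(), digest_hex)
```

The caller never chooses an algorithm, a salt, or a work factor; that's the whole point of the design.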
I mean, why can't all security libraries be like this? Why do most encryption libs have functions named "crypto_aead_chacha20poly1305" instead of "encrypt_message_symmetrically"? Why do they have defaults that encourage you to use them wrong? Why do they have 5 nearly identically named functions/algorithms for a particular purpose when actually you shouldn't use 4 of them and we won't tell you which ones? Do you want GCM or CCM? Or do you prefer that with AEAD? Do you want that with a MAC, an HMAC, or vanilla? Gaah, I just want to send a secret! Tell me how to get it right!
I learnt all I know about crypto from online resources. It’s perhaps a question of taste, so let’s just skip that one.
It’s all good that you can easily hash a password in PHP without knowing what happens[0]. If you need to interface with another language/program however, it’s not as convenient anymore.
I am a fan of understanding what you are doing. Also in crypto.
[0]: But not really though. You need to trust that the PHP-team is competent and understand security. They don’t have the best track record there IMHO.
FWIW SHA-256 isn't broken; you just need to be careful when using it to build a MAC. The naive hash(key || message) construction is exactly where length extension bites, which is why you use HMAC instead. This follows what other comments are saying: you shouldn't use crypto primitives directly; use abstractions that take care of the rough edges.
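To make that concrete, a minimal Python sketch (the key and message are made up) contrasting the naive keyed hash with the HMAC construction:

```python
import hashlib
import hmac

key = b"secret-key"
message = b"amount=100&to=alice"

# Naive keyed hash: with SHA-256 this is forgeable via length extension,
# because an attacker can continue hashing from the exposed final state.
naive_tag = hashlib.sha256(key + message).hexdigest()

# HMAC-SHA256: the standard construction, immune to length extension.
tag = hmac.new(key, message, hashlib.sha256).hexdigest()

def verify(msg: bytes, received_tag: str) -> bool:
    expected = hmac.new(key, msg, hashlib.sha256).hexdigest()
    # constant-time comparison to avoid timing side channels
    return hmac.compare_digest(expected, received_tag)
```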
Who do you think writes these standards? The NSA loves footguns; the more footguns the better. Also, these things are contextual: length extension is a problem for a MAC, but not for a password hash or a secret token.
Counterpoint: these crypto APIs are inscrutable for the same reason poisonous mushrooms are brightly coloured; they're warning you away. It is extremely easy to mishandle crypto primitives, so if you're reaching for crypto_aead_chacha20poly1305 and are confused, it's probably because you should be using a library that provides encrypt_message_symmetrically.
It just so happens that for PHP that library is the STL.
No you are still misreading the point. The complaint isn't that something like crypto_aead_chacha20poly1305 exists. It's that encrypt_message_symmetrically doesn't exist in most places.
It's perfectly imaginable for a library to exist that is designed for a specific use case (eg securely send a message to a recipient that already knows the key), is implemented across many languages and platforms, and defaults to the same algorithms and settings on all those platforms.
It's also perfectly imaginable for such a library to evolve over time, as insights in the security community improve. Eg it could add support for more algorithms, change defaults, etc. And it could provide helpful tools for long-time users to migrate from one algorithm/setting to another with backward compatibility.
It's hard to do, sure. But it rubs me the wrong way that the same people who keep repeating "don't roll your own crypto!" make it so needlessly hard for non-crypto people to use their work.
I think libsodium comes close to this ideal, but I still feel like it's pretty hard to navigate, and it mixes "intended for noobs" with "know what you're doing" functions in a single big bag. In a way, JWT is another thing that comes close, if only it was more opinionated about algorithms and defaults. Paseto (a JWT contender that, afaik, never made the splash I'd hoped it would) seems great, and I guess my entire rant boils down to "why doesn't something like Paseto exist for every common security use case?"
> I mind that resources about how to use crypto in software applications are often inscrutable, all the way down to library design, for no good reason.
I haven't read it, but I plan to, eventually. There's a book titled "Cryptography Engineering: Design Principles and Practical Applications" that could help you.
The character soup is fine. The problem is that people stop after that. Want security? Here, a bucket of character soup! Good luck!
There's nothing stopping library authors from choosing good defaults for well defined use cases. My beef is that mostly this isn't done, and often neither does documentation about security. That's what I like about this "Copenhagen Book": it gets this right. It starts with the use case, and then goes down to clear recommendations on how to combine which crypto primitives. Most resources take it the other way around, they start with explaining the crypto primitives in hard to understand terms and then if you're very lucky, tell you what pitfalls to avoid, and mostly never even make it to the use case at all.
This is modus operandi for many acquisitions in the field of proprietary software.
If you have been in the business for a decade or two, you’ve seen this play more than a few times. It’s legal, it’s profitable, so it keeps happening. Even when it destroys valuable products in the end.
Open source software with permissive licensing is the only true guarantee of not getting squeezed.
But you can’t always find suitable FOSS etc. so here we are. It’s a sad situation IMO
> Open source software with permissive licensing is the only true guarantee of not getting squeezed.
I may be misinterpreting here, so please do correct me.
Does the permissiveness of the license matter more than the utility of the tool? Whether or not an application/platform is using a permissive or copyleft license shouldn't really be a determining factor here for viability or vendor escape.
> But you can’t always find suitable FOSS etc.
This is the most prevalent problem: it's a lot easier to just spend money on a working tool than to use an open source project that doesn't have everything you need, causes papercuts, and is being worked on in the developers' spare time.
However, a lot of FOSS options would be much better off if consumers did contribute to the project. Code is great, but financial support to the core developers goes much, much farther. Particularly if it enables them to prioritize the project over other things in life.
> I may be misinterpreting here, so please do correct me
I meant to say that being open source doesn’t automatically mean you can use the software commercially, hence the need for a liberal (enough) license (to permit you this option).
No ideology intended so to say :)
> Does the permissiveness of the license matter more than the utility of the tool?
No of course not. A useless, but free tool is still useless. Likewise I’d argue that a useful open source tool you can’t use commercially is equally useless to many.
> However, a lot of FOSS options would be much better off if consumers did contribute to the project
Thanks for clarifying, that's what I assumed you meant. I've just seen enough people get antsy about, or vocally against, free software using a copyleft license instead of a permissive one* that it makes me second-guess some phrasings.
> being open source doesn’t automatically mean you can use the software commercially
I acknowledge there is a split in how "open source" is understood: (a) as a broad term for source-code availability, or (b) as the definition specified by the Open Source Initiative. I see both arguments, but I believe using the OSI definition eliminates some of these uncertainties.
* Despite the fact it's an end-user tool/application they will not be exposing, modifying, or extending in any way.
I have used statistical models of volatility to improve execution prices.
It doesn’t require very advanced modeling to estimate a probability of e.g. getting filled at midprice (saving half the bid/ask spread) within a short time period.
Just basic Bayesian with a look-back window.
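A minimal sketch of what such an estimator could look like, assuming a Beta prior over the fill probability and a rolling window of recent fill/no-fill outcomes (the class name, parameters, and prior are my own illustrative choices, not an industry standard):

```python
from collections import deque

class MidFillEstimator:
    """Estimate the probability that a passive order fills at mid-price
    within some horizon, from a look-back window of recent outcomes."""

    def __init__(self, lookback: int = 200, alpha: float = 1.0, beta: float = 1.0):
        self.window = deque(maxlen=lookback)  # 1 = filled, 0 = not filled
        self.alpha = alpha   # prior pseudo-counts: fills
        self.beta = beta     # prior pseudo-counts: no-fills

    def record(self, filled: bool) -> None:
        self.window.append(1 if filled else 0)

    def p_fill(self) -> float:
        fills = sum(self.window)
        n = len(self.window)
        # posterior mean of a Beta-Binomial model
        return (self.alpha + fills) / (self.alpha + self.beta + n)

est = MidFillEstimator(lookback=100)
for outcome in [True, True, False, True]:
    est.record(outcome)
# posterior mean = (1 + 3) / (1 + 1 + 4) = 4/6
```

The uniform Beta(1, 1) prior keeps the estimate sane when the window is nearly empty; the deque's maxlen implements the look-back.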
Execution cost is a big topic in the trading industry.
Plants, on the contrary, tolerate much more damage, to the point that we develop new varieties by bombarding seeds with ionizing radiation.