Hacker News | kidel001's comments

I too maintain an older ThinkPad! I want to say it's a T430, so probably not as old; I got it in 2012 or so. In any case, I have replaced: the screen, the battery, the power button (3D printed), the hard drive, the RAM, the entire internal fan/cooling assembly, and probably some things I've forgotten. Why? Well, it's all been over the years, but 1) because you can! Like this post describes. Also: it's been nice to have a Windows system around that I can remote into for certain tasks that are difficult or impossible on Linux, like using the Adobe suite. The last time something broke (the fan), I looked up how much a used T430 costs on eBay (~$40-$50), and buying a new fan was still cheaper. So I fixed it. It's been like that every time, and it's still here.


To add onto this... some amount of what they are calling edge detection here seems to overlap with what has already been implemented in microscopy using phase contrast... which has been around since 1932.


Technically... they are outlining edges at the speed of light. Detection is a separate process entirely.


Ah yes, the bird-inspired ... nose propeller.


These types of articles are so fundamentally flawed... it beggars belief. Why not ask the opposite question: if bandwidth works the way they describe, why can't an H100 GPU (~3 TB/s memory bandwidth) perform sensorimotor tasks 24 trillion times faster than a human? (Spoiler alert: it cannot.)

<s> Could it be... there is a bit of a straw man argument here? About how much information it actually takes to input and output a complete sensorimotor task? I dare say! </s>


Bioinformatician here. Nobody has intuition or domain knowledge on all ~20,000 protein-coding genes in the human body. That's just not a thing. When we compare what a treatment does, we routinely get 20,000 p-values. Feed that into FDR correction, filter for p < 0.01, and now I have maybe 200 genes. Then we can start applying domain knowledge.

If you start trying to apply domain knowledge at the beginning, you're actually going to artificially constrict what is biologically possible. Your domain knowledge might say there's no reason an olfactory gene should be involved in cancer, so I will exclude those (etc. etc.). You would be instantly wrong: people discovered that macrophages (which play a large role in cancer) can express olfactory receptors. So when olfactory receptors came up in a recent analysis... the p-values were onto something, and I had to expand my domain knowledge to understand it. This is very common. I ask for validation of targets in tissue --> then you see proof borne out that the p-value thresholding business WORKS.
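For what it's worth, the filtering step described above (FDR-correct ~20,000 p-values, keep those below 0.01) can be sketched with a hand-rolled Benjamini-Hochberg procedure on simulated data. The gene counts and the beta-distributed "signal" p-values below are made up purely for illustration:

```python
import numpy as np

def benjamini_hochberg(pvals):
    """Benjamini-Hochberg adjusted p-values (q-values)."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    # raw BH adjustment: p_(i) * m / i for rank i (1-based)
    ranked = p[order] * m / np.arange(1, m + 1)
    # enforce monotonicity from the largest p-value downward
    q = np.minimum.accumulate(ranked[::-1])[::-1]
    out = np.empty(m)
    out[order] = np.clip(q, 0, 1)
    return out

rng = np.random.default_rng(0)
# simulate ~20,000 gene tests: mostly null (uniform p-values),
# plus a few hundred genes with real signal (p-values piled near 0)
pvals = np.concatenate([rng.uniform(size=19_800),
                        rng.beta(0.05, 10, size=200)])
q = benjamini_hochberg(pvals)
hits = np.flatnonzero(q < 0.01)
print(f"{hits.size} genes pass FDR < 0.01")
```

The uniform nulls mostly get adjusted back up toward 1, while the simulated signal genes survive the cutoff, which is the shortlist you would then apply domain knowledge to.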


> Feed that into FDR correction, filter for p < 0.01

That's the domain knowledge. The p-values themselves are useful; the fixed cutoff is where domain knowledge comes in. You know that in your research field p < 0.01 has importance.


> You know that in your research field p < 0.01 has importance.

A p-value does not measure "importance" (or relevance), and its meaning is not dependent on the research field or domain knowledge: it mostly just depends on effect size and number of replicates (and, in this case, due to the need to apply multiple comparison correction for effective FDR control, it depends on the number of things you are testing).

If you take any fixed effect size (no matter how small/non-important or large/important, as long as it is nonzero), you can make the p-value be arbitrarily small by just taking a sufficiently high number of samples (i.e., replicates). Thus, the p-value does not measure effect importance, it (roughly) measures whether you have enough information to be able to confidently claim that the effect is not exactly zero.

Example: you have a drug that reduces people's body weight by 0.00001% (clearly, an irrelevant/non-important effect, according to my domain knowledge of "people's expectations when they take a weight loss drug"); still, if you collect enough samples (i.e., take the weight of enough people who took the drug and of people who took a placebo, before and after), you can get a p-value as low as you want (0.05, 0.01, 0.001, etc.), mathematically speaking (i.e., as long as you can take an arbitrarily high number of samples). Thus, the p-value clearly can't be measuring the importance of the effect, if you can make it arbitrarily low by just having more measurements (assuming a fixed effect size/importance).
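A quick numerical sketch of that point (the 0.01 standardized effect size is an arbitrary stand-in for "tiny but nonzero"): for a two-sample z-test with n subjects per group, the expected test statistic grows like sqrt(n), so the expected p-value shrinks toward zero for any fixed effect:

```python
import numpy as np
from scipy import stats

effect = 0.01  # fixed tiny standardized effect size (assumed for illustration)
for n in [1_000, 100_000, 10_000_000]:
    # expected z-statistic for a two-sample test with n samples per group
    z = effect * np.sqrt(n / 2)
    p = 2 * stats.norm.sf(z)  # two-sided p-value
    print(f"n={n:>10,}  expected p ~ {p:.3g}")
```

Same effect, same "importance"; only the sample size changed, and the p-value goes from non-significant to arbitrarily small.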

What is research field (or domain knowledge) dependent is the "relevance" of the effect (i.e., the effect size), which is what people should be focusing on anyway ("how big is the effect and how certain am I about its scale?"), rather than p-values (a statement about a hypothetical universe in which we assume the null to be true).


I get that in general. I was replying to the person talking about bioinformatics and using the p-value as a filter.


People need tools to filter results. Using a somewhat arbitrary cutoff for what to work with is actually fine because people need to make decisions. Further, papers that report false positives do not tend to lead to huge branches of successful science because over time the findings do not replicate.

But I am curious about something else. I am not a statistical mechanics person, but my understanding of information theory is that something refined emerges from a threshold (assuming it operates on SOME real signal), and the energy required to apply that threshold is what allows "lower entropy" systems to emerge. Isn't this the whole principle behind Maxwell's demon? That if you could open a little door between two equal-temperature gas canisters, you could sort the faster and slower gas molecules and paradoxically create a temperature difference? But opening the door only for fast molecules (thresholding them) requires the demon to measure and record each molecule, which ultimately costs energy, so it is no free lunch? The door effectively acts as a threshold on a continuous distribution. I guess what I am asking is: isn't there a fundamental importance to thresholds in generating information? Isn't that how neurons work? Isn't that how AI models work?
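For fun, the demon's sorting can be sketched as a simple threshold on simulated molecular speeds. This is a toy illustration of the sorting step only; it says nothing about the energy bookkeeping of the door, and the speed distribution and threshold choice are assumptions for illustration:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# molecular speeds drawn from a Maxwell distribution: both "canisters"
# start at the same temperature by construction
speeds = stats.maxwell.rvs(size=100_000, random_state=rng)

threshold = np.median(speeds)          # the demon's decision rule
fast = speeds[speeds > threshold]      # let through into chamber A
slow = speeds[speeds <= threshold]     # kept in chamber B

# temperature is proportional to the mean squared speed
t_fast = np.mean(fast**2)
t_slow = np.mean(slow**2)
print(f"T_fast / T_slow ~ {t_fast / t_slow:.2f}")
```

A single thresholding pass turns one distribution into two populations with clearly different "temperatures", which is exactly the entropy-lowering sort the comment describes.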

