Mathnerd314's comments | Hacker News

Well, it's half true and half false. There are a lot of "new" fintech-ish banks competing on fees, transaction speed, overdrafts, etc. - bank-type things that matter to consumers. But it's true, there are no fintech banks competing to be "too big to fail" and getting that government bailout money. You have to look at crypto for equivalents of the Federal Reserve, and people don't recognize those as banks. Although I would say, Coinbase is getting pretty close to a consumer-level "crypto bank".


There are apps, but they are incredibly inaccurate. For starters, they don't recognize the food correctly; usually you have to pick from a menu of 10 items. Then they have to estimate a 3D quantity (volume) from a 2D image, then they have to estimate the density... The amazing thing, though, is that despite all this, they are still more accurate than recall diaries.


So let me try to put the author's argument in order:

(1) The author tried to get homeowner's insurance, but was denied because their home was a significant hurricane risk

(2) The author (maybe?) got insurance through a state-run FAIR program, but then cites news reports that these programs are close to insolvency (as are a significant number of non-state-run homeowner insurance programs).

(3) The author is like, "well, if it's so hard to insure my house, maybe I should think about living somewhere else." And then generalizes to "a lot of places should be uninsurable and uninhabited - apocalypse here we come"


Makes sense to me. Good comprehension.


To quote: "Ironically, Fisher was proven right, albeit in a very limited way: such genes [that increase both the tendency to smoke and the tendency for lung cancer] do exist."

The actual issue was not that (Cornfield wrote a paper in 1959 showing the effect was too small). It was that Fisher continued to repeat one finding from one study, even though that finding had not been replicated in later studies (namely, that lung cancer patients described themselves as inhalers less often than the controls), and continued to obstinately ignore all the other research coming out. But only three years passed between Cornfield's paper and Fisher's death in 1962, so perhaps Fisher simply did not have time to change his mind.


> It was that Fisher continued to repeat one finding from one study, even though that finding had not been replicated in later studies (namely, that lung cancer patients described themselves as inhalers less often than the controls), and continued to obstinately ignore all the other research coming out.

Even if that finding were true, it could just mean that the cancer patients had stopped inhaling due to lung problems and underestimated how much they used to inhale.

If you asked me to estimate my caffeine, alcohol, fat, or sugar intake from even 10 or 15 years ago, or how many steps per day I walked, I’m not confident I could give an accurate answer at all. If you asked me details about how I ate and drank — how fast I ate, how often I ate out, how quickly I replenished empty drinks, what percentage of the time I drank water with alcohol, or how often I cleaned my plate — I’m sure I’d be completely off the mark.


It's kind of the same situation with alcohol now: a lot of people denouncing it, the alcohol industry throwing a lot of shade, and top scientists making confident pronouncements (on both sides).


> you just go to SE asia

I could see justifying a trip like that on a cost-of-living basis. If you go to a place like Thailand, you are going to be spending pennies on the dollar vs. the EU, even after paying incredible amounts (in local currency) for first-world conveniences like clean drinking water and internet. So in that sense if you are going to be coding and living life online, you might as well live someplace cheap IRL. But that's different from a tourist crawl where you are just spending money like water, which maybe was more your idea.


Well, GitHub explicitly took responsibility. The first thing GitHub did once Lovable reached out for support was "reinstate our app and apologize for the issues it caused us and our users."

And no, you are not responsible for every 3rd party service you use. Some services are unavoidable, some services are just nice-to-have, but if you can't trust a service to perform its advertised function, it is the service's fault.


> Well, GitHub explicitly took responsibility. The first thing GitHub did once Lovable reached out for support was "reinstate our app and apologize for the issues it caused us and our users."

No, GitHub won't ever take responsibility for what you have between you and your projects/apps/companies users. Nor did they do so in this case.

If I use service X for doing Y, and I write an email asking if it's OK that I upload 1000s of files every day, and they say OK today but next week turn around and say "Nah", I'm still responsible for my users, who trust me and my team. Service X has no responsibility towards those users at all. It sucks though, I agree with that.

> And no, you are not responsible for every 3rd party service you use. Some services are unavoidable, some services are just nice-to-have, but if you can't trust a service to perform its advertised function, it is the service's fault.

Besides DNS and BGP I suppose, what services are "unavoidable" exactly? Git hosting isn't some arcane distributed network technology needing years of reading/experience to understand, the CLI ships with a web ui you can basically copy-paste to have your own Git hosting.

I'd say you are responsible for everything that you use and depend on. And if you think "Ah, I'll just chuck 10K repositories at GitHub a day, they say it's fine" and then don't have any plan in case GitHub suddenly says it isn't OK, you are responsible for the fallout if shit hits the fan.


Well, the app store, for example. Sure, it is of course a good idea to comply with the app store policies to the extent possible, but ultimately there's not much you can do to prevent Google or Apple from saying "we don't like this app" and pulling it, as happened with the UTM emulator, for example. So how can Google or Apple making such a decision be "your responsibility"?

As another example, let's say you build a house in a hurricane-prone area. It's your responsibility to ensure the owner buys hurricane insurance, as mandated by law. It's not your responsibility to build a nuclear-bunker-grade house that is impervious to hurricanes. It is easy to point the finger and say "you should have thought of that", but in practice it is easier to deal with such catastrophes as they happen.


Besides the fact that you also need packet traversal across a multitude of layer-3 networks, many of which would pass through some corporate cloud. And net neutrality is dead in the USA.

As for the repositories, there are no good SLA terms for such storage. Nobody offers this service because it's expensive to offer, not at this scale. So if you need it, you have to invest a lot in hardware, colocation, or specialized clouds, plus administration. A whole datacenter, and suddenly you're an ant vs. three goliaths.


> but if you can't trust a service to perform its advertised function, it is the service's fault.

Your customers don't care if it's the service's fault and will blame you.


I'm curious about price - sure, the speakers were free ($240 value), but I don't think printing up a PCB is cheap, and those are some pretty big capacitors.


JLCPCB have changed the game. Five 2-layer PCBs of up to 100x100mm cost just $3.50 including global shipping. Things get more expensive if you stray from their standard specs, but you're still looking at just a few dollars per board.

https://jlcpcb.com/

The biggest electrolytic caps in this circuit cost $3.29 each in qty 1, but they're fancy "audio-grade" Nichicon caps; a standard-grade capacitor of that size would cost you $1.68 if you want a Japanese brand, or as little as $0.36 if you can settle for a Chinese brand.
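To put the arithmetic in one place (prices are the ones quoted in this thread, not live quotes):

```python
# Back-of-envelope board and capacitor costs, using the thread's
# quoted figures (assumptions, not current JLCPCB pricing).
batch_price = 3.50           # five 2-layer 100x100mm boards, shipped
per_board = batch_price / 5  # cost of one bare board
audio_cap = 3.29             # "audio-grade" Nichicon electrolytic, qty 1
budget_cap = 0.36            # Chinese-brand equivalent
savings_per_cap = audio_cap - budget_cap
print(per_board, round(savings_per_cap, 2))
```

So even with the fancy capacitors, the boards themselves are well under a dollar each.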


And you can even order all of the SMD components soldered for less money than you'd spend on solder and other soldering consumables, never mind the time.


I recently got 20 reasonably complex 4-layer, gold-plated PCBs assembled for about $10 apiece by JLCPCB. Maybe 20 items on the BOM, 50 parts total.

It’s insanely cheap. Five years ago, when I was last regularly getting PCBs built, it would cost 10x that. And it would be a really manual process - loads of emails back and forth. PCBway have managed to automate basically the whole process.


I think the biggest cost is labor. Dumpster diving for speakers and then spending dozens of hours in high skilled labor to replace the insides is actually hilarious. I wonder how much he'd charge a client for a project this size? Probably many thousands of dollars.

But I can't knock a man for having a hobby. Clearly they're optimizing for fun and nerd cred, not cost.


A PCB like that would cost around $1 each, if you got 10 or so, so it's not expensive at all. I don't know how much assembly costs, but I'd be surprised if the total was over $20.


IIRC there was something about how scammers don't bother making their scams look that legitimate, because if they are too legit, they get a lot of people who waste the scammer's time during the later phases.

But yeah, you'd think spam filtering would be more important. I use GMail and I haven't seen a spam message in years, besides when I check my spam folder. Even there, most of the "spam" is false positives.


If you think ANF is great then explain how to deal with the transformation from "E (if x then a else b)" to "if x then E a else E b".


ANF transform will rewrite that to “let a0 = if x then a else b in E a0”.

This isn’t identical to floating the call to E inside the “if”, obviously, but would have (within predictable boundaries) the same performance. If this code went through LLVM to produce machine code I suspect it would wind up identical as LLVM will likely rewrite the duplicated call to be outside the “if”.
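As a Python stand-in for that rewrite (E, a, b here are placeholders for the expressions in the parent comment, not anything from a real compiler):

```python
# "E (if x then a else b)" versus the ANF form
# "let a0 = if x then a else b in E a0" - same observable result.
def original(x, a, b, E):
    # E (if x then a else b)
    return E(a if x else b)

def anf_form(x, a, b, E):
    # let a0 = (if x then a else b) in E a0
    a0 = a if x else b
    return E(a0)

E = lambda h: 2 if h == 1 else 3
assert all(original(x, 1, 2, E) == anf_form(x, 1, 2, E)
           for x in (True, False))
```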


What?

Let's say E[h] is "if h == 1 then 2 else 3", a is 1, and b is 2. Then before:

if (if x then 1 else 2) == 1 then 2 else 3

After:

if x then (if 1 == 1 then 2 else 3) else (if 2 == 1 then 2 else 3)

which trivially simplifies to if x then 2 else 3

Your proposed rewrite is "let a0 = if x then 1 else 2 in if a0 == 1 then 2 else 3" which is not a simplification at all - it makes the expression longer and actually introduces a variable indirection, making the expression harder to analyze. The call will require some sort of non-local analysis to pierce through the variable and do the duplication and elimination.
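A quick executable check of the before/after claim, with Python stand-ins for the toy E above:

```python
# E[h] = "if h == 1 then 2 else 3", a = 1, b = 2.
def E(h):
    return 2 if h == 1 else 3

def before(x):
    # if (if x then 1 else 2) == 1 then 2 else 3
    return E(1 if x else 2)

def after(x):
    # E pushed into the branches and constant-folded:
    # if x then 2 else 3
    return 2 if x else 3

assert all(before(x) == after(x) for x in (True, False))
```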


The ANF form is amenable to rewriting the “if” outside the “let”.

    if x then
      let a0 = a in E a0
    else
      let a0 = b in E a0
If a and b are simple variables or constants the new “let”s will be eliminated by ANF.

What happens to E isn’t so straightforward. In your example, though, it’s a small expression tree and would just be duplicated inline with the same obvious constant reductions available.

If it’s ‘large’ (heuristically) it might instead be assigned to a “let” binding outside the “if”, to avoid duplicating a large block of code.

CPS should be making that same determination on E.


Well, with CPS it just works. You start with

  let
    x k = if x then k 1 else k 2
    E k h = if h == 1 then k 2 else k 3
  in
    \k -> x (E k)
and then when you inline E into x it becomes

  let
    E k h = if h == 1 then k 2 else k 3
  in
    \k -> if x then E k 1 else E k 2
which can again be locally simplified to

  \k -> if x then k 2 else k 3
Unlike ANF, no blind duplication is needed; it is obvious from inspection that E can be reduced. That's why ANF doesn't handle this transformation well - determining which expressions can be duplicated is not straightforward.
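The same reduction, sketched in Python (the names `source`, `E_cps`, and `ident` are mine, standing in for the pseudocode's x, E, and the identity continuation):

```python
# CPS rendering of the example: continuations are plain functions.
def source(x, k):
    # x k = if x then k 1 else k 2
    return k(1) if x else k(2)

def E_cps(k, h):
    # E k h = if h == 1 then k 2 else k 3
    return k(2) if h == 1 else k(3)

def inlined(x, k):
    # after inlining E: \k -> if x then E k 1 else E k 2
    return E_cps(k, 1) if x else E_cps(k, 2)

def simplified(x, k):
    # locally reduced: \k -> if x then k 2 else k 3
    return k(2) if x else k(3)

ident = lambda v: v
assert all(source(x, lambda h: E_cps(ident, h))
           == inlined(x, ident) == simplified(x, ident)
           for x in (True, False))
```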


It looks like the compiler would need to determine it can inline E twice before it can perform a constant case fold on the "if h == 1" expression, right? It's 'obvious' if you understand what E does, but that's not a program transform. Unless I'm very mistaken, this is two steps: inline E twice, then constant case fold (or constant "if" fold, if you aren't translating all the "if"s to "case"s).

If E were a different expression, perhaps "E h = let F i = if z == i + h then 7 else 8 in if y then F 1 else F 2", then the inlining choice (made twice, here) would be very poor - F would be inlined four times all up, and since "z" is a free variable here there's no constant case/if to fold, and everything would wind up worse.

The ANF transforms should look something like:

    let
      a0 = if x then 1 else 2
      E h = if h == 1 then 2 else 3
    in
      E a0
Departing from my previous analysis, I'd say that the next obvious step is the inlining of the now-only-called-once E:

    let
      a0 = if x then 1 else 2
    in
      if a0 == 1 then 2 else 3
But (either here or earlier) that's not ANF; the "a0 == 1" isn't a simple constant or variable:

    let
      a0 = if x then 1 else 2
      a1 = a0 == 1
    in
      if a1 then 2 else 3
Let's take the time to rewrite all those conditionals and such as case statements:

    let
      a0 = case x of { True -> 1; False -> 2 }
      a1 = case a0 of { 1 -> True; _ -> False }
    in
      case a1 of { True -> 2; False -> 3 }
Now we can inline a0 into a1, since it's only called once:

    let
      a1 = case (case x of { True -> 1; False -> 2 }) of
               { 1 -> True; _ -> False }
    in
      case a1 of { True -> 2; False -> 3 }
A case-of-case transform applies to a1:

    let
      a1 = case x of
             True -> case 1 of { 1 -> True ; _ -> False }
             False -> case 2 of { 1 -> True ; _ -> False }
    in
      case a1 of { True -> 2; False -> 3 }
Two case of constant transforms on a1 apply:

    let
      a1 = case x of { True -> True ; False -> False }
    in
      case a1 of { True -> 2; False -> 3 }
a1 can be inlined and case-of-case once again applied:

    case x of
        True -> case True of { True -> 2; False -> 3 }
        False -> case False of { True -> 2; False -> 3}
Case-of-constant again leaves us with just:

    case x of { True -> 2; False -> 3 }
And that's our desired outcome. We took some risks doing case-of-case because of the duplicated outer case, but that can be dealt with using join points for the expressions that get duplicated; a bit more long winded but the same end result here due to the case-of-constant transform eliminating the duplicate immediately each time, followed by an inlining of an expression evaluated only once.
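A sanity check that the chain of rewrites preserves meaning (Python stand-ins for the case expressions above):

```python
# The starting ANF form, the intermediate a0/a1 form, and the final
# "case x of { True -> 2; False -> 3 }" all compute the same function.
def anf_original(x):
    a0 = 1 if x else 2
    return 2 if a0 == 1 else 3

def anf_intermediate(x):
    a0 = 1 if x else 2
    a1 = (a0 == 1)
    return 2 if a1 else 3

def final(x):
    return 2 if x else 3

assert all(anf_original(x) == anf_intermediate(x) == final(x)
           for x in (True, False))
```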



If there were a clear distinction between let-bindings and join points, I would be happy. But there is not - there is this contification process and, rather than proving that their algorithm finds all join points, they just say "it covers most of the ground [in practice]", and then they cite (Section 4) two other papers that suggest contification is undecidable - whether a function returns to a specific continuation or function is a behavioral property, not a syntactic one. Even if you accept that the dominator-based analysis is optimal, it is a global property, not a local property like in a proper IR. So what I see is a muddy mess.


Last I looked, join points hadn't made it into GHC Core directly - do you know if the ideas in the join points paper were introduced to (mainline) GHC in the end?

edit: it looks like there's distinct Core join points now, never mind!

