
$24k over 40 months is $600/month. Am I missing something?


Yes. ‘Safe withdrawal rate’ is the annual fraction of an investment which one can sell while maintaining the principal’s value, adjusted for inflation. That might sound very complex, so here’s an attempt to explain it simply.

If one has $24,000 and spends $600/month, then after 40 months there will be nothing at all left. On the other hand, if one invested that $24,000 in Treasury bonds, one would have something like $26,229 after those 40 months. So (ignoring inflation) one could spend ($26,229 - $24,000)/40 = $55.72 each month and still have $24,000. Why would one want to do that? Because one could keep going, investing the money, collecting the interest and paying one’s expenses: that $24,000 can last forever, paying $55.72 a month.

Now, in real life one can’t ignore inflation, and that $24,000 will be worth less than $22,000 in 40 months. In real life, inflation tends to outpace the risk-free rate of return one can get lending money to the U.S. Treasury. So one needs to get more return by taking on more risk, for example by investing in the stock market as a whole. But that exposes one to economic downturns.

To make a long story short (too late!), folks have run the numbers and figured that one can conservatively invest one’s money in some broad indices, withdraw about 3–4% a year, and maintain the post-inflation value of one’s capital. Withdraw more than that, and one eventually runs out of capital; withdraw less, and the capital continues to increase, but one has less money to spend today.

If Jellyfin wants to be able to pay their $600 (in 2024 dollars) hosting bill forever, they need to invest $240,000 today. And then they’ll never need to ask for money again (assuming all sorts of things, like no decade-long recession, no world war, no asteroid crashing into the Earth and so forth). For the same reason, an American making the average wage of $63,795 needs a $2.2 million nest egg to never work again.
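If you want to sanity-check those numbers, here’s a rough Python sketch (the helper name and the 3% rate, the conservative end of the 3–4% range, are just for illustration):

```python
# Rough sketch of the arithmetic above; all figures illustrative.

def required_principal(annual_spend: float, swr: float = 0.03) -> float:
    """Principal needed so that withdrawing `swr` per year covers `annual_spend`."""
    return annual_spend / swr

# The Treasury example: $24,000 grows to ~$26,229 over 40 months, so the
# interest alone funds ($26,229 - $24,000) / 40 per month (inflation ignored).
print((26_229 - 24_000) / 40)          # 55.725

# Jellyfin's $600/month ($7,200/year) hosting bill at a 3% withdrawal rate:
print(required_principal(600 * 12))    # 240000.0

# The average wage of $63,795:
print(required_principal(63_795))      # 2126500.0, roughly that $2.2M nest egg
```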


You might be interested in the ‘testing trophy’ as an alternative to the traditional pyramid.

https://kentcdodds.com/blog/write-tests


This advice is so misguided that I'm concerned for our industry that it's getting so much traction.

> You really want to avoid testing implementation details because it doesn't give you very much confidence that your application is working and it slows you down when refactoring. You should very rarely have to change tests when you refactor code.

Unit tests don't need to test implementation details. You could just as well make that mistake with integration or E2E tests. Black box testing is a good practice at all layers.

What unit tests do is confirm that the smallest pieces of the system work as expected in isolation. Yes, you should also test them in combination with each other, but it does you no good to get a green integration test when it's likely only exercising a small fraction of the functionality of the units themselves.
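To make that concrete, here's a minimal black-box sketch (pytest style; the `slugify` helper is made up for illustration). The assertions exercise only the unit's observable behavior, so the implementation underneath can change freely:

```python
import re

# Hypothetical unit under test.
def slugify(title: str) -> str:
    """Turn a title into a URL-safe slug."""
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

# Black-box tests: input in, output out. Nothing here cares whether
# slugify uses a regex, a manual loop, or a third-party library.
def test_slugify_lowercases_and_hyphenates():
    assert slugify("Hello, World!") == "hello-world"

def test_slugify_collapses_separator_runs():
    assert slugify("a  --  b") == "a-b"

def test_slugify_handles_empty_input():
    assert slugify("") == ""
```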

This whole "unit tests slow you down" mentality is incredibly toxic. You know what genuinely slows me down? A suite with hundreds of integration tests, each taking several seconds to run and depending on external systems. But hey, testcontainers to the rescue, right?

Tests shouldn't be a chore, but an integral part of software development. These days I suppose we can offload some of that work to AI, but even that should be done very carefully to ensure that the code is high quality and actually tests what we need.

Test code is as important as application code. It's lazy to think otherwise.


If by "smallest pieces of the system" you mean something like individual classes then you are definitely testing implementation details.

Whenever you change a method's parameters in one of those internal classes, you'll have unit tests breaking, even though you're just refactoring code.

Unit testing at the smallest piece level calcifies the codebase by making refactors much more costly.


> If by "smallest pieces of the system" you mean something like individual classes then you are definitely testing implementation details.

No, there's nothing definite about that.

The "unit" itself is a matter of perspective. Tests should be written from the perspective of the API user in case of the smallest units like classes and some integration tests, and from the perspective of the end user in case of E2E tests. "Implementation details" refers to any functionality that's not visible to the user, which exists at all levels of testing. Not writing tests that rely on those details means that the test is less brittle, since all it cares about is the external interface. _This_ gives you the freedom to refactor how the unit itself works however you want.

But, if you change the _external_ interface, then, yes, you will have to update your tests. If that involves a method signature change, then hopefully you have IDE tools to help you update all call sites, which include application code as well. Nowadays, with AI assistants, this type of mechanical change is easy to automate.

If you avoid testing classes, that means that you're choosing to ignore your API users, which very likely is yourself. That seems like a poor decision to make.


Congrats, you understand what "unit test" was originally supposed to refer to. That's not what it has meant to most people for years. The common meaning is "test every individual function in isolation".

I think this came about because people copied the surface appearance of the examples (syntactic units, functions) without understanding what the examples were trying to show (semantic units). This simplification then got repeated over and over until the original meaning was lost.


> If by "smallest pieces of the system" you mean something like individual classes then you are definitely testing implementation details.

If your classes properly specify access modifiers, then no, you're not testing implementation details. You're testing the public interface. If you think you're testing implementation details, you probably have your access modifiers wrong in the class.
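Python only has the leading-underscore convention rather than enforced access modifiers, but the idea sketches out the same way (names made up for illustration):

```python
class PriceCalculator:
    """Public interface: `total`. Underscored members are implementation details."""

    def __init__(self, tax_rate: float):
        self._tax_rate = tax_rate  # "private" by convention

    def total(self, subtotal: float) -> float:
        return subtotal + self._tax(subtotal)

    def _tax(self, subtotal: float) -> float:
        # Free to rename, inline, or restructure without touching any test,
        # because no test calls this directly.
        return subtotal * self._tax_rate

# The unit test sees only the public method:
def test_total_includes_tax():
    assert PriceCalculator(tax_rate=0.25).total(100.0) == 125.0
```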


If I change something at the lowest level in my well abstracted system, only the unit tests for that component will fail, as the tests that ‘use’ that component mock the dependency. As long as the interface between components doesn’t change, you can refactor as much as you want.
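A minimal sketch of what that looks like with a mocked dependency (Python's unittest.mock; the service and repository are made up):

```python
from unittest.mock import Mock

class GreetingService:
    """Depends on a repository only through its interface (a `get_name` method)."""

    def __init__(self, user_repo):
        self._user_repo = user_repo

    def greet(self, user_id: int) -> str:
        return f"Hello, {self._user_repo.get_name(user_id)}!"

def test_greet_formats_the_users_name():
    # The real repository's internals can be rewritten freely; this test
    # only breaks if the get_name interface itself changes.
    repo = Mock()
    repo.get_name.return_value = "Ada"

    assert GreetingService(repo).greet(42) == "Hello, Ada!"
    repo.get_name.assert_called_once_with(42)
```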


I prefer having the freedom to change the interface between my components without then having to update large numbers of mocked tests.


Sure, that's a tradeoff that you make. Personally I update my implementations more often than I update the interfaces, so I'm happy to take that hit when modifying the interface in trade for knowing exactly where my implementations break.


In a perfect world, each unit would do the obvious thing without many different paths through it. The only paths would be those that are actually relevant to the function. In such a perfect world, the integration test could trigger most (all?) paths through the unit, and separate unit tests would not add value.

In this scenario, unit tests would add no value over integration tests for detecting the existence of errors.

But: In a bigger project you don't only want to know "if" there is a problem, but also "where". And this is where the value of unit tests comes in. Also you can map requirements to unit tests, which also has some value (in some projects at least).

edit: now that I think about it, you can also map requirements to e2e tests. That would probably even work much better than mapping them to unit tests would.


> In a perfect world, each unit would do the obvious thing without many different paths through it.

I don't think that's realistic, even in an imaginary perfect world.

Even a single pure function can have complex logic inside it, which changes the output in subtle ways. You need to test all of its code paths to ensure that it works as expected.
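For instance, even a small made-up pure function carries several distinct paths, and a parametrized unit test covers all of them in milliseconds (pytest assumed):

```python
import pytest

# A small pure function with several distinct code paths.
def shipping_cost(weight_kg: float, express: bool) -> float:
    if weight_kg <= 0:
        raise ValueError("weight must be positive")
    base = 5.0 if weight_kg < 1.0 else 5.0 + 2.0 * (weight_kg - 1.0)
    return base * 2.0 if express else base

@pytest.mark.parametrize("weight, express, expected", [
    (0.5, False, 5.0),   # light parcel, standard shipping
    (3.0, False, 9.0),   # heavy parcel: 5 + 2 * (3 - 1)
    (3.0, True, 18.0),   # express doubles the price
])
def test_shipping_cost_covers_each_path(weight, express, expected):
    assert shipping_cost(weight, express) == expected

def test_shipping_cost_rejects_nonpositive_weight():
    with pytest.raises(ValueError):
        shipping_cost(0, express=False)
```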

> In such a perfect world, the integration test could trigger most (all?) paths through the unit, and separate unit tests would not add value.

This is also highly unlikely, if not impossible. There is often no way for a high-level integration test to trigger all code paths of _all_ underlying units. This behavior would only be exposed at the lower unit level. These are entirely different public interfaces.

Even if such integration tests were possible, there would have to be so many of them that maintaining and running the entire test suite would become practically unbearable. The reason we're able to, and should, test all code paths is precisely because unit tests are much quicker to write and run. They're short, don't require complex setup, and can run independently from every other unit.

> But: In a bigger project you don't only want to know "if" there is a problem, but also "where". And this is where the value of unit tests comes in.

Not just in a "bigger" project; you want to know that in _any_ project, preferably as soon as possible, without any troubleshooting steps. Elsewhere in the thread people were suggesting bisecting or using a debugger for this. This seems ludicrous to me when unit tests should answer that question immediately.

> Also you can map requirements to unit tests, which also has some value (in some projects at least)

Of course. Requirements from the perspective of the API user.

> now that I think about it, you can also map requirements to e2e tests.

Yes, you can, and should. But these are requirements of the _end_ user, not the API user.

> That would probably even work much better than mapping them to unit tests would.

No, this is where the disconnect lies for me. One type of testing is not inherently "better" than the others. They all complement each other, and they ensure that the code works for every type of user (programmer, end user, etc.). Choosing to write fewer unit tests because you find them tedious to maintain is just being lazy, and finding excuses like integration tests bringing more "bang for your buck" or unit tests "slowing you down" is harmful to your and your colleagues' experience as maintainers, and ultimately to your end users when they run into some obscure bug your high-level tests didn't manage to catch.


> Even if such integration tests were possible, there would have to be so many of them that maintaining and running the entire test suite would become practically unbearable. The reason we're able to, and should, test all code paths is precisely because unit tests are much quicker to write and run. They're short, don't require complex setup, and can run independently from every other unit.

I think having a good architecture plays a big role here.


After reading this (very entertaining) article, I think I can guess why the fuse on his machine was missing. This is exactly what I would do if I were met with this monstrosity!


Synology has a good suite of products here that run on their NAS devices. Moments is a Google Photos clone - not quite as good, but definitely good enough.


The term you are looking for is ‘Purchasing Power Parity’.

https://www.oecd.org/sdd/purchasingpowerparities-frequentlya...


243MB for an app that allows you to write text posts and attach media? Shouldn’t this be an order of magnitude smaller?


Try having kids. The amount of crap you will end up with, like it or not, will make you rethink your position in the world, and your house’s storage capabilities.


My box of wires in the attic is often the butt of my wife’s jokes, but every once in a while I find that satellite coax connector I’ve been saving, or the SCART cable that is the only thing standing between old VHS tapes and nostalgia.


> I even had a boss that almost fired me for putting Lorem Ipsum in the content before it was sent to the client

Next time use ChatGPT.


