Hacker News | npodbielski's comments

The Bitwarden client works fine if the server goes down; you just can't edit data. I have been self-hosting Bitwarden for several years and I have no complaints.

You can look at https://kopia.io/. It looks quite OK, with one downside: it manages only one backup target, so you can't e.g. back up to a local HDD and to the cloud. You need two instances for that.

Why? I was running around 15 containers on hardware with 32 GB of RAM. You could probably safely use disk swap as additional memory for less frequently used applications, though I did not check.

For my case, and my workload, the answer has always been "RAM is cheap, and swapping sucks" -- but there are folks using Rpi as a NAS platform so really... my anecdote actually sucks upon reflection and I'd retract it if I could.

For every clown like me with massive RAM in their colo'd box, there is someone doing better and more amazing things with an ESP32 and a few molecules of RAM :D


I would not use a NUC like this guy does. I had one and it was slow and had limited capacity.

Then I used my old PC and it was very good, but I wanted more NVMe disks and the motherboard supported only 7.

Now I am migrating to a Threadripper, which is a bit overkill, but I will have the ability to run one or two GPUs along with 23 NVMe disks, for example.


I also have a Threadripper Pro with tons of PCIe lanes. I just wish there was an easier way to use newer datacenter flash and that it wasn't so expensive. I'm hoping those servers that hold twenty-something U.2/U.3 drives start getting decommissioned soon, as I hope my current batch of HDDs will be my last. Curious to know how you're using all those NVMe drives?

Asus and Acer motherboards support bifurcation on PCIe slots. So, for example, you can enable this in the BIOS and use an ASUS Hyper expansion card to put 4 NVMe disks into a PCIe slot: https://www.asus.com/support/faq/1037507/

There are other cards like that, e.g. https://global.icydock.com/product_323.html. This one has better support for smaller disks and makes swapping a disk much easier, but it costs about 4 times more.

I think I could put even more drives in my new case, e.g. using a PCIe-to-U.2 card and then an 8-drive bay. But that would probably cost me about 3 times more just for the bay with connectors, and I do not need that much space.

https://global.icydock.com/product_363.html

If you like U.2 drives, then Icy Dock provides a solution for them too. Or, if you want to go cheaper, there are other cards with SlimSAS or MCIO: https://www.microsatacables.com/4-port-pcie-3-0-x16-to-u-2-s...

But U.2 disks cost at least 2 times more per GB; a 40 TB drive costs around $10k. That is too much, IMO.


I'm your opposite :-)

Intel N100 with 32 GB RAM and a single big SSD here (but with daily backups).

Eats roughly 10 Watts and does the job.


If this does the job for you, sure. For me they were very pricey at the time compared to the old Intel Core i3 PC that I already had lying around. And power cost does not really matter in my case.

I have two NUCs (Ryzen 7 and Intel i5); they're rock solid.

Yes, if this works, sure, why not. A few years back a decent NUC cost at least $1k. And it is still quite small, so you cannot slam 8 SSDs in there.

I used my old PC and it worked very nicely with 4 SATA SSDs in RAID 10.

And as I already said in another comment, in my case power does not matter much. Neither does space.


I did all of those DNS shenanigans with SPF, DMARC and the other ones about 6 years ago.
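Roughly, those records end up as plain TXT entries in the zone (a sketch; example.com, the selector and the policy values are placeholders, adjust to your own setup):

    example.com.                  IN TXT "v=spf1 mx -all"
    mail._domainkey.example.com.  IN TXT "v=DKIM1; k=rsa; p=<public key>"
    _dmarc.example.com.           IN TXT "v=DMARC1; p=quarantine; rua=mailto:postmaster@example.com"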

I think I had problems with my emails maybe twice, with the Exchange server of some small recruitment company. I think it was misconfigured.

Ah, there was also some problem with Gmail at the beginning: they banned my domain because I was sending test emails to my own account there. I had to register my domain on their BS Postmaster Tools website and configure my DNS with some key.

Overall I had much more trouble with automatic backups, services going down for no reason, dynamic IPs, etc. The email server just works.


Exactly. I thought that in modern languages and frameworks there are better tools for that, like 'ProjectReference' in .NET. Oh well...

I have worked with ProjectReference before. How is it different from expressing a cross-module dependency in a Bazel BUILD file?

But as I already said in two other comments in this discussion, ProjectReference would be equivalent to what I'm describing in the article, just using language-specific tooling. If you are breaking your solution into various projects and keeping them separate with cross-references among them, you are doing exactly what I was describing already.
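For reference, a cross-project dependency in .NET is just a ProjectReference item in the consuming .csproj (a sketch; the project names are made up), much like a `deps` entry on the consuming target in a Bazel BUILD file:

    <ItemGroup>
      <ProjectReference Include="..\MyCompany.Domain\MyCompany.Domain.csproj" />
    </ItemGroup>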


> the build graph -- the very thing that BUILD files define -- is the best place to encode [dependencies] in a programmatic manner.

so the thing is that a BUILD file doesn't define the build graph, it approximates it -- the build graph is always defined by language-specific tooling and specifications

it's fine that the BUILD file is an approximation! that's as good as you can do, if you want to try to model dep relationships between heterogeneous languages

so when we're talking about the dep graph, "using language-specific tooling" isn't a detail you can brush aside, it's a core requirement for correctness, really


So this incomprehensible file at the end of the article, which is supposed to be "lean", is what the author is fighting for?

And it is supposed to show 'architecture'?

Wow. I am happy that I never started working with Java. That is terrible.


This article has nothing to do with Java, and the author explicitly states that.

But somehow all the examples involve it.

In the article, yes. Look at the other comment threads in this discussion though: they touch upon many other languages.

Yeah, that was bizarre

What are those magic annotations you are talking about? Attributes? Not many of those are left in modern .NET.

Attributes and reflection are still used in C# for source generators, JSON serialization, ASP.NET routing, dependency injection... The amount of code that can fail at runtime because of reflection has probably increased in modern C#. (Not from C# source generators of course, but those only made interop even worse for F#-ers).

Aye, I was involved in some really messed-up outages from New Relic's agent libraries generating bogus bytecode at runtime. It was an absolute nightmare for the teams trying to debug because none of the code causing the crashes existed anywhere you could easily inspect. We replaced the opaque magic from New Relic with simpler OTEL, and there have been no more outages.

That's likely the old emit approach. Newer source gen will actually generate source that is included in the compilation.
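For what it's worth, System.Text.Json source generation looks roughly like this (a sketch; Person and AppJsonContext are made-up names). The `[JsonSerializable]` attribute drives a compile-time generator, so the serialization code ends up as ordinary C# in the build output instead of IL emitted at runtime:

    using System.Text.Json;
    using System.Text.Json.Serialization;

    public record Person(string Name, int Age);

    // The generator fills in this partial class at compile time.
    [JsonSerializable(typeof(Person))]
    public partial class AppJsonContext : JsonSerializerContext { }

    public static class Demo
    {
        public static string Serialize(Person p) =>
            JsonSerializer.Serialize(p, AppJsonContext.Default.Person);
    }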

Don't we have automated tests for catching this kind of thing, or is everyone just YOLOing it in nowadays? Serialization, routing, etc. can fail at runtime regardless of whether you use attributes or reflection.

Ease of comprehension is more important than tests for preventing bugs. A highly testable DI nightmare will have more bugs than a simple system that people can understand just by looking at it.

If the argument is that most developers can't understand what a DI system does, I don't know if I buy that. Or is the argument that it's hard to track down dependencies? Because if that's the case, idiomatic C# has the dependencies declared right in the ctor.
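For reference, idiomatic constructor injection looks something like this (a sketch; IOrderRepository, OrderService and SqlOrderRepository are made-up names):

    using Microsoft.Extensions.Logging;

    public interface IOrderRepository { }

    public class OrderService
    {
        private readonly IOrderRepository _orders;
        private readonly ILogger<OrderService> _logger;

        // Every dependency is declared right in the ctor; the container supplies them.
        public OrderService(IOrderRepository orders, ILogger<OrderService> logger)
        {
            _orders = orders;
            _logger = logger;
        }
    }

    // Registration, typically in Program.cs:
    // builder.Services.AddScoped<IOrderRepository, SqlOrderRepository>();
    // builder.Services.AddScoped<OrderService>();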

But the "simple" system will be full of repetition and boilerplate, meaning the same bugs are scattered around the code base, and obscured by masses of boilerplate.

Isn't a GC also magic? Or anything above assembly? While I also understand the reluctance to use too much magic, in my experience it's not the magic itself, it's how well the magic is tested and developed.

I used to work with Play framework, a web framework built around Akka, an async bundle of libraries. Because it wasn't too popular, only the most common issues were well documented. I thought I hated magic.

Then, I started using Spring Boot, and I loved magic. Spring has so much documentation that you can also become the magician, if you need to.


I haven't experienced a DI 'nightmare' myself yet, but then again, we have integration tests to cover for that.

Try Nest.js and you'll know true DI "nightmares".

OK, let's break this down:

- Code generators: I think I have only seen them used for regex. Logging can be done via `LoggerMessage.Define` too, so attributes are optional (see the sketch after this list). Also, code generators have access to the full tokenized structure of the code, which means attributes are just a design choice of the particular generator you are using. And finally, code generators do not produce runtime errors unless the code they generate is invalid.

- JSON serialization: sure, but you can use your own converters (also sketched below); attributes are not necessary.

- ASP.NET routing: yes, but those attributes are in controllers. My impression is that minimal APIs are now the go-to solution, and there you have `app.MapGet(path)`, so no attributes; you can inject services into minimal APIs and this does not require attributes either. Most of the time minimal APIs do not require attributes at all.

- Dependency injection: it requires attributes when you inject services into controller endpoints, which I never liked, nor understood why people do it. What is the use case over injecting through the controller constructor? It is not like the controller is a singleton, long-lived object; it is constructed during the ASP.NET HTTP pipeline and discarded when no longer necessary.
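For illustration, attribute-free logging and an options-registered JSON converter look roughly like this (a sketch; the event id, message and type names are made up):

    using System;
    using System.Text.Json;
    using System.Text.Json.Serialization;
    using Microsoft.Extensions.Logging;

    // Strongly typed logging without the [LoggerMessage] attribute:
    public static class Log
    {
        private static readonly Action<ILogger, string, Exception?> UserLoggedInMessage =
            LoggerMessage.Define<string>(
                LogLevel.Information,
                new EventId(1, nameof(UserLoggedIn)),
                "User {UserName} logged in");

        public static void UserLoggedIn(this ILogger logger, string userName) =>
            UserLoggedInMessage(logger, userName, null);
    }

    // A converter registered through JsonSerializerOptions instead of a [JsonConverter] attribute:
    public sealed class DateOnlyConverter : JsonConverter<DateOnly>
    {
        public override DateOnly Read(ref Utf8JsonReader reader, Type typeToConvert, JsonSerializerOptions options)
            => DateOnly.Parse(reader.GetString()!);

        public override void Write(Utf8JsonWriter writer, DateOnly value, JsonSerializerOptions options)
            => writer.WriteStringValue(value.ToString("yyyy-MM-dd"));
    }

    // Usage: var options = new JsonSerializerOptions();
    //        options.Converters.Add(new DateOnlyConverter());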

So occasional usage may still occur from time to time, in endpoints and DTOs (`[JsonIgnore]`, for example), but you have other means of doing the same things. It is done via attributes because it is easier and faster to develop that way.

Also, your team should invest some time into testing, in my opinion. Integration testing helps a lot with catching those runtime errors.


> Json serialization, sure but you can use your own converters

And going through converters is (was?) significantly slower for some reason than the built-in serialisation.

> my impression is that minimal APIs are now the go to solution and you have `app.MapGet(path)` so no attribute

Minimal APIs use attributes to explicitly configure how parameters are mapped to the path, query, header fields, body content or for DI dependencies. These can't always be implicit, which BTW means you're stuck in F# if you ever need them, because the codegen still doesn't match what the reflection code expects.

I haven't touched .NET during work hours in ages; these are mostly my pains from hobbyist use of modern .NET from F#. The changes I've seen in C#'s ecosystem over the last decade don't make me eager to use .NET for web backends again; they somehow kept going with the worst aspects.

I'm fed up with the increasing use of reflection in C#, not with the attributes themselves, as it requires testing to ensure even the simplest plumbing will attempt to work as written (the same argument we make for static types against dynamic, isn't it?), and it makes interop from F# much, much harder. I'm also fed up with the abuse of extension methods, which were the main driver for implicit usings in C#: no one knows which ASP.NET namespaces they need to open anymore.


I am working on an entirely new hobby project written with minimal APIs, and I checked today before writing this answer to your comment: I did not use any attributes there, besides one `[FromBody]`, and that one only because otherwise it tries to map the model from everywhere, so you could in theory pass it via the query string. Which was extremely weird.

Where did you see all of those attributes in minimal APIs? I am honestly curious, because in my experience it is very forgiving and works mostly without them.
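For comparison, a typical minimal API setup looks roughly like this (a sketch assuming the ASP.NET Core web SDK with implicit usings; TodoService and TodoDto are made-up names). Route values and registered services bind from the parameter types without attributes, and `[FromBody]` appears only where body binding has to be forced:

    using Microsoft.AspNetCore.Mvc;

    var builder = WebApplication.CreateBuilder(args);
    builder.Services.AddSingleton<TodoService>();
    var app = builder.Build();

    // int comes from the route, TodoService comes from DI - no attributes needed:
    app.MapGet("/todos/{id}", (int id, TodoService todos) => todos.Find(id));

    // [FromBody] forces the DTO to bind from the request body:
    app.MapPost("/todos", ([FromBody] TodoDto dto, TodoService todos) => todos.Add(dto));

    app.Run();

    public record TodoDto(string Title);

    public class TodoService
    {
        private readonly List<TodoDto> _items = new();
        public TodoDto? Find(int id) => id >= 0 && id < _items.Count ? _items[id] : null;
        public TodoDto Add(TodoDto dto) { _items.Add(dto); return dto; }
    }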


Funny that this is posted on HN, where your user score, which is called karma here for some reason, decides whether you can or can't fully engage with the community. So either you are a conformist or you will be downvoted and basically invisible.

Meh, as long as you're contributing more than you upset people, it seems to balance itself out. I've made some egregious comments in the past (judging by the downvotes at least), yet you can still see this comment and probably my future ones too.

And even though some comments I've made have been downvoted, they've still spawned interesting conversations, so I count that as a win regardless.


So a conformist that trolls from time to time? :)

HN has not just self-reinforcing consensus through karma, but also imposed false consensus through moderation decisions. I've just been informed that flagging and voting from my account have been disabled (they appear to work, but don't actually do anything) because I didn't flag all political sides in equal numbers. It seems my account has been identified as "a side" and is therefore subject to equality requirements. Meanwhile, obvious mass-flaggings by "the other side's" accounts permeate the site every day.

You have been informed that they shadow-banned you? For taking a side? In what politics?

Informed by who?

By dang, the moderator, via email.
