I run wireguard on an even lower end Linksys router, and it seems to work just fine. I suppose my internet connection isn't the fastest, though.
It's interesting to see someone using a single board computer as their main computer, and preferring a Linux distro that compiles everything from source.
Why bother with docker for a home server other than for the fun of it?
I've been embracing systemd recently, and using it to manage processes in an embedded system. Once you wrap your head around how services work, it seems pretty convenient compared to init scripts. Notably, systemd works with legacy init scripts, so you could still go that route if you wanted to. Maybe throwing docker into the mix makes it clunkier, though?
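For example, here's roughly what a unit file looks like (a hedged sketch; `myapp` and its path are placeholders) next to the page of init script it replaces:

    # /etc/systemd/system/myapp.service
    [Unit]
    Description=Example home service
    After=network-online.target
    Wants=network-online.target

    [Service]
    ExecStart=/usr/local/bin/myapp
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target

Enable it with `systemctl enable --now myapp` and you get supervision, restarts, and journald logging for free.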
> Why bother with docker for a home server other than for the fun of it?
I do this. Over time you forget how each service was configured, or simply don't care. Adding more and more stuff to a home server increases the complexity and the attack surface more than linearly in the number of services.
I run nearly all my home services in docker, and I have a cookie-cutter approach for generating SSL certs and nginx config for SSL termination (not dockerized). Provisioning is automated through ansible, so my machines can be cattle not pets, as far as is possible on 3 raspberry pis.
Same, but I haven't started with anything like Ansible yet, only beginning to learn it at work.
Running all my services in Docker keeps it all clean because I'm a very messy person when it comes to Linux. Change, change, change, it works, forget about it, it breaks, find something I did years ago tripping me up now, change, change, change, it works, forget about it.
With docker every service is contained and neatly separated. I can rip something out and replace it like slotting a new network switch into the rack. Delete the container, delete the image(s), and delete the volume if I want to start over with something completely fresh.
I can move everything to a new server by moving bulk hard drives over, restoring docker volumes from backup and cloning docker-compose configs from git. Haven't tried any distributed volume storage yet.
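For anyone wondering what that restore looks like, a named volume is basically a tar round-trip (a sketch; `mydata` is a placeholder):

    # back up a named volume to a tarball
    docker run --rm -v mydata:/data -v "$PWD":/backup alpine \
      tar czf /backup/mydata.tgz -C /data .
    # restore it on the new server
    docker run --rm -v mydata:/data -v "$PWD":/backup alpine \
      tar xzf /backup/mydata.tgz -C /data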
> Haven't tried any distributed volume storage yet.
Having tried Gluster, Ceph/Rook, and Longhorn, I strongly recommend Longhorn. Gluster is kinda clunky to set up but works, albeit with very little built-in observability. Ceph on its own also works but has some fairly intense hardware requirements. Ceph with Rook is a nightmare. Longhorn works great (as long as you use ext4 for the underlying filesystem), has good observability, is easy to install and uninstall, and has lower hardware requirements.
Its main drawback is it only supports replication, not erasure coding, which tbf is a large contributor to its ease of use and lower hardware requirements.
Longhorn has no authentication at the moment, so any workload running in your cluster can send API requests to delete any volume. I think they're working on it, but it might not be the best solution unless you deploy a security policy to prevent network access to the API pod.
> Why bother with docker for a home server other than for the fun of it?
Because it legitimately makes things easier. Your source of truth for how an app was set up is the compose file, and as a result you only need to care about the storage each app uses for its state. This makes upgrades and back ups almost entirely painless and drama free.
Deploying the average app the old way feels downright medieval.
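To make it concrete, the whole "source of truth" is often just this much (a sketch; the image and paths are examples):

    # docker-compose.yml
    services:
      app:
        image: nginx:stable
        restart: unless-stopped
        ports:
          - "8080:80"
        volumes:
          - ./data:/usr/share/nginx/html:ro  # the only state to back up

Back up the compose file and ./data, and you can rebuild the service anywhere with `docker compose up -d`.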
I never said NixOS is uniquely solving any problems. But - none of the other solutions are really quite the same or equivalent. In total, its unified approach is relatively unique.
I really don't see myself using another Linux distro. It would make more sense to just modify my NixOS config. Don't know what another distro would even buy me tbh.
A- If you want to be able to restore to any point after installing your system: LVM/ZFS/BTRFS snapshots got you covered.
B1- If you want to install your system the way you want, you have Kickstart (RedHat family) and Preseed (Debian family). You provide an installation template to the installer, and it installs your system the way you want (a minimal Kickstart sketch is at the end of this comment).
You want to get your /etc/ after the installation? Either provide it on a disk, on your local network, or anywhere on the internet. Either integrate it into your Kickstart/Preseed end hooks, or do it after your first boot.
B2- Want something fancier? Create an FAI installation media which installs the system you want.
C- You want to transform an existing Debian system into something you want? You get the dpkg package selection state (dpkg --get-selections), set it on the target system (dpkg --set-selections), and apply it so the system is converted to your own selections. Apply the /etc via git or any way you want.
D- Want something more programmatic? Run an Ansible playbook locally (This is how GitLab installs and configures itself during install).
E- For fleet installations there are XCAT and network variants of B1, but they are out of scope for personal systems.
We use E in our system room. I used B1 for personal systems and B2 to deploy an installation country-wide via USB sticks. I used C in the same project for a small subset of servers. In these cases, all systems are operational in the first boot, starting from a known state.
I know a lot of people using A, and nudging me to try/use it. GitLab uses D every time I upgrade it.
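Since B1 came up: a minimal Kickstart file is genuinely short. A hedged, not-production-ready sketch (the URL and filename are examples, and %post assumes curl is in the installed package set):

    # ks.cfg
    lang en_US.UTF-8
    keyboard us
    timezone UTC
    rootpw --plaintext changeme
    autopart

    %post
    # fetch /etc bits right after install
    curl -fsSL -o /etc/myapp.conf https://example.com/myapp.conf
    %end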
Thanks for all the details. "A" can't really be checked into git due to size, and doesn't allow for forward config changes, just rollbacks. Do B1 or B2 allow updates to an existing system? C so far seems the closest to Nix, though it's missing the configuration side. I used D before, and frankly Ansible is not great: it's meant to be idempotent but has so many holes that it never really is, and because it's a separate layer from the system itself, the scripts might not work after a system upgrade. With Nix, the config IS the system. E sounds interesting, never heard of it - though it sounds like you're saying it's not suitable for personal use for some reason.
Entire OS configured with code, which can be modular with imports and functions. Want to run a Postgres server? One-liner in your config. Want to advertise yourself over avahi? Same deal. Need custom udev rules? One-liner again.
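Roughly like this (a sketch; the udev rule is a made-up example):

    # configuration.nix
    services.postgresql.enable = true;
    services.avahi.enable = true;
    services.udev.extraRules = ''
      SUBSYSTEM=="tty", ATTRS{idVendor}=="1234", MODE="0666"
    '';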
I literally cannot bork my machine. I can always roll back. Useful when messing with low-level stuff like drivers.
I have full control and understanding of updates and the exact versions of all the programs I run. I can always read the source of any program I'm running by looking in the nix store.
Super easy to package new programs and contribute to nixpkgs.
Super easy to tweak packaging others provided, without forking, via overlays.
I share config across devices in a git repo, so provisioning a new machine is trivial. I got a desktop and I basically imported a bunch of common config along with like 10 lines of config specific to the desktop.
I'd say it's more like vim. Steep initial learning curve for a few months, then it becomes pretty easy and indispensable. Perhaps Haskell is like this but I never got past the curve.
Are templated VMs contained within a few kilobytes of config files? Can anything in the entire system be included or excluded by tweaking these text files? Can it be used as a desktop on bare metal?
ZFS and BTRFS are orthogonal and often used in conjunction with Nix.
This person's main argument is that, because Ansible can be used to accomplish some of the benefits (for some definition of "accomplish") of NixOS, NixOS isn't doing anything special or differentiating.
I also get the sense they're for some reason solely focused on cattle server use-cases. I'd say the OP (home infra) is in between your production servers and your PC.
I run my home infra as sort-of-cattle. Nothing that matters is stored solely on local disk. If my Mac were to die, I'd probably spend an hour or so waiting for Homebrew and asdf to install stuff, and I'd have to manually grab some files from GitHub. The worst case would be my Windows desktop dying, since I have nothing in the way of repeatability set up for it, but all I use it for is Steam. Again, it's mostly just the annoyance of installing Windows and waiting for Steam to re-download a ton of games.
Conversely, I can lose a k8s node and have nothing change. If I lose my NAS (separate node, separate Proxmox cluster), I'd have to boot up the backup (which boots daily to sync, then shuts down) and run an Ansible play to change its IP address so that all the NFS targets still worked. I could make that more automated, I suppose, but it's an unlikely scenario so I'm fine with the small amount of manual labor.
I guess my point is that I don't see the benefit in having a special OS for daily use. If I want to fiddle around and possibly break things, I don't want to be doing that on the device I use daily. I used Gentoo for years in the early 2000s, and no longer have the time or patience for my main computer breaking constantly. If I want to play with something, I spin up a VM. If I want to play with something on bare metal, I have an old Dell T310 I can use, and a couple of ancient MacBooks somewhere.
This is what I love about NixOS users - how belligerently defensive y'all get immediately when someone pokes at your project.
> Are templated VMs contained within a few kilobytes of config files?
Not kilobytes, but Proxmox supports sparse images so pretty small. More importantly, disk space is cheap as hell, and I value my time way more than a couple of hundred megs of space.
> Can anything in the entire system be included or excluded by tweaking these text files?
Personally I template my images with Ansible, so yes in fact anything can be included or excluded with a text file.
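E.g. a toggle is just a variable on a task (a sketch; the variable and package names are made up):

    - name: Optionally bake the monitoring agent into the image
      ansible.builtin.apt:
        name: prometheus-node-exporter
        state: "{{ 'present' if include_monitoring | default(false) else 'absent' }}"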
> Can it be used as a desktop on bare metal?
Who cares? The performance hit from a modern Type 1 hypervisor is so small as to only matter if you're also the kind of person who is tweaking obscure CFLAGS for emerge, which is to say it doesn't matter.
> ZFS and BTRFS are orthogonal
Only in that they aren't an OS, obviously, but they perform the same function (rollback) that you mentioned as a positive point for NixOS.
> Who cares? The performance hit from a modern Type 1 hypervisor is so small as to only matter if you're also the kind of person who is tweaking obscure CFLAGS for emerge, which is to say it doesn't matter.
I want to use it as my personal computer...using a VM seems even more fringe and niche than NixOS there lol. And for a home network, VM also seems overkill.
I suppose there are also network effects at play. If you use NixOS for a laptop and desktop, suddenly using it for home infra is actually more economical than using other tools.
I literally cannot be paid to care about borking my machine. It takes ten minutes to reinstall.
For stuff that can't conveniently be installed locally (hello, multiple versions of DaVinci Resolve) or that I'd rather keep ephemeral (hello, basically all the development environments I use), I've got Docker.
NixOS's virtualisation module can use either Docker or Podman, and if Podman is enabled it has a Docker compatibility mode (via `virtualisation.podman.dockerCompat = true`) that puts symlinks in all the right places (`docker` binary, `docker.sock`) so that software doesn't even know it's not running vanilla Docker :)
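i.e. something like (a sketch of the options in question):

    virtualisation.podman = {
      enable = true;
      dockerCompat = true;  # `docker` alias + socket, as described above
    };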
Really not seeing the point of using docker-anything on NixOS here. Shouldn’t Nix already be able to isolate dependency trees from each other? Why would you want to duplicate all those files yet again in a docker image?
Sounds good in theory but there are a couple reasons to use docker. The first is that there are endless packages already prepared for docker that you'd have to manually set up on any OS, let alone Nix. For instance "itzg/minecraft-server". The second is that if you use docker you've got control of where all the stateful volume data sits. I keep it all in one folder for easy backup. The rest of the system is fully managed by nix.
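To illustrate the first point, that Minecraft server plus its single state folder is just (a sketch; the host path is my convention, not a requirement):

    services:
      minecraft:
        image: itzg/minecraft-server
        environment:
          EULA: "TRUE"
        ports:
          - "25565:25565"
        volumes:
          - /srv/volumes/minecraft:/data  # all state lives in the backup folder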
> Why bother with docker for a home server other than for the fun of it?
As others have alluded to, mostly for having everything in code. I went from Docker --> Docker-Compose --> Kubernetes. The latter is 100% overkill, and was mostly done to assist learning. Still, it's very nice to have as close to HA home services as possible (with the exception of a dedicated failover WAN connection, and me needing to manually fire up a generator for power - UPS will last ~15-20 minutes).
Scheduling aside, though, yes you could accomplish all of that with systemd and something like keepalived.
This entire website reads like the biggest caricature of a nerd that I have ever seen. Every opinion is that of gatekeeping and purity über alles.
Signal's creator has reasonable concerns about Web3 and NFTs? Better drop it, since "[Moxie has] very disturbing opinions on decentralization, [and] cryptocurrency..."
6+ paragraphs written about the print quality on various deskmats.
Tables of every material possession they own, seemingly lacking only the UPC.
This comment made me click the shady af looking link. If a person that (judging from their keybase avatar and their website's "about" page) appears to be a total caricature of a nerd accuses another nerd person of being "the biggest caricature of a nerd" it must be good.
Sadly I was disappointed. It's just another of these "digital gardens", or whatever the term is for the type of site one can find across various platforms like Neocities these days.
Not sure what the described homelab has to do with crypto or decentralization, though.
The homelab has nothing to do with those things, it's the other pages on the site, wherein the author discusses things like why they've eschewed Signal for XMPP due to Signal's author's feelings on crypto.
A while back I came across a site on HN with a very similar style, which went into similar detail on living "off the grid" on a sailboat. https://100r.co/site/about_us.html
The posted website is the second time I've seen this, but I have to imagine it is a specific style or practice.
Some of the content is interesting on the rest of the site, but this post definitely screamed "out of touch", perhaps surprisingly so if you've followed the content for a while. Being on the ball with tech is an ever-moving target, and this feels very 2015.
I used to use the crappy mainstream stuff like the Linksys, in fact I have probably over £1k of WiFi routers in my router graveyard at home.
I switched to a Ubiquiti Edge Router and went the "discrete devices" route, one device for one purpose.
The edge router lasted 5 years or so before some power component blew, and with the sorry state of updates from Ubiquiti these days, I decided to move on. When I was looking around Turris came up a lot but it goes against my "discrete devices" approach so I bought a Protectli i5 box and run pfSense.
Ideally I'd like to get to the point where my Ubiquiti UniFi APs can also be replaced with generic but capable hardware running Linux or BSD at WiFi 6 speeds (not sure that's currently possible/available), and switches are likely a way off.
Phishing with punycode is a mostly solved problem: show the Unicode characters only if there are no homoglyphs in the name. That's why browsers can show them. See "client side mitigations" on https://en.m.wikipedia.org/wiki/IDN_homograph_attack
It will show in browsers correctly now, except in cases where there's mixed character sets. This is to avoid phishing attacks abusing unicode with look-alike characters.
Firefox on macOS shows the Unicode in the address bar properly.
I do kind of like that HN retains the xn-whatever as it's actually formatted in an authoritative DNS zonefile, because this is hacker news. As I do not read the language in question the unicode original and the xn-whatever are equally incomprehensible to me.
>Even though the Linksys WRT3200ACM was released back in 2016, it’s still a solid and well-supported option for OpenWrt – unless WiFi is your primary requirement.
>Since the Panda PAU09 N600 works out of the box, the only thing needed to enable the WiFi access point is a working hostapd configuration: [75+ lines of config for something that "works out of the box"]
>One pitfall here is that Docker likes to mess around with iptables, leading to WiFi clients being unable to communicate with the internal network. In order to fix this, another drop-in configuration is needed:
>Update: Because I simply couldn’t get it working with this rule, that was suggested in the Docker docs in first place, I eventually gave up and used a jackhammer to fix it: [...]
I see wifi is still a shitshow in the libre world. Running entirely free software is a laudable goal but the above sums up why it's never going to be mainstream. All that fuckery for previous generation .11ac?
No thanks, I have a zillion other more important things to do with my time.
Libre WiFi is a shitshow, but half the things you cited aren't about WiFi. The "Docker/WiFi" problem that two of your quotes are about isn't about WiFi, it's about "Docker insists on messing with the firewall, and that's a problem because I'm trying to use the box as a router." Docker messing with the firewall would be a problem on any router, wired or wireless; and it's just coincidental that OP's setup happens to be wireless.
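If anyone hits this themselves: Docker can be told to leave the firewall alone entirely via /etc/docker/daemon.json, at the cost of wiring up container NAT/forwarding yourself:

    {
      "iptables": false
    }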
None of the configuration was about the wifi interface not working out of the box, but about setting up an access point and a network bridge. If OP had just wanted to use the interface as a normal wifi client, I suspect installing NetworkManager was all he had to do (it's installed and started by default on most user-friendly distros), and the experience would have been equivalent to Windows, Android, or iOS. At least that has been my experience over the last 5-10 years.
And this is why I went with a separate AP from the router. You can easily get a commercial-grade WiFi AP that doesn't do anything other than WiFi. They're stable and support the latest standards.
Meanwhile, I run my own Linux router on a Raspberry Pi. It covers all the wacky edge case needs I have such as Wireguard, VRFs, VLAN, etc.
Yes. I can achieve line-rate throughput (940 Mbps measured TCP throughput) with a USB 3.0 gigabit dongle on one side, and the built-in Ethernet on the other.
Out of the box without tuning, it's more like 750 Mbps maximum throughput. Most of the optimization is pinning IRQs and RX buffers to particular CPU cores; otherwise it'll max out one core and limit the throughput.
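The pinning itself is just procfs pokes (a sketch; the IRQ number varies per board):

    grep eth0 /proc/interrupts          # find the NIC's IRQ, say 38
    echo 2 > /proc/irq/38/smp_affinity  # bitmask 0b10 = CPU1 only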
I'm routing and forwarding, as well as stateful firewalling with nftables.
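The nftables side is small, too; the core of a stateful forwarding setup is something like this (interface names are examples):

    table inet filter {
      chain forward {
        type filter hook forward priority 0; policy drop;
        ct state established,related accept
        iifname "lan0" oifname "wan0" accept
      }
    }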
On the topic of the original post: I would encourage people who really want an open-source home network router to run something like pfSense on a very small x86-64 system with four gigabit Ethernet ports, rather than the Linksys in the example.
Or better yet, OPNsense, as the Netgate team has time and time again shown that they are incredibly petty and do not want to support the community in the most basic of ways.
+1, I switched to OPNsense about 11 months ago and could not be happier. Once the dust settled I also moved some of my small-business customers over, and it was a smooth experience.
The only thing holding me back from switching to OPNsense is the lack of pfBlockerNG. To the best of my knowledge, there is no equivalent package available at this time.
Do you have any suggestions / personal experience with systems like this? I'm really concerned about getting something that isn't powerful enough to process stuff at gigabit+ rates, and most of the "routers" I've seen get pricey to do that.
The PC Engines APU2 will do gigabit happily on four interfaces, with passive cooling and ECC memory; realistically, nothing short of actual servers will do more.
This is an annoyingly unfilled product space. I keep on wanting and the market keeps failing to provide something small, cheap, and with 2 gige ports. Heck, I don't even need x86_64, as long as there's a well supported Debian port.
Anyone have suggestions for even two gigabit ports and the lowest possible price?
Also, I'm curious if anyone has concrete examples of advantages of anything other than Debian for my router. I want to know if I'm missing out on cool features or some such :)
Earlier this week I bought a COOFUN GK41 with two Gbit Ethernet ports for $160. That included 8 GB ram and 120 GB SSD. It arrived yesterday, and I haven't installed OPNsense on it yet. So I can't report how well it works, or even whether it works.
I was considering a QOTOM Q355G4 until I decided I didn't need 4 NICs. They're $300-400ish, depending on configuration.
So the hardware is there, it's just from companies I'd never heard of.
Lowest seems to be Protectli (https://eu.protectli.com/vault-2-port/) or any of the Chinese 4-port devices (Topton, Qotom, KingNovy, ...) in a barebones config where you add your own RAM & SSD.
ARM-based devices are not supported by pfSense / OPNsense, so you should stay with x86_64 based devices.
A used rack server is also an option if you want room to grow; Sandy Bridge and newer models have decent power consumption. My DL360e idles around 40W with five disks. Four gigabit ports, a BMC, ECC, and you can virtualize pfSense and run additional containers and VMs on top.
What are some advantages of Gentoo Linux? I've never tried it or heard much about it. (I'm familiar with many others though: Ubuntu, Debian, CentOS, Fedora, Arch, NixOS, etc.)
Builds everything from source, and most packages have flags ("USE flags") that control how the source packages are built (e.g. removing unneeded package dependencies). The other potential benefit is compiling with better optimization than generic packages.
Mainly that you can tweak the compiler options to your liking, and have the full userspace built with it. For example, you can target your exact CPU with all its particular extensions. There are also hundreds of compile time feature flags, so you can omit functionality you don't need.
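The knobs live in /etc/portage/make.conf, e.g. (a sketch; the USE flags are just examples):

    COMMON_FLAGS="-march=native -O2 -pipe"
    CFLAGS="${COMMON_FLAGS}"
    CXXFLAGS="${COMMON_FLAGS}"
    USE="-gnome -kde alsa"  # drop desktop-environment support, keep ALSA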
I've used it for a long time before switching to Arch and then Ubuntu. When using Gentoo I used to know more about the different subsystems running - it was a function of seeing what needs to recompile when you update and also having to sort out issues when certain hardware stopped working.
I stopped using Gentoo and Arch because it was just easier to be on the same OS as other devs and it didn't feel like I lost much switching to Ubuntu. Every now and again I'm tempted to go back to Arch, but honestly, there are other things I'd like to play with rather than my OS
The advantage of Gentoo Linux is you get to see lots of compiler output scrolling past instead of being able to do real work on it, so you look like a l33t h4xx0r.
The disadvantage is that everything is built from source, so it takes forever to install anything and it's very fragile, and it's a rolling release distro so you're installing stuff every single day and it's very fragile.