If you want low power, check out the N100 systems that are out now. They use like 3W. I bought the N200 from this deal recently and it's a great little machine that can handle a lot of Plex transcodes https://slickdeals.net/f/16934029-msi-cubi-n-adl-dual-nic-in...
It's interesting that these N100 systems can cost less than some of the Zima products, so I don't see much of an upside to going with them.
N100 and N200 systems are the perfect price/performance point for small servers. I am running a 7-year-old N3710 fanless laptop right now and it performs well enough for a few tiny sites and a PostgreSQL database. I am pretty sure we reached this point ten years ago but I was too numb to notice: low-power devices have become powerful enough to replace the dedicated servers we were spending tons of money on twenty years ago.
Once you get objective about what you're using that home server for, this is so true. Ignoring things like cryptomining or $FOO-At-Home gamified compute grinding, the vast majority of home needs are met by the lowest-end processors. The J4105-based mini-PC I have running VMware isn't annoyingly slower (though to be fair, it's definitely not faster) than the 10-year-old server it replaced, but uses a fraction of the power and makes no noise.
As I read through the article I came to exactly the same conclusion.
I have an N95 mini PC that was $195 CAD, which came with 16GB RAM and a 500GB NVMe. It will also hold one SATA drive. All in a single box, although it is slightly larger and has a fan.
The NVMe slot on the Orange Pi 5 I use also keeps that mess down to a minimum. Power + network and that's a finished setup.
Edit: There are a TON of options in this space now. The value of 10 year old eBay gear is questionable at best.
Although multiple SATA to USB3 cases should be a viable option too; I plan to build my next home server using one of them paired to a mini PC with lower power requirements than the one I use now.
While slightly bigger than those mini desktops, Asus and ASRock have recently released Intel N100 ITX motherboards. I have the Asus variant in a 1U short-depth case, and since there's no fan, it's silent and capable of running some self-hosted stuff with ease.
It is not true: an N100 system consumes 30W at full load (or 15-25W if capped in the BIOS), and around 7-10W idle. Previous-generation Atom systems were around 20W max. Even older systems like the N3150: 15W. Anything USB-connected can add another 5-10W permanently.
The MSI isn’t fanless though. For a passively cooled N100 mini PC, look at the ASUS ExpertCenter PN42 or the upcoming Zotac ZBOX edge CI343. You’ll pay a bit more than for the ZimaBoard though.
Because home servers tend to be bursty, on-demand workloads.
If you're doing something that is hammering a CPU at 100% 24/7, then you'd get a faster CPU to get the workload completed faster and you'd be back to having idle CPU cycles…
> Because home servers tend to be bursty on demand workloads.
Not all workloads can be "completed" by throwing hardware at them. The sort of thing this kind of hardware is ideal for is things like home automation.
I run Home Assistant on a Xeon ITX PC with a micro-PSU, for example. It's at ~40-60% CPU and ~80% RAM (of 16GB) running inference on my camera streams, handling Zigbee and Z-Wave devices, running automations, and handling all of the sensors for the doors, lights etc in my home. This isn't a bursty problem you can simply get a more powerful CPU for; the goal here is to keep power down and performance up because it's a constant load. A smaller, lower-power micro-PC would save me a bit in energy costs if I wanted to.
For real work, people like me (us?) use real servers like you say; I run a Ryzen 9 12c/24t, AsRock Rack, 2 M.2 NVMe, HBA, 8 SAS bays, 128GB RAM, 6 GbE NICs, with dozens of VMs and containers on, a few Kubernetes clusters, and a bunch of services (git, Harbor, Argo, etc) it uses about 100-150W with my CPU downspecced to 65W TDP.
Now that's a system that has bursty workloads, but it's in a different class to these low power machines that "normal" home users are looking for.
> Now RAM on the other hand!
> Free RAM is wasted RAM!
60W might be right for an ancient desktop, but it isn't correct for at least the above microserver (2nd item). From rough memory, mine were under 40W (38W?) when the drives were spun up and active.
And likely isn't correct for the minipc (1st item) either, though I've not got one of those (yet) to measure it.
More reason to buy the kind of passively cooled mini PC people get as pfSense routers from AliExpress & co. You get Jasper Lake (2021) or even Alder Lake (2023) instead of the much older Apollo Lake from 2016. The only thing you really miss out on is the form factor and the PCIe slot.
I recently got one of those passive boxes with an N5105 for OPNsense. But I found it ran pretty hot. I was able to replace the thermal paste and shim the cooler a bit to get it closer to the CPU (there was originally a huge gap they bridged with a glob of paste), and it lowered the temps a bit, but it still hung out around 60-65C afterward. I think that could be OK, but it still just felt too warm to the touch. I ended up placing a USB 5V fan on the outside of the case and now it sits around 40-45C, which I'm happier with. But now I have a fan running 24/7 on my passive device.
At the end of this ordeal, I bought another machine (MSI Cubi N with an N200) for cheaper than the Topton AliExpress job, which includes an internal fan. The fan is super quiet and the build quality is way better. And it comes from a reputable manufacturer which I trust more not to load any weird stuff into the bootloader. If I could do it again, I'd probably try to make my OPNsense router out of an MSI Cubi N with an N100 or N200; in both cases it would have been cheaper, more powerful, and used less electricity than the AliExpress passive one. The only possible hiccup would be the non-Intel NICs that the Cubi comes with.
Just some perspective from someone who recently bought a couple of these devices.
These machines are super attractive spec-wise for their cost! However, and maybe it's FUD, I don't really trust the power supplies or the firmwares on these units. I'd much rather pay for a system with a power supply that is UL certified and a system that, at least on the surface, has a much better chain-of-custody for processor firmware. I know I could replace the power supply with something UL certified, but that now means I'm contributing to e-waste needlessly. My eyes are currently peeled for a 12th-13th gen i5/i7 1L system in my price bracket, as between the heterogeneous cores to get solid power efficiency, the ability to drop 2x NVMe SSDs and a lot of RAM, and on some of them, even the ability to get 10 Gbit, I can hit my ideal performance per watt budget and take advantage of my NAS for high performance decentralized storage.
And I'm not a huge fan of the form-factor either, it gets messy once you've got your SATA connectors, power and ethernet.
Not suggesting Zima is the right solution, but ebaying dated hardware often isn't either for many of us.
Don't get me wrong, it's great there are people who can use this stuff and not pay disproportionate prices for electricity to run it; but much of the world isn't in that situation; maybe one day.
Yeah, that's fair. The first option there (HP ProDesk 600 G4 Mini) do seem to have a reputation for being noisy, though replacing the fan with something quieter (eg Noctua) seems to be what people do:
While comparing server prices it's useful to add 3-4 years of the electricity cost of running it mostly idle 24/7. In Europe, new small power-efficient SoC boards often win such comparisons over used socketed systems.
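As a rough sketch of that comparison (every figure below is an illustrative placeholder, not a measurement; plug in your own purchase price, idle draw, and local €/kWh rate):

```python
# Rough total-cost-of-ownership sketch: purchase price plus electricity
# for a box running mostly idle 24/7. All numbers are placeholders.

def tco(purchase_eur, idle_watts, years, eur_per_kwh=0.30):
    """Purchase price + electricity for `years` of 24/7 idle operation."""
    kwh = idle_watts / 1000 * 24 * 365 * years
    return purchase_eur + kwh * eur_per_kwh

# Hypothetical comparison: a new N100 board idling at 7 W vs. a used
# socketed system idling at 40 W, over 4 years at 0.30 EUR/kWh.
new_soc  = tco(purchase_eur=160, idle_watts=7,  years=4)
used_box = tco(purchase_eur=60,  idle_watts=40, years=4)
print(f"new SoC:  {new_soc:.0f} EUR")
print(f"used box: {used_box:.0f} EUR")
```

With those made-up inputs the cheaper used box ends up costing roughly twice as much over the period, which is the point being made about European electricity prices.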
These machines and the ZimaBoard use absolute bottom-of-the-barrel cheapo NICs, which are the biggest source of issues for networking devices.
In my experience using all of these, if you want to have reliable networking, either use a Linux distribution that ships with the quirks needed for specifically consumer NIC hardware (like VyOS), or use a NIC that ships for the OCP platform (aka commodity hyperscaler NICs) like the i350.
Yeah...this really bites the folks that don't do a lot of networking and expect things to 'just work'. The forums are filled with threads that start with 'I have this weird bug/failure I can't debug...' that peter out with 'I replaced the NIC with a good one and problem went away'. Those crappy (looking at you, Realtek) NICs have all sorts of bizarre issues that the vendors clearly don't care about beyond 'works in Windows...ship it'. IME, if it's an Intel or Broadcom chip, you're probably fine.
> if it's an Intel or Broadcom chip, you're probably fine.
You'd think, but a good test is to punch into google "e1000e proxmox issues" or "i210 proxmox issues." You'll discover that in addition to i225 issues another commenter is talking about, the Intel NICs shipping on the USFF PCs also have catastrophic issues in Linux and Windows.
I have the Gen8 microserver from your second link and what I really like about it (besides the 4 bays) is that it has iLO. Whenever there is a boot issue or I need to reinstall, I don't have to drag it out of my server closet onto my desk and find a monitor and keyboard to hook it up to (now that I think about it, I don't even have a VGA monitor anymore) just to do basic BIOS or GRUB stuff. It can all be handled via the web interface, and the HTML5 terminal works great. Biggest downside of this machine is the power draw, even at idle.
I couldn't find anything in the ZimaBoard docs about it, but my guess is I'd have to find a mini DisplayPort-compatible monitor if I ever need to do disaster recovery of the OS. This especially becomes an issue with the eMMC, as you cannot swap it out easily like an SD card on a Raspberry Pi.
Yeah, I have a few of the Gen8's too, and agree about the iLO. Especially with the "advanced" iLO option: the BIOS doesn't check the validity of the license key, so you can just look for an iLO key using your favourite search engine and you're good to go. ;)
For fallback purposes I bought a VGA to HDMI adapter like this:
Not sure if that's the exact model I have (am in different location atm, so can't check), but the pictures in that listing look like the thing I have. It's worked fine in the times I've needed to use it.
Huh. I have one and found the iLO such a total PITA to try to make the thing boot from its primary disk, a small SSD, that I ended up turning it totally off.
I guess now I need to find a way to turn it back on again...
Yeah, it's useful once you've typed in an "iLO Advanced" (or similar name) license key to enable it. Findable online for free with any search engine. ;)
It's a home NAS. It has one user, me, and only 1 network cable. The faffing around with a console cable, separate from the network cable, and the inability to just point to a boot disk like an ordinary PC, were hugely irritating and drove me to shouting anger several times.
It never occurred to me that I'd want the wretched thing, let alone try to unlock more of it.
All I want is to say BOOT OFF THIS DRIVE without defining volumes or single-member arrays or any of that enterprise BS.
The main thing I use iLO for is the remote (full screen) console (inc keyboard/mouse) that it makes available over the network.
So, if I want to muck around with settings, change boot things (like in your case), etc, it's all doable from anywhere in my house via web browser.
Keeps things pretty straight forward. From memory, the default iLO password for each machine is randomly generated and printed on a tag attached... um... to the back (I think?).
But there's a physical switch you can set (inside somewhere) which disables the iLO login password. Less secure of course, but for a home environment that can be the right choice. :)
If you don't need a stupid amount of compute power you can find even cheaper alternatives.
I bought a used Asus Chromebox CN60 for $21, put $20 of RAM (16GB) in it and a $20 256GB SSD. All in, ~$61 minus shipping, and I have a small home server running CasaOS. Same as the ZimaBoard, I have it running Home Assistant, Pi-hole, a MariaDB instance & an image hosting website for my wife's project. It happily hums away running on a Cloudflare Tunnel.
It's excellent!
I had to set up my Cloudflare Tunnel manually but it has since been added as an app to the app repository.
https://github.com/IceWhaleTech/CasaOS-AppStore/issues/158
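For reference, the manual setup boils down to creating a tunnel with `cloudflared tunnel create` and pointing a config file at the local service. A minimal sketch (hostname, tunnel name, credentials path and port are all placeholders for your own values):

```yaml
# ~/.cloudflared/config.yml -- values below are placeholders
tunnel: my-home-tunnel
credentials-file: /home/user/.cloudflared/<tunnel-id>.json
ingress:
  # route the public hostname to the locally hosted site
  - hostname: images.example.com
    service: http://localhost:8080
  # a catch-all rule is required at the end of the ingress list
  - service: http_status:404
```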
I'm doing pretty much the same thing with cloudflared on an Acer CXI2 (Ubuntu Server). I just use an external enclosure with an old SSD I had, and it cost me about $20 with 4GB of RAM.
The Lenovo Thinkcentre M720Q and M920Q have both an m.2 M key and an internal PCIe x8 slot you can stuff a GPU, quad gigabit or dual 10GbE into. No ECC though. Sells for about $100 on eBay for a decent one with PSU that is the same PSU as their laptops (Bonus if you're a Thinkpad user). You need a riser and rear bracket for the PCIe slot that can be found on eBay for like $25. I have a 6 core i5, 32GB RAM, a dual Intel 10Gb adapter and 2TB NVMe in mine. Idles at 19W which is 2W less than the rectangular trash can 10Gb Verizon router's 21W. Total build cost was around $200 for all used hardware.
The Lenovo M720q is an amazing little machine. I'm using it with a 2x 2.5Gbps QNAP NIC and running OPNsense. It's been almost a year and it has been rock solid as a router. Before I discovered the M720q, I had no idea that an 8-lane PCIe slot was possible in a mini-PC that small.
I have a 4x1Gb i350 in my SFF EliteDesk and it can get quite warm. I've had to install an aftermarket small fan for peace of mind, since the case only has a CPU and a PSU fan which don't create any airflow over the extension cards.
How's a dual 10 GbE faring in the even smaller enclosure of those Lenovos? What's the noise situation? I wanted to switch my router and random home VM needs to an EliteDesk mini I have lying around but it only has a Gb port, so I was looking at thunderbolt 10 GbE adapters, but seeing how pricy they are, might as well get a new complete box.
I've yet to really beat on it, but it's quiet for its size, with a typical laptop-like fan sound under load. The dual-port 10Gb Intel NIC doesn't seem to add much heat, though I purposefully went with an SFP card in order to use cooler-running fiber SFPs (a copper SFP runs burning hot vs a warm fiber SFP).
A brand new dual port 10Gb SFP card costs like $100 USD. Used is far less. I also wound up realizing that when it comes to 10Gb, copper can be more costly than fiber and uses more power per port. So I only buy SFP+ gear so I can use one of many interfaces: copper, fiber, DAC, etc. Fiber and DAC cables are cheaper and use less power. I have a ~$250 USD 8 port Mikrotik 10Gb SFP+ switch and also bought a few cheap 10Gb mellanox SFP cards for my server, desktop and work bench PC for like $30 each off ebay (I use them in Linux and FreeBSD.)
This is passively cooled. Fans are likely to be the first thing to fail on a setup like this. Plus, it is likely you are running 24x7, and fans mean dust gets inside and eventually will kill things.
I got a refurbished J4105 8GB RAM thin client for 40€, with 2 M.2 SSD slots (though one required an adapter, and only that one is NVMe, so let's add another $15 for the adapter), passively cooled.
The case doesn't look as nice, but that's still over a 100€ difference for a slightly faster, slightly newer CPU. And while not officially supported, it does take 16GB RAM if I ever decide to upgrade. It does not have GbE ports, so that is probably the biggest reason to go with this board instead, besides wanting the sleek case.
I find NUCs kind of just work for home servers, consume a very reasonable amount of power (10W idle, 90W at full load, ~15-20W average in my case), aesthetically very neutral, and run standard Ubuntu distros with zero hardware driver issues. Everything works out of the box with a vanilla install.
They're also available dirt cheap second-hand if you're okay with a generation or two older processor, and you can use an m.2 SSD inside the case without a dangling hard drive like I see with TFA's Zimaboard setups.
I have a couple of Dell versions similar to your first link, the Micro form factor. They are nice and compact, but they have heat issues. I have one running BlueIris security camera software and have to run it with the case off and an extra fan to keep it from going into thermal overload.
The next step larger systems are probably a better bet.
Doesn't sound like a good deal when compared with 2nd-hand microservers and small-form-factor PCs from eBay.
Things like these:
∙ https://www.ebay.com/itm/175917817224 ⇦ 2x NVMe slots
∙ https://www.ebay.com/itm/204477355543 ⇦ 4x 3.5" drives, ECC memory capable
Note - I don't know those sellers.