Home Lab Hardware Guide (haydenjames.io)
196 points by ashitlerferad on Aug 24, 2021 | 102 comments



Rack mount hardware is almost always expensive, loud, and power hungry. I just have never seen the point of building a home lab like this.

A single ATX desktop can do almost [1] everything a homelab can at a fraction of the cost & power consumption. I think a lot of the reason for homelabs/server hardware was to get access to more CPU cores; now that 8+ cores are very cheap, it is actually cheaper to buy a new consumer desktop than it is to run an old server.

What makes even less sense is wanting to use software like vSphere or ESXi, since it's about 10x more complicated than just using virt-manager/QEMU. It's like using an excavator to dig a fire pit. Server hardware & software makes sense when it's not your home, because then you do need a remote access tool like iDRAC. (There are DIY options if you just need something for personal use.)
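
To be concrete about how little ceremony the virt-manager/QEMU route needs, here's a minimal sketch using libvirt's Python bindings (assuming the libvirt-python package; it talks to the same qemu:///system socket virt-manager uses) to list your VMs and their state:

    import libvirt  # pip install libvirt-python

    # Connect to the local QEMU/KVM hypervisor; this is the same socket
    # virt-manager and virsh use, so there is no extra management stack.
    conn = libvirt.open("qemu:///system")
    for dom in conn.listAllDomains():
        state = "running" if dom.isActive() else "shut off"
        print(f"{dom.name():<24} {state}")
    conn.close()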

That said if you enjoy it as a hobby (or your homelab is actually a business thing) then go for it.


> Rack mount hardware is almost always expensive, loud, and power hungry.

Buying old rack-mount server hardware for home use is almost always a mistake. Old server hardware may feel cheap when you see an old dual-socket rack mount server on eBay with hardware that was fast 8 years ago, but you can probably meet or exceed the performance with something like a cheap 8-core Ryzen.

Rack mount servers are also exceptionally loud. Unless you love the noise of small, high-RPM server fans, you don't want rack mount server hardware in your house.

And don't forget the power bill. Some old servers idle at hundreds of watts, which will add up over the several years you leave it running. 24/7 server hardware is a good example of where it makes sense to be mindful of power consumption.

> What makes even less sense is wanting to use software like vSphere or ESXi, since it's about 10x more complicated than just using virt-manager/QEMU.

I disagree. ESXi is actually extremely easy to use, as long as you pick compatible hardware up front. The GUI isn't perfect, but it's intuitive enough that I feel confident clicking around to accomplish what I need instead of looking up a tutorial first.


I've always ignored advice when people say something's too hard or not worth it. And I pretty much never regret it.

I absolutely regret trying to get a used rack mount server running.

The combination of the steep learning curve going from workstations to server hardware, plus parts that were failing but tested OK, made for an extremely difficult path to troubleshooting and getting it running right.

And that's before you get to the quirks of getting it to boot and installing an OS and drivers and software.

I love it now that it works, but it easily took 100x the time (yes 100x) and probably 2-2.5x the total expected cost getting it to that point.

Not counting the additional AC unit I installed to keep it (somewhat) quieter.

I usually expect one or two aspects of my projects to have unexpected roadblocks, but for this it was issues with what seemed like every single step.

Its eventual replacement will be factory new.


My experience was buttery smooth. It was plug it in, replace the drives with new spinning SATA disks for bulk storage and SSDs for fast storage, install Proxmox, and I had my first apps on it within a few hours of starting. This was first on a Dell RX720 and later on an HP DL380 Gen9.

I got complete servers from decommissioning projects and they just worked. In 5 years, I’ve replaced a SAS controller battery backup unit on one of them.

The plural of anecdote isn’t data and all that, but if you buy complete gear that just aged out, it worked on the last day they used it and is very likely to work on the first day you use it.

The fan noise and power draw are annoying. Running a house full of VMs (I've got about 20 containers plus VMs), it pulls about 290 watts per the meter. That doesn't feel outrageous on the power side and is certainly convenient. (It's about $500/yr in power.)
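
For anyone checking the math, a quick back-of-the-envelope in Python (the ~$0.20/kWh rate is an assumption; plug in your own utility rate):

    watts = 290                                # measured draw at the wall
    kwh_per_year = watts * 24 * 365 / 1000     # ~2,540 kWh per year
    cost_per_year = kwh_per_year * 0.20        # assumed ~$0.20/kWh
    print(f"{kwh_per_year:.0f} kWh/yr -> ~${cost_per_year:.0f}/yr")  # ~2540 kWh/yr -> ~$508/yr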


I have used consumer hardware and enterprise hardware in my rack. There’s pros and cons either way.

Consumer hardware is cheaper, more power efficient, and quiet.

But, there are a few reasons I have switched to enterprise hardware:

* remote management and redundancy features make it more reliable if I’m traveling and want to remote in to do something

* some software works better with enterprise hardware features (e.g., special disk controller modes)

* I feel like my development experience is closer to what I can expect in production

* 100+ gigs of RAM on one system without a big upfront expense

* the occasional PITA of working with enterprise hardware helps me to understand what I might be expecting out of infrastructure team in production, or design ways to make their life easier


> And don't forget the power bill. Some old servers idle at hundreds of watts, which will add up over the several years you leave it running. 24/7 server hardware is a good example of where it makes sense to be mindful of power consumption.

My rack has an always-on laptop for applications that always "need" to be running. I then have an Arduino in my office whose sole purpose is to wake-on-LAN or power on via UPS until the servers are online when I turn my switch to the "on" position, and to put them to sleep when it's in the "off" position. Any servers still on after 15 minutes in the "off" position just get halted.

Once I did that, the friction of powering things on/off was so low that my power consumption went way down.
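
If you want to replicate the wake-on-LAN half without dedicated hardware, the magic packet is trivial to send from anything on the network; here's a rough Python sketch (the MAC address is a placeholder):

    import socket

    def send_wol(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
        """Send a wake-on-LAN magic packet: 6 x 0xFF followed by the MAC repeated 16 times."""
        mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
        if len(mac_bytes) != 6:
            raise ValueError("expected a 6-byte MAC address")
        packet = b"\xff" * 6 + mac_bytes * 16
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
            s.sendto(packet, (broadcast, port))

    send_wol("aa:bb:cc:dd:ee:ff")  # placeholder MAC of the server to wake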


The latest Ryzen's CPU performance and a max of 128GB of ECC UDIMM should be enough for most users, but it still lacks homelab features and flexibility. I need more than 16 PCIe lanes with ACS support, plus IPMI, so I went for an EPYC build. It's great except for idle power consumption. I wish they would improve it to Xeon levels, but maybe the chiplet architecture (8 CCDs and a massive I/O die) isn't good at that.

For hypervisor, I recommend Proxmox VE.


Don't have to go all the way to EPYC, stopping at Threadripper Pro along the way will get you most of the way there.


It was my initial thought, since the X399D8A-2T mobo (X399 but with IPMI) existed for TR 2000. But they changed the chipset for TR 3000, and TR 3000 doesn't offer a 16-core SKU (which would be enough for me). I also found that even a TR 2000 build isn't much cheaper than EPYC, because it needs ECC UDIMMs, which are rarely sold cheaply, unlike ECC RDIMMs. So I finally went for EPYC Rome.


there are Ryzen-compatible Supermicro motherboards out there, you know......

Edit: spelling


>What makes even less sense is wanting to use software like vSphere or ESXi, since it's about 10x more complicated than just using virt-manager/QEMU.

I always just assumed it was as a learning experience. When I was in my 20s, I didn't try to get Apache, qmail, and bind running because it was practical for me, I wanted a marketable skill. There are lucrative jobs out there for people who know these technologies.


That was my experience as well. Home labs seem to be cyclical for a lot of people, including myself. I started with a big overkill rack to learn technologies, and now I'm down to a 10-inch desk rack that is ATX case size, with just a few things to run my network and small VMs.


The application side is more understandable. There are lots of reasons to know and use Apache, Postfix, etc.

As far as learning goes, if you really want to work in enterprise IT then vSphere is good to know but if you are willing to learn things on your own then you might as well learn kubernetes, docker, etc.

Knowing things like vSphere is cool & perhaps useful, but it also hides how things work. If you want to know and understand things, it is better to stick to open source & interact directly with KVM and Xen. Like you wouldn't use cPanel to learn how a LAMP stack works.


What if you just want to spin up servers in an easy-to-use platform so you can use them for other things, and not learn the internals of virtualization?


Then KVM is still probably the move. Install Ubuntu, then install Cockpit for a web interface to manage KVM and Docker. If you want a little more depth, Proxmox is also good but there’s some mild learning curve there.


Best to avoid vSphere/ESXi in that case. I learned using qemu and was able to step into many roles immediately, including some VMware ones. The Linux/qemu/Xen ones pay better.


Absolutely, rackmount servers tend to be very loud and often power-hungry. But if you think of rackmount as a form factor, it can make some practical and aesthetic sense.

The IKEA CORRAS Birch Effect "rack cabinet" in my living room currently has a nice virtually silent GPU compute server based on a consumer/gaming PC in a 4U short-depth rackmount case, a mostly passively-cooled Supermicro Atom server in which I've replaced the PSU fan with a Noctua, and a sinewave UPS. (There's an ongoing evolution of gear over time, including more exotic stuff, and most toys I might play with in the future will also fit this form factor.)

Aesthetically, those stack on the bottom of the cabinet, a Birch Effect shelf sits on top, and the plastic OpenWrt WiFi router with all the pokey antennae sits safely in the shelf cubbyhole. (Once I need detached WiFi APs, I'll probably build a 1U or 2U router with opnSense/Linux/*BSD. And I have a discreet slide-out rackmount console for if I ever get a deeper cabinet.) I like to think it looks like understated black home AV that's not out of place in a living room, which is better than a tangle of assorted non-rackmount PCs and UPS.

Practically, besides the tidy organizing and uniform cooling airflow, I have some rack posts I can use if the stack of gear gets too large/tricky for pulling individual boxes without disturbing the others too much. Also, when I lived in a dodgier student apartment, fastening a bunch of rackmount gear together with security-head screws seemed a good way to discourage a burglar from walking off with my data.


> Rack mount hardware is almost always expensive, loud, and power hungry. I just have never seen the point of building a home lab like this.

Rack mount is just a form factor, so that generalization doesn't make sense. You can get pretty much any capacity systems you need in a rack case, including small cheap ones.

When I got tired of having systems in various different case sizes and a mess of wiring and switches etc, I moved everything at home to rack mounted.

Now literally everything is one big box (16U rack), with just power and network cables going in. All the computers, network switches, wiring, UPSs, KVM switch and monitor. Very tidy, very clean.

The computers themselves are cheap, quiet and efficient, since I don't need any monster servers at home. A couple are Atom-based (<20W total), one Celeron and one desktop AMD-based (forgot model). Same level of hardware I had before rack mounting, but now a lot neater and more organized.


True, if you build something inside a rackmount case then it can be whatever you want. However, from what I could tell, empty rack mount cases & rails still carry a bit of a premium. Were you able to get rack mount cases w/ rails at decent prices? Seems each one would have to be at least 2U to fit a normal ATX PSU (I guess you could use an ITX or small PSU, but those can be more expensive for the same wattage).

How did you handle the system & case fans? Did your rack cases have the standard fans, and were you able to configure those to be slow/quiet?


Most of my systems are from Supermicro, so they already came with all the suitable components for 1U.


Rosewill cases are affordable. 4U cases support 12cm fans, so any silent fan can be installed.


> A single ATX desktop can do almost [1] everything a homelab can at a fraction of the cost & power consumption.

So this is true, but the iDRAC/iLO on my big, loud server has a virtual KVM feature that lets me lazily sit at my desk and click buttons and install my server operating system of choice. It saves a lot of space and effort compared to flashing a USB, going and plugging in a keyboard (mouse), and monitor, and going through the whole deal. I'd wager that's one of the big things that would compel me to buy a rack server. I recently built a nice ATX desktop, fitted with a 5950X and everything, and I found that the PiKVM project [0] does a pretty good job at replacing that "integral" part of the server for me (you can also look into an ASRock Rack PAUL [1], but good luck finding one for sale right now).

> What makes even less sense is wanting to use software like vSphere or ESXi, since it's about 10x more complicated than just using virt-manager/QEMU

A lot of people (not me, I end up using libvirt/QEMU as it suits my needs) buy homelabs to work towards having hands-on experience for their system administration job, which uses ESXi/vSphere. It might also be for working on getting certifications from VMware, in which case they really don't have any choice but to use ESXi on their servers.

> Server hardware & software makes sense when it's not your home, because then you do need a remote access tool like iDRAC

Now, I addressed this earlier (laziness), but these BMC things are very useful—you can monitor the health of various components of your server, and I believe even update the BIOS without stepping out of your chair. It makes administering a homelab much easier, and even the Pi-KVM, a DIY option, I'm pretty sure, doesn't have monitoring features. Plus, those DIY solutions require wiring stuff into your ATX motherboard, which can get janky and might put off people who want a turnkey solution.

[0] https://pi-kvm.org/

[1] https://www.asrockrack.com/general/productdetail.asp?Model=P...
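
For a sense of what scripting that BMC monitoring looks like, here's a rough sketch using the pyghmi IPMI library to poll sensors over the network. The BMC address and credentials are placeholders, and the exact attribute names on each reading may differ from this, so treat it as an outline rather than gospel:

    from pyghmi.ipmi import command  # pip install pyghmi

    # Placeholder BMC address/credentials; IPMI-over-LAN against an iDRAC/iLO-style BMC.
    ipmi = command.Command(bmc="192.0.2.10", userid="admin", password="changeme")
    for reading in ipmi.get_sensor_data():
        # Each reading carries the sensor name plus its current value and units.
        print(reading.name, reading.value, reading.units)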


> So this is true, but the iDRAC/iLO on my big, loud server has a virtual KVM feature

This is available on AMD Ryzen motherboards, too.

Just get one of ASRock's motherboards with the BMC controller built-in: https://www.asrockrack.com/general/productdetail.asp?Model=X...

No need to mess with PiKVM or add-in cards. It's a server board with KVM management that works out of the box with Ryzen processors. It might need a BIOS update to support your 5950X, but it will work.


Yeah, I know about those ASRock boards, but they're more expensive than the rudimentary Pi-KVM solution I have right now (a 35 dollar Pi and a 12 dollar HDMI capture dongle; I would use wake-on-lan for powering the board on… if my MSI board's WoL worked). Also, not the one you linked, but the newer B550 ASRock Rack boards are impossible to find for sale—the only place I could find the B550 boards were on wisp.net.au, and I don't live in Australia or New Zealand so it wouldn't be cost effective. Perhaps I should've opted for an X470 board, but it was "older" so I was put off.


Yea, that's true. ESXi/vSphere can be very relevant, although I found that it was easy enough to learn on the job. The really complicated stuff in vSphere probably isn't going to come up in a homelab, but if experience is necessary to get the job then it's worth it.

pi-kvm looks very nice. I would really like to not have to use iDRAC or pay the license fee.

Normal ATX motherboards do lack a lot of features. I'm not sure why normal mobos don't just use LVFS [1] to update the BIOS, but luckily they can read the update file directly off the vfat EFI partition, so pi-kvm would solve that. I think every modern ATX motherboard also supports UEFI network boot, so you could set up a simple DHCP+iPXE server for onboarding machines.

[1] https://fwupd.org/


The Lenovo “tiny” hardware recommended in the article is really ideal. It's essentially laptop components in a micro case (no HID/battery/screen); they're even powered by a laptop-style external DC adapter.

They are affordable, quiet, powerful (modern x86_64 with basic gpu) and light on power usage.

I run a pair of m92p myself.


The downside to these small form factor PCs is that you are very connectivity-limited. You can't use one to build a NAS directly, or for GPU-connected VMs, etc.

They are quite good as a cheap thin client that you use to access your more powerful hardware. As hardware ages it tends to lack some of the nice features like dual 4K@60Hz output, Thunderbolt, etc., so having a new but cheap/low-power machine helps.


That's true. This model has just one small expansion slot, so you have to be clever. But it's fine for general purpose compute.

Yeah, realistically NAS is out, but that's probably OK. I mean, NAS is not a great fit for most general purpose computers. In a pinch you could go USB 3 JBOD or something. But personally I think it'd be better to go with either specifically storage-oriented hardware, or a scale-out filesystem on top of a cheap cluster, something like the ODROID-HC2.

Personally I like to keep things compartmentalized even at home. So my NAS is a dedicated (off the shelf) system, and the lenovo mini servers mount it via NFS/CIFS.


> expensive

This isn't always true. I got a 24-port Aruba 802.3at PoE switch with FOUR 10-gigabit ports, for a grand total of $120.

> loud, and power hungry

Again, not always true. Enterprises do care about power a lot of the time.

I highly recommend the Ubiquiti stuff for home lab use -- most of it is pretty quiet.

If you buy other rack-mount hardware, try to buy at least 2U hardware; the bigger fans are much quieter than the 40mm fans in 1U equipment.

If you must buy non-Ubiquiti 1U equipment, you can usually change the fans out for Noctua fans.

> A single ATX desktop

You can build an ATX desktop into a 4U case. I highly recommend the SilverStone RM42-502 for about $250 on Amazon. It takes standard components, including a standard ATX power supply; a standard ATX, micro-ATX, or mini-ITX motherboard; and standard fans (or even a Corsair liquid cooler). It's basically a standard case. If you use quiet components it will be quiet. My ATX rack mount PC is not even noticeable unless I'm running up my GPU doing machine learning stuff.

There are much cheaper cases available as well, but the SilverStone case is quality, and will last you forever, you can just keep building new PCs into it for as long as ATX/ITX exist.

One of the advantages to building your PC in a rack mount configuration is that it's very easy to stack multiple PCs along with your network routers, switches, NAS, UPS, in one nice rack that's easy to move from apartment to apartment in one piece, and all your cables and connections stay nice and tidy.

It's also ideal if you play with a lot of smaller devices. For example if you want to have a cluster of 10 RPis, a rack mount solution is great for keeping the ethernet and power cables tidy, and it isn't going to be loud or any more power hungry than if you had them spread out across the table.

You can also 3D print rack mounts for non-rack equipment, just to keep them tidy.

My rack: https://i.redd.it/xcss9uassrg71.jpg


> I just have never seen the point of building a home lab like this.

https://www.youtube.com/watch?v=38ApYaywLzs

I have to pay PG&E rates for power ($$$) so I'm a big fan of lower-power hardware for a system I'm going to leave on 24/7, e.g. my 25W-TDP 1U racked 8-core ECC-equipped Atom server built on this board: https://www.supermicro.com/en/products/motherboard/A2SDi-8C+...


I like the Atom C-series... interesting option for edge deployed equipment.


> Rack mount hardware is almost always expensive, loud, and power hungry.

It really depends on the particular hardware. I recently picked up an R720 with 128GB of memory and dual E5-2670v1 for $450 on eBay. It idles at 120W (about the same as my brand new Ryzen 5950X desktop). It is not much louder than my old air-cooled 8700K consumer PC. Of course, it's not much faster either, and it's definitely slower than my Ryzen.

I bought it to learn how to use iDRAC and practice ZFS with 10x $20 1GB 10k RPM SAS drives. Also maybe to give Proxmox a try. All of my practical home-prod stuff runs on an old i5 desktop, not my rack servers.


120W at idle for a desktop computer doesn't look good.


The issue is that power savings stuff is often bad on Linux... especially on older hardware. 'Idle' in this context means running with near-zero load, since a server cannot suspend or hibernate. In theory you could set up hibernate/suspend + WoL, but that tends to not work well.

Some power saving features will really only save 5-10W and can make PCIe cards unstable, hurt hard drive performance, make the keyboard & mouse laggy, etc.

In particular, GPUs often have poor support for the normal power saving features. I just pulled the dedicated GPU (GTX 760) out of my old machine that is now a headless server, and that saved ~20W. What's weird and bad is that that 20W was being consumed without a graphical interface even running.


>What makes even less sense is wanting to use software like vSphere or ESXi, since it's about 10x more complicated than just using virt-manager/QEMU.

I disagree. Assuming your example of a single ATX desktop, ESXi really is easy to set up, and modern versions provide a graphical web client. This assumes you're staying away from vSAN, vMotion, and iSCSI storage.


But, assuming you are learning the VMware stack because of employment, not knowing things like vSAN, iSCSI, and vMotion makes your lab nearly worthless.

Learning vSphere and ESXi properly requires at least a decently sized cluster, especially if you start throwing NSX into the mix.


Forget even iSCSI as the goalpost: a lot of places still use Fibre Channel drives and controllers, and FCoE makes sense mostly once you've hit at least 10 GbE, which is a bit pricey for the sake of learning, while it's basically hot garbage in professional environments worth a career in. Half the point of vSphere in a professional capacity requires the use of several nodes, such as different HA options, how to trunk networks, distinctions between different LUN abstractions (RDMs in physical vs. virtual compatibility modes), and an endless parade of the ways different vendors' SANs and physical switches can completely mess up and ruin your entire month if you're not pedantic about every random switch or flag in configuration. All this vendor minutiae being so important, rather than the general concepts, is part of why I don't do VMware professionally anymore and went quietly to SRE janitoring in major cloud providers, chalking those days up as time in IT. It's just much more practical to learn AWS than to pore over the Cisco Nexus documentation, unless one's career really is in the lower infrastructure levels of tech.


There's a lot to be said for 1L form factors, which were included in the article, which are basically the size of a Mac Mini.

I have an HP ProDesk 405 Mini, with an AMD Ryzen 7 PRO 4750GE (Zen 2 based). 8 cores, 16 threads, with a base clock of 3.1GHz and boosts to 4.3GHz. Includes 8 Vega 64 GPU cores. Tossed 64GB of memory into it and a 1TB Samsung 980 Pro. Supports AMD Dash for basic LOM.

Idles at like 12-14W. Always silent. When stuff with the 5750GE hits, they'll be better still. It's a great (and cheaper!) alternative to something like a Threadripper 5970X (when it hits in Novemberish) desktop if you wanted to cluster a few and have a low-power Apache Spark cluster that can actually rip through things. I think the only downer would be the lack of 10GbE support, unless the next round of 1Ls offers 10GbE cards as options.


I had more space in my network rack than near my desk, so I bought a cheap Rosewill rackmountable ATX case, and rebuilt my old desktop into it (since I pretty much only use my laptop these days).

But I agree that buying old Dell servers for home use is rather silly at this point.


Right? Obviously, everyone should do their thing. But I suppose the tiny tiny "issue" I might have would be - I kind of feel like this reinforces the idea that "having server things in the house is big and complex."

So I encourage everyone who does this to also tell the newbs, "I mean, you could also just slap Linux on that old computer in the corner and do 95% of what I'm doing here, BUT MINE WILL LOOK COOLER."


Agreed. The startup I used to work for (acquired by Intel) both before and after acquisition had a good deal of lab hardware. Between evaluation boards and rack mount hardware our server room was insanely loud - like "it's 25M away and behind a closed door, but you can still tell when the lunchtime QA and benchmarking runs kick off".

There's no way I could live with that in my house, or even a fraction of it.


Exactly. The point of homelabbing at that scale is to dig a fire pit with an excavator, in order to learn how to use said excavator.


I agree. My setup is very much like the setup shown near the end of the article (the one that consists of a couple Synology NAS boxes and what looks like a few Mac Minis or other small form factor PCs). I have a file server with a decently large storage array and a few Raspberry Pis and other small electronic gizmos, some of which connect to my wifi, and some which plug into a PC via various cables (ethernet, USB, etc.)

The only thing out of this that consumes any amount of power worth mentioning is the file server/storage array. I haven't measured how much power it uses (I probably should), but I'm able to minimize it by allowing the disks to go to sleep and the CPU to run slower than when I'm actively using it.

I've never felt limited by this setup at all, but, then again, my home lab isn't really my main hobby.


You really need the separate physical hardware; a home lab is not just a bunch of VMs.

One of the best things about doing my CCNA courses at a Cisco academy was having 20 switches and routers to play with and finding those areas where things do not always work out the way the books say they should.


Yes! Companies refresh Dells pretty often and you can find an i7 for not much money on eBay. Buy two to double up the RAM. If you know the right people at a company, you could get them for free even.

Home labs aren't about millions of hits. It's a playground.


Classic cars are also expensive, loud, and power hungry ;-)

Sometimes it's not about being practical, it's about having fun playing with stuff you wouldn't otherwise normally use.


But excavators are fun!


I've been re-establishing my home lab and decided to get away from rackmount gear. I found ServeTheHome's TinyMiniMicro[1] series invaluable for choosing some mini PCs that would be right for me.

I went with three HP Prodesk 600 G4 that averaged $250/ea with the i5-8500T/i5-8600T, 256GB NVMe, and a total of 40GB RAM. They can go to 64GB RAM, the dual M.2 M-key plus potentially an SFF SATA drive offer plenty of storage potential, they're effectively silent, and power consumption is much lower than a big server full of fans. vPro potentially offers out-of-band remote management but I haven't tried digging into that yet.

I have two dedicated to Frigate with M.2 Coral TPUs. On the third I've been consolidating the sprawl of Linux VMs and Docker containers running home automation and network management stuff. Could probably make do with just two but why buy only two when you can have three?

[1] https://www.servethehome.com/tag/tinyminimicro/


I'm on board with most of this except the suggestion that rack mounted hardware should be kept where people live.

No.

I can't afford a house but like to tinker with networking. I pay more for weaker equipment so it will be low power/low noise. I'd love to just buy half a dozen used Dell PowerEdges but rack-mounted hardware is insanely loud.

A basement is the ideal spot. Water isn't an issue as long as your rack is not directly under anything that could leak (including on higher floors) and has a pallet underneath it. If there is the possibility of your basement flooding more than 2 inches, then you have bigger problems you need to address first. Keeping a rack with electronic equipment there will motivate you to do what you should be doing to the place anyway: dehumidifying and managing pests.


I’ve done this and it wasn’t good. The servers heated up the basement, and chewed through power like crazy. Plus it was heavy as hell and a pain to recycle.

Rack servers use that form factor to maximize expensive datacenter rack space. Once you are not in a datacenter, regular commodity hardware is a better bet.

For home use really laptops are ideal. They have an inbuilt UPS and KVM.


Agreed, I started with rack mount stuff and quickly moved away from it. Very loud and for the budget stuff frequently used by homelabs, very power hungry for not that much performance.


Yeah i tend to agree..

I had a small rack in our old house. Mostly just to house the router/switch because the little cabinet wouldn't fit.

My wife dubbed it the EyeRack, short for eyesore.

For the most part you can easily hide this stuff in a cabinet or a bookshelf, no one will be the wiser.

I'll never really understand people using pizza box style servers, especially 1U units, in a home. With the sole exception of the one time I saw one stood vertically behind an entertainment center. I think I was the only person at the party to notice it.


> I'd love to just buy half a dozen used Dell PowerEdges but rack-mounted hardware is insanely loud.

This is a common belief that isn’t quite correct. My 2U Dell R520 is quite quiet after the initial boot (once the BMC takes over fan control), albeit I had to do some ipmitool magic to get it to not ramp them up with non-OEM PCIe cards installed.

My 1U R420 and R320 boxes? Yeah, they’re a little loud, 40mm fans have to run at higher speeds to get air flowing.

Ultimately my lab lives in my home office and the noise doesn’t really bother me, I wouldn’t put it in the bedroom or living room though.


> but rack-mounted hardware is insanely loud

A 1U chassis stuffed with high power dissipation components will be insanely loud like a jet engine. But it's easy to avoid that. The key for quiet is to use 4U chassis for anything that requires significant cooling so it has large fans.

On my home rack I have three 1U computers, two are Atom-based (fanless with SSD, so zero noise) and one is a Celeron-based with slow fans (about as loud as my MacBook). The larger machine is a 4U with larger fans, so same sound level as any tower case (a 4U is literally a tower case sideways, with rack mounting tabs).


I've kept my server[0] in my basement since I moved into a place with one. I keep it elevated off the floor (not just for water concerns, but airflow, too) and under a table. Although it doesn't, it can make a lot of noise since no one's near it most of the time.

[0] is really an old desktop.


If you live near a co-lo, you might be able to get a half rack and go in on it with a buddy or two.


Note that 10GbE SFP+ switches have come down to $250 or so, and may be worthwhile for homelabs to experiment with. See Mikrotik's CRS309-1G-8S+IN, or servethehome's review (https://www.servethehome.com/mikrotik-crs309-1g-8sin-review-...).

If only because most of us probably already know how to use Cat5/Cat6 Ethernet, but how many of us have experimented with fiber optics?

10Gb Ethernet over Cat6 exists too. But that may be boring for some! Home labs are about experimenting with new things.


https://fs.com for all your optics!

Maybe not all, but very usable and informative regarding prices and availability of all that 'exotic' stuff.


Another advantage of fiber is it helps prevent lightning and other power surges from spreading. If your equipment is protected on the power edge, fiber isolates it on the network side.


Hmm, maybe "fiber" is the wrong moniker here.

I'm more talking about SFP+ ports, because most of your connections within the rack will probably be DAC (copper cables pretending to be fiber) for lower costs. Fiber is really for longer runs. If you only have a few feet worth of cable, I'm not sure if fiber per se is worth it over DAC.

But learning to work with SFP+ hardware is a skill, just like learning to strip CAT6 cable or run it around. Working with DAC cables, or SFP+ modules and finding what works is the "dumb part" of IT, but the kind of stuff you need to practice a few times to understand.

----

Grabbing a few SFP+ ConnectX-2 cards from Ebay (for $30 or so), a few DAC cables, and a $250 switch... you can be well on your way to a 10Gbit network.


Also, one can learn to use a Raspberry Pi or similar with GPIO as a 'programmer' for the EEPROM in 'white label' optics, to make them work in equipment from established brands that wouldn't accept them otherwise, because they're not whitelisted ;-)
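
For the curious, the read half of that is straightforward: the SFP's 2-wire interface is ordinary I2C with an ID EEPROM at address 0x50 (per SFF-8472), so with the module's SDA/SCL wired to a Pi's I2C header you can dump it with smbus2. Rough sketch below -- the field offsets are my reading of the spec, and actually rewriting vendor fields is module-specific and often locked behind a password:

    from smbus2 import SMBus  # pip install smbus2

    SFP_A0 = 0x50  # SFF-8472 ID EEPROM lives at I2C address 0x50

    with SMBus(1) as bus:  # /dev/i2c-1 on most Raspberry Pi models
        eeprom = bytes(bus.read_byte_data(SFP_A0, off) for off in range(96))

    # Vendor name and part number fields (offsets per SFF-8472; verify against the spec)
    print("vendor: ", eeprom[20:36].decode("ascii", "replace").strip())
    print("part no:", eeprom[40:56].decode("ascii", "replace").strip())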


DAC cables are usually vastly more expensive than SFP+ optics and some multimode cable.

Singlemode is not really required in a homelab setting because of the distances involved, but DAC cables are more trouble than they're worth, in my opinion.


> DAC cables are usually vastly more expensive than SFP+ optics and some multimode cable.

FS.com suggests 17€ for a 10GBASE-SR transceiver and 4€ for 1m of OM3, or ~40€ total for two transceivers plus the cable, with three different components that could be a point of failure in many weird and wonderful ways, vs ~10€ for a 1m 10G DAC, which is sold as a single unit and is replaced entirely in the event of a failure.

DAC is also lower latency at these short distances too, as you don’t need to convert electrical to optical and back again. It’s just end to end electrical.


> DAC cables are usually vastly more expensive than SFP+

DAC cables don't need separate transceivers.


they still need an SFP+ (or higher, like QSFP) slot to fit into though.


You need that regardless?


AOC cables are also affordable.


Aruba 1930 is a great option too


I strongly endorse this notion of equipping your own laboratory for your experiments. Learning through doing is always more durable than learning through reading only.

While the author is looking at learning about and perfecting their skills as an administrator of networked computer systems there are other "kinds" of laboratories that people set up.

Mine, and one I'm more familiar with, are electronics labs. If you're going to be learning about circuits and such it helps to have the basic kit at hand. Similarly for people doing robotics, having a 3D printer in their home lab is essential these days. Nearly everything you might do in a home laboratory will involve some sort of data processing so the ideas by the author are great for creating the lab's "IT infrastructure."

In California it also makes it easier to defend the "I built this technology on my own gear (picture/description of lab), so you don't own it" claim. But that may be unique to California.


Yeah, I think I was expecting home lab = "home electronics lab", although some networking equipment is still required. The rack mount stuff is nice, but yeah, scopes, bench DMMs and power supplies, a solder station, cable hangers, and many drawers to sort parts were what I was expecting to see. Still, the rack mount stuff is pretty cool. I have quite a few Raspberry Pis these days and was always looking for some "rack mount" style ways to make the power/ethernet cables nice and still have some easy way to temporarily hook up keyboard/mouse/monitor when it's occasionally needed.


It's not a lab until you lower the Bridgeport milling machine into the basement. Also, you may need three phase, but you will definitely need 240V for the multi-process welder.

https://www.homemodelenginemachinist.com/threads/getting-the...


Surprisingly enough I have two friends who both run mills (one is a Bridgeport the other is a full size Japanese brand) who use single-phase to 3-phase converters. Plugged into a 240V outlet they are essentially motor/generator sets. I was going to use one for a VAX6000 until I figured out the only reason for the 3-phase requirement was the blowers and re-wired it to work on 2-phase.


Exactly, I think this thread is focusing on the utility of an environment from a mature SDec POV. Personally, having a cheap R710 allowed me to practice my sysadmin, networking, security and a host of other skills that directly translated to my (then) current and future roles. I think the perception of a virtualized environment being the same is false and diminishes the sys/network/security admin's knowledge set. It's akin to saying that writing JS is the same as Java, that coding is coding. I'll also say the number of times I've been in rooms of IT pros who don't understand underlying systems, OSes, or networking is disturbingly high.

Lastly, even if it is frivolous I see tinkering with hardware as part of the hacking spirit. It's fun, I like it and it gives me something to grumble about with the grey beards.


Here in the Bay Area I know a guy who both wires and de-wires commercial buildings for networking. It is surprising to me how many businesses move out and simply discard any equipment they installed in the POE to support their network. He gave me a Catalyst 9000 series to play with (he usually sells them to the resellers) and it was cool to be playing with "real" hardware but man, that thing was LOUD with the fans.


Serve the Home did a fab set of reviews on the different SFF machines and how useful they are for a homelab

https://www.servethehome.com/?s=tinyminimicro


This is a good, if opinionated, guide. And to be fair, r/homelab and its ilk are so full of options that it's easy to become overwhelmed.

Personally, I settled on Supermicro because they're modular, and don't care what brand of stuff you throw in them (HP is notorious for spinning fans to turbo if you put non-HP disks into them), although I may be picking up two Dells to complement the one I have for a Proxmox HA/Ceph cluster.


I don't run server anything.

I have Dell 7050s running a VMware/virtualization lab. They can have 64GB of RAM and can handle anything I throw at them. No iKVM though. But I just walk across the room and hook up a monitor the one or two times a year I need to.

Honestly, Synology has been the biggest godsend. I was a SAN admin in a previous life. I have run FreeNAS, Openfiler, Openfiler in HA, Linux+NFS+iSCSI, etc. over the years. Synology generally makes it simple and totally integrated, and allows me to play with other things rather than getting storage working.


Dell 7050 may have vPro, which enables Intel AMT remote KVM access via Management Engine.


I love Dell. iDRAC is killer.


The HTML5 interface is definitely very nice. I haven't had a chance to use Supermicro's HTML5 IPMI, as I have an X9 board, and to my knowledge the minimum support for it is X10.


I have a little experience with both the old and new Supermicro stuff. The new x10 IPMI experience is a lot like the old x9 java app experience, except you don't have to dig out an ancient computer to make it work. (Which feels great by comparison!)


A 6c/12t, 64GB NUC with a 1TB NVMe drive works very nicely. I have an offsite NAS, but the NUC has a Thunderbolt 3 port for a RAID array or 10Gb Ethernet if needed.

KVM on Ubuntu 21 with a ZFS root pretty much covers most of these home lab uses. I have a Linux workstation that also does GPU passthrough for Windows gaming, and use a physical KVM switch to switch between hosts.

Really hard to justify a rack these days


> AMD has really raised the bar. I’m most impressed with the CPU performance of the M715q. They both run quiet and cool, with Ubuntu Server and Windows 10.

The M715q was offered with a fantastic 4750G chip, a Ryzen 7 Pro with 8 cores. Today all one can buy in terms of small form factor business PCs is an M75n with a low-end, low-power Ryzen 3300U, a multiple-generations-old Ryzen 3 with 4 cores.

Small business PCs are great, and for a while, there was serious excitement that AMD was going to make this segment much more interesting. Those dreams seem to have all been cancelled. I'm glad to see that affordable, competent AMD laptops are about, because in many ways it feels like AMD has succeeded so greatly that they have vanished from the market. They don't seem to be allocating production capacity to consumer GPUs, they seem to have withdrawn from this price-conscious market segment... AMD keeps vanishing.


I just did some work on my main server, and my take is a bit different.

I upgraded my desktop and built a server from a Ryzen 1700, put 64GB of RAM in it, and now this one device acts as a DNS filter/cache (Pi-hole), VPN server (PiVPN WireGuard), and a 10TB ZFS NAS. This is just the base; I also use it for gaming and labs.

The main recommendations:

Fractal Design Define R5 - this is a large case, and is pretty wide - but it is a dream to work in. The extra width gives plenty of room behind the motherboard for hiding cables. It has quiet fans, it is built to minimize noise, and it can hold 8+ hard drives.

OS: Proxmox. I use this as the host OS and configure ZFS on the host. I then expose the ZFS pool as a NAS via a privileged container running TurnKey Linux.

If you get some multi-port NICs on it, you can put an OPNSense firewall as a VM, and use the machine as your router as well. In the end, you would only need UPS, modem, small switch, and the host.


So many of these posts gainsaying the practice of a home lab are spot on. In my view it's plain foolish to try and cram enterprise gear into a living space. It's almost too hot and loud for my office, why would I take any of that home?

The other posts talking about Tiny Mini Micros are on the right track but I think it goes further yet - there's good reason to have a small rack of crap in the corner:

- ISP hardware.

- pfSense gateway.

- Wifi base station.

- A good gigabit switch for the house.

- Those tiny mini micros or Mac minis for lab stuff.

- a NAS chassis or two.

- Raspberry Pi clusters.

- PiDP-11 or other such hobby stuff that needs a place to sit and blink.

There are plenty of other uses too, like security DVRs, ingest stations for cameras/recorders, optical and tape media devices, etc.

None of that stuff is hot or loud, but you probably wouldn't want it piled up on your desk or spilling out of some bookshelf. And I think the article kinda gets at that point, tbh.


> Enterprise features: Ubiquiti EdgeRouter ER-10X, 10 Port Gigabit Router with PoE Flexibility – $110 (specs) – (10) Gigabit RJ45 Ports, PoE Passthrough on Port 10, Dual-Core, 880 MHz, MIPS1004Kc Processor, 512 MB DDR3 RAM, 512 MB NAND Flash Storage, Internal Switch, Serial Console Port

Has anyone been able to purchase a small Ubiquiti EdgeRouter in the last six months? They've been out of stock at Amazon, Newegg, B&H. Beginning to wonder if they have deprioritized the consumer market, since other vendors are shipping routers.


Ubiquiti appears to have been prioritizing their own online store since the supply chain disruptions began. The ER-10X in particular may also be suffering from a general lack of popularity -- they've only had stock twice this year in fairly low quantities. They sell through fairly quick, but not nearly as fast as the ER-X which has been stocked regularly and in much larger quantities.

There's an inventory tracker for the Ubiquiti store on the Discord.

https://discord.gg/ui


The EdgeRouter 4 is available on their own store right now. Most of its lighter-weight siblings aren't, but their 3 most expensive models are also out of stock. None of their consumer WiFi gear seems out of stock, though. I'd guess silicon shortages before assuming they deprioritized the consumer market.


Yeah, at that $250 price point there are x86 coreboot alternatives.


Are they still making them? I thought they had switched to pushing their Dream Machine as opposed to the Edge Router series.


I _hate_ the Dream Machines. We've been switching to them at work and the whole cloud UI is just an absolute mess. It's so hard to find anything.

I will be sad when the last of our Mikrotik stuff gets swapped out.


Yes, I bought an ER-4 last month from NewEgg. In fact, it is in stock and has a promo code for $10 off currently. https://www.newegg.com/p/0XM-0013-00087


I was hunting the 8-port switch for a while. I had to set up my own stock monitoring with alerts and managed to catch one that way.


Highly recommend the Discord on https://www.serverbuilds.net/. It's a great community built around home lab hardware using off-lease or decommissioned enterprise hardware. Guides range from 4U virtualization builds to custom pfSense routers.


Ikea Lack Rack for rack mounting on the cheap! https://boingboing.net/2020/08/14/lack-rack-ikeas-cheapest-t...


For UK people https://www.bargainhardware.co.uk/ is an _excellent_ source of kit.

Personally I steer away from Cisco. Yes, some people in enterprise swear by it, but I _personally_ hate it with a passion. However, there is a fucktonne of it on eBay.

I use Ubiquiti for APs, but I've not tried their switching. Currently I have some D-Link "smart" stuff. It's PoE and has VLANs, which is good enough for my purposes. Can do 10gig; not bad for < £120 (second hand).

Firewall: I'm all for pfSense. I've never liked hardware firewall/router appliances. They've always sucked.


Would be great to have more VGA-over-IP solutions. My biggest problem with headless servers is that when they don't boot, it's a PITA to move them somewhere else to plug in a monitor and keyboard. It's the main reason I gave up and just hire a cloud host - even though it's expensive.


https://www.aten.com/us/en/products/kvm/ ?

Just as an example. There are much more, this was just from the top of my head, because they tend to just work.


A KVM still needs a monitor, which is what I'm trying to avoid.


When I had a larger homelab, I found it was a large proportion of my electricity usage.

A constant 200W adds up, and 200W of heat is not desirable during Australian summers.

I have settled on a PC Engines APU2 acting as my router, and a Raspberry Pi acting as my IoT brain. 10W.

My other devices sleep.


UPS plus a generator is a must have if you live someplace with severe weather.


I made a rack out of some dumpster-dived supermarket shelves, lumber, a truck air filter and a forced draft fan. The thing doubles as drying cabinet for produce (mint, mushrooms, fruit etc.) by having the equipment in the top half of the rack followed by an air flow divider and 8 rack-sized metal-mesh-covered drying frames. From top to bottom the thing contains:

* D-Link DGS-3324SR (managed switch, €35)

* HP DL380G7 with 2xX5675 @3.07GHz, 128GB (ECC) RAM and 8x147GB SAS drives (€450)

* NetApp DS4243 (24x3.5” SAS array, currently populated with 24x650GB 15K SAS drives, €400)

* the mentioned airflow divider

* 8 drying frames

It is managed through Proxmox on Debian and runs a host of services including a virtual router (OpenWRT), serving us here on the farm and the extended family spread over 2 countries. The server-mounted array is used as a boot drive and to host some container and VM images, the DS4243 array is configured as a JBOD running a mixture of LVM/mdadm managed arrays and stripe sets used as VM/container image and data storage. I chose mdadm over ZFS because of the greater flexibility it offers. The array in the DL380 is managed by the P410i array controller (i.e. hardware raid), I have 4 spare drives in storage to be used as replacements for failed drives.

The rack is about 1.65m high, it looks like this (here minus the DS4243 array which now sits just above the air flow divider):

https://imgur.com/a/M4Lbf1K

In the not-too-distant future I’ll replace the 15K SAS drives with larger albeit slower (7.2K) SAS or SATA drives to get more space and (especially) less heat - those 15K drives run hot. After a warm summer I added an extra air intake + filter on the front side (not visible on the photos), facing the equipment. This is made possible by the fact that cooling air is pulled through the contraption from the underside instead of being blown in through the filter(s).

I chose this specific hardware - a fairly loaded DL380G7, the DS4243 - because these offered the best price/performance ratio when I got them (in 2018). Spare parts for these devices are cheap and easily available, I made sure to get a full complement of power supplies for both devices (2 for the DL380G7, 4 for the DS4243) although I’m only using half of these. I recently had to replace a power supply in the DL380 (€20) and two drives in the DS4243 (€20/piece), for the rest everything has been working fine for close to 4 years now.

On the question whether this much hardware is needed, well, that depends on what you want to do. If you just want to serve media files and have a shell host to log in to the answer is probably ‘no’, depending on the size of the library. Instead of using ‘enterprise class’ equipment you could try to build a system tailored to the home environment which prioritizes a reduction in power consumption and noise levels over redundancy and performance. You’ll probably end up spending about the same amount of money for hardware, a bit more in time and get a substantially lower performing system but you’d be rewarded by the lower noise levels and reduced power consumption. The latter can be offset by adding a few solar panels, the former by moving the rack to a less noise-sensitive location - the basement, the barn, etc.

As to having 19" rack equipment in the home I'd say this is feasible as long as you don't have to sit right next to the things. Even with the totally enclosed, forced-draft rack I made the thing does produce enough noise to make it hard to forget it is there.





