Hacker News

It's a largely academic question. HDDs simply don't have the performance required for modern use, and NAND flash SSDs are not archival if left unpowered, so it's SSDs for all online storage and HDDs for backups.

You do have to take precautions: avoid QLC SSDs and SMR hard drives.



HDDs are the backbone of my homelab since storage capacity is my top priority. With performance already constrained by gigabit Ethernet and WiFi, high-speed drives aren’t essential. HDDs can easily stream 8K video with bandwidth to spare while also handling tasks like running Elasticsearch without issue. In my opinion, HDDs are vastly underrated.
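A back-of-envelope check makes the "bandwidth to spare" point concrete. The figures below are assumptions, not measurements: roughly 200 MB/s sustained sequential throughput for a modern HDD and roughly 100 Mbit/s for a high-bitrate 8K stream.

```shell
# Assumed figures: ~200 MB/s HDD sequential read, ~100 Mbit/s 8K stream.
HDD_MBIT=$((200 * 8))    # 200 MB/s sequential -> 1600 Mbit/s
GBE_MBIT=1000            # gigabit Ethernet line rate
STREAM_MBIT=100          # rough 8K HEVC/AV1 bitrate

echo "HDD: ${HDD_MBIT} Mbit/s, link: ${GBE_MBIT} Mbit/s, stream: ${STREAM_MBIT} Mbit/s"
echo "Concurrent 8K streams the link can carry: $((GBE_MBIT / STREAM_MBIT))"
```

On these numbers the HDD out-runs the gigabit link, so the network, not the disk, is the ceiling.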


I run a hybrid setup which has worked well for me: HDDs in the NAS for high-capacity, decent-speed persistent storage with ZFS for redundancy, low-capacity SSDs in the VM/container hosts for speed and reliability.


Same here. I run my containers and VMs off of 1 TB of internal SSD storage within a Proxmox mini PC (with an additional 512 GB internal SSD for booting Proxmox). Booting VMs off of SSD is super quick, so it's the best of both worlds, really.


Yes, those workloads are mostly sequential I/O, which HDDs can still handle well. Most of my usage is heavily parallel random I/O, like software development and compiles.

You also have the option of using ZFS with SSDs as an L2ARC read cache and a SLOG (dedicated ZIL device) for synchronous writes, to get potentially the best of both worlds, as long as your disk access patterns yield a decent cache hit rate.
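As a sketch, attaching SSDs to an existing pool looks like this. The pool name `tank` and the `/dev/disk/by-id/...` device names are placeholders; adjust them to your system. Note the SLOG only accelerates synchronous writes (e.g. NFS, databases), not all writes.

```shell
# Add an SSD as L2ARC (read cache). Losing it is harmless; it can be
# removed again at any time with `zpool remove`.
zpool add tank cache /dev/disk/by-id/nvme-ssd0

# Add a mirrored SLOG (separate ZIL device) to absorb synchronous writes.
# Mirroring it protects recent sync writes if one SSD dies.
zpool add tank log mirror /dev/disk/by-id/nvme-ssd1 /dev/disk/by-id/nvme-ssd2

# Verify the cache and log vdevs appear in the pool layout.
zpool status tank
```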


I do something similar for my primary storage appliance, which has 28 TB available. It has 32 GB of system RAM, so I push as much into the ARC cache as possible without the whole thing toppling over; roughly 85%. I only need it as an NFS endpoint. It's pretty zippy for frequently accessed files.
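For reference, capping ARC at ~85% of 32 GiB on Linux/OpenZFS is done via the `zfs_arc_max` module parameter, set in bytes. The arithmetic and the config line it produces:

```shell
# 32 GiB = 34359738368 bytes; take 85% (integer division drops the
# fractional byte).
ARC_MAX=$((34359738368 * 85 / 100))
echo "zfs_arc_max = ${ARC_MAX} bytes"

# On Linux this line would typically go into /etc/modprobe.d/zfs.conf
# (and can also be written to /sys/module/zfs/parameters/zfs_arc_max
# at runtime, as root):
echo "options zfs zfs_arc_max=${ARC_MAX}"
```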


I need big drives for backup. Clearly, there are even more reasons to use HDDs now.


Even in this case, you need to be careful with how you use HDDs. I say this only because you mentioned size. If you're using big drives in a RAID setup, you'll want to consider how long it takes to replace a failed drive. With large drives, recovering an array can take quite a long time, simply because copying 12+ TB of data, even to a hot spare, takes time.
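A rough lower bound on that rebuild time, assuming a hypothetical ~200 MB/s sustained write rate to the replacement drive with no competing load (real resilvers are usually slower, since they share the disks with normal traffic):

```shell
# Best-case time to fill a 12 TB replacement drive at 200 MB/s.
BYTES=12000000000000    # 12 TB
RATE=200000000          # 200 MB/s, assumed sustained
SECS=$((BYTES / RATE))
printf 'Best case: %d seconds (~%d hours)\n' "$SECS" "$((SECS / 3600))"
```

So even under ideal conditions the array spends most of a day degraded, which is exactly the window a second failure can hit.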

Yes, there are ways to mitigate this, particularly with ZFS dRAID, but it's still a concern that applies mostly to large HDDs. For raw storage, HDDs aren't going anywhere anytime soon, but there are still some barriers to using very large drives efficiently.



