Anyone know about the status of the Kobol Helios64? So far it seems the best possible option hardware-wise for a compact ZFS capable NAS, although obviously very different in price from the Odroid HC4.
I've got one! It took a while to ship because it got wedged in the middle of the COVID supply-chain and shipping meltdown, but now that I have it, it's a solid device and a massive upgrade from the 4.
I have Armbian loaded on it; the OS (other than the Debian base) is DIY, but Armbian has templates for common stuff (which I've never tried because I just use sshfs).
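For reference, the sshfs approach is one command each way (host and paths here are placeholders):

  # Mount a directory from the NAS over plain SSH (FUSE-based, nothing to set up server-side beyond sshd)
  sshfs user@hc4.local:/srv/data /mnt/nas

  # Unmount when done
  fusermount -u /mnt/nas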
My understanding is that even beyond deduplication, ZFS performance relies heavily on caching data in memory, and without that it's actually quite slow. As in, it'll work with even 1 GB of RAM, but becomes less and less useful.
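If RAM is tight, you can at least bound how much the ARC takes; a minimal sketch, assuming a 1 GiB cap is acceptable for the workload:

  # Persistently cap the ZFS ARC at 1 GiB (value is in bytes)
  echo "options zfs zfs_arc_max=1073741824" | sudo tee /etc/modprobe.d/zfs.conf

  # Or apply immediately without a reboot
  echo 1073741824 | sudo tee /sys/module/zfs/parameters/zfs_arc_max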
My use case would be two RAID1 mirrors plus a spare disk. I already have a similar 5-year-old config on a Mini-ITX Atom board with 4 GB of RAM and Nas4Free; no problems so far. On ARM I would probably go with OMV (suggestions?), since XigmaNAS doesn't support that architecture (yet?).
Interesting device! There are more details linked in the CNX Soft post [1], in particular a thread on ODROID.com showing benchmarks [2]. The two disks are connected to the CPU via an ASM1061 PCIe x1-to-SATA controller, meaning it can't handle full throughput from two SSDs at once, but I'd guess spinning drives should be fine.
Skip the middleman and connect your hard drive directly[1]! I think you may have trouble saturating the hard drive's dual 25GbE links, and a decent switch is gonna be pricey, but that backup should go very, VERY fast indeed when you're done.
Kioxia (née Toshiba) is not the only one doing this, thankfully, because I think it's interesting as heck. A lot of hard drives already have a good number of speedy Cortex-R processors on them; why use those to talk NVMe or SATA when you could have the hard drive talking Ethernet directly?
I really wish we had a better understanding of the failure modes of NVMe beyond wearing out the flash. Let's say we're writing a vibration sensor's data to the drive once an hour. We leave 50% of the drive unpartitioned to give the drive controller plenty of headroom for whatever it wants. It might take centuries to reach the expected TBW, yet there is still some kind of MTBF that is likely to hit first.
The above workload scenario is 100% write-based. What about drives in data centers holding lots of user images that nobody is incentivized to prune? Such a drive could, hypothetically, be written once and then read forever. When do sectors start going bad there? Or is the failure mode elsewhere, in other components? Will the drive last a century? More?
What other failure modes should we expect, and how probable are they?
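For what it's worth, the drive does expose some of these health counters itself; a quick way to peek at them, assuming nvme-cli is installed and the device node matches yours:

  # Dump the NVMe SMART log: percentage_used, media_errors, and
  # data_units_written are the ones to track for wear over time
  sudo nvme smart-log /dev/nvme0

  # smartmontools shows much the same view
  sudo smartctl -a /dev/nvme0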
I love your challenge here @JAlexoid. Switching from thinking in terms of TB/$ to TB/y/$ is a great twist.
It depends on your use case, I would argue. If you use it as a pure NAS, it wouldn't matter. If you want to do fast encoding or something similar and get bottlenecked by your storage, it could be annoying. I do post-processing on my NAS and use my SSD as swap, so I like every MB/sec I can get :)
Edit: argh, I thought this comment was responding to a different comment of mine (where I talked about my home Ceph setup, which is another reason I need fast swap).
Yes, I built something using a standard x86_64 (Celeron) setup. I heavily experimented with an all-ARM setup though. I still find it pretty hard to balance a cheap build that is expandable. I recently lost an OSD and my setup had a consistently high load while recovering.
I don't really like the power consumption of my current setup and am thinking of moving to ARM. (I use multiple small HDDs between 1TB and 10TB and have different pools with different redundancies. I'm thinking of replacing it all with a simple RAID or something and 3-4 12TB HDDs with simple redundancy. I could use Minio as an S3 replacement and btrfs as the filesystem [I won't use ZFS, for various reasons]; btrfs has snapshots, compression, and deduplication, so everything I want and need -- rough sketch below. I never did use RBD over the network.)
Edit: This is my home setup I'm talking about. At work we got a real setup.
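A minimal btrfs sketch of those features (device names and paths are made up; note btrfs dedup is offline, via an external tool such as duperemove):

  # Mirror both data and metadata across two disks, mount with transparent compression
  mkfs.btrfs -d raid1 -m raid1 /dev/sda /dev/sdb
  mount -o compress=zstd /dev/sda /srv/pool

  # Cheap read-only snapshot of a subvolume, e.g. before a risky change
  btrfs subvolume snapshot -r /srv/pool/data /srv/pool/data-$(date +%F)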
Just buy a FreeNAS Mini from iXsystems. It just works. I have had one for 6 years now with no issues. Yes, it costs money, but if you can pay for 4x12TB then I assume you can pay for the Mini. :)
When I was younger and still an everyday networking guy, I had a rack with tons of kit, ran my own DNS, the firewall was OpenBSD, etc. Today I have a family and a senior role at a startup. My network is 100% Ubiquiti, storage is a FreeNAS Mini, and every TV has an Apple TV with the same apps and setup. Why? Because I do not have time to play anymore, and because if I get killed in a car accident, my non-techy wife can understand the tools I left behind.
My Celeron boxes spend most of their life at ~6W, burst to 15W. Drives are all attached via USB, but since these are Intel Broadwell based Celerons (5 years old now! wow, long in the tooth. time flies), there are 4x USB3.0 root complexes that can run pretty much full tilt.
Trying to go lower power doesn't feel worth it, particularly when the hard drives drink so much power.
I feel like I'm spilling the beans sharing this, but ServeTheHome has an excellent "TinyMiniMicro" series on ~1L mini-PCs, which are widely used in businesses. They are much beefier boxes than my little Celeron, now often 6-core with way higher clocks. They come with 35W and 65W CPUs, but even the couple-generations-old models still tend to idle at ~10W and otherwise sip power only as demanded[1].
If power is your concern, get rid of your small hard drives. Switch to ARM? Meh.
Fast encoding and NAS are not synonyms. You need a compute system for that, so you're already in the realm of a storage server. A NAS is not a "storage server"; its function is primitive.
There are also plenty of boards around Ryzen Embedded (V1605B/V1807B/etc.) if you need more lanes. There's the Udoo Bolt. Sapphire and iBASE also have several boards.
I'm not aware of any decent contenders with multiple m2 sockets based on ARM, if that's what you're specifically looking for. But you can always buy a cable or board that splits or bifurcates a single interface into multiple m2 slots. I haven't tried any of these myself but there are plenty of options on Ali, eBay, or directly from embedded/industrial vendors depending on your cost/reliability preferences.
I've spent quite some time looking around for the optimal low-footprint storage cluster options and eventually ended up with mITX form-factor after realizing I want plenty of RAM and need more cores per node to handle ZFS+gluster+encryption.
There are plenty of options if you don't, though!
I'd advise checking out the APUs either way. They served me very well until I started upgrading, and they have now been repurposed for other uses.
I did discover that JMicron cards[1] kinda work if you leave the drives powered off until the kernel boots and then power them on and let them hotplug. Wasn't ever able to make it work reliably though.
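For anyone wanting to script that dance, the hotplug kick is just a sysfs write (the host number varies per board; check /sys/class/scsi_host):

  # After powering the drives on, force the SATA host to rescan its ports
  echo "- - -" | sudo tee /sys/class/scsi_host/host0/scan

  # The disks should now appear
  lsblk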
mPCIe and M.2 are different physically, so be careful in selecting cards.
I've had positive experiences with the APU2 line as well - getting ECC support on the RAM now that firmware is being developed in the open was a really nice change: https://pcengines.github.io
Right, my bad. I can't edit my comment anymore but yes, it's mSATA/mPCIe (which have the same physical interface so you have to be careful which card you put in which port), not M.2.
---
These interfaces can be really confusing, and it doesn't help that even manufacturers and vendors often incorrectly mix them up as well, which makes searching for adapters quite an ordeal...
The Wikipedia page[0] is a good primer now, but I remember spending an hour or so figuring out if I could put an M.2 2260 in an M.2 2280 slot (hint: the last digits only signify the length of the card, so yes). And then there are the B-key/M-key/E-key varieties, which ARE incompatible. I swear PCIe/SATA are worse than USB-C/TB for figuring out what you can put where, to what effect...
The APU? The manufacturer also sells cheap metal cases that double as heatsinks, check out the webshop. The black one has the best heat-dissipation. If you want to fit 2.5" disks you'll have to DIY a bit.
The processors on ARM SBCs are mostly industrial or automotive designs and feature sets lag behind the consumer laptop/desktop market. Many of those processor families don't yet support NVMe, or just started supporting it recently.
Fully agreed. I really want the low-end SBC market to start dabbling in some expandability. This S905X3 chip has one port that multiplexes PCIe OR USB3, and they use it for SATA; I can't wait for these SoC systems to start offering just a little more high-speed connectivity, please.
Marvell's EspressoBin[1] was a great board, because it was small, easy to power, easy to plug a drive into, had space for wifi, had decent ethernet. So much goodness built in, at a great price point. Please all, more ambition on the I/O front!
The Rockpro64 comes with a PCIe x4 port; and it works well with a M.2 NVMe adapter. You could try it with a double M.2 adapter, but I'm not sure what speeds you can get with those.
A MediaTek MIPS CPU and only 512MB of RAM, though. The community is kinda moribund, and the installation instructions don't match the firmware that came on my device. I've made a couple of half-hearted attempts to update to later versions but haven't had any success so far.
If you want to set up the GnuBee, be sure to have a good enough power supply and use https://github.com/neilbrown/gnubee-tools instead of the official firmware. I would cross-compile, since compiling on the GnuBee itself didn't give me a working image, but Neil Brown's efforts getting a nearly-mainline kernel to work are fabulous.
There are a few pitfalls, so make sure to check your device's MAC address before and after the install. Also do a rudimentary performance check on the HDDs with hdparm. I was disappointed with the GnuBee as well because of the (IMHO) stupid design decisions around the bootloader and the bugs they entail when running a newer system from an SD card. (You have to name your partition a certain way, and it switches to that partition halfway through boot. Also, they never really tried (AFAICT) to mainline u-boot, and the version they used is... not nice.)
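The hdparm check is quick; something like this (device node is an example):

  # Buffered and cached read timings; repeat a few times and compare disks
  sudo hdparm -tT /dev/sda

  # Identification data, e.g. to confirm the negotiated link speed
  sudo hdparm -I /dev/sda | grep -i speed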
I really wanted to like it and to use it, but for me it was a bit unstable with a full bay, and since I didn't want to buy a bunch of new 2.5" disks for an unstable system, and the 2.5" disks I tested with were all just 250GB, it doesn't really work for me.
I also discovered that the 2.5" drives I bought to use with mine (Seagate ST3000LM024) are DM-SMR and thus fall out of even mirrored arrays within 24 hours of booting. :(
Also, given the low RAM and the MIPS architecture, running Go-based software like Minio or Restic seems unlikely (or at least unstable).
One of these days someone will do a proper homebrew NAS board. Surely.
Not a dumb question! If you're talking about spinning rust, this is precisely the job of some particularly niche sysadmins (the kind that manage storage clusters) to know. I'm not one of them, but a quick search produces this:
The easier answer I'm comfortable giving (despite not being a sysadmin by trade) is that good storage software/setups should abstract over and protect you from something like this, because SSD/HDD bit rot is absolutely a thing regardless of orientation -- if you're not using a checksumming filesystem (e.g. ZFS) or storage layer (e.g. Ceph's BlueStore), you're exposing yourself to data corruption. The only question left is whether you'll actually have your drives long enough for them to exhibit complete failure from orientation; you can't know that except by testing at scale like Backblaze and other providers do.
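Concretely, that protection only pays off if you scrub periodically, so latent corruption gets caught and repaired from redundancy; e.g. on ZFS (pool name is a placeholder):

  # Walk every block in the pool, verify checksums, repair from the mirror
  sudo zpool scrub tank

  # Check progress and any checksum errors found
  sudo zpool status -v tank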
Common wisdom is that it doesn't matter, but you shouldn't change it. Anecdotally, drives that ran mounted vertically and were then moved to horizontal mounts do show increased failure rates.
Somebody might be able to correct me on this, but I believe that Backblaze's vaults use vertically mounted drives, so that might be some useful data to apply to answering that question.
Backblaze was not alone (not sure who was first) in mounting drives vertically. Xyratex did it in their near-line storage units (e.g. the RS-4835), as did Hitachi and others. Once you've seen them, it's the obvious choice for maximizing the number of 3.5" drives in a rack (~48 drives in a 4U chassis). Heavy beasts, though -- you want a lift to mount them.
Don't know about reliability relative to horizontal mounts, but AFAIR the drive manufacturers recommend mounting either vertically or horizontally, and we didn't observe any obvious issues.
Backblaze publishes good data, and given that other manufacturers (such as Dell) make storage appliances with vertically mounted drives, that's a data point that they at least think it's okay-ish.
So, much of running a NAS comes down to software support, which isn't something I think you're going to get from this device. Yes, it's inexpensive, but say you want to run FreeNAS (or another NAS distro, or a hypervisor with a NAS): I don't think you get out-of-the-box support for arbitrary OSes on this device, because it doesn't adhere to any of the ARM platform standards.
So, at this price point you're probably better off with an RPi4 and a USB3 disk enclosure. The nice thing about that solution is that the RPi is slowly gaining full out-of-the-box OS support (VMware, Windows, *BSDs, along with the various Linuxes), it has much better cores (A72s), and the end user isn't limited to a 2-disk solution, as there are a number of 1/2/4/5/etc.-bay USB3 enclosures. The end result is a flexible solution which can saturate the 1Gbit Ethernet on the RPi4.
Counterpoint: This is an Amlogic S905X device, presumably similar under the hood to the S905. Armbian already supports at least one such device (ODROID-C2) using the current kernel, so once the device tree configuration is worked out, NAS software support ought to be practically the same as any other linux box.
It has at least one advantage over the Raspberry Pi for NAS applications: a real SATA bus. That means no mucking about with USB-to-SATA bridges, many of which do awful things like meddle with SMART reporting or spin down disks when they should not.
The barrel connector for power is also an advantage IMHO, since mysterious failures due to voltage sags turn out to be very common with USB-powered single-board computers. (USB power can be done without such problems, but extra care must be taken to avoid the many cables, power supplies, and connectors that are not up to the task.)
The S905 system-on-chip might not be capable of maxing out the throughput of an SSD, but for people who want a spinning rust server that's light on power consumption, it could be a pretty good fit.
Thomas Kaiser, as usual, has some good insights over in the CNX Software comments:
Also worth mentioning: The RockPro64 has a faster CPU, a PCIe slot, a NAS enclosure, and mainline support in Debian unstable (though the Debian installer doesn't yet install a boot loader; that must be done manually for now).
You're right, the RPi4 is hardly the best in many ways, but it is the one board that likely isn't going to be abandoned in a couple of years, and despite its very long list of failings it has basic support in many places today.
OTOH, for double the money, the ODROID-H2+ kicks the pants off all these devices, with dual 2.5Gbit Ethernet, actual SATA ports _AND_ USB3, expandable RAM, NVMe boot options, and out-of-the-box software support for pretty much anything you can imagine, including FreeNAS, CrashPlan, Plex, and the various other things people want to "just work".
> the rpi4 is hardly the best in many ways, but it is the one board that likely isn't going to be abandoned in a couple years.
That might have been true once upon a time, but not any more. Once a board is supported by the open source community, it is no longer dependent on vendor updates. I think Armbian has a good track record of long term support, and mainline Debian has a fantastic record of it. The RockPro64 (which I mentioned above) doesn't even require any vendor blobs.
> the ODROID-H2+ kicks the pants off all these devices
That device uses an Intel Celeron processor, probably draws more power, and costs nearly twice as much. I don't consider it comparable to a low-power ARM board outside of a superficial sense. (Nice to know it exists, though; it seems like an interesting middle ground.)
The RK3399 devices are definitely on the better side of open source support. A large part of that is that Rockchip has been more open than other vendors recently, but it's still, AFAIK, not as clean as you suggest.
As for whether the Celeron is better or worse power-wise, I couldn't say, but I certainly wouldn't make such sweeping statements. A lot of people think ARM = low power, but the power management on these more "open" boards is actually quite bad. A large part of that is simply that mainline Linux is missing many of the finer points of controlling these boards. So maybe the board could draw X watts on a given workload, but due to the lack of frequency control on the memory controller (or whatever), it ends up being 5-10X. The Intel boards tend to be much more dynamic at this point.
For example, comparing an RPi4 and an Atomic Pi: you wouldn't guess it from the massive heatsink on the Atomic Pi, but it ends up drawing much less power than the RPi4 on quite a number of workloads.
A new take on the entry-level NAS, nice! If I weren't already running a Synology (which runs without issues), I'd get this. Small, cheap, and very decent specs.
Sincere question from a NAS-curious noob: if I wanted a setup for making media, documents, etc. available on my network, would this be a reasonable option? All I really know is that a NAS is not a backup. I've gone ahead and purchased storage for backup, so now I'm curious whether this would be a better budget choice than something like a Synology DS218 (which is much more expensive).
> a setup for making media, documents, etc available on my network
Sure, it's gonna do that. The major difference from a Synology is that the Synology has a pretty case and usable out-of-the-box software, while here you've got to be comfortable with some Linux stuff.
If you have a separate backup for the things that cannot be easily replaced (movies and music can just be re-downloaded), then I don't see the need to worry too much about redundancy or ECC RAM or ZFS etc.
Thanks very much. May be worth checking out then. It would be nice to have something simple to start with and learn what my needs really are before investing in something that costs multiple hundreds of dollars.
I got a Synology 218. It does checksumming like ZFS to guard against bit rot, and I didn't have to spend any time wading through how-tos and blog posts to make it work; it "just works" out of the box.
It depends how deeply you want to get involved in all this stuff. I used to have a Synology box and it was great in terms of plug'n'play: you set it up via a nice web UI and it... just works! It also has downloadable plugins/apps that let you bolt interesting functionality onto the NAS.
By comparison this Odroid box comes with Ubuntu preinstalled and that's it. Much more flexibility, but much more work involved too.
Very interesting. If this thing can run Ubuntu, presumably you can run any OS and customize it, right? When it comes to storage, I don’t want anything unless it’s zfs or some other feature equivalent system.
ARM SBCs often only work with a few "approved" distributions (downloaded from the SBC vendor) because the kernel patches required for it to boot aren't in mainline Linux.
There are a lot of distributions (and pieces of software) that care about compiling on ARM/other architectures thanks to various disparate things that exist -- raspberry pi's popularity, go/rust/and friends making cross compiling easier, etc.
I've had a good experience with the ODROID U2. (I personally think ODROID is my favorite SBC producer, so I'm biased, but most boards have some Debian-based or similar distro that you can run and do productive things with.)
In the more concrete world, both ZFS and Ceph (BlueStore has checksumming now) run on ARM as far as I can tell:
Releasing a board meant for NASes that you can't run either of those on seems pretty short-sighted for a company that's definitely not new to the SBC game.
Note that Ceph only supports 64-bit ARM, not 32-bit ARM. They removed binaries for the latter from their website, and the armhf port in Debian has a critical bug if you have non-armhf hosts in your cluster. https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=961481
Fortunately the Odroid HC4 is 64-bit, unlike its predecessors (HC1 and HC2).
There was a question asking about upstream support for the Odroid N2+, currently easily the fastest, most powerful single-board computer (SBC) under $100 (it starts at $63). I gave a pretty robust answer then[1] with lots of details.
The short answer is: the Amlogic chip's ARM "Bifrost" GPU is still basically unusable, because ARM has a huge problem upstreaming anything, but that could maybe change some day. But the rest of the platform is what I would classify as "fantastically well supported", and will run excellently with a mainline kernel (and also with fantastic uboot support)[2].
We're still some time away from boot systems being standardized, so u-boot, which targets embedded systems, is pretty much "the" way to go for now. But eventually we all hope the Server Base Boot Requirements[3] -- which mandate ACPI and UEFI support -- will become semi-standard, such that we can use better-secured and more capable standardized bootloaders.
Not necessarily. I know at least the older Amlogic (805?) based boards run a custom kernel, which ended up stuck at a specific version. You can't just install any ARM distro on them; it needs to be built specifically with the hacked-up kernel. And support tends to end sooner and get spotty (it looks like the Odroid C1 is finally supported by Armbian, but it hadn't been for quite some time). I don't know how much of this applies to this specific board, but there's a lot we take for granted in x86/amd64 land.
What holds me back from Odroid is their scarce support for updates. I remember the Odroid X series was left on an ancient Ubuntu and kernel due to lack of upstream support. Better to buy an RPi4 with a NAS HAT.
A couple of years ago I bought a C2 and it still hasn't progressed beyond the 3.14 kernel required by the versions of Android Samsung was shipping on phones using that chip[1]. Also, getting the display working correctly with a monitor attached via an HDMI to DVI-D cable was obnoxious (again reflecting the CPU's Android heritage - hardware expects to be told exactly what's attached and doesn't try to detect anything).
If you expect a flashable vendor-provided OS image with long-term support and vendor repos that get continuous updates, I think you'll be frustrated with anything other than a Raspberry Pi or NVIDIA Jetson.
If you're fine with community images and configuring/compiling kernels yourself, vendors like Odroid/RockPi/NanoPi/OrangePi/Pine64 are fantastic. They're hardware manufacturers, not OS vendors.
At least the spec is open enough that anyone has the ability to do things like Armbian without having to do reverse-engineering or extracting data from binary blobs (which is the case with a lot of other vendors).
The Odroid C2 was released in 2016 and is now EoL.
Even the Wandboard (wandboard.org) has an "open" specification, although the graphics chip (a Vivante GC2000) is closed source and the official binary blobs only work on Ubuntu 11.10.
Even today the board can run the latest kernels, since it's based on an i.MX6Q chip, but if you need anything involving graphics (even OpenCL), you're out of luck.
I look forward to more ARM boards getting mainline kernel support for exactly this reason.
> Better buy a RPi4 with a NAS Hat
Do you know of such a HAT that does USB-attached SCSI Protocol correctly and reliably, and passes through SMART reporting without meddling, and doesn't inappropriately spin down the disks under its control? I wouldn't mind bookmarking such a thing if it exists.
I considered the RPi for a recent NAS build, but I didn't want the higher CPU load imposed by a USB-to-SATA bridge (as compared to native/PCIe SATA), nor did I want to play chipset roulette looking for a SATA bridge free from the problems I mentioned above.
I ended up using a RockPro64 with a Marvell 88SE9235 PCIe SATA card. It works well, and boots mainline Debian with no vendor blobs. Assuming the electronics don't wear out, I expect it will continue to run fully-updated linux for a long time to come.
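FWIW, if you do end up behind a USB-to-SATA bridge, you can at least test whether SMART passes through; smartmontools can usually talk through SAT-capable bridges (device node is an example):

  # Ask smartctl to use the SCSI/ATA Translation layer explicitly;
  # a sane bridge returns full IDENTIFY and SMART attribute data
  sudo smartctl -d sat -i /dev/sda
  sudo smartctl -d sat -A /dev/sda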
There's a lot of gray area between "entirely free of such issues" and "completely unusable." On mainline Linux, the Amlogic chips are much closer to the latter than the former.
As long as you're on a board with good community usage in Armbian (Odroid is one of the most popular brands and some have official support), I don't think it's much worse than vanilla Ubuntu on any common laptop or desktop.
Pick up a sketchy Android TV box from Ali that's not in the top 3 and yes, you're in for some "fun" times.
"Hardkernel says the system comes with an Ubuntu Linux image pre-installed, but it should also support third-party software including CoreElec, OpenMediaVault, and Android. OS images will be posted on the ODROID-HC4 Wiki in the coming weeks."
So you should be able to get a local command prompt and run things like rsync or SFTP over SSH, yes?
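Presumably, since it ships with stock Ubuntu; a push backup to the box would just be something like this (host and paths made up):

  # Incremental copy over SSH; archive mode preserves permissions and times
  rsync -av --delete /home/me/photos/ me@hc4.local:/srv/backup/photos/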
Would be really, really curious to see someone benchmark how this performs as a GlusterFS cluster of ZFS mirror bricks on SSDs (if 4GB is enough for ZFS?) or LVM mirrors.
Several people have posted HC1/HC2 Gluster clusters, but since Gluster dropped 32-bit support there hasn't been a great alternative for that, until this.
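For reference, assembling such a cluster is only a few commands once the bricks exist; a sketch assuming two boxes named hc4-1 and hc4-2, each with a ZFS mirror mounted at /tank:

  # On hc4-1: enroll the peer, then build a 2-way replicated volume
  gluster peer probe hc4-2
  gluster volume create gv0 replica 2 hc4-1:/tank/brick hc4-2:/tank/brick
  gluster volume start gv0

  # Any client can then mount it
  mount -t glusterfs hc4-1:/gv0 /mnt/gv0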
Cool to see the low end finally get an upgrade from the old Cortex-A53: a "new" (announced May 2017) Cortex-A55 core. I mentioned this chip in a recent discussion of single-board computers' CPUs[1], in relation to Odroid's $50 C4, which uses the same chip, and their much beefier 2.4GHz-A73 $63+ N2+[1].
I realize the Ethernet is the primary thing a NAS tends to need. Still, I can't help being a bit disappointed that this board doesn't have USB3. It seems to be the chip, the S905X3, which multiplexes USB3 and PCIe on the same pins: you've gotta pick! And they use that PCIe for SATA, so there are no pins left for USB3.
Alas, alas, PCIe switching is just too expensive. It'd be so nice to see small PCIe switches used more robustly. And why not more PCIe micro-I/O hubs? Big desktop CPUs talk to platform hubs that hang a bunch of SATA, USB, and PCIe off them. Those hubs have nowhere near enough bandwidth to the CPU to support it all at once, but they allow for good expandability. Systems like this would benefit so much from a ~$8 platform hub whose uplink is a PCIe lane or two and which exposes 4 SATA ports, a USB3 root with a 4-port hub, and 2x PCIe x1 links.
One of the notable things about the recent "Turing Pi 2" was the discussion of how to attach a bunch of peripherals to a micro-cluster; they shied away from PCIe, which I started some discussion on[2]. I think the hub would do them well, but any old PCIe switching, especially with NTB, would do them a world of good; it's just regarded flatly by most of the world as "too damned hard" and/or "too damned expensive". Enough "alases" in this post already, but it would be so enabling to see us get better at "small" PCIe.
The first unit I got (the case) had manufacturing flaws, but the second one was okay. The board it's built for has been doing well in the first few weeks of testing.
I was looking for a NAS that could take 2.5" drives a year or so back (I no longer believe in spinning rust).
In the end their prices put me off so much that I built a mini-ITX server instead. This might have swayed me (though a 4-drive version would be preferable).
I wonder if a few of these could be purchased to function as Ceph nodes. I haven't tried running Ceph on ARM, but it feels tailor-made for a cluster storage unit.
It would work as a Ceph node, but I would argue it doesn't fit the bill, since you should have ~1GB of RAM per TB of storage. If your load is low you can do with less, and you could potentially use swap, but it would regularly freeze on you or crash the OSD processes, depending on your setup.
If you do want to try it, you could get Raspberry Pi 4s and attach the disks over USB3. Get them with decent memory and use spinning rust to host the OS. To have an up-to-date version of Ceph, I would suggest using cephadm with podman to get it up and running with good builds (rough sketch below). That way you get:
1) enough memory
2) an up-to-date Ceph version
Reminder: cephadm is kinda beta right now
Maybe the Odroid HC1 would be better, but I don't have one.
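A minimal cephadm sketch under those assumptions (the IP is a placeholder; podman and the cephadm bootstrap script are assumed to be installed already):

  # Bootstrap a one-node cluster; cephadm pulls the official container images
  sudo cephadm bootstrap --mon-ip 192.168.1.10

  # Then let the orchestrator create OSDs on every unused disk it finds
  sudo cephadm shell -- ceph orch apply osd --all-available-devices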
> If your load is low you can do with less and you could potentially use swap, but it would regularly freeze on you or crash the osd processes
I capped my OSDs' memory usage using osd_memory_target (set to 1.5GB) to make them run on HC2s with 4TB and 6TB disks; I never had stability issues with these.
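For anyone replicating this, the cap is a single setting; e.g. (1.5 GiB expressed in bytes):

  # Target memory per OSD daemon; the OSD shrinks its caches to stay near it.
  # On older releases, set osd_memory_target under [osd] in ceph.conf instead.
  ceph config set osd osd_memory_target 1610612736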
Yeah, I set my memory target lower as well (I think around 900MB was the smallest value settable for OSDs... oh, it's 939524096). Since this value is only a target, it would generally try to hit it but (in my case) regularly overshoot. In normal use this isn't a problem at all, but I got OSDs restarting under heavy usage. I would be interested in your stability. Can/did you check your logs for OSD restarts? I could imagine that OSD processes died and got restarted by systemd. If not, this could just mean that you have a lower load than me, that it got better in a newer release, or that my setup just sucked :) Would be interested to hear back.
I get monitoring alerts when they restart, and I only remember getting two of those over two years.
I don't have the logs anymore, since I upgraded my cluster to Nautilus, which can't run on HC2s (they're 32-bit, and I can't roll my cluster back to Luminous). However, I have one OSD running on an N2 with a 6TB disk and the same memory target (1.5GB); it hasn't crashed since the last reboot three weeks ago, and it's currently using 1.3GB.
I used to have two odroid HC2s in my Ceph cluster. They worked fine with a line in the config to cap their RAM usage. My home connection (100Mbps) was the bottleneck, not the HC2s themselves.
Unfortunately Ceph stopped supporting 32-bit ARM in Mimic or Nautilus, so I'm looking forward to the HC4.
I think the more important question is "what kind of reception will I get from the FreeNAS forums if I turn up trying to run it on this ARM SBC with 4GB of non-ECC RAM?".
If you've ever lurked there, you'll probably know the answer already.
Should have specified I was referring to XigmaNAS (formerly NAS4Free). The specs, specifically the RAM, might not be enough to make it a good idea, though.
I might be wrong, but I don't see why one wouldn't be able to build FreeNAS for ARM (all the included software should be arm64-compatible); it's just that so far no one has taken the time to do so and share it.
Most likely there's a DC-to-DC stage internally that provides a clean and filtered 12V from whatever the 15V DC input is. This gives them a bit of headroom for power design and ensures that regardless of the DC input, the device is getting clean DC power.
Not really; disks are sealed and don't really care about dust unless it stops all airflow to them. Still, I'm not a fan of that form factor for permanent installs -- it looks too easy to bump or knock over -- so I'd probably mount it in a box with a slow fan if I were to run this permanently.
Helium-filled disks are sealed (as much as possible), but air-filled disks are not; their breather ports are filtered, though. If I were to use this as a permanent NAS, I also think an enclosure would be wise. This setup could be good for bulk data loads/unloads, though (if the interface speeds work for you).
Looks cool as a toy, but the advantage of QNAP is not the hardware (which is just COTS); it's the software. Ain't nobody got time to mess about with mdadm. And no, Webmin isn't even remotely in the same league. I totally love the reliability and ease of use of all the QNAPs I've had over the years.
You would almost never use mdadm directly anyway, since you can just use LVM, which drives the same kernel MD RAID code under the hood. Still, no need for commercial software just for RAID.
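E.g. a mirrored volume without touching mdadm directly (VG name and size made up):

  # RAID1 logical volume with one mirror copy, inside an existing volume group
  lvcreate --type raid1 -m 1 -L 500G -n data vg0

  # Watch the initial sync
  lvs -a -o name,sync_percent,devices vg0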
I personally use solutions like QNAP or Synology for my personal use. It's not that I don't know how, or couldn't do it myself; it's more that my patience for messing around with disk configs is rather low these days. Years ago I would mess with it endlessly and worry about every bit of the config. It was sort of fun in its own way. Now, as long as it stores my files and can max out my network, I'm pretty good and willing to pay for ease of setup. If you're willing to trade time for money, then go for it.
https://kobol.io/