RAID10 (mdraid + LVM + XFS, or btrfs; never use LVM's built-in RAID) is way more convenient in terms of rebuild speed, simplicity, and performance, and it supports online growing (and online shrinking, though that's btrfs only). Any failure in a drive batch is a signal to consider proactively replacing the rest of that batch. The biggest failure predictor Google found that SMART doesn't catch is slightly elevated temperature. ZoL (as opposed to SmartOS/Solaris ZFS) bit me before (array permanently unmountable on good drives), there was absolutely zero (0) support, and they were shameless about it.
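For reference, a minimal sketch of that mdraid + LVM + XFS stack. All device names, sizes, and volume names here are placeholder examples, not anything from the setup above:

```shell
# Build the RAID10 array with mdraid (4 example devices)
mdadm --create /dev/md0 --level=10 --raid-devices=4 \
    /dev/sda /dev/sdb /dev/sdc /dev/sdd

# LVM on top for volume management only;
# mdraid handles the RAID, never LVM's own RAID
pvcreate /dev/md0
vgcreate vg0 /dev/md0
lvcreate -L 1T -n data vg0

# XFS on the logical volume; XFS can grow online
# (xfs_growfs) but cannot shrink
mkfs.xfs /dev/vg0/data
```

This is a setup fragment to show the layering, not a script to paste on a live box.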
I find that RAIDZ rebuilds almost as fast. My 75% full RAIDZ2 pool of 10x14TB disks resilvers a replacement disk in about 18 hours. That means it writes the ~10.5TB of used data onto the new disk at an average of ~160MB/s, which isn't much slower than the average write speed of the disk by itself. Maybe a mirror would rebuild in 15 hours? Either way it doesn't seem meaningfully faster, and the much higher usable capacity of RAIDZ2 over a bunch of mirrors saves a ton of cost: not just in number of disks, but also in drive slots and interface connections.
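That throughput figure checks out as a back-of-envelope calculation (numbers taken from the pool described above):

```shell
# 10x14TB RAIDZ2 = 8 data disks; 75% full means each disk
# holds about 75% of 14TB = 10.5TB that must be resilvered
used_per_disk_mb=$((14 * 1000000 * 75 / 100))  # 10,500,000 MB
resilver_seconds=$((18 * 3600))                # 18 hours

# Average resilver write rate to the new disk
echo "$((used_per_disk_mb / resilver_seconds)) MB/s"  # prints "162 MB/s"
```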
Don't do that for anything critical. Manage mdraid yourself. It's a whole lot more predictable, manageable, and debuggable than an opaque abstraction.
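Part of what makes mdraid predictable is that all of its state is inspectable and every recovery step is an explicit command. A sketch of the usual workflow (array and device names are examples):

```shell
# Live status of all md arrays: sync progress, degraded members
cat /proc/mdstat

# Detailed state of one array: rebuild status, failed disks, UUID
mdadm --detail /dev/md0

# Per-member superblock info, useful when reassembling by hand
mdadm --examine /dev/sda

# Replacing a failed member is three explicit steps
mdadm /dev/md0 --fail /dev/sda
mdadm /dev/md0 --remove /dev/sda
mdadm /dev/md0 --add /dev/sde
```

Nothing here is hidden behind an abstraction; what you see in `/proc/mdstat` is what the kernel is actually doing.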
Example: I have 5 XFS volumes across 45 drives in a 4U NAS box with 1 cold spare. Granted, in retrospect I probably should've chopped that up into a RAID-less Ceph cluster of VMs on a single-box hypervisor.