
I also had two mirrored drives fail simultaneously in my zpool a few days ago. There was nothing on them, so I wasn't worried. WD Reds in my case.

Using matched drives seems to be a very bad idea for mirrors. I'll probably replace them with two different brands.

I also have a matched pair of HGST SAS helium drives in the same backplane, so hopefully I can catch those before they fail too, if they're going to go at once; I _do_ have data on those.
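One way to keep an eye on a pair like that is to poll SMART attributes and watch for the early-warning counters moving. A minimal sketch (the `smartctl -A` output format here is assumed from a typical ATA drive; the live-system call is only indicated in a comment):

```python
import re

def smart_attrs(text):
    """Parse the attribute table of `smartctl -A` output into {name: raw_value}."""
    attrs = {}
    for line in text.splitlines():
        # Columns: ID NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
        m = re.match(r"\s*\d+\s+(\w+)\s+0x[0-9a-f]{4}\s+.*\s(\d+)\s*$", line)
        if m:
            attrs[m.group(1)] = int(m.group(2))
    return attrs

# Captured sample; on a live system you'd use something like
#   text = subprocess.check_output(["smartctl", "-A", "/dev/sda"], text=True)
sample = """
  5 Reallocated_Sector_Ct   0x0033   200   200   140    Pre-fail  Always       -       0
  9 Power_On_Hours          0x0032   042   042   000    Old_age   Always       -       43000
"""
print(smart_attrs(sample))
```

A rising `Reallocated_Sector_Ct` on either drive is usually the cue to order a replacement before the second one follows.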



Alternatively, you can buy your drives a few months apart, which will most likely get them from different batches.


Yeah, that's an option; though since my pair failed at once, I need to buy at least two immediately to get the mirror back up.


Because of this, I always buy a different brand as well.


Any limitations, or am I good so long as they're the same label capacity? I assume I just lose a few MB or so from whichever is a little larger?
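For what it's worth, a ZFS mirror vdev is sized to its smallest member, so the only cost of mixing brands is the difference between the two actual capacities. A toy sketch (the byte counts below are made-up examples of two "4 TB" drives from different brands):

```python
def mirror_usable_bytes(sizes):
    # A mirror vdev can only use the capacity of its smallest member.
    return min(sizes)

a = 4_000_787_030_016  # hypothetical 4 TB drive, brand A
b = 4_000_752_599_040  # hypothetical 4 TB drive, brand B (slightly smaller)

usable = mirror_usable_bytes([a, b])
lost = max(a, b) - usable
print(f"usable: {usable:,} bytes; lost from the larger drive: {lost / 1e6:.0f} MB")
```

So yes: same label capacity is generally fine, and the waste is whatever the larger drive has over the smaller one, typically tens of MB.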


Is there a specific reason why the drives die at the same time? Electricity spike?


Buying two identical drives means there's a high chance they come from a single batch, which makes them physically almost identical. It’s a pretty well-known RAID-related fact, but some people aren’t aware of it or don’t take it seriously.


Identical twins may both die of a heart attack, but not usually at the same time.

Normally, failures come from some amount of non-repeatability or randomness that the systems weren't robust to.

The drive industry is special (in a bad way) in that they can exactly reproduce their flaws, and most people's intuition isn't prepared for that.


If they're bought together, like mine were, and they have close serials, they'll be almost identical; if you then run them in a ZFS mirror like I did, they'll receive identical "load" as well.

Since mine had ~43000 hours, they didn't fail prematurely, they just aged out, and since they appear to have been built pretty well, they both aged out at the same time. Annoying for a ZFS mirror, but indicates good quality control in my opinion.


If they're ~identical construction and being mirrored so that they have the same write/read pattern history, it could trigger the same failure mode simultaneously.


More likely to be from the same bad batch too. There was a post with very detailed comments about this just a few days ago.


Why bad? What's considered a good/bad lifetime for these? Mine had ~43000 power-on hours. I don't know if that's good or bad for a WD Red (CMR) drive, but they weren't particularly heavily loaded and their temps were good, so I'm fairly happy with how long they lasted (though longer would have been nice).


You're right, it might also be a natural end of life that happened to coincide.



