
I think this is in error. The delivery times for SSDs are correct, if you only consider the periods when the SSD is working. When the SSD fails, the delivery time comparison is the AGE OF THE SUN. Ok, I kid. You never get your data. So, let's call "Age of the sun" an average between really fast and infinity.

I've owned 3 SSDs and have had 2 failures, so far, over 2 years[1]. In the past 20 years, I've owned around 100 hard drives and have had only 4 failures.
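
For a rough sense of scale, here's a back-of-the-envelope comparison of those numbers (just a sketch: tiny sample, and it assumes every drive was in service for the whole period):

    # Rough annualized failure-rate comparison (illustrative only:
    # small samples, assumes every drive ran the full period).
    ssd_failures, ssd_drives, ssd_years = 2, 3, 2
    hdd_failures, hdd_drives, hdd_years = 4, 100, 20

    ssd_rate = ssd_failures / (ssd_drives * ssd_years)   # ~0.33 failures per drive-year
    hdd_rate = hdd_failures / (hdd_drives * hdd_years)   # ~0.002 failures per drive-year

    print(f"SSD: {ssd_rate:.3f} failures/drive-year")
    print(f"HDD: {hdd_rate:.3f} failures/drive-year")
    print(f"ratio: {ssd_rate / hdd_rate:.0f}x")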

This is the Achilles' heel of the SSD for me. I've gone back to spinning rust because I need the reliability more than I need the performance.

The performance was nice, very nice. But having to restore from backup every year is not something I enjoy; I'd like to do it once a decade or less.

Until then, I'm no longer using SSDs.

I did a bunch of research into why SSDs fail, and inevitably it seems to be software bugs caused by the SSDs being clever. I suspect the Samsung SSDs that Apple uses are not clever and thus do not fail, so I will use an SSD if it comes with an Apple warranty. But I had an Intel SSD fail, and I had a SandForce-based SSD fail, both catastrophically with zero data recovery (fortunately I had backed up, though in both cases I lost a couple hours of work for various reasons).

In both cases, as near as I can tell, the SSD had painted itself into a corner: it actually hadn't been used enough for flash failures to be a problem, let alone to exceed the spare capacity set aside. Nope, it was a management problem in the controller that caused the failures. These kinds of problems can be worked out by the industry, but given that the market has existed for 3-4 years now and we're still seeing them, I'm going to wait before trying something clever again.

[1] The one that is still working is in my cofounder's machine, and I'm dreading the day that it too fails. I am afraid it is just a matter of time, and as soon as we can reshuffle things they'll be using spinning rust again as well.



HP sells SSDs with guaranteed write-cycle counts, so you can tell when they're going to fail, but they aren't cheap. It's one of those cases where there are markedly better options available in the server space than for consumers.
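
(A write-cycle guarantee is only useful if you actually watch the wear counters. A minimal sketch of doing that with smartmontools; the attribute name is vendor-specific, e.g. Intel drives report "Media_Wearout_Indicator", so treat the name and device path here as placeholders:)

    # Sketch: read an SSD's SMART wear indicator via smartctl (smartmontools).
    # The attribute name varies by vendor; "Media_Wearout_Indicator" is what
    # Intel drives expose. The device path is illustrative.
    import subprocess

    def wear_level(device="/dev/sda", attr="Media_Wearout_Indicator"):
        out = subprocess.run(["smartctl", "-A", device],
                             capture_output=True, text=True, check=True).stdout
        for line in out.splitlines():
            if attr in line:
                # Columns: ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH ...
                return int(line.split()[3])  # normalized value, counts down from 100
        return None

    print(wear_level())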


I've also had both SandForce and Intel (310, 320, 520, and 710!) SSDs fail -- the issue is not running out of cycles (which could be predicted, or mitigated by buying more enterprise-oriented SSDs), but rather weird controller errors.

One (milli-)second the drive is totally fine; the next, it is just gone. Magnetic disks usually give some warning, and they usually fail for mechanical rather than controller reasons.

I still use SSDs, but am constantly wary, especially in the first month of a new drive's service.


Reliability is an issue for me, but in my workstation I need speed more than I need five nines. So I have a Vertex 2 and a 750GB HDD in the DVD bay using an OptiBay [1]. Time Machine backs up the SSD to the HDD with room to spare. I have a bunch of "old" disks that I can drop in and start using immediately (the HDD in the bay is the original boot drive, in fact). And all my code is in remote repos if I lose the whole thing. I find I'm regularly (every 9 months or so) upgrading my disk; in fact, the Vertex 2 is probably the longest I've gone without upgrading.

[1] http://www.mcetech.com/optibay/


I've found Intel Smart Response an acceptable compromise. You get close to the performance of SSD with the space, reliability, and recoverability of magnetic storage.


I like the compromise approach as well, but built into the drive itself so it works with anything:

http://www.tomshardware.com/reviews/hybrid-hard-drive-flash-...



I've had SSDs failing every couple of months. I don't care, because I have redundancy. I've never had to restore from backup; I haven't even had to reboot my servers.
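
(For the curious, the monitoring side of that is trivial. A minimal sketch, assuming Linux software RAID via mdadm; the /proc/mdstat parsing is illustrative and worth double-checking against your own output:)

    # Sketch: flag degraded Linux md (software RAID) arrays by parsing /proc/mdstat.
    # A healthy two-disk mirror shows "[UU]"; an underscore means a missing member.
    import re

    def degraded_arrays(path="/proc/mdstat"):
        bad, current = [], None
        with open(path) as f:
            for line in f:
                m = re.match(r"^(md\d+)\s*:", line)
                if m:
                    current = m.group(1)
                elif current:
                    status = re.search(r"\[([U_]+)\]\s*$", line)
                    if status:
                        if "_" in status.group(1):
                            bad.append(current)
                        current = None
        return bad

    print(degraded_arrays() or "all arrays healthy")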



