For those who have not yet taken the leap to SSD goodness because they are afraid of flash wear, the endurance test from The Tech Report seems worth a read. The short story is that they kept writing data to the drives until the drives wore out. All of the tested drives survived considerably longer than their guaranteed endurance, but four of the six failed catastrophically when they did die.
http://techreport.com/review/27909/the-ssd-endurance-experiment-theyre-all-dead

I am disappointed by the catastrophic failures. One of the promises of SSDs was a graceful end of life, with the drive switching to read-only mode. Some of the drives did give warnings before the end, but I wonder how those warnings are communicated in a server environment?

Regarding Lucene/Solr, the write pattern when updating an index is benign to SSDs: updates are relatively bulky, rather than the evil constantly-flip-random-single-bits-and-flush pattern of databases. With segments being immutable, the bird's eye view is that Lucene creates and deletes large files, which makes it possible for the SSD's wear-leveler to select the least-used flash sectors for new writes (a small sketch of that segment-level pattern follows below). The write pattern over time is not too far from the one that The Tech Report tested with.

- Toke Eskildsen
Whose trusty old 160GB Intel X25-M reports an accumulated 36TB of writes.
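P.S. In case it helps to picture that segment-level pattern, here is a minimal, hypothetical sketch against a recent Lucene API (the index path, field name and batch sizes are made up for illustration). Every commit flushes buffered documents as a brand-new immutable segment, and a merge writes one new large file before deleting the old ones; nothing is updated in place.

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.store.FSDirectory;
import java.nio.file.Paths;

public class SegmentWritePattern {
  public static void main(String[] args) throws Exception {
    try (FSDirectory dir = FSDirectory.open(Paths.get("/tmp/demo-index"));
         IndexWriter writer = new IndexWriter(dir, new IndexWriterConfig(new StandardAnalyzer()))) {
      for (int batch = 0; batch < 10; batch++) {
        for (int i = 0; i < 10_000; i++) {
          Document doc = new Document();
          doc.add(new TextField("body", "document " + (batch * 10_000 + i), Field.Store.NO));
          writer.addDocument(doc);
        }
        // Each commit flushes the buffered documents as a brand-new immutable
        // segment: a few large, mostly sequential file writes, never an
        // in-place update of existing index files.
        writer.commit();
      }
      // Merging copies several old segments into one new large file and then
      // deletes the originals, so from the SSD's point of view the index only
      // ever creates and removes whole files, which the wear-leveler can
      // spread over the least-used flash.
      writer.forceMerge(1);
    }
  }
}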