A posting on the PostgreSQL/Performance group (having to do with the Intel 320 announcement, which I'll save for a different posting after the dust settles a bit) got me looking again for published tests of SSD vs. HDD and databases. As you can see, Dennis Forbes is listed in the links block. I don't recall whether I've mentioned this post of his before; mayhaps I have.
But, in some sense related to the Intel 510/320 situation, he makes the salient point (my point since I discovered SSDs years ago) thusly:
Of course NoSQL yields the same massive seek gain of SSDs, but that's where you encounter the competing optimizations: By massively exploding data to optimize seek patterns, SSD solutions become that much more expensive. Digg mentioned that they turned their friend data, which I would estimate to be about 30GB of data (or a single X25-E 64GB with room to spare per "shard") with the denormalizing they did, into 1.5TB, which in the same case blows up to 24 X25-Es per shard.
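Forbes's drive count follows directly from his own numbers; a minimal sketch of the arithmetic (assuming 1 TB = 1,024 GB, and using his 30 GB estimate) to check it:

```python
# Sanity check of Forbes's figures: ~30 GB of normalized friend data,
# denormalized by Digg into 1.5 TB, spread across 64 GB Intel X25-Es.
normalized_gb = 30        # Forbes's estimate of the normalized data
denormalized_tb = 1.5     # size after Digg's denormalization
drive_gb = 64             # capacity of one Intel X25-E

denormalized_gb = denormalized_tb * 1024
drives_per_shard = denormalized_gb / drive_gb   # divides evenly here
blowup_factor = denormalized_gb / normalized_gb

print(int(drives_per_shard))        # 24 drives per shard, as Forbes says
print(round(blowup_factor, 1))      # ~51x size explosion
```

So the denormalization multiplies the footprint roughly fifty-fold, and the 24-drives-per-shard figure checks out exactly.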
Of course, his insight is rare, even among those I've read who've been positive about RDBMS/SSD synergy. It's always been obvious: normalized datastores are orders of magnitude smaller than their flatfile antecedents. This smaller footprint comes with all the integrity benefits that Dr. Codd (and Chris Date, et al. since) defined. There are a whole lot of ostriches out in the wild, insisting that massive datastores are needed by their code. What does it all mean, Mr. Natural? Don't mean shit.