SandForce recently announced a sloganeering campaign. They're attempting to be the Enterprise SSD Supplier of record. STEC has, sort of, been the de facto Enterprise SSD supplier. Is it a good thing, from the perspective of this endeavor, for SandForce to be successful?
I think not.
My reasoning is grounded in the nature of the SandForce controller implementation. In order to reduce write amplification, the SF controllers muck with the raw data, compressing (at least) the incoming byte stream. AnandTech has tested the SF-1200 controller (the SF-1500 Enterprise controller is not materially different in approach) in various SSDs, and found that performance drops markedly on randomized data. This shouldn't be too surprising; the lack of patterns in the data stream means the secret sauce in the controller's algorithms can't be slathered on. Since industrial strength databases routinely compress indexes, and some offer the (security driven) option to compress data, I expect that SF based drives, should SandForce succeed in its plan of hegemony, will not serve the database world well.
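To see why, consider a small Python sketch. It isn't SandForce's actual algorithm (that's the secret sauce); a generic zlib compressor stands in, and the 4KB block size is an assumption. Patterned data shrinks nicely; random bytes, which is what a pre-compressed index looks like on the wire, don't shrink at all:

```python
import os
import zlib

BLOCK = 4096  # assumed flash page size, in bytes

patterned = b"ABCD" * (BLOCK // 4)  # highly repetitive stream
random_like = os.urandom(BLOCK)     # incompressible, like compressed index pages

for label, data in (("patterned", patterned), ("random", random_like)):
    compressed = zlib.compress(data)
    print(f"{label}: {len(data)} -> {len(compressed)} bytes "
          f"({len(compressed) / len(data):.0%} of original)")
```

On the patterned stream the controller would write a small fraction of the bytes sent; on the random stream it must write every byte (and then some), so the write amplification advantage evaporates.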
The type of drive most amenable to an RDBMS is the "fat" drive described here. The STEC drives, as it happens, are regular. Fat drives have the advantage of updating the on-drive DRAM cache before writing to flash. Now for the "sets are the way to think" plug. Updates which are, in application code, Row By Agonizing Row (RBAR) inevitably run afoul of SSD write requirements. Unless the controller (or the database engine) is smart enough to *fragment* table data, RBAR code will issue looping write requests to a block, one row at a time. This is not a Good Thing. Fat drives, with suitably contiguous table data, will fare much better with set thinking applications.
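For the doubters, here's the contrast sketched in Python against SQLite; the orders table and status column are invented for illustration, not taken from any particular schema:

```python
import sqlite3

def rbar_update(conn):
    # Row By Agonizing Row: one UPDATE per row is one write request per
    # row, hammering the same flash block over and over.
    ids = [r[0] for r in conn.execute(
        "SELECT id FROM orders WHERE status = 'open'")]
    for row_id in ids:
        conn.execute("UPDATE orders SET status = 'closed' WHERE id = ?",
                     (row_id,))

def set_update(conn):
    # Set thinking: one statement, one pass; the engine can visit each
    # page once and issue fewer, larger, contiguous writes, exactly the
    # pattern a fat drive's DRAM cache absorbs gracefully.
    conn.execute("UPDATE orders SET status = 'closed' WHERE status = 'open'")

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, 'open')",
                 [(i,) for i in range(10_000)])
set_update(conn)  # the set-based way; rbar_update(conn) does the same work badly
```

Both functions leave the table in the same state; only the write pattern differs, and on an SSD the write pattern is the whole game.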
So may the Force of BCNF data be with us. Just not in SandForce drives. Not, at least, without demonstrating that the SF secret sauce is compatible with the engine in use.
18 May 2010