If you're of a certain age, "solid, man" has a certain meaning (if you go here, open up the list, and then go to the bottom). SolidFire is a company which claims to be building SSD storage in the cloud; somehow or other I got on the early announcement list.
According to their press section, they've been in business since January of this year. It'll be interesting to see whether they make the connection between the value of SSD and the value of the RM/SQL/RDBMS: minimize footprint, wear SSD galoshes.
31 May 2011
24 May 2011
Do the Limbo Rock
There's been some hoopla recently about OCZ's latest drives and the SandForce controller they use. Today, AnandTech looks at the latest of the latest, the Agility 3. Of most interest to this endeavor is the background on NAND architecture.
What's it all mean, Mr. Natural? Well, with respect to Enterprise (or even SMB) systems, not much, I expect. OCZ and SandForce remain, so far as I can determine, firmly in consumer/prosumer territory. Until SF controllers start showing up in drives from a major vendor, of course (there's been no such announcement from SF yet). I don't see that happening, what with industrial strength databases storing encrypted/compressed data, which leaves SandForce's compression-based approach nothing to squeeze. Workgroup/internal document storage, maybe.
05 May 2011
Mr. Wonka's Factory
Here's a post on simple-talk which was dormant for most of a month (it's down to the last place on the front page today), then got really popular the last few days.
The subject: cleavages; well, no, but I've liked that line since Lou Gottlieb used it to introduce a song lo those many years ago. Well, maybe, actually, in a manner of speaking.
The subject is database refactoring. The post and discussion started out blandly, but then opened up (cleaved, see?) that long-festering wound of front-end vs. back-end design/development. Simple-talk subjects and discussions tend toward a coder perspective, I suppose because T-SQL is often germane to the topics, so a certain amount of self-selection exists both for authors and readers.
The idea of refactoring a database, especially given the tripe espoused by the likes of Ambler (some of his stuff), has been hijacked by the kiddie koders. Refactoring, for an RDBMS, means getting more normal. It doesn't mean twisting the schema to fit some single access path, à la IMS or XML (hierarchies). That's not why Dr. Codd made the effort. He made the effort because he saw the mess that IMS was making; he'd been there when IMS was released, and he devised the RM a couple of years later. In other words, it didn't take a math guy very long to identify both the problem and the solution.
Now that we have high-core machines with SSD as primary datastore, we can implement BCNF (at least) schemas/catalogs with no worries about "let's denormalize for speed". Those days are truly history.
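To make that concrete, here's a minimal sketch of what refactoring a database ought to look like: splitting a repeating group out of a denormalized table into its own relation. The tables and columns are mine, purely for illustration, not anything from the simple-talk post.

-- Before: a denormalized table with a repeating "phone" group
CREATE TABLE customer_denorm (
    customer_id INTEGER PRIMARY KEY,
    name        VARCHAR(100) NOT NULL,
    phone_1     VARCHAR(20),
    phone_2     VARCHAR(20),
    phone_3     VARCHAR(20)
);

-- After: refactored toward (at least) BCNF; phones get their own relation
CREATE TABLE customer (
    customer_id INTEGER PRIMARY KEY,
    name        VARCHAR(100) NOT NULL
);

CREATE TABLE customer_phone (
    customer_id INTEGER NOT NULL REFERENCES customer (customer_id),
    phone       VARCHAR(20) NOT NULL,
    PRIMARY KEY (customer_id, phone)
);

More normal, not more hierarchical: the engine can still join its way to any access path an application cares about.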
03 May 2011
Get Offa That Cloud
On more than one occasion, I've criticized all things cloud, since I see cloud as a lowest-common-denominator storage approach. But then, this is based only on my experience with cloud-like provisioners over the last couple of decades. Well, it turns out I'm not the only one, and some of these folks have as much fun with the silliness as I have. The thread is new, so keep track for further hilarity.
01 May 2011
Ruler of All I Survey
There's that old phrase, "master of my own domain", and it is particularly useful in relational databases. While working in the COBOL-oriented DB2 world of Fortune X00, I was continually rebuffed whenever I suggested the use of check constraints. I had to sneak them into some of my lesser databases, but that was cool.
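For the record, a check constraint is nothing exotic. A sketch, with a made-up table and rule, in standard SQL that both DB2 and Postgres accept:

-- The engine, not each COBOL program, decides what a sane row looks like
CREATE TABLE payroll (
    emp_id      INTEGER PRIMARY KEY,
    hourly_rate DECIMAL(7,2) NOT NULL,
    CONSTRAINT rate_sane CHECK (hourly_rate BETWEEN 7.25 AND 500.00)
);

-- This insert fails at the engine, no application code required
INSERT INTO payroll (emp_id, hourly_rate) VALUES (42, -1.00);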
In addition to regular check constraints, there is the notion of domains, sometimes referred to as user defined types. Here's a Postgres-based treatment. Interestingly, Postgres has better support than DB2: PG allows check constraints on domains, while DB2 doesn't. Once again, the author makes my point (admittedly, the point made by anyone who takes the RDBMS seriously) that control of the data belongs in the database, rather than relying on each application's code. Bulk loads, with most engines, enforce constraints, so batch data transfers are a breeze. Not to mention that client code can be in any convenient language, and that a smart generator can use domain/UDT definitions whilst doing its thing. I've said that before, I believe.
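Here's a minimal Postgres sketch of the idea (the names are mine, not the article's): declare the rule once, in a domain, and every table that uses it, plus any bulk load, inherits the constraint.

-- Define the rule once, at the domain level
CREATE DOMAIN us_postal_code AS CHAR(5)
    CHECK (VALUE ~ '^[0-9]{5}$');

-- Every column of this type enforces the rule; no application code involved
CREATE TABLE customer_address (
    customer_id INTEGER NOT NULL,
    street      VARCHAR(100) NOT NULL,
    postal_code us_postal_code NOT NULL
);

-- Bulk loads respect the domain, too; bad rows are rejected by the engine
-- COPY customer_address FROM '/tmp/addresses.csv' WITH (FORMAT csv);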