The point of this endeavor is to find, and at times create, reasons to embrace not only the solid state disk multi-processor database machine, but also the relational database as envisioned by Dr. Codd. The synergy between the machine and the data model, to me anyway, is so obvious that the need to have this blog is occasionally disappointing. Imagine having to publicize and promote the Pythagorean Theorem. But it keeps me off the streets.
Whilst sipping beer and watching the Yankees and Red Sox (different games on different TVs) at a local tavern, I noticed for the umpteenth time that the staff were using Aloha order entry terminals. Aloha has been around for years, and I've seen it in many establishments. The sight dredged up a memory from years ago. I had spent some time attempting to be the next Ernst Haas, but then returned to database systems when it didn't work out. I was working as MIS director for a local contractor, and convinced them that it might be a good idea to replace their TI-990 minicomputer applications with something a tad more up to date. They took me up on the idea, so I had to find replacements for their applications. Eventually, we settled on two applications both written to the Progress database/4GL. They're still using them.
Progress was, and is, relational, but not especially SQL-oriented. Talking with the developers of one of the applications (both were general-ledger-driven verticals), we discussed some of the available configuration switches. The ledger posting subsystems each had a switch for real-time versus batch updating. The recommendation was to batch everything except order entry; inventory needed to be up to date, but A/R, purchasing, and the like would put too much strain on the machine. And don't even think about doing payroll updates in real time.
Now, the schema for this application printed out on a stack of 11 by 14 greenbar about a foot thick. There were a lot of tables with a slew of columns. Looking back, not very relational. Looking ahead, do we need batch processing any longer?
The reason for doing batch processing goes back to the first computers, literally. Even after the transition from tape to disk, most code was still sequential (and my recent experience with a Fortune 100 dinosaur confirms this remains true) and left as is. Tape files on disk.
But now the SSD/multi-processor machine makes it not only feasible but preferable to run such code with the switch set to real time. No more batch processing. No more worrying about the overnight "batch window." The I/O load was the reason to avoid real-time updates in database applications in the first place; with the delay on disk-based joined tables removed, each update takes a few microseconds. And the amount of updating to tables is, at worst, exactly the same and, at best, less: less when the table structure is normalized, since less data exists to be modified. We're not talking about rocket-science computations, just moving data about in the tables and perhaps an addition here and there.
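The real-time switch amounts to doing, per transaction, what the batch job does overnight: post the order, the inventory change, and the ledger entry together, in one small transaction. A minimal sketch in Python with SQLite (the table names, columns, and accounts here are invented for illustration; the original application was a Progress 4GL vertical, not this):

```python
import sqlite3

# Illustrative schema; these table and column names are assumptions
# for the sketch, not taken from the application discussed above.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE inventory (item_id INTEGER PRIMARY KEY, on_hand INTEGER NOT NULL);
CREATE TABLE ledger    (entry_id INTEGER PRIMARY KEY AUTOINCREMENT,
                        account  TEXT NOT NULL, amount REAL NOT NULL);
INSERT INTO inventory VALUES (1, 100);
""")

def post_order(conn, item_id, qty, unit_price):
    """Post an order in real time: one short transaction updates
    inventory and the ledger together, instead of queueing the rows
    for an overnight batch run."""
    with conn:  # commits on success, rolls back on error
        conn.execute(
            "UPDATE inventory SET on_hand = on_hand - ? WHERE item_id = ?",
            (qty, item_id))
        conn.execute(
            "INSERT INTO ledger (account, amount) VALUES ('A/R', ?)",
            (qty * unit_price,))

post_order(conn, item_id=1, qty=3, unit_price=9.99)
on_hand, = conn.execute(
    "SELECT on_hand FROM inventory WHERE item_id = 1").fetchone()
print(on_hand)  # 97
```

The point of the sketch is the shape, not the engine: the "batch" alternative is simply deferring those same two statements to a nightly run, so the total work is identical and the business data is stale all day.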
New rule: we don't need no stinking batches.
19 June 2009