25 August 2010


Once again, the folks at simple-talk have loosened my tongue.  The topic is Oslo, or, more directly, its apparent demise.  I was moved to comment, shown below.

I'll add a more positive note; in the sense of what Oslo could/would be.

I've not had much use for diagramming (UML, etc.) to drive schema definition/scripting.  Such a process seems wholly redundant; the work process/flow determines the objects/entities/tables, and converting these facts to DDL is not onerous.

OTOH, getting from the resulting schema to a working application is a bear.  There have been, and still are, frameworks for deriving a codebase from a schema.  Most live in the java world, without much fanfare (and, not coincidentally I believe, in COBOL some decades past; it's that ole enterprise-automation worship).  I suspect, but can't prove, that Fortune X00 companies have built such frameworks internally (or, just as likely, extended one of the open source frameworks).

This is what I thought Oslo was to be:  a catalog-driven code generator.  My obsession with SSD these days (and it's looking more and more like the idea is taking hold Out There) still convinces me that BCNF catalogs can now be efficiently implemented.  Since such catalogs are based on DRI, and those kinds of constraints are easily found and translated, generating a front end (VB, C#, java, PHP, whatever) is not a Really Big Deal.  Generating, and integrating, code from triggers, stored procs, check constraints, and the like is a bit more work, but with more normalization, constraints become just more foreign keys, which are more easily translated.
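To make the catalog-driven idea concrete, here's a minimal sketch in Python against SQLite's catalog pragmas.  The schema (customer/invoice) and the "form" output format are invented for illustration; the point is only that the catalog already tells the generator which columns are plain inputs and which are foreign keys that should become dropdowns sourced from the referenced table.

```python
import sqlite3

# A toy schema, standing in for a real BCNF catalog.
DDL = """
CREATE TABLE customer (
    id   INTEGER PRIMARY KEY,
    name TEXT NOT NULL
);
CREATE TABLE invoice (
    id          INTEGER PRIMARY KEY,
    customer_id INTEGER NOT NULL REFERENCES customer(id),
    total       NUMERIC NOT NULL CHECK (total >= 0)
);
"""

def scaffold(conn):
    """Walk the catalog and emit a stub 'form' per table: one input
    per column, rendered as a dropdown wherever a foreign key says
    the value must come from another table."""
    out = []
    tables = [r[0] for r in conn.execute(
        "SELECT name FROM sqlite_master WHERE type='table'")]
    for t in tables:
        # PRAGMA foreign_key_list rows: (id, seq, table, from, to, ...)
        fks = {r[3]: (r[2], r[4])
               for r in conn.execute(f"PRAGMA foreign_key_list({t})")}
        out.append(f"form {t}:")
        for cid, name, ctype, notnull, dflt, pk in conn.execute(
                f"PRAGMA table_info({t})"):
            if name in fks:
                ref_t, ref_c = fks[name]
                out.append(f"  {name}: dropdown of {ref_t}.{ref_c}")
            else:
                req = " (required)" if notnull else ""
                out.append(f"  {name}: input [{ctype}]{req}")
    return "\n".join(out)

conn = sqlite3.connect(":memory:")
conn.executescript(DDL)
print(scaffold(conn))
```

Swap the pragmas for information_schema queries and the string output for VB/C#/PHP templates, and you have the skeleton of the generator described above; triggers and check constraints need real parsing, which is where the "bit more work" comes in.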

That's where I expected Oslo was headed.  This is not an ultimate COBOL objective, but a "drudge work" tool for database developers (and a redundancy notice for application coders, alas).  Such a tool would not reduce the need for database geeks; quite the contrary: for transaction-type database projects we finally get to call the tune.  Sweet.

And, I'll add here, that the shift that's clearly evident with iStuff and Droids leads to another inevitable conclusion.  A truly connected device, and phone devices surely are, means we can re-implement the most efficient transaction systems:  VT-100/*nix/database.  Such systems had a passive terminal, memory mapped in the server, so that the client/server architecture was wholly local: each terminal had a patch of memory and talked to the database server, all on one machine.  No more 3270/web disconnected paradigm.  With the phone paradigm, the client application lives on the phone, but it holds a connection to the server.  For those with short memories, or who weren't there in the first place: the client/server architecture was born in engineering.  The point was to ship small amounts of data to connected workstations that executed a heavy codebase, not large amounts of data to PCs that execute a trivial codebase.  The key is the connection; http is a disconnected protocol, and that is the source of all the heartburn.  The frontrunners will soon have their fully normalized databases shipping small spoonfuls of data out to iPads/Droids.  That's the future.
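The connected-client shape can be sketched in a few lines.  This toy uses a local socket pair and invented row data, but the architecture is the one described above: the server holds one long-lived connection per client and pushes small, already-filtered rows; the client just receives and renders, with no request/response round-trip per screen.

```python
import socket, threading, json

# One long-lived connection per client, in place of http's
# connect-request-disconnect cycle.
server_sock, client_sock = socket.socketpair()

def server(sock):
    # Stand-in for the database server: ship spoonfuls of data,
    # one small row at a time, over the open connection.
    for row in [{"id": 1, "total": 9.5}, {"id": 2, "total": 3.2}]:
        sock.sendall((json.dumps(row) + "\n").encode())
    sock.close()

threading.Thread(target=server, args=(server_sock,)).start()

# The "terminal" side: read rows as they arrive until the
# server hangs up.
received = []
for line in client_sock.makefile():
    received.append(json.loads(line))
client_sock.close()
print(received)
```

Replace the socket pair with a TCP (or websocket) connection from the phone to the database server, and the client is back to being the thin, connected party the original client/server architecture intended.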
