Is that an Emerald City over the horizon? Are we almost to Oz? Yes, yes we are.
The last week has brought a host of news which, taken together, indicates that the near future is nearly here. If I keep stepping halfway to the wall, do I ever reach the wall? Yes, yes you do, to within any delta you wish to name.
First, there was the Windows on ARM announcement.
Next, we have two reports from CES, courtesy of AnandTech: the next Tegra, and its successor.
The future is clearer: it will be a pixelated VT-220 connected to a wireless network and thence to a relational database stored in BCNF on SSD. Such applications (not ported, in the common way, from COBOL) will run rings around all that legacy file-based stuff. With sufficient bandwidth and a persistent connection (your phone has one, right?), it's back to the future. No, I don't believe that the phone/web (or web/phone, if you prefer) paradigm is the winner. The flexibility and accuracy of the persistent-connection paradigm will win.
For those who think that the web is really great progress over what came before, you need to know the simple history. I'll start with the consolidated mainframe world of the late '60s. There were mainframes and terminals (the 3270 is the archetype), over what was effectively a disconnected connection. The 3270 had local edit programs built in. The transfer was block mode, meaning that all keyboard activity stayed between the user's fingers and the local edit program. Hitting Send (what we now call Carriage Return/Enter) shipped the edited screen off to the mainframe application code.
I've just described a browser/html application.
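To make the analogy concrete, here is a minimal sketch of block-mode interaction (the field names and edit rules are hypothetical, not from any real 3270 application): the user edits the whole screen locally, and the server sees nothing until Send ships the entire screen in one round trip, exactly as an HTML form POST does.

    # Block mode: keystrokes stay local; the server validates only
    # when the whole "screen" arrives, one round trip per Send.
    def server_validate(screen: dict) -> list[str]:
        errors = []
        if not screen["account"].isdigit():
            errors.append("account must be numeric")
        if not screen["amount"].replace(".", "", 1).isdigit():
            errors.append("amount must be a number")
        return errors

    # Local edit program: the user fills in every field off-line...
    screen = {"account": "12A45", "amount": "100.00"}
    # ...and only Send (the form POST) lets the mainframe see any of it.
    print(server_validate(screen))  # ['account must be numeric']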
Then, along came Unix and the VT-100 (there were later, more capable VTs, the VT-220 in particular). While connected by a wire, just like a 3270, this wire is always on. Ah. As a result, database engines and application code residing on the server see each keystroke on the VT-220. In fact, it was common to have 4GLs resident with the RDBMS in what was often referred to as "client/server in a box": the database engine being the server in its patch of memory, and a patch of memory holding the application code for each client connection. Blindingly efficient, and it allowed the database and the application code to edit/check *character by character* as the user progressed; if that sounds a bit like AJAX, well, yes it is. Not quite as cheap in memory, but this was the early '90s and later, when memory was getting cheap. Supporting thousands of terminals (or PCs running a terminal emulator) was not uncommon.
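And here is the always-on paradigm in the same sketch form (again with hypothetical rules, not any particular 4GL's syntax): server-resident code sees each character as it is typed and can reject a bad one on the spot, rather than after a full-screen Send. That is the character-by-character editing, and the AJAX resemblance, described above.

    # Character mode: the persistent connection delivers every keystroke
    # to server-resident code, which accepts or rejects it immediately.
    def accept_keystroke(field: str, buffer: str, ch: str) -> bool:
        if field == "account":
            return ch.isdigit()  # the stray 'A' is rejected as typed
        if field == "amount":
            return ch.isdigit() or (ch == "." and "." not in buffer)
        return True

    buffer = ""
    for ch in "12A45":
        if accept_keystroke("account", buffer, ch):
            buffer += ch
        else:
            print(f"rejected {ch!r} on the spot")  # corrected now, not later
    print(buffer)  # '1245'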
Then, along came the www, and young-uns thought this was new and fabulous. Negatory.
With the increasing density of chips, we see an artefact of Moore's Law not often (except por moi) remarked: look at a block diagram of recent multicore chips. Most of the real estate is dedicated to various caches. Oddly, rather than using all those transistors to execute native instructions in hardware, the trend has been toward emulating, for example, X86 instructions on a "hidden" RISC machine. Not that this is new; the 360/30 was widely believed to use a PDP-11 to run the instruction set, and IBM did acknowledge that most of the 360 series emulated the instruction set; only the top-end machines executed it in hardware. The upshot is simple: few, if any, normal client machines have need for the compute power on tap.
So, we see the rise of ARM and MIPS (you did buy ARMH when I told you to, yes?) running minimalist machines at low power and low cost. With a persistent connection to a persistent datastore, let's dance like it's 1995. And we will. You can't fool Mother Nature, and She's saying: keep the data and its management in one secure place, and let the kiddies make pretty, pretty pictures on their phones and pads. Just don't let them mess with the data.
Just stay away from the poppies. You heard me. Don't go into the field.
05 January 2011