Among quant types, there's been an ongoing battle twixt the frequentists (among whom, Your 'Umble Servant) and the Bayesians. To the unfamiliar: frequentists put all faith and credit in the observed data, while Bayesians hold that investigators bring some prior knowledge to the situation, and that this knowledge ought not be wasted, but used in the analysis.
In the words of the Wiki, the point of Bayesian analysis: "...the posterior probability of a random event or an uncertain proposition is the conditional probability that is assigned after the relevant evidence is taken into account."
That "relevant evidence", typically called the prior (again, the Wiki): "A prior is often the purely subjective assessment of an experienced expert."
Since the prior is fuzzy, shall we say, frequentists often use a somewhat derogatory phrase for the process of finding one. So, imagine the chuckle induced when I ran across this turn of phrase from a recent posting: "... a different sampler for sampling from their posteriors." Sometimes, an editor is worth the few ducats they receive.
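For the unfamiliar, the mechanics are mercifully simple in the conjugate case. A minimal sketch in R, with made-up numbers: an expert's hunch becomes a Beta prior, the evidence arrives, and the posterior falls out by arithmetic. And yes, one can literally sample from it.

# Prior: the expert guesses a success rate near 30%; encode it as Beta(3, 7).
a0 <- 3; b0 <- 7
# Evidence: 18 successes in 40 trials.
s <- 18; n <- 40
# Conjugacy: the posterior is Beta(a0 + s, b0 + n - s) = Beta(21, 29).
a1 <- a0 + s
b1 <- b0 + (n - s)
qbeta(c(0.025, 0.975), a1, b1)   # 95% credible interval for the rate
draws <- rbeta(10000, a1, b1)    # sampling from the posterior, as the phrase goes
mean(draws)                      # ~ 21/50 = 0.42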
29 August 2012
Too Much of a Good Thing
There's the great tag line from Scotty on "Star Trek": "I canna make it go faster, Cap'n." Turns out, he never said exactly that. But, as mentioned in a recent musing, the IT/computer world is facing a surfeit of power and a shortage of useful purpose for that power. As the saying goes: It's the Distribution, Stupid.
Back in the Goode Olde Days, Intel and MicroSoft had the symbiotic good fortune to satisfy each other's need for more of what the other had to offer. To the extent that Office owns the Fortune X00 offices, the PC turnover will continue for a while. Whether MicroSoft can find a way to chew up more (parallel) cycles remains to be seen.
But what is now apparent is that the mobile phone world, surprise!!, has entered that Twilight Zone. Here's a new review of an LG phone, the Optimus 4X HD.
Honestly, Tegra 3 hasn't done anything for me that OMAP4 and Exynos 4210 weren't already able to do just fine. So while it's awesome that quad-cores have come to phones, I'm not certain that it'll change your smartphone usage patterns significantly unless you have a specific need for a ton of compute horsepower.
If you read the whole review, there's creepy crawlies afoot for both hardware vendors (parts is parts), software (Google and MicroSoft), and phone assemblers (Apple is just that, too).
The only way the O4X HD significantly changed my usage patterns, actually, were related to battery life. Or, to be more accurate, the lack of it. It's a phone that's pretty brutal on battery, between the quad-core and the 4.7" IPS display, so I wasn't expecting anything great to begin with. But connected to 3G, I was averaging roughly 24 hours of *standby* time. That means screen off, sync off, everything off - just with the phone sitting there doing nothing.
Having just gotten an LG Lucid (one of the freebies on the Verizon upgrade), yeah LG phones seem to be battery shy. And for someone who just wants to make a few damn phone calls...
It makes one wonder: how long can planned obsolescence succeed? Who, with what, will be the next Great Cycle Sink for mobile phones? I recall a scene from a later "X-Files" episode where Mulder is seen walking, wearing his trenchcoat. You hear a phone ring, and Mulder pulls one of these up to his ear.
(Dr. Martin Cooper of Motorola made the first private handheld mobile phone call on a larger prototype model in 1973. This is a reenactment from 2007. -- Wiki)
As the notion of the purpose of a mobile entertainment device morphs, how soon will we be lugging things that big around? Hell, an iPad has nearly the same bulk.
The original Razr, while not always bullet proof, is still the best design for making mobile phone calls.
28 August 2012
Teddy Roosevelt's Revenge
Now that the planet has read up as much as it wants about the Apple-Samsung case, here comes another drop in the ocean.
Does anyone remember the iPhone, as it was called in 2007? I do. It was the dumbest phone I'd ever seen. Bar none. The dominant phone design at the time was the flip phone: narrow and compact, yet able to span the ear-to-mouth distance quite nicely when open and in use. This iPhone drek looked like the then-current iPod chassis. Here's the page for the iPhone; here's the page for the then-current iPod. Notice the similarity? So, what's so innovative about the iPhone? The hardware, but that's not developed by Apple; it comes from the hardware companies.
From the Wiki:
Most touchscreen patents were filed during the 1970s and 1980s and have expired. Touchscreen component manufacturing and product design are no longer encumbered by royalties or legalities with regard to patents and the use of touchscreen-enabled displays is widespread.
The equally silly BlackBerry of 2007 can be seen here.
The dimensions of the BlackBerry: 107 × 51 × 15 mm
The dimensions of the iPhone: 115 × 61 × 11.6 mm
Should RIM have sued? You betcha. So far as I can find, they never did. Too bad. This case makes it clear that the US patent process is so broken, one could patent a ham sandwich. It's a farce. On balance, the USPTO gets money for granting patents; it's biased towards granting an application for a ham sandwich. The details are convoluted, and the AIA makes them more so (IMHO), but the overall effect is that applicants are buying approval. Much of the (left wing?) criticism of the patent regime targets the bias to approve; the rest lies with the law, which empowers silly patentability. Not everyone agrees that "Apple has a patent on a rectangle with round corners"; some split hairs and some wave their hands, but the result is that Apple has such authority. Does it make sense to grant such authority?
Not exactly the Square Deal that the Rough Rider had in mind. Here's a picture of what's happened to copyright authority over the years (from the Wiki entry).
Note that the 1909 (beige) term from Roosevelt's day is about 50% of what it is today; today's is effectively infinite.
26 August 2012
Money Makes the World Go Round
It's been a while since I've vented about the silliness of the DNC in turning down the Triage application, in particular the mapping part. What to my wondering eyes did appear but a US map, half done, in today's Times. Half done, since it displays the input ($$$) but, it seems, not the output. Again, in Triage terms, output would be some measure of effectiveness. It can be done, as Triage demonstrated.
To be fair, it may not even be the Times that did the work. The source cite is "Campaign Media Analysis Group at Kantar Media", but it doesn't say whether Kantar provided only the raw numbers, or both the numbers and the map. Since the Times is well known to use R for its data graphics, my guess is the former. The states are graphed with their blue/red leaning (without indicating whether the lean correlates with the spends), so a time series of the map would be closer to the real-time thrust of Triage (googleVis supports such graph animation, sort of; R-bloggers has a number of posts on point). Even so, bravo.
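For the curious, here's roughly what such a map takes in R. A hedged sketch, with invented numbers, and decidedly not the Times' (or Kantar's) actual workflow:

library(googleVis)
# Hypothetical state-level ad spend, in $ millions.
spend <- data.frame(state = c("Ohio", "Florida", "Virginia", "Colorado"),
                    millions = c(27, 48, 21, 12))
geo <- gvisGeoChart(spend, locationvar = "state", colorvar = "millions",
                    options = list(region = "US",
                                   displayMode = "regions",
                                   resolution = "provinces"))
plot(geo)   # renders the choropleth in a browser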
25 August 2012
Mine Is Bigger Than Yours
Likely the most significant (not so secret) secret among quants is that Size Matters. Big Men on Campus think so, too. Both have about the same result: the innocents get screwed (or the blunter word for which that's the euphemism).
Alzheimer's has turned out to be intractable so far, and this has had a depressing effect on drug trials. Nothing has worked, so far. Well, maybe something does work.
Today's reports about Lilly's failure, not the first in recent times, reveal that Lilly is trying the bigger-sample-size ploy. The snippets are from the NYT.
Lilly said, however, that when the results of the two trials were combined, creating a larger sample size, there was a statistically significant slowing of the decline in cognition.

Being somewhat addicted to cop shows since adolescence, one of the standard lines from same: "A DA can get a ham sandwich indicted, if he wants to." In the world of quants, the analogue goes, "A quant can find a significant difference between ham sandwiches with a large enough sample size." In drug development, clinical trials can be time consuming and expensive, so individual trials tend to be powered to the smallest sample size indicated by previous data, and the assumed magnitude of difference between the drug and either placebo or some standard of care. What Lilly tried was to pool data from separate trials.
Pooled data is acceptable to math stats, but the requirements are pretty strict, principally with regard to variance within and between trials. Here's an historical criticism.
Lilly also went the way of post-hoc sub-group analysis, another not universally accepted gambit.
What this means for the drug's future is still unclear. The effect of a drug on a subgroup of patients in a trial is typically not sufficient grounds for a drug to be approved without further clinical trials involving just that subgroup.

And finally, the coup de grace:
Also left unclear Friday was how big the effect on cognition was -- whether it would be meaningful for patients or merely meet some statistical test.
A biostat can make the slightest difference look colossal, given a large enough sample size. When does a ham sandwich look like a BLT? Measure enough of them, and you'll be able to prove it.
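For the skeptics, a sketch of the ham sandwich principle in base R; the "effect" below is two hundredths of a standard deviation, which is to say, nothing:

set.seed(42)
delta <- 0.02   # a difference no patient would ever notice
# Trial-sized samples: the difference is invisible.
t.test(rnorm(100), rnorm(100, delta))$p.value   # almost surely > 0.05
# A million per arm: the same nothing is now "statistically significant".
t.test(rnorm(1e6), rnorm(1e6, delta))$p.value   # effectively 0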
23 August 2012
Rocky 99
It sure seems like there've been 98 of the darn things. So, on to 99. With ggmap. "What's that?" I hear from the cheap seats. Welllll.
Last month, ggmap was released. The author's web site is here and his presentation is here. Since I'm still on the fence about Triage/mapping for the Philly Folks, ggmap would surely be the vehicle. By all accounts it makes map generation with ggplot2 easier than previously.
The presentation is quite neat. Not only that, but he does compare logic the way I've always preferred:
-95.39681 <= lon & lon <= -95.34188 & 29.73631 <= lat & lat <= 29.78400

That is, left-to-right (in)equality, reading as the number line does. I always get irritated when folks write: x > 5 and the like. No, it ain't.
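A taste, under the usual caveats: the data frame is invented, and the basemap fetch may require an API key these days.

library(ggmap)
library(ggplot2)
# Fetch a basemap centered on Philadelphia.
philly <- get_map(location = "Philadelphia, PA", zoom = 11)
# Hypothetical ward-level points to overlay.
wards <- data.frame(lon = c(-75.16, -75.18, -75.14),
                    lat = c(39.95, 39.97, 39.99),
                    turnout = c(0.55, 0.62, 0.48))
ggmap(philly) +
  geom_point(aes(x = lon, y = lat, colour = turnout), data = wards, size = 4)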
22 August 2012
Marginalized in Gaza
Just came across this piece, and someone in the interTubes who gets marginal cost based pricing: "Freemium is really a construct of the digital age because there's almost no marginal cost to digital goods," said Chris Anderson, author of "Free: The Future of a Radical Price" and editor in chief of Wired magazine.
No shit, Sherlock.
I'll Have a Cheese Steak
Somewhere along the line, I applied to the Obama folks to "volunteer", being as how I have the time and no immediate need for moolah. Philadelphia called, offering me the possibility of doing data entry for one of the low level worker/manager bees -- even though I pointed them to the Triage piece as an example of the sort of support I'm willing to do. For FREE. Doesn't seem promising. Doesn't seem promising for Obama, either, since the organization has clearly gone bureaucratic.
Which got me to thinking: can Triage be implemented at such a granular level? Turns out, shapefiles exist below city level (political wards) for Philadelphia. For example! Boy howdy. I may do a second Triage just to irritate them.
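Reading those wards into R is a few lines. A sketch, with a hypothetical file path; readShapePoly was the era-typical reader (sf::st_read is the modern equivalent):

library(maptools)
# Hypothetical path to a Philadelphia ward shapefile.
wards <- readShapePoly("phila_wards/wards.shp")
plot(wards)   # the ward polygons, ready for shading by spend, turnout, whatever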
21 August 2012
The Canary Comes Home to Roost
Since all but a minuscule fraction of the shares traded on the exchanges every day (zero, on most days) are sold by someone other than the companies whose names are on the certificates (which means none of the money goes to the companies), traders aren't any different from plungers betting 14 Red at a roulette wheel. The buyers think the sellers are idiots for selling so cheap; likewise, in opposition, the sellers.
The au courant cause celebre' is Facebook, and its connected companies. Today's NY Times carries a long story starting at the front (above the fold) of the dead trees Business Day. The article is of interest, but what's of interest to this endeavor is the caption to the picture inside (it is reproduced in a margin in the on-line version, so I don't have to type it out, yeah!): "Facebook has asked for patience as it invests the capital raised in its initial public offering and seeks to increase its revenue." This is interesting for its sheer chutzpah (as in "cheese").
Here are the two numbers that don't add up:
From the last 10-Q: physical assets are $2,105,000
From Yahoo!, market cap: $42,000,000
What are they going to "invest" in?? What's not commonly understood is that software companies generally, and internet based ones specifically, are capital light. Facebook is unusual in that it does own data centers; it need not, and many internet software companies do not. Their asset/cap ratios are even more outlandish.
So, what does it mean to invest in any kind of software company? You hire coders to type on PCs. A PC of sufficient horsepower to do this work can be had for south of $3,000; much less if you build them in-house from parts, which is what Google does. It will be more if the company buys giant monitors, which might cost more than the PC unit itself, but still absurdly cheap. In other words, this ain't Ford. Here are their current numbers:
From the last 10-Q: physical assets are $22,105,000
From Yahoo!, market cap: $37,000,000
There's a reason the "capitalists" of the USofA have abandoned capitalism: it's cheaper to get rich if you don't actually turn fiduciary capital into physical assets. You're pulling "value" out of thin air. Alchemists of old tried to turn lead into gold; today they try to turn code into gold. If you can pull it off beyond fad duration, the gross margin in software can't be beat; Cost of Goods Sold is asymptotically 0. Whether that can continue is contingent on the US dollar remaining New Gold (discussed in previous entries). So long as the Right Wingnuts can control the game (with the help of the Banksters; and there's a fair amount of overlap between the two camps), then money is the commodity. What the Facebooks of the world "produce" is ephemeral, unlike Ford, which makes autos you can drive (if you like what Ford builds). Since this New Economy is largely unbarterable, a stable (if not falling) dollar is essential to the game. The game also depends on folks equating the "psychic utility" one gets from one's Facebook page as from the physical utility of a Mustang. Good luck with that. To steal from "The Graduate", one word -- MySpace.
To deal with the question raised by the photo caption: there's not much that they can do which supports organic growth of Facebook. As is well known by now, most of the developed world has gone to Facebook about as much as it can or will. There are those, humble self included, who've figured out that wasting time and relinquishing privacy to a rapacious kid isn't such a great idea. Facebook is just another in a long line of advert pushers, none of whom, apparently, ever considered that a more fashionable form of advert pushing might ever come along. It does, and will. Will dollar a day indentured workers in the rest of the world (assuming they have access to a PC, internet, and/or smartphone) have sufficient money to spend on Facebook's adverts' wares? I'll bet: nope.
Facebook could hire more coders, but the "investment" consumed by such is about $5,000 per coder. According to the 10-Q, they've got about $10 billion to spend. They could hire every Indian coder alive, and have money left over. And, what would they produce? MicroSoft, very good at the software game, has had only one money spinner, Office (which it first built on contract to Apple, by the way). Facebook had one neat idea. Odds, historically, that such blinkered thinkers could have another neat idea are teeny.
Adding data center support per user comes out of that pile, too. But user growth is slowing, perhaps with ABS engaged. I suspect they'll "grow" by buying up other companies, such as Instagram. Such growth can be attached to the buying company, but yields no growth from a macro-economic point of view. Most often, jobs are lost when companies consume each other. You scarf up a competitor, and either shut it down, or consolidate with your folks taking over for the non-worker bees. In all, for the economy as a whole, a net bad. And, as MicroSoft just demonstrated, buying up a competitor (or synergistic function) isn't going to work, just because it was supposed to.
20 August 2012
Your Face in The Crowd
Back, lo these many decades ago, when WinWord began to displace the DOS word processors, mainly WordPerfect and MultiMate, studies (none of which I've managed to find a cite to; will update if ever I do find any) were published with findings not commodious to MicroSoft. Or to employers with functioning brain stems. What the studies found was that the GUI process had changed the focus of worker bees from content to form; eye candy was more fun to play with than pondering content. It was my opinion at the time that the reason all of this was ignored was a kind of network effect. Win 3.0 and Office were fairly easy to clone to a user's home computer, thus alleviating the training needs of companies; employees and potential employees could learn the rudiments at no, or at least minimal, cost to the company. The dominant character based word processor (centralized on a Wang mini) was, you guessed it, Wang Word Processor. MultiMate was a near clone of Wang Word Processor for DOS.
Existing GUI operating systems (and applications), Mac being the best known in the consumer venue, were specifically aimed at content-light, form-heavy fields, such as advertising. Artsy, fartsy stuff, as was then the saying.
So, lo these decades later, and some folks continue to wonder about strewing candy before swine. A research outfit has done a study. Sacre' bleu!!!! Facebook use chews up ludicrous amounts of employee time. That study is about two years old, so I'll take a wild ass guess that the wastage is far greater now. What's hilarious is that there are anecdotes on message boards, and job boards, and every other kind of board, to wit: if you don't have a "meaningful" presence in social media, especially Facebook, you'll not be considered for employment. Who, with at least a functioning brain stem, would want to work for such morons? Oh, yeah. Retards (don't get your panties in a bunch, these are truly mentally challenged folks) who spend their day checking the "status" of their 500 closest friends.
This is worse than the dot-com bomb. At least then the software being purveyed had some commercial value. Some have even survived. This Social Media thingee is unalloyed decadence.
17 August 2012
You Should Not Hier Me
Once again, into the reading list. For, at least, the second time it's Wilkinson's "The Grammar of Graphics". A wonderful book, if only as a book. Real binding (not quite Smyth sewn, but as close as it gets these days), "With 410 illustrations, 319 in Full Color".
But then, on page 3 (yes, that fast) he conflates objects with design patterns. What's worse, in the preceding paragraph, he cites Meyer's OO book first among equals. Then, the last sentence of the section on OOD, thus: "Because of this generalizing tendency, object-oriented systems are intrinsically hierarchical." Well, no. (I finally understood how OOD really is supposed to work, and by supposed to I mean such that the exercise maximizes the benefits while minimizing the pain, when I read Allen Holub's pieces in the mid-to-late 90's. Near as I can tell, he's subsumed himself to the Borg since then, writing ActionClass and DataClass like all the other morons. But to see what OOD is all about, read up his Bank of Allen. It is truly eye opening; more so even than Meyer, who was still stuck in bifurcation mode a bit.)
Well, no they're not. In fact, not even family trees are hierarchical, nor are organization charts. The real world is inherently relational. Back in the Good Times of 30 BC, when a man could divorce, kill, or otherwise dispose of a bad wife, patrilinealism was probably 99.44% likely. Not so today, what with serial polygamy (and the real sort in certain states). In organizations, matrix management has been around for decades, so the strict super/parent meme hasn't been true there, either. Coders spend a good deal of time forcing the real world into their inaccurate framework. I've long since forgotten the cite (and I've kept looking), but a wise person once said, "there's more hierarchies in code than in the real world".
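The point is easy to make concrete, even in R's humble data frames: the moment a child has two parents, the "tree" is a many-to-many relation, which a strict hierarchy cannot represent without duplication. Made-up names, obviously.

# A family "tree" that isn't a tree: each child has two parents.
parent_of <- data.frame(parent = c("Anne", "Bob", "Anne", "Carl"),
                        child  = c("Dee", "Dee", "Eli", "Eli"))
# Relational navigation: no single-parent assumption required.
subset(parent_of, child == "Dee")$parent   # "Anne" "Bob"
# Force this into a hierarchy and Dee must appear under both Anne and Bob:
# duplication, which is exactly what normalization exists to remove.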
I've been pondering for years why IBM made IMS when IDMS (the network model database) was there for the asking. Why did IBM shunt Codd aside when the time came to write the data language for the RM, and install an IMS cowboy? Why did xml flummox so many? Why this inherent belief in what is obviously not true, the universality of hierarchy? Why do so many Emperors strut around in new clothes?
At this point, the answer appears two-fold. On one hand, coders don't want to be made redundant by the database developers, since doing things relationally moves the logic into the database where it belongs. Nor, for bureaucratic reasons (the one with the most worker bees holds the hammer) do the coders' supervisors. Thus a regime which keeps the data as dumb as possible is preferred.
The second motivation is less evil, perhaps. To implement hierarchy, one need only write some client side code, since there's no interaction outside of this construct; anyone who's battled with an XML Schema (xsd) knows how much cruft results from taking naturally relational data and twisting it into a "document". What happens with (organically normalized) relational data? Well, one needs a data manager to resolve the relationships. A God Process, which OO folks deny is ever, ever used in OO applications. They blithely ignore the fact that 99.44% of OO applications are written under the control of A Framework; that dang God Process again.
So there you have it, the atheists win.
16 August 2012
Young Man You've Got Industrial Disease [UPDATE]
While Mr. Market bumps along a flatline of local maxima (Dow/13,000 and NASDAQ/3,000), SSD and social shares aren't all doing so well. OCZ and STEC took a stiletto in the neck. Fusion-io hit a home run. Facebook, Zynga, and Groupon self-immolated. What does it all mean, Mr. Natural?
In the cases of OCZ and STEC, two different problems. Since there are essentially two market segments for SSD, and these two companies represent the primary public face of each, maybe the Yellow Brick Road to 5NF RDBMS just got carpet bombed. Therein lies a tale.
OCZ is attempting to saturate the market with its consumer focused SSD in the hopes, one infers, that all the other players will be driven out in despair, and then OCZ can dictate prices and finally make a few kopeks selling their parts. Hasn't happened yet, and there are too many integrated companies who can much better survive a war of attrition. Samsung and Intel come to mind. STEC is caught up in being kicked to the curb by EMC (at least, perhaps others too; I well remember the CEO crowing a couple of years ago that he wasn't concerned about market, since he had signed up the 6 OEMs he wanted -- hmm, I now wonder how tasty that crow is today) and a CEO cancer on the body corporate. Fusion-io, on the other hand, exploded like a bottle of Pabst in a vat of liquid nitrogen. This was more or less predictable.
The implications, so far, are that PCIe form factor is doing well in the enterprise. While there are chassis which morph PCIe storage into the appearance of shared storage, this approach bends the points of each. To the extent that distributed servers with local storage become significant, database designers embracing SSD have to promote the diminished footprint advantage of high normal form. All you need can fit in that one box.
EMC acquired XtremIO recently. This is not the first attempt to chassis up PCIe cards. Their recent quarterly didn't mention numbers specific to SSD/flash systems' sales, but did say: "The value of Flash is resonating with our customers, and in Q2, we shipped a record amount of Flash capacity." So, along with the Sun/Oracle flash appliance, are we headed toward a fragmented server world? For Google-like environments, perhaps. For transaction oriented applications, the sorts that interest Humble Self, it doesn't matter. "Eventual consistency" is an oxymoron whenever data matters. You can have distributed/federated databases which manage ACID, or you can have dispersed files and a TPM of your own construction; either way, ACID happens. Given that relational database engines have been in development for nigh onto 40 years by folks who know how to do that, building your own is folly. But, if you want to pay a horde of coders, that's a billable-hours sink that would be difficult to beat.
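For the doubters, here's what "the engine does it for you" looks like from R, via DBI and an in-memory SQLite. A minimal sketch, not an endorsement of SQLite for the enterprise:

library(DBI)
con <- dbConnect(RSQLite::SQLite(), ":memory:")
dbExecute(con, "CREATE TABLE acct (id TEXT PRIMARY KEY, bal REAL)")
dbExecute(con, "INSERT INTO acct VALUES ('a', 100), ('b', 0)")
# The transfer is one atomic unit: both legs commit, or neither does.
dbBegin(con)
dbExecute(con, "UPDATE acct SET bal = bal - 40 WHERE id = 'a'")
dbExecute(con, "UPDATE acct SET bal = bal + 40 WHERE id = 'b'")
dbCommit(con)
dbGetQuery(con, "SELECT * FROM acct")
dbDisconnect(con)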
[UPDATE]
This just in: IBM has scarfed up Texas Memory. Since TMS is private, retail plungers don't get any benefit. OTOH, that IBM would take TMS, a most serious (in all senses of the word) player rather than one of the usual suspects, is very much a Good Thing. If SSD didn't look meaningful before, boy howdy, it does now.
The other S-word, social stuff, is spiraling in a coriolis sort of way. And this matters because? To the extent that 1) these corporations represent a beachhead for adoption of SSD (and, one notes, not the application for which SSD are best suited, alas) and 2) dey be screwin da pooch; then the legitimacy, and marketability, of enterprise SSD comes under pressure. While I have no personal use or interest in any of them, for SSD to become the SOP for storage, they do need to succeed.
The Social Disease, once again, belies the assumption that the cheap way to get rich is to connive a program on your $1,000 laptop. The literature is littered with studies which have concluded that part of the demise of American industry is the get rich quick pipe dream of software. Too many otherwise smart folks, who could build real physical stuff, have listened to the siren's song and gone off to a flier at getting rich quick. The Great Recession was one result of this emigration; in that case financial "services". But that's a discussion for a different venue. You can find it if you want to.
05 August 2012
Return to Sender
For some time now, it's been clear that the old rules of investment are, well, old. Part of the shift was made by moneyed interests who seek only to make money, rather than output of goods to sell. These folks "invest" in fiduciary instruments (bonds, and to a lesser extent, stocks) rather than in plant and equipment. The siren song of the post-industrial, service economy. Other than hair cuts, folks directly buy few services, and virtually none of those which exemplify the new economy of financial manipulation.
The second factor is Moore's Law. Long asserted to be a good thing, it is variously voiced, but most commonly as: the computing power of the integrated circuit doubles every 18 (or 24) months. Not exactly what Moore said, but close enough for government work. What's been going on for the last decade or so reveals that Moore might not be such a great thing, after all.
If we return to 2002, we find the aftermath of 9/11, and Alan Greenspan's attempt to avert a depression on Dubya's watch by continually lowering Fed rates. It worked, sort of. Much evidence exists that this effort was the Patient 0 of The Great Recession. The wealthy class didn't like earning 2 or 3 percent, risk free of course. They demanded more return for the pleasure of not using money they couldn't use anyway. Corporate bonds actually involved risk, although at much higher rates. The wealthy class always has the option of holding corporate bonds, but wasn't much interested. So now, with Treasuries paying so little, the wealthy class demanded a new source of high paying, risk free, assets.
The mortgage industry was happy to oblige. Traditionally, for most of the post-World War II period, home mortgage banking was boring and simple. Your income, minus short term debt (credit cards, generally), defined the size of your mortgage. Simple. You qualified for a $100,000 30 year fixed rate (whatever it was that week). Builders built houses which could be priced to the mass of incomes in the area. Such data has been collected for SMSAs (Standard Metropolitan Statistical Areas, now nym-ed as MSAs) since the 1950 census. The result is: builders, knowing the median income for the SMSA and the prevailing rules and regs for mortgages, know the exact price point they must build toward.
Now, with Fed rates plummeting, builders saw an opportunity. The carrying cost of a house is principal and interest. Thus, price and interest rate tend to track inversely when builders have sufficient time to adjust. Why leave all that money on the table? They didn't. At first, there were sufficient mortgages to satisfy the wealthy class's demand for high return, risk free assets. As time went on, mortgage companies (not banks; they came to the game later) saw an opportunity to draw in those who hadn't been qualified. The mortgage companies invented the ever more exotic mortgages. The purpose of these mortgages was to allow ever lower income households to buy the ever increasingly priced houses. Builders got the money for the McMansions they produced. Everybody else ended up holding the bag.
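The arithmetic behind price and rate tracking inversely is just the annuity formula. A sketch in base R: hold the monthly payment fixed and watch the supportable price climb as the rate falls.

# Monthly payment carrying price P at a given annual rate over 30 years.
pay <- function(P, rate, years = 30) {
  r <- rate / 12
  n <- years * 12
  P * r / (1 - (1 + r)^-n)
}
m <- pay(100000, 0.08)   # ~ $734/month at 8%
# What price does that same $734/month carry at 4%?
uniroot(function(P) pay(P, 0.04) - m, c(50000, 500000))$root   # ~ $153,700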
The result of this part of the tale: the wealthy class's demand for high return, risk free assets led to the creation of sufficient output to meet the demand. Well, superficially.
The second half of the tale derives from Moore. On the first hand, we have the wealthy class seeking high guaranteed returns. On the other hand, we find Moore kicking the crap out of returns on physical investment. This fact was brought to hand with a couple of news stories over the last few days.
Here's one about the Samsung Galaxy 3. Apple has had the current iPhone on the market for less than 1 year (4 October 2011). There are countless more data points. Consider my pet area, SSD. OCZ keeps churning out ever newer models, all the while fire sale-ing existing models, within a period of a few months. What's the wealthy class to do? Corporate assets no longer enjoy the longevity of steel mills or factories (American capitalists decided that fiduciary assets were more fun).
The second harbinger is the Knight Capital self-immolation. Here we see the same compressed amortization timeline at work. Knight had built some new algorithmic software to take advantage of rule changes. In order to lengthen its return period, by only a few days it seems, they rushed the code into production. With predictable results.
Trading firms, market makers, brokers, investment banks, and exchanges and other trading venues are linked in a network of complex computer systems that compete to execute trades as fast as possible. That competition, combined with the never-ending array of new rules, forces market participants to constantly improve their systems. (my emphasis)
As more of American capital is devoted to computer based efforts, whether goods or services, amortization periods have plummeted. There really isn't a long term, any longer. Investment has become a quick buck endeavor, not a long term growth prospect. This will not have a pleasant ending.