So, off we go. That which is new ain't necessarily so. As stated many times before, Your Good Mother knows better, and quants, as often as not, are hired to obscure the truth by ignoring or suppressing basic metrics.
To reiterate: TGR (The Great Recession) was caused by the (willful?) ignorance of quants (financial engineers). As early as 2003, it was clear from available data that the house price / median income ratio had come seriously unstuck. Since housing is not a return (cash) generating allocation of capital (modulo "psychic" income; even if you're a believer in that sort of thing, its cash value is arbitrary and not real), the only support for rising mortgage levels is rising median income. The latter wasn't, and isn't, reality; thus the inflation in house prices had to be corrupt. Quants don't generally have a corruption variable in their models.
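To make the point concrete, here's a minimal sketch, in Python, of the kind of basic metric check that was ignored. The numbers are made up for illustration; the 3.0 long-run norm and the tolerance are assumptions, not measurements.

HISTORICAL_RATIO = 3.0   # assumed long-run house price / median income norm
TOLERANCE = 0.5          # assumed acceptable drift around that norm

def ratio_is_corrupt(median_price, median_income):
    """Flag a price/income ratio that has come unstuck from its norm."""
    ratio = median_price / median_income
    return ratio > HISTORICAL_RATIO + TOLERANCE, ratio

flagged, ratio = ratio_is_corrupt(median_price=300_000, median_income=46_000)
print(f"price/income = {ratio:.1f}, unstuck? {flagged}")   # ~6.5 -> True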
Which brings us to Norris' article. The gist of it is that "agent based modeling" (quotes in the original) offers a better quant, which will identify problems before they morph into a TGR.
But a new assessment from a little-known agency created by the Dodd-Frank law argues that the models used by regulators to assess risk need to be fundamentally changed, and that until they are, they are likely to be useful during normal times, but not when they matter the most.
Risk assessment by quants has been based on time series analysis for a very long time. The problem with time series analysis is the assumption that tomorrow looks mostly like today, and today looks mostly like yesterday. More so than other quant methods, time series analysis *assumes* that all determinants of the metric under study are embedded in that metric's historical data. As a result, when the price/income ratio went parabolic, the quants (and their Suit overseers) said, "goody, goody" when they should have said, "what the fuck is going on?" It was not in either the quants' or the Suits' direct, immediate monetary interest to question the parabola. They all, ignoring Your Good Mother's advice on group behaviour, went off the cliff in a lemming dive.
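A toy illustration of that blind spot, assuming nothing about any particular bank's model: a naive "tomorrow looks like today" forecaster tracks a parabolic price/income path with tiny one-step errors, so judged on those errors it never complains. The real signal is the drift from the historical norm, which that kind of model never looks at. The path below is invented.

import numpy as np

t = np.arange(40)
ratio = 3.0 + 0.004 * t ** 2        # invented price/income path going parabolic
naive_forecast = ratio[:-1]         # forecast for t+1 is simply the value at t
one_step_error = ratio[1:] - naive_forecast
print("largest one-step error:", one_step_error.max())   # small; looks benign
print("drift from the 3.0 norm:", ratio[-1] - 3.0)        # large; the real story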
Mr. Bookstaber argues that conventional ways to measure risk -- known as "value at risk" and stress models -- fail to take into account interactions and feedback effects that can magnify a crisis and turn it into something that has effects far beyond the original development.
And that part is correct. But the argument, and the logic that extends from it, doesn't deal with identifying the underlying cause of TGR; it only attempts to find where the bread crumbs *will go*.
The working paper explains why the Office of Financial Research, which is part of the Treasury Department, has begun research into what is called "agent-based modeling," which tries to analyze what each agent -- in this case each bank or hedge fund -- will do as a situation develops and worsens. That effort is being run by Mr. Bookstaber, a former hedge fund manager and Wall Street risk manager and the author of an influential 2007 book, "A Demon of Our Own Design," that warned of the problems being created on Wall Street.
Agent-based modeling? As we're about to see, it's old wine in new bottles. Kind of like NoSQL being just VSAM.
"Agent-based modeling" has been used in a variety of nonfinancial areas, including traffic congestion and crowd dynamics (it turns out that putting a post in front of an emergency exit can actually improve the flow of people fleeing an emergency and thus save lives). But the modeling has received little attention from economists.
This is where it gets interesting. If you review ABM (why did they end up with the acronym for Anti-Ballistic Missile?) here in the wiki, you can walk a breadcrumb trail. ABM is fundamentally very old, and came from economics, although more recently associated with operations research.
The patient zero of ABM is Leontief's input-output analysis. Leontief built I/O analysis in 1936, well before computers and data were as available as they are today. My senior seminar somehow got Robert Solow to give us a talk on economic growth (that year's topic). In 1958, Solow co-authored "Linear Programming and Economic Analysis". Large, interaction-based models have been part and parcel of economics for decades.
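For the curious, the whole apparatus fits in a few lines. This is a minimal sketch of Leontief's identity with two invented sectors (the coefficients and demands are illustrative, not estimated): gross output x must cover both interindustry use and final demand, so x = (I - A)^-1 d.

import numpy as np

A = np.array([[0.2, 0.3],    # invented interindustry coefficients: input from each
              [0.4, 0.1]])   # row sector per unit of each column sector's output
d = np.array([100.0, 50.0])  # invented final demand by sector
x = np.linalg.solve(np.eye(2) - A, d)   # gross output satisfying x = A x + d
print(x)   # [175. 133.33] -- deterministic: solved, not fitted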
Here is where the article, and Bookstaber, stumble:
Mr. Bookstaber said that he hoped that information from such models, coupled with the additional detailed data the government is now collecting on markets and trading positions, could help regulators spot potential trouble before it happens, as leverage builds up in a particular part of the markets. [my emphasis]
The cause of TGR wasn't leverage; it was the corruption of historic norms. The result of that corruption was an increase in leverage by people who didn't even know they'd taken it on: hamburger flippers living in McMansions. It remains a fact: only increasing median income can propel house prices. With contracted resets, not tied to prime, only those with growing incomes (or generalized inflation, which amounts to the same thing) can finance the growing vig. ABM, as described here at least, won't detect such corruption of markets. It can't.
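Back-of-the-envelope arithmetic, with made-up numbers, on why only income growth can carry the vig once a contracted reset kicks in. The 4% teaser, 8% reset, $300,000 principal, and $46,000 income are all hypothetical, and the reset payment is approximated by re-amortizing the full principal over the full term.

def monthly_payment(principal, annual_rate, months):
    """Standard amortization payment for a fixed rate over a fixed term."""
    r = annual_rate / 12.0
    return principal * r / (1.0 - (1.0 + r) ** -months)

principal = 300_000                             # hypothetical McMansion mortgage
teaser = monthly_payment(principal, 0.04, 360)  # introductory rate
reset = monthly_payment(principal, 0.08, 360)   # contracted reset, not tied to prime
monthly_income = 46_000 / 12                    # hypothetical flat median income
print(f"teaser: {teaser:,.0f}/mo ({teaser / monthly_income:.0%} of income)")
print(f"reset:  {reset:,.0f}/mo ({reset / monthly_income:.0%} of income)")
# With income flat, the payment share jumps from under 40% to nearly 60% of
# gross income; absent rising income, the flipper can't finance the vig.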
Perhaps regulators could then take steps to raise the cost of borrowing in that particular area, rather than use the blunt tool of raising rates throughout the market.
Here we find the anti-Krugman (and humble self, of course). It was the rising interest rates from contractual resets that finally blew up the housing market. Had regulators forced ARMs to reset higher and faster, TGR would have triggered earlier, and might not have been Great. It's the job of economists to know how the economy works. Leontief's I/O model is the basis of contemporary macro-economic modeling.
Here's the thing. In the relational model, the developer specifies a priori which tables relate and which columns in those tables create the relationship. These relations aren't probabilistic; they're deterministic. A similar distinction exists in macro analysis. A traditional I/O model, while derived from real-world data, is deterministic in its input and output relations. Traditional macro models, on the other hand, are probabilistic; R-squared rules! Unless economists, and pundits, identify fundamental metrics and build their models around them, they'll not have any luck predicting. Depressions and recessions have deterministic causes. Now, the loony monetarists tend to blame the victims, just as they have this time (AIG suing the American taxpayer?). Keynesians tend to blame the centers of economic influence, just as they have this time. Historically, the Keynesians have been right more often than not. Volcker be damned.
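The distinction in miniature, with invented data (the 0.8 marginal propensity to consume and the noise level are arbitrary): the I/O-style relation is an identity that holds exactly by construction, while the regression is an estimate whose quality is summarized by R-squared and nothing more.

import numpy as np

rng = np.random.default_rng(0)
# Deterministic: output is defined as intermediate inputs plus value added.
inputs, value_added = 70.0, 30.0
output = inputs + value_added
assert output == inputs + value_added       # holds exactly; no residual, no fit
# Probabilistic: consumption regressed on income from noisy observations.
income = np.linspace(30_000, 120_000, 50)
consumption = 0.8 * income + rng.normal(0, 5_000, income.size)
slope, intercept = np.polyfit(income, consumption, 1)
r_squared = np.corrcoef(income, consumption)[0, 1] ** 2
print(f"fitted slope {slope:.2f}, R^2 {r_squared:.2f}")   # an estimate, not an identity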