Here's the public release:
The increases of 5.3 percent and 5.4 percent for family and nonfamily households were not statistically different.

In fact, if you read through the various sections, that sentence repeats and repeats and repeats...
A question about stat sig makes it all a bit worse:
The Census Bureau uses 90 percent confidence intervals and 0.10 levels of significance to determine statistical validity. Consult standard statistical textbooks for alternative criteria.
So, right off the bat, we've got a squishy standard for calling differences real. By the usually accepted significance level, .05, it wouldn't even be close.
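To make the .10 versus .05 point concrete, here's a quick sketch. The z-score is made up for illustration (the actual standard errors aren't in front of us); it shows how a difference can clear the Bureau's 90 percent bar while failing the conventional 95 percent one:

```python
import math

def two_sided_p(z):
    """Two-sided p-value for a z statistic under the standard normal.

    erfc(|z|/sqrt(2)) equals 2 * (1 - Phi(|z|)), the two-tailed area.
    """
    return math.erfc(abs(z) / math.sqrt(2))

# Hypothetical difference-of-estimates z-score -- illustrative only
z = 1.7
p = two_sided_p(z)
print(f"p = {p:.4f}")                    # p = 0.0891
print("significant at .10:", p < 0.10)   # True  (the Bureau's standard)
print("significant at .05:", p < 0.05)   # False (the usual standard)
```

Same number, two different verdicts, depending only on where you draw the line.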
It gets better. There's a link to a spreadsheet with some of the underlying numbers. Look at those, and the claim is that most are stat sig at .10!
If you follow the first quote link (page 6):
The effect of nonresponse cannot be measured directly, but one indication of its potential effect is the nonresponse rate. The basic CPS household-level nonresponse rate was 14.9 percent. The household-level CPS ASEC nonresponse rate was an additional 15.8 percent. These two nonresponse rates lead to a combined supplement nonresponse rate of 28.3 percent.
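Note that the two rates combine multiplicatively, not additively: the ASEC rate applies only to households that already responded to the basic CPS. A quick check of the arithmetic using the quoted figures:

```python
basic = 0.149   # basic CPS household-level nonresponse rate
asec = 0.158    # additional CPS ASEC nonresponse rate, among CPS responders

# A household answers the supplement only if it clears both hurdles,
# so the combined nonresponse rate is one minus the joint response rate.
combined = 1 - (1 - basic) * (1 - asec)
print(f"combined supplement nonresponse rate: {combined:.1%}")  # 28.3%
```

Which reproduces the 28.3 percent in the document, so at least that piece of the bookkeeping checks out.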
Following on, one finds the description of imputing missing responses:
Multiple imputation is a general approach to analyzing data with missing values. We can treat the traditional sample as if the responses were missing for income sources targeted by the redesign and use multiple imputation to generate plausible responses. We use a flexible semiparametric imputation technique to place individuals into strata along two dimensions: 1) their probability of income recipiency and 2) their expected income conditional on recipiency for each income source.

Much of that document is devoted to describing how this is done.
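For a feel of what stratified imputation looks like in practice, here's a toy hot-deck sketch along the two dimensions the document names. Everything here -- the field names, the synthetic records, the cut points -- is invented for illustration; the Bureau's actual semiparametric procedure is far more elaborate.

```python
import random

def stratum(score, cuts):
    """Index of the bin a score falls into, given sorted cut points."""
    return sum(score > c for c in cuts)

def hot_deck_impute(records, p_cuts, y_cuts):
    """Fill missing incomes by drawing from observed donors in the same
    (recipiency-probability, expected-income) stratum cell."""
    donors = {}
    for r in records:
        if r["income"] is not None:
            cell = (stratum(r["p_hat"], p_cuts), stratum(r["y_hat"], y_cuts))
            donors.setdefault(cell, []).append(r["income"])
    for r in records:
        if r["income"] is None:
            cell = (stratum(r["p_hat"], p_cuts), stratum(r["y_hat"], y_cuts))
            r["income"] = random.choice(donors[cell])  # assumes cell has donors
    return records

# Synthetic example: in real life p_hat and y_hat come from models
# fit on respondents, not from thin air.
random.seed(0)
records = [
    {"p_hat": 0.20, "y_hat": 15000, "income": 12000},
    {"p_hat": 0.30, "y_hat": 18000, "income": 14000},
    {"p_hat": 0.25, "y_hat": 16000, "income": None},   # low/low cell
    {"p_hat": 0.80, "y_hat": 60000, "income": 55000},
    {"p_hat": 0.90, "y_hat": 70000, "income": 68000},
    {"p_hat": 0.85, "y_hat": 65000, "income": None},   # high/high cell
]
hot_deck_impute(records, p_cuts=[0.5], y_cuts=[40000])
print([r["income"] for r in records])
```

Multiple imputation proper would repeat the random draw several times to produce several completed datasets, so the extra uncertainty from imputing shows up in the final standard errors. The point stands either way: the filled-in values come from a model of similar households, not from anyone's actual tax forms.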
All surveys must deal with nonresponse, so those in the business wouldn't find such a process out of the ordinary. For civilians, not so much. As I said, most likely believe that all these numbers are full measures, probably straight from the IRS. Would that it were so. Without the raw data, it's not possible (well, not for humble self) to parse out whether the wonderful increase was an artifact of imputation. But it could be.
So, should we guess that the Kenyan President ordered the minions at Census and BLS to put a heavy thumb on the scales? No. Having done data and stats work for the government (not public facing, though), I can report there's a good deal of resistance to the corner-office dudes telling us what to do. Case in point: Farkas at FDA resigned rather than be party to the data fuck up that was eteplirsen. There has been a good deal of fiddling with the sampling underlying the surveys (yes, more than one survey feeds these data), all described at length in the background docs. Is all of this fiddling enough to turn nearly a decade of the 1% getting richer and the 99% having kids the other way 'round? Could be. Certainly a question for those with the data to answer.