The New Yorker ran a great article by John Cassidy last week, discussing what it means to be “poor” in a nation as prosperous as America. The upshot: Cassidy recommends that the US replace the official poverty line—first adopted in 1969 and adjusted for inflation ever since—with a “relative” income standard that tracks how many people earn less than half the median income. I think he makes an interesting case and some important points.
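
For the curious, the arithmetic behind a relative standard like the one Cassidy describes is simple enough to sketch in a few lines of code. This is just an illustration of the measure, and the income figures are made up:

```python
# A back-of-the-envelope sketch of a relative poverty measure:
# the share of people earning less than half the median income.
# The income figures below are hypothetical, for illustration only.

def relative_poverty_rate(incomes):
    """Fraction of people earning less than 50% of the median income."""
    ordered = sorted(incomes)
    n = len(ordered)
    mid = n // 2
    # Median: the middle value, or the average of the two middle values.
    median = ordered[mid] if n % 2 else (ordered[mid - 1] + ordered[mid]) / 2
    threshold = median / 2
    return sum(1 for income in incomes if income < threshold) / n

# Hypothetical annual incomes, in dollars:
incomes = [12_000, 18_000, 35_000, 48_000, 52_000, 75_000, 110_000]
print(relative_poverty_rate(incomes))
# The median is 48,000, so the threshold is 24,000; two of the
# seven people fall below it, for a rate of about 0.29.
```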

As it turns out, poverty is a surprisingly difficult concept to pin down. The official US poverty line—the level of income that separates “poor” from “non-poor”—is pretty arbitrary; it’s more a quirk of history than a useful distinction in the real world. Arguably, the poverty line could be higher (on the grounds that it’s pretty chintzy) or lower (on the grounds that poor people in America might be relatively well-off in some countries). It’s also inflexible: the poverty line is the same all across the country, no matter what the local cost of living is. It’s bad enough being below the poverty line in, say, rural Oregon; but living below the poverty line in Manhattan might leave you with no money for anything besides rent, if that. And the official measure looks only at pre-tax income, so it accounts for neither taxes paid nor some forms of income (such as the Earned Income Tax Credit).

Just as vexing, the economy keeps changing in ways the poverty measure can’t account for. Food, for example, is much cheaper than it used to be, relative to our incomes; and consumer goods are cheaper still. So a poverty wage will buy a lot more food and, say, stereos than it once did. But the costs of medical care and, in some places, housing have risen faster than inflation—both for structural reasons and because (for medical care, at least) quality has improved. These things don’t matter much year to year; short-term changes in the poverty rate usually provide a pretty good gauge of how people at the bottom of the earnings ladder are faring. But long-term trends in poverty rates are less meaningful. It’s hard to compare how “poor” someone at the poverty line is in 2006 with how “poor” a comparable person was in 1980, or 1970. This isn’t to suggest that US poverty now is any better or worse than it used to be—just that it’s different. But the poverty line doesn’t register those differences.

As the article points out, these are just a few of the problems with an “absolute” poverty standard (i.e., one based on a fixed dollar amount, adjusted for inflation each year). Another problem is that, as sociologists have discovered, the social ills that accompany poverty are largely the result of “relative” deprivation—that is, if you feel poorer than the people around you, your health and happiness tend to suffer. Wide income disparities reduce “subjective well-being” and increase the risk of dying from a wide variety of causes, from car crashes to cancer. Nobody knows exactly why relative deprivation causes such effects, but it does. (It may be that we’re hard-wired to feel more stress when we perceive ourselves to be of lower social rank, just as baboons and other primates do.)
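
For contrast with the relative measure sketched above, here’s how an absolute standard’s inflation adjustment works: a fixed dollar figure carried forward by a price index. The threshold and index values below are placeholders, not official CPI figures:

```python
# How an "absolute" standard works: a fixed dollar threshold, scaled
# each year by a price index. The numbers below are placeholders,
# not official statistics.

def inflation_adjusted(base_threshold, base_cpi, current_cpi):
    """Restate a fixed threshold in current dollars."""
    return base_threshold * (current_cpi / base_cpi)

# A hypothetical $3,000 threshold set when the index stood at 37,
# restated for a year when the index stands at 200:
print(round(inflation_adjusted(3_000, 37, 200)))  # 16216
```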

I’m not ready to abandon the US poverty line quite yet. But it would be useful if the US Census Bureau were to track income inequality more rigorously, and report on it as widely as it reports on the poverty rate. That way we’d get information about both absolute and relative deprivation—and a fuller picture of a set of trends that deserve way more attention than they typically get.