In Sickness and in Health (Insurance)

The Affordable Care Act (ACA) has proven to be one of the most controversial and interesting policy innovations we’ve seen. The ACA requires Americans to buy health insurance, with a tax penalty for those who don’t voluntarily buy in. The question of the proper role of government in the market for health care has many angles. One is the causal effect of health insurance on health. The United States spends more of its GDP on health care than do other developed nations, yet Americans are surprisingly unhealthy. For example, Americans are more likely to be overweight and die sooner than their Canadian cousins, who spend only about two-thirds as much on care. America is also unusual among developed countries in having no universal health insurance scheme. Perhaps there’s a causal connection here.

Elderly Americans are covered by a federal program called Medicare, while some poor Americans (including most single mothers, their children, and many other poor children) are covered by Medicaid. Many of the working, prime-age poor, however, have long been uninsured. In fact, many uninsured Americans have chosen not to participate in an employer-provided insurance plan.1 These workers, perhaps correctly, count on hospital emergency departments, which cannot turn them away, to address their health-care needs. But the emergency department might not be the best place to treat, say, the flu, or to manage chronic conditions like diabetes and hypertension that are so pervasive among poor Americans. The emergency department is not required to provide long-term care. It therefore stands to reason that government-mandated health insurance might yield a health dividend. The push for subsidized universal health insurance stems in part from the belief that it does.

The ceteris paribus question in this context contrasts the health of someone with insurance coverage to the health of the same person were they without insurance (other than an emergency department backstop). This contrast highlights a fundamental empirical conundrum: people are either insured or not. We don’t get to see them both ways, at least not at the same time in exactly the same circumstances.

In his celebrated poem, “The Road Not Taken,” Robert Frost used the metaphor of a crossroads to describe the causal effects of personal choice:

Two roads diverged in a yellow wood,
And sorry I could not travel both
And be one traveler, long I stood
And looked down one as far as I could
To where it bent in the undergrowth;

Frost’s traveler concludes:

Two roads diverged in a wood, and I—
I took the one less traveled by,
And that has made all the difference.

The traveler claims his choice has mattered, but, being only one person, he can’t be sure. A later trip or a report by other travelers won’t nail it down for him, either. Our narrator might be older and wiser the second time around, while other travelers might have different experiences on the same road. So it is with any choice, including those related to health insurance: would uninsured men with heart disease be disease-free if they had insurance? In the novel Light Years, James Salter’s irresolute narrator observes: “Acts demolish their alternatives, that is the paradox.” We can’t know what lies at the end of the road not taken.

We can’t know, but evidence can be brought to bear on the question. This chapter takes you through some of the evidence related to paths involving health insurance. The starting point is the National Health Interview Survey (NHIS), an annual survey of the U.S. population with detailed information on health and health insurance. Among many other things, the NHIS asks: “Would you say your health in general is excellent, very good, good, fair, or poor?” We used this question to code an index that assigns 5 to excellent health and 1 to poor health in a sample of married 2009 NHIS respondents who may or may not be insured.2 This index is our outcome: a measure we’re interested in studying. The causal relation of interest here is determined by a variable that indicates coverage by private health insurance. We call this variable the treatment, borrowing from the literature on medical trials, although the treatments we’re interested in need not be medical treatments like drugs or surgery. In this context, those with insurance can be thought of as the treatment group; those without insurance make up the comparison or control group. A good control group reveals the fate of the treated in a counterfactual world where they are not treated.

The first row of Table 1.1 compares the average health index of insured and uninsured Americans, with statistics tabulated separately for husbands and wives.3 Those with health insurance are indeed healthier than those without, a gap of about .3 in the index for men and .4 in the index for women. These are large differences when measured against the standard deviation of the health index, which is about 1. (Standard deviations, reported in square brackets in Table 1.1, measure variability in data. The chapter appendix reviews the relevant formula.) These large gaps might be the health dividend we’re looking for.
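For readers who want to see the arithmetic behind a comparison like this, here is a minimal Python sketch. It assumes two NumPy arrays, `health` (the 1-to-5 index) and `insured` (a 0/1 indicator); the variable names and the fake data at the bottom are ours, not the NHIS's. The standard-error formula is the usual one for a difference in means, of the kind reviewed in the chapter appendix.

```python
import numpy as np

def mean_gap(health, insured):
    """Compare average health for insured and uninsured respondents.

    health  : array of 1-to-5 health-index values
    insured : array of 0/1 insurance indicators
    """
    y1 = health[insured == 1]          # outcomes for the insured group
    y0 = health[insured == 0]          # outcomes for the uninsured group
    gap = y1.mean() - y0.mean()        # difference in group means
    # standard error of the difference in means
    se = np.sqrt(y1.var(ddof=1) / len(y1) + y0.var(ddof=1) / len(y0))
    return y1.mean(), y0.mean(), gap, se

# Illustrative fake data (insured respondents drawn to be a bit healthier)
rng = np.random.default_rng(0)
insured = rng.integers(0, 2, size=10_000)
health = np.clip(np.round(3.5 + 0.3 * insured + rng.normal(0, 1, 10_000)), 1, 5)
print(mean_gap(health, insured))
```

Run on the actual NHIS extract instead of fake data, a calculation like this is what produces the group means and the differences (with standard errors) reported in Table 1.1.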

Fruitless and Fruitful Comparisons

Simple comparisons, such as those at the top of Table 1.1, are often cited as evidence of causal effects. More often than not, however, such comparisons are misleading. Once again the problem is other things equal, or lack thereof. Comparisons of people with and without health insurance are not apples to apples; such contrasts are apples to oranges, or worse.

Among other differences, those with health insurance are better educated, have higher income, and are more likely to be working than the uninsured. This can be seen in panel B of Table 1.1, which reports the average characteristics of NHIS respondents who do and don’t have health insurance. Many of the differences in the table are large (for example, a nearly 3-year schooling gap); most are statistically precise enough to rule out the hypothesis that these discrepancies are merely chance findings (see the chapter appendix for a refresher on statistical significance). It won’t surprise you to learn that most variables tabulated here are highly correlated with health as well as with health insurance status. More-educated people, for example, tend to be healthier as well as being overrepresented in the insured group. This may be because more-educated people exercise more, smoke less, and are more likely to wear seat belts. It stands to reason that the difference in health between insured and uninsured NHIS respondents at least partly reflects the extra schooling of the insured.

TABLE 1.1
Health and demographic characteristics of insured and uninsured couples in the NHIS

                              Husbands                           Wives
                   Some HI     No HI    Difference    Some HI     No HI    Difference
                     (1)        (2)        (3)          (4)        (5)        (6)
A. Health
Health index        4.01       3.70        .31         4.02       3.62        .39
                    [.93]     [1.01]      (.03)        [.92]     [1.01]      (.04)
B. Characteristics
Nonwhite             .16        .17       -.01          .15        .17       -.02
                                          (.01)                              (.01)
Age                43.98      41.26       2.71        42.24      39.62       2.62
                                          (.29)                              (.30)
Education          14.31      11.56       2.74        14.44      11.80       2.64
                                          (.10)                              (.11)
Family size         3.50       3.98       -.47         3.49       3.93       -.43
                                          (.05)                              (.05)
Employed             .92        .85        .07          .77        .56        .21
                                          (.01)                              (.02)
Family income    106,467     45,656     60,810      106,212     46,385     59,828
                                        (1,355)                            (1,406)
Sample size        8,114      1,281                   8,264      1,131

Notes: This table reports average characteristics for insured and uninsured married couples in the 2009 National Health Interview Survey (NHIS). Columns (1), (2), (4), and (5) show average characteristics of the group of individuals specified by the column heading. Columns (3) and (6) report the difference between the average characteristic for individuals with and without health insurance (HI). Standard deviations are in brackets; standard errors are reported in parentheses.

Our effort to understand the causal connection between insurance and health is aided by fleshing out Frost’s two-roads metaphor. We use the letter Y as shorthand for health, the outcome variable of interest. To make it clear when we’re talking about specific people, we use subscripts as a stand-in for names: Yi is the health of individual i. The outcome Yi is recorded in our data. But, facing the choice of whether to pay for health insurance, person i has two potential outcomes, only one of which is observed. To distinguish one potential outcome from another, we add a second subscript: The road taken without health insurance leads to Y0i (read this as “y-zero-i”) for person i, while the road with health insurance leads to Y1i (read this as “y-one-i”) for person i. Potential outcomes lie at the end of each road one might take. The causal effect of insurance on health is the difference between them, written Y1i − Y0i.

To nail this down further, consider the story of visiting Massachusetts Institute of Technology (MIT) student Khuzdar Khalat, recently arrived from Kazakhstan. Kazakhstan has a national health insurance system that covers all its citizens automatically (though you wouldn’t go there just for the health insurance). Arriving in Cambridge, Massachusetts, Khuzdar is surprised to learn that MIT students must decide whether to opt in to the university’s health insurance plan, for which MIT levies a hefty fee. Upon reflection, Khuzdar judges the MIT insurance worth paying for, since he fears upper respiratory infections in chilly New England. Let’s say that Y0i = 3 and Y1i = 4 for i = Khuzdar. For him, the causal effect of insurance is one step up on the NHIS scale:

Y1,Khuzdar − Y0,Khuzdar = 1.

Table 1.2 summarizes this information.

TABLE 1.2
Outcomes and treatments for Khuzdar and Maria

                                                Khuzdar Khalat    Maria Moreno
Potential outcome without insurance: Y0i               3                5
Potential outcome with insurance: Y1i                  4                5
Treatment (insurance status chosen): Di                1                0
Actual health outcome: Yi                              4                5
Treatment effect: Y1i − Y0i                            1                0

It’s worth emphasizing that Table 1.2 is an imaginary table: some of the information it describes must remain hidden. Khuzdar will either buy insurance, revealing his value of Y1i, or he won’t, in which case his Y0i is revealed. Khuzdar has walked many a long and dusty road in Kazakhstan, but even he cannot be sure what lies at the end of those not taken.

Maria Moreno is also coming to MIT this year; she hails from Chile’s Andean highlands. Little concerned by Boston winters, hearty Maria is not the type to fall sick easily. She therefore passes up the MIT insurance, planning to use her money for travel instead. Because Maria has Y0,Maria = Y1,Maria = 5, the causal effect of insurance on her health is

Y1,Maria − Y0,Maria = 0.

Maria’s numbers likewise appear in Table 1.2.

Since Khuzdar and Maria make different insurance choices, they offer an interesting comparison. Khuzdar’s health is YKhuzdar = Y1,Khuzdar = 4, while Maria’s is YMaria = Y0,Maria = 5. The difference between them is

YKhuzdar − YMaria = −1.

Taken at face value, this quantity—which we observe—suggests Khuzdar’s decision to buy insurance is counterproductive. His MIT insurance coverage notwithstanding, insured Khuzdar’s health is worse than uninsured Maria’s.

In fact, the comparison between frail Khuzdar and hearty Maria tells us little about the causal effects of their choices. This can be seen by linking observed and potential outcomes as follows:

YKhuzdar − YMaria = Y1,Khuzdar − Y0,Maria
                  = (Y1,Khuzdar − Y0,Khuzdar) + (Y0,Khuzdar − Y0,Maria).

The second line in this equation is derived by adding and subtracting Y0,Khuzdar, thereby generating two hidden comparisons that determine the one we see. The first comparison, Y1,Khuzdar − Y0,Khuzdar, is the causal effect of health insurance on Khuzdar, which is equal to 1. The second, Y0,Khuzdar − Y0,Maria, is the difference between the two students’ health status were both to decide against insurance. This term, equal to −2, reflects Khuzdar’s relative frailty. In the context of our effort to uncover causal effects, the lack of comparability captured by the second term is called selection bias.
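To make the algebra concrete, the short sketch below (our own illustration, not anything from the NHIS) encodes the Table 1.2 numbers and verifies that the observed Khuzdar-Maria gap splits into a causal effect plus selection bias.

```python
# Potential outcomes from Table 1.2 (an imaginary table: both roads visible)
y0 = {"Khuzdar": 3, "Maria": 5}   # health without insurance
y1 = {"Khuzdar": 4, "Maria": 5}   # health with insurance
d  = {"Khuzdar": 1, "Maria": 0}   # insurance actually chosen

# Observed outcomes: each person reveals only the road taken
y = {i: y1[i] if d[i] == 1 else y0[i] for i in y0}

observed_gap   = y["Khuzdar"] - y["Maria"]        # = -1
causal_effect  = y1["Khuzdar"] - y0["Khuzdar"]    # = 1
selection_bias = y0["Khuzdar"] - y0["Maria"]      # = -2

assert observed_gap == causal_effect + selection_bias
print(observed_gap, causal_effect, selection_bias)
```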


You might think that selection bias has something to do with our focus on particular individuals instead of on groups, where, perhaps, extraneous differences can be expected to “average out.” But the difficult problem of selection bias carries over to comparisons of groups, though, instead of individual causal effects, our attention shifts to average causal effects. In a group of n people, average causal effects are written Avgn[Y1i − Y0i], where averaging is done in the usual way (that is, we sum individual outcomes and divide by n):

Avgn[Y1i − Y0i] = (1/n) Σ_{i=1}^{n} [Y1i − Y0i]
                = (1/n) Σ_{i=1}^{n} Y1i − (1/n) Σ_{i=1}^{n} Y0i.   (1.1)

The symbol Σ_{i=1}^{n} indicates a sum over everyone from i = 1 to n, where n is the size of the group over which we are averaging. Note that both summations in equation (1.1) are taken over everybody in the group of interest. The average causal effect of health insurance compares average health in hypothetical scenarios where everybody in the group does and does not have health insurance. As a computational matter, this is the average of individual causal effects like Y1,Khuzdar − Y0,Khuzdar and Y1,Maria − Y0,Maria for each student in our data.
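A two-line check of equation (1.1), again using the imaginary Table 1.2 numbers in which both potential outcomes are visible, shows that averaging individual effects and differencing the averages of Y1i and Y0i give the same answer:

```python
y0 = [3, 5]   # Y0i for Khuzdar and Maria
y1 = [4, 5]   # Y1i for Khuzdar and Maria

n = len(y0)
avg_of_differences = sum(a - b for a, b in zip(y1, y0)) / n
difference_of_avgs = sum(y1) / n - sum(y0) / n

# Equation (1.1): both routes give the same average causal effect (0.5 here)
assert avg_of_differences == difference_of_avgs == 0.5
```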

An investigation of the average causal effect of insurance naturally begins by comparing the average health of groups of insured and uninsured people, as in Table 1.1. This comparison is facilitated by the construction of a dummy variable, Di, which takes on the values 0 and 1 to indicate insurance status:

Di = 1 if i is insured
   = 0 otherwise.

We can now write Avgn[Yi | Di = 1] for the average among the insured and Avgn[Yi | Di = 0] for the average among the uninsured. These quantities are averages conditional on insurance status.5

The average Yi for the insured is necessarily an average of outcome Y1i, but contains no information about Y0i. Likewise, the average Yi among the uninsured is an average of outcome Y0i, but this average is devoid of information about the corresponding Y1i. In other words, the road taken by those with insurance ends with Y1i, while the road taken by those without insurance leads to Y0i. This in turn leads to a simple but important conclusion about the difference in average health by insurance status:

Difference in group means = Avgn[Yi | Di = 1] − Avgn[Yi | Di = 0]
                          = Avgn[Y1i | Di = 1] − Avgn[Y0i | Di = 0],   (1.2)

an expression highlighting the fact that the comparisons in Table 1.1 tell us something about potential outcomes, though not necessarily what we want to know. We’re after Avgn[Y1i − Y0i], an average causal effect involving everyone’s Y1i and everyone’s Y0i, but we see average Y1i only for the insured and average Y0i only for the uninsured.

To sharpen our understanding of equation (1.2), it helps to imagine that health insurance makes everyone healthier by a constant amount, κ. As is the custom among our people, we use Greek letters to label such parameters, so as to distinguish them from variables or data; this one is the letter “kappa.” The constant-effects assumption allows us to write:

Y1i = Y0i + κ,   (1.3)

or, equivalently, Y1i − Y0i = κ. In other words, κ is both the individual and average causal effect of insurance on health. The question at hand is how comparisons such as those at the top of Table 1.1 relate to κ.

Using the constant-effects model (equation (1.3)) to substitute for Avgn[Y1i | Di = 1] in equation (1.2), we have:

Avgn[Y1i | Di = 1] − Avgn[Y0i | Di = 0]
    = {κ + Avgn[Y0i | Di = 1]} − Avgn[Y0i | Di = 0]
    = κ + {Avgn[Y0i | Di = 1] − Avgn[Y0i | Di = 0]}.   (1.4)

This equation reveals that health comparisons between those with and without insurance equal the causal effect of interest (κ) plus the difference in average Y0i between the insured and the uninsured. As in the parable of Khuzdar and Maria, this second term describes selection bias. Specifically, the difference in average health by insurance status can be written:

Difference in group means = Average causal effect + Selection bias,

where selection bias is defined as the difference in average Y0i between the groups being compared.

How do we know that the difference in means by insurance status is contaminated by selection bias? We know because Y0i is shorthand for everything about person i related to health, other than insurance status. The lower part of Table 1.1 documents important noninsurance differences between the insured and uninsured, showing that ceteris isn’t paribus here in many ways. The insured in the NHIS are healthier for all sorts of reasons, including, perhaps, the causal effects of insurance. But the insured are also healthier because they are more educated, among other things. To see why this matters, imagine a world in which the causal effect of insurance is zero (that is, κ = 0). Even in such a world, we should expect insured NHIS respondents to be healthier, simply because they are more educated, richer, and so on.
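A small simulation makes the point vivid. The data-generating process below is our own stylized construction, not an estimate from the NHIS: the true effect of insurance is fixed at κ = 0, but schooling raises Y0i and also raises the probability of buying insurance, so the naive difference in group means shows a "health dividend" that is entirely selection bias.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

education = rng.normal(13, 3, n)                      # years of schooling
y0 = 2.0 + 0.15 * education + rng.normal(0, 0.8, n)   # health without insurance
kappa = 0.0                                           # true causal effect is zero
y1 = y0 + kappa                                       # health with insurance

# More-educated people are more likely to buy insurance (self-selection)
p_insured = 1 / (1 + np.exp(-(education - 13)))
d = rng.random(n) < p_insured

y = np.where(d, y1, y0)                               # observed health

naive_gap = y[d].mean() - y[~d].mean()
selection_bias = y0[d].mean() - y0[~d].mean()
print(naive_gap, kappa + selection_bias)              # the two agree; both > 0
```

The insured look healthier here even though insurance does nothing, exactly the decomposition above with κ = 0.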

We wrap up this discussion by pointing out the subtle role played by information like that reported in panel B of Table 1.1. This panel shows that the groups being compared differ in ways that we can observe. As we’ll see in the next chapter, if the only source of selection bias is a set of differences in characteristics that we can observe and measure, selection bias is (relatively) easy to fix. Suppose, for example, that the only source of selection bias in the insurance comparison is education. This bias is eliminated by focusing on samples of people with the same schooling, say, college graduates. Education is the same for insured and uninsured people in such a sample, because it’s the same for everyone in the sample.

The subtlety in Table 1.1 arises because when observed differences proliferate, so should our suspicions about unobserved differences. The fact that people with and without health insurance differ in many visible ways suggests that even were we to hold observed characteristics fixed, the uninsured would likely differ from the insured in ways we don’t see (after all, the list of variables we can see is partly fortuitous). In other words, even in a sample consisting of insured and uninsured people with the same education, income, and employment status, the insured might have higher values of Y0i. The principal challenge facing masters of ’metrics is elimination of the selection bias that arises from such unobserved differences.


Breaking the Deadlock: Just RANDomize

My doctor gave me 6 months to live… but when I couldn’t pay the bill, he gave me 6 months more.

Walter Matthau

Experimental random assignment eliminates selection bias. The logistics of a randomized experiment, sometimes called a randomized trial, can be complex, but the logic is simple. To study the effects of health insurance in a randomized trial, we’d start with a sample of people who are currently uninsured. We’d then provide health insurance to a randomly chosen subset of this sample, and let the rest go to the emergency department if the need arises. Later, the health of the insured and uninsured groups can be compared. Random assignment makes this comparison ceteris paribus: groups insured and uninsured by random assignment differ only in their insurance status and any consequences that follow from it.

Suppose the MIT Health Service elects to forgo payment and tosses a coin to determine the insurance status of new students Ashish and Zandile (just this once, as a favor to their distinguished Economics Department). Zandile is insured if the toss comes up heads; otherwise, Ashish gets the coverage. A good start, but not good enough, since random assignment of two experimental subjects does not produce insured and uninsured apples. For one thing, Ashish is male and Zandile female. Women, as a rule, are healthier than men. If Zandile winds up healthier, it might be due to her good luck in having been born a woman and unrelated to her lucky draw in the insurance lottery. The problem here is that two is not enough to tango when it comes to random assignment. We must randomly assign treatment in a sample that’s large enough to ensure that differences in individual characteristics like sex wash out.

Two randomly chosen groups, when large enough, are indeed comparable. This fact is due to a powerful statistical property known as the Law of Large Numbers (LLN). The LLN characterizes the behavior of sample averages in relation to sample size. Specifically, the LLN says that a sample average can be brought as close as we like to the average in the population from which it is drawn (say, the population of American college students) simply by enlarging the sample.

To see the LLN in action, play dice.6 Specifically, roll a fair die once and save the result. Then roll again and average these two results. Keep on rolling and averaging. The numbers 1 to 6 are equally likely (that’s why the die is said to be “fair”), so we can expect to see each value an equal number of times if we play long enough. Since there are six possibilities here, and all are equally likely, the expected outcome is an equally weighted average of each possibility, with weights equal to 1/6:

(1 × 1/6) + (2 × 1/6) + (3 × 1/6) + (4 × 1/6) + (5 × 1/6) + (6 × 1/6)
    = (1 + 2 + 3 + 4 + 5 + 6)/6 = 3.5.

This average value of 3.5 is called a mathematical expectation; in this case, it’s the average value we’d get in infinitely many rolls of a fair die. The expectation concept is important to our work, so we define it formally here.

mathematical expectation The mathematical expectation of a variable, Yi, written E[Yi], is the population average of this variable. If Yi is a variable generated by a random process, such as throwing a die, E[Yi] is the average in infinitely many repetitions of this process. If Yi is a variable that comes from a sample survey, E[Yi] is the average obtained if everyone in the population from which the sample is drawn were to be enumerated.

Rolling a die only a few times, the average toss may be far from the corresponding mathematical expectation. Roll two times, for example, and you might get boxcars or snake eyes (two sixes or two ones). These average to values well away from the expected value of 3.5. But as the number of tosses goes up, the average across tosses reliably tends to 3.5. This is the LLN in action (and it’s how casinos make a profit: in most gambling games, you can’t beat the house in the long run, because the expected payout for players is negative). More remarkably, it needn’t take too many rolls or too large a sample for a sample average to approach the expected value. The chapter appendix addresses the question of how the number of rolls or the size of a sample survey determines statistical accuracy.
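One way to watch the LLN at work is to simulate the dice game just described; the sketch below uses NumPy's random number generator in place of a physical die.

```python
import numpy as np

rng = np.random.default_rng(7)
rolls = rng.integers(1, 7, size=100_000)     # fair die: values 1-6, equally likely
running_avg = np.cumsum(rolls) / np.arange(1, len(rolls) + 1)

# Early averages bounce around; later ones settle near the expectation of 3.5
for k in (2, 10, 100, 10_000, 100_000):
    print(f"average of first {k:>6} rolls: {running_avg[k - 1]:.3f}")
```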

In randomized trials, experimental samples are created by sampling from a population we’d like to study rather than by repeating a game, but the LLN works just the same. When sampled subjects are randomly divided (as if by a coin toss) into treatment and control groups, they come from the same underlying population. The LLN therefore promises that those in randomly assigned treatment and control samples will be similar if the samples are large enough. For example, we expect to see similar proportions of men and women in randomly assigned treatment and control groups. Random assignment also produces groups of about the same age and with similar schooling levels. In fact, randomly assigned groups should be similar in every way, including in ways that we cannot easily measure or observe. This is the root of random assignment’s awesome power to eliminate selection bias.

The power of random assignment can be described precisely using the following definition, which is closely related to the definition of mathematical expectation.

conditional expectation The conditional expectation of a variable, Yi, given a dummy variable, Di = 1, is written E[Yi | Di = 1]. This is the average of Yi in the population that has Di equal to 1. Likewise, the conditional expectation of a variable, Yi, given Di = 0, written E[Yi | Di = 0], is the average of Yi in the population that has Di equal to 0. If Yi and Di are variables generated by a random process, such as throwing a die under different circumstances, E[Yi | Di = d] is the average of infinitely many repetitions of this process while holding the circumstances indicated by Di fixed at d. If Yi and Di come from a sample survey, E[Yi | Di = d] is the average computed when everyone in the population who has Di = d is sampled.

Because randomly assigned treatment and control groups come from the same underlying population, they are the same in every way, including their expected Y0i. In other words, the conditional expectations, E[Y0i | Di = 1] and E[Y0i | Di = 0], are the same. This in turn means that:

random assignment eliminates selection bias When Di is randomly assigned, E[Y0i | Di = 1] = E[Y0i | Di = 0], and the difference in expectations by treatment status captures the causal effect of treatment:

E[Yi | Di = 1] − E[Yi | Di = 0] = E[Y1i | Di = 1] − E[Y0i | Di = 0]
    = E[Y0i + κ | Di = 1] − E[Y0i | Di = 0]
    = κ + E[Y0i | Di = 1] − E[Y0i | Di = 0]
    = κ.

Provided the sample at hand is large enough for the LLN to work its magic (so we can replace the conditional averages in equation (1.4) with conditional expectations), selection bias disappears in a randomized experiment. Random assignment works not by eliminating individual differences but rather by ensuring that the mix of individuals being compared is the same. Think of this as comparing barrels that include equal proportions of apples and oranges. As we explain in the chapters that follow, randomization isn’t the only way to generate such ceteris paribus comparisons, but most masters believe it’s the best.
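The same stylized world used earlier to illustrate selection bias can illustrate its cure. In the sketch below (again our construction, not real data), insurance has a genuine constant effect κ, but Di is assigned by coin toss, so the difference in group means recovers κ and the selection-bias term hovers near zero.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

education = rng.normal(13, 3, n)
y0 = 2.0 + 0.15 * education + rng.normal(0, 0.8, n)   # health without insurance
kappa = 0.3                                           # true (constant) causal effect
y1 = y0 + kappa

d = rng.integers(0, 2, n).astype(bool)                # insurance assigned by coin toss
y = np.where(d, y1, y0)

gap = y[d].mean() - y[~d].mean()
bias = y0[d].mean() - y0[~d].mean()
print(f"difference in means = {gap:.3f}, selection bias = {bias:.3f}, kappa = {kappa}")
# With 100,000 people, the gap lands close to 0.3 and the bias close to 0.
```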

When analyzing data from a randomized trial or any other research design, masters almost always begin with a check on whether treatment and control groups indeed look similar. This process, called checking for balance, amounts to a comparison of sample averages as in panel B of Table 1.1. The average characteristics in panel B appear dissimilar or unbalanced, underlining the fact that the data in this table don’t come from anything like an experiment. It’s worth checking for balance in this manner any time you find yourself estimating causal effects.
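A balance check is just panel B of Table 1.1 computed for your own data. Here is a minimal sketch assuming the data live in a pandas DataFrame with a 0/1 treatment column; the column names in the commented example call are placeholders, not actual NHIS variable names.

```python
import numpy as np
import pandas as pd

def check_balance(df: pd.DataFrame, treat: str, covariates: list[str]) -> pd.DataFrame:
    """Compare covariate means for treatment (treat == 1) and control (treat == 0)."""
    treated, control = df[df[treat] == 1], df[df[treat] == 0]
    rows = []
    for x in covariates:
        diff = treated[x].mean() - control[x].mean()
        se = np.sqrt(treated[x].var(ddof=1) / len(treated)
                     + control[x].var(ddof=1) / len(control))
        rows.append({"covariate": x,
                     "treated mean": treated[x].mean(),
                     "control mean": control[x].mean(),
                     "difference": diff,
                     "std. error": se})
    return pd.DataFrame(rows)

# Example call (illustrative column names):
# print(check_balance(nhis, "insured", ["age", "education", "family_size", "employed"]))
```

Large, precisely estimated differences, as in panel B, are a warning that the comparison is not apples to apples.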

Random assignment of health insurance seems like a fanciful proposition. Yet health insurance coverage has twice been randomly assigned to large representative samples of Americans. The RAND Health Insurance Experiment (HIE), which ran from 1974 to 1982, was one of the most influential social experiments in research history. The HIE enrolled 3,958 people aged 14 to 61 from six areas of the country. The HIE sample excluded Medicare participants and most Medicaid and military health insurance subscribers. HIE participants were randomly assigned to one of 14 insurance plans. Participants did not have to pay insurance premiums, but the plans had a variety of provisions related to cost sharing, leading to large differences in the amount of insurance they offered.

The most generous HIE plan offered comprehensive care for free. At the other end of the insurance spectrum, three “catastrophic coverage” plans required families to pay 95% of their health-care costs, though these costs were capped as a proportion of income (or capped at $1,000 per family, if that was lower). The catastrophic plans approximate a no-insurance condition. A second insurance scheme (the “individual deductible” plan) also required families to pay 95% of outpatient charges, but only up to $150 per person or $450 per family. A group of nine other plans had a variety of coinsurance provisions, requiring participants to cover anywhere from 25% to 50% of charges, but always capped at a proportion of income or $1,000, whichever was lower. Participating families enrolled in the experimental plans for 3 or 5 years and agreed to give up any earlier insurance coverage in return for a fixed monthly payment unrelated to their use of medical care.7
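The cost-sharing rules just described reduce to "pay a coinsurance share of charges until a cap binds." The function below is a rough stylization of that arithmetic, not the HIE's actual claims formula; in particular, the income-share parameter is left for the user to supply, since the text specifies only that the cap was a proportion of income or $1,000, whichever was lower.

```python
def out_of_pocket(charges, coinsurance, income, income_share, dollar_cap=1_000):
    """Family out-of-pocket spending under a stylized HIE-type plan.

    charges      : total health-care charges incurred by the family
    coinsurance  : share of charges the family pays (e.g., 0.95, 0.50, 0.25, or 0.0)
    income_share : fraction of family income at which spending is capped
                   (an assumed parameter; the text gives only the $1,000 figure)
    """
    cap = min(income_share * income, dollar_cap)
    return min(coinsurance * charges, cap)

# Free-care plan: a price of zero regardless of charges
print(out_of_pocket(charges=4_000, coinsurance=0.0, income=30_000, income_share=0.1))
# Catastrophic-type plan: 95% coinsurance until the cap binds (here the $1,000 cap)
print(out_of_pocket(charges=4_000, coinsurance=0.95, income=30_000, income_share=0.1))
```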

The HIE was motivated primarily by an interest in what economists call the price elasticity of demand for health care. Specifically, the RAND investigators wanted to know whether and by how much health-care use falls when the price of health care goes up.

Families in the free care plan faced a price of zero, while coinsurance plans cut prices to 25% or 50% of costs incurred, and families in the catastrophic coverage and deductible plans paid something close to the sticker price for care, at least until they hit the spending cap. But the investigators also wanted to know whether more comprehensive and more generous health insurance coverage indeed leads to better health. The answer to the first question was a clear “yes”: health-care consumption is highly responsive to the price of care. The answer to the second question is murkier.
