The "Medicaid undercount" refers to the discrepancy between administrative counts of Medicaid enrollment and estimates from survey data. Nearly all state and federal surveys estimate fewer Medicaid enrollees than enrollment records show. In the 2005 Current Population Survey (CPS), 40.8% of people known from administrative records to be enrolled in Medicaid did not report Medicaid coverage in the survey (SNACC Phase V, 2010). In the 2002 National Health Interview Survey (NHIS) the undercount was 33.5% (SNACC Phase IV, 2009), and in the 2003 Medical Expenditure Panel Survey Household Component (MEPS/HC) it was 17.5% (SNACC Phase VI, 2010). State-sponsored surveys often fare better, but still tend to undercount Medicaid by 12-26% (Call, Davern, Klerman & Lynch, 2013). This mismatch suggests that survey data, relative to what is known from administrative data, give biased estimates of key policy measures such as the share of the population covered by Medicaid or lacking health insurance. Although survey data on Medicaid are likely biased, surveys, unlike administrative data, provide a wide array of policy-relevant covariates such as access to health services, health status, and race and ethnicity, and they are the only source of information about the uninsured and the eligible but not enrolled. This working paper presents preliminary results from a collaboration between SHADAC and the U.S. Census Bureau that extends prior research on the CPS, NHIS, and MEPS/HC (Call et al., 2013; Davern, Klerman, Baugh, Call, & Greenberg, 2009a; SNACC Phases II-VI, 2008-2010)1 to the American Community Survey (ACS). The ACS began collecting information on health insurance coverage in 2008. Since that time the ACS has become an important source of information for monitoring health insurance coverage and evaluating health policy.
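The undercount rates quoted above can be written compactly; the notation below is ours, not the cited studies', but it reflects the standard linkage-based definition in this literature:

```latex
% Undercount rate among survey records linked to administrative enrollment files
\[
U \;=\;
\frac{\text{linked administrative enrollees who do not report Medicaid in the survey}}
     {\text{all linked administrative enrollees}}
\]
% e.g., for the 2005 CPS, U = 0.408 (SNACC Phase V, 2010).
```

In other words, the denominator is restricted to people confirmed enrolled in administrative data, so \(U\) measures misreporting among known enrollees rather than a raw difference in aggregate counts.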
The ACS is a unique data asset because its large sample size makes it possible to produce statistically reliable single-year health insurance estimates at the national, state, and sub-state levels (Davern, Quinn, Kenney & Blewett, 2009b). It can also be used to monitor important but relatively small sub-groups, such as minority children in poverty. Other federal surveys lack the sample size needed to monitor these geographic and demographic groups on an annual basis. Compared to more detailed questionnaires such as the NHIS or MEPS/HC, the ACS uses a simpler health insurance question that could contribute to misclassifying Medicaid enrollees: it lacks state-specific program names, lacks a verification question, and uses a "laundry list" response option. However, preliminary results in this working paper provide evidence that the ACS "undercount" is in line with that of other surveys measuring health insurance coverage. Caution should nonetheless be used when comparing ACS results with other surveys, because the ACS question captures Medicaid together with other means-tested coverage and cannot separate out Medicaid alone as the other surveys can. The only other evidence on the extent to which the ACS misclassifies coverage comes from the 2006 ACS content test (O'Hara, 2009). This working paper is the first research on the Medicaid undercount using the full production ACS. In this paper we describe how Medicaid and other means-tested coverage is coded in the ACS for people known, from administrative enrollment data, to be enrolled in Medicaid or expansion Children's Health Insurance Program (CHIP) coverage on the day of the survey. Results are presented by broad demographic characteristics (i.e., age, poverty, and state of residence). We also report the upper bound of bias to estimates of uninsurance attributable to misclassification of Medicaid.
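The logic behind an upper bound on uninsurance bias can be sketched as follows (our notation, not the paper's, and assuming the worst case that every misclassified enrollee is recorded as uninsured rather than as having some other coverage):

```latex
% Worst-case overstatement of the uninsurance rate from Medicaid misreporting:
% U = undercount rate among enrollees, M = administrative Medicaid enrollment,
% P = total population.
\[
\Delta_{\text{uninsured}} \;\le\; U \times \frac{M}{P}
\]
```

Actual bias is smaller to the extent that misreporting enrollees instead report private or other public coverage, which is why this quantity is reported only as an upper bound.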
Reproduced with permission of the copyright holder. Further use of the material is subject to a CC BY-NC-ND license.